| id (int64) | number (int64) | title (string) | body (string, nullable) | state (string) | html_url (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s], nullable) | user (dict) | labels (list) | is_pull_request (bool) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
628,083,366 | 225 | [ROUGE] Different scores with `files2rouge` | It seems that the ROUGE score of `nlp` is lower than the one of `files2rouge`.
Here is a self-contained notebook to reproduce both scores : https://colab.research.google.com/drive/14EyAXValB6UzKY9x4rs_T3pyL7alpw_F?usp=sharing
---
`nlp` : (Only mid F-scores)
>rouge1 0.33508031962733364
rouge2 0.145743337761... | closed | https://github.com/huggingface/datasets/issues/225 | 2020-06-01T00:50:36 | 2020-06-03T15:27:18 | 2020-06-03T15:27:18 | {
"login": "astariul",
"id": 43774355,
"type": "User"
} | [
{
"name": "Metric discussion",
"color": "d722e8"
}
] | false | [] |
627,791,693 | 224 | [Feature Request/Help] BLEURT model -> PyTorch | Hi, I am interested in porting google research's new BLEURT learned metric to PyTorch (because I wish to do something experimental with language generation and backpropping through BLEURT). I noticed that you guys don't have it yet so I am partly just asking if you plan to add it (@thomwolf said you want to do so on Tw... | closed | https://github.com/huggingface/datasets/issues/224 | 2020-05-30T18:30:40 | 2023-08-26T17:38:48 | 2021-01-04T09:53:32 | {
"login": "adamwlev",
"id": 6889910,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
627,683,386 | 223 | [Feature request] Add FLUE dataset | Hi,
I think it would be interesting to add the FLUE dataset for francophones or anyone wishing to work on French.
In other requests, I read that you are already working on some datasets, and I was wondering if FLUE was planned.
If it is not the case, I can provide each of the cleaned FLUE datasets (in the form... | closed | https://github.com/huggingface/datasets/issues/223 | 2020-05-30T08:52:15 | 2020-12-03T13:39:33 | 2020-12-03T13:39:33 | {
"login": "lbourdois",
"id": 58078086,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
627,586,690 | 222 | Colab Notebook breaks when downloading the squad dataset | When I run the notebook in Colab
https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb
breaks when running this cell:

| closed | https://github.com/huggingface/datasets/issues/222 | 2020-05-29T22:55:59 | 2020-06-04T00:21:05 | 2020-06-04T00:21:05 | {
"login": "carlos-aguayo",
"id": 338917,
"type": "User"
} | [] | false | [] |
627,300,648 | 221 | Fix tests/test_dataset_common.py | When I run the command `RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_arcd` while working on #220. I get the error ` unexpected keyword argument "'download_and_prepare_kwargs'"` at the level of `load_dataset`. Indeed, this [function](https://github.com/huggingface/nlp/blob/ma... | closed | https://github.com/huggingface/datasets/pull/221 | 2020-05-29T14:12:15 | 2020-06-01T12:20:42 | 2020-05-29T15:02:23 | {
"login": "tayciryahmed",
"id": 13635495,
"type": "User"
} | [] | true | [] |
627,280,683 | 220 | dataset_arcd | Added Arabic Reading Comprehension Dataset (ARCD): https://arxiv.org/abs/1906.05394 | closed | https://github.com/huggingface/datasets/pull/220 | 2020-05-29T13:46:50 | 2020-05-29T14:58:40 | 2020-05-29T14:57:21 | {
"login": "tayciryahmed",
"id": 13635495,
"type": "User"
} | [] | true | [] |
627,235,893 | 219 | force mwparserfromhell as third party | This should fix your env because you had `mwparserfromhell ` as a first party for `isort` @patrickvonplaten | closed | https://github.com/huggingface/datasets/pull/219 | 2020-05-29T12:33:17 | 2020-05-29T13:30:13 | 2020-05-29T13:30:12 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
627,173,407 | 218 | Add Natual Questions and C4 scripts | Scripts are ready !
However they are not processed nor directly available from gcp yet. | closed | https://github.com/huggingface/datasets/pull/218 | 2020-05-29T10:40:30 | 2020-05-29T12:31:01 | 2020-05-29T12:31:00 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
627,128,403 | 217 | Multi-task dataset mixing | It seems like many of the best performing models on the GLUE benchmark make some use of multitask learning (simultaneous training on multiple tasks).
The [T5 paper](https://arxiv.org/pdf/1910.10683.pdf) highlights multiple ways of mixing the tasks together during finetuning:
- **Examples-proportional mixing** - sam... | open | https://github.com/huggingface/datasets/issues/217 | 2020-05-29T09:22:26 | 2022-10-22T00:45:50 | null | {
"login": "ghomasHudson",
"id": 13795113,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "generic discussion",
"color": "c5def5"
}
] | false | [] |
626,896,890 | 216 | ❓ How to get ROUGE-2 with the ROUGE metric ? | I'm trying to use ROUGE metric, but I don't know how to get the ROUGE-2 metric.
---
I compute scores with :
```python
import nlp
rouge = nlp.load_metric('rouge')
with open("pred.txt") as p, open("ref.txt") as g:
    for lp, lg in zip(p, g):
        rouge.add([lp], [lg])
score = rouge.compute()
```
... | closed | https://github.com/huggingface/datasets/issues/216 | 2020-05-28T23:47:32 | 2020-06-01T00:04:35 | 2020-06-01T00:04:35 | {
"login": "astariul",
"id": 43774355,
"type": "User"
} | [] | false | [] |
626,867,879 | 215 | NonMatchingSplitsSizesError when loading blog_authorship_corpus | Getting this error when i run `nlp.load_dataset('blog_authorship_corpus')`.
```
raise NonMatchingSplitsSizesError(str(bad_splits))
nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train',
num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'),
'recorded... | closed | https://github.com/huggingface/datasets/issues/215 | 2020-05-28T22:55:19 | 2025-01-04T00:03:12 | 2022-02-10T13:05:45 | {
"login": "cedricconol",
"id": 52105365,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
626,641,549 | 214 | [arrow_dataset.py] add new filter function | The `.map()` function is super useful, but can IMO a bit tedious when filtering certain examples.
I think, filtering out examples is also a very common operation people would like to perform on datasets.
This PR is a proposal to add a `.filter()` function in the same spirit than the `.map()` function.
Here is a ... | closed | https://github.com/huggingface/datasets/pull/214 | 2020-05-28T16:21:40 | 2020-05-29T11:43:29 | 2020-05-29T11:32:20 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
626,587,995 | 213 | better message if missing beam options | WDYT @yjernite ?
For example:
```python
dataset = nlp.load_dataset('wikipedia', '20200501.aa')
```
Raises:
```
MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to ru... | closed | https://github.com/huggingface/datasets/pull/213 | 2020-05-28T15:06:57 | 2020-05-29T09:51:17 | 2020-05-29T09:51:16 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
626,580,198 | 212 | have 'add' and 'add_batch' for metrics | This should fix #116
Previously the `.add` method of metrics expected a batch of examples.
Now `.add` expects one prediction/reference and `.add_batch` expects a batch.
I think it is more coherent with the way the ArrowWriter works. | closed | https://github.com/huggingface/datasets/pull/212 | 2020-05-28T14:56:47 | 2020-05-29T10:41:05 | 2020-05-29T10:41:04 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
626,565,994 | 211 | [Arrow writer, Trivia_qa] Could not convert TagMe with type str: converting to null type | Running the following code
```
import nlp
ds = nlp.load_dataset("trivia_qa", "rc", split="validation[:1%]") # this might take 2.3 min to download but it's cached afterwards...
ds.map(lambda x: x, load_from_cache_file=False)
```
triggers a `ArrowInvalid: Could not convert TagMe with type str: converting to n... | closed | https://github.com/huggingface/datasets/issues/211 | 2020-05-28T14:38:14 | 2020-07-23T10:15:16 | 2020-07-23T10:15:16 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
626,504,243 | 210 | fix xnli metric kwargs description | The text was wrong as noticed in #202 | closed | https://github.com/huggingface/datasets/pull/210 | 2020-05-28T13:21:44 | 2020-05-28T13:22:11 | 2020-05-28T13:22:10 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
626,405,849 | 209 | Add a Google Drive exception for small files | I tried to use the ``nlp`` library to load personnal datasets. I mainly copy-paste the code for ``multi-news`` dataset because my files are stored on Google Drive.
One of my dataset is small (< 25Mo) so it can be verified by Drive without asking the authorization to the user. This makes the download starts directly... | closed | https://github.com/huggingface/datasets/pull/209 | 2020-05-28T10:40:17 | 2020-05-28T15:15:04 | 2020-05-28T15:15:04 | {
"login": "airKlizz",
"id": 25703835,
"type": "User"
} | [] | true | [] |
626,398,519 | 208 | [Dummy data] insert config name instead of config | Thanks @yjernite for letting me know. in the dummy data command the config name shuold be passed to the dataset builder and not the config itself.
Also, @lhoestq fixed small import bug introduced by beam command I think. | closed | https://github.com/huggingface/datasets/pull/208 | 2020-05-28T10:28:19 | 2020-05-28T12:48:01 | 2020-05-28T12:48:00 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
625,932,200 | 207 | Remove test set from NLP viewer | While the new [NLP viewer](https://huggingface.co/nlp/viewer/) is a great tool, I think it would be best to outright remove the option of looking at the test sets. At the very least, a warning should be displayed to users before showing the test set. Newcomers to the field might not be aware of best practices, and smal... | closed | https://github.com/huggingface/datasets/issues/207 | 2020-05-27T18:32:07 | 2022-02-10T13:17:45 | 2022-02-10T13:17:45 | {
"login": "chrisdonahue",
"id": 748399,
"type": "User"
} | [
{
"name": "nlp-viewer",
"color": "94203D"
}
] | false | [] |
625,842,989 | 206 | [Question] Combine 2 datasets which have the same columns | Hi,
I am using ``nlp`` to load personal datasets. I created summarization datasets in multi-languages based on wikinews. I have one dataset for english and one for german (french is getting to be ready as well). I want to keep these datasets independent because they need different pre-processing (add different task-... | closed | https://github.com/huggingface/datasets/issues/206 | 2020-05-27T16:25:52 | 2020-06-10T09:11:14 | 2020-06-10T09:11:14 | {
"login": "airKlizz",
"id": 25703835,
"type": "User"
} | [] | false | [] |
625,839,335 | 205 | Better arrow dataset iter | I tried to play around with `tf.data.Dataset.from_generator` and I found out that the `__iter__` that we have for `nlp.arrow_dataset.Dataset` ignores the format that has been set (torch or tensorflow).
With these changes I should be able to come up with a `tf.data.Dataset` that uses lazy loading, as asked in #193. | closed | https://github.com/huggingface/datasets/pull/205 | 2020-05-27T16:20:21 | 2020-05-27T16:39:58 | 2020-05-27T16:39:56 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
625,655,849 | 204 | Add Dataflow support + Wikipedia + Wiki40b | # Add Dataflow support + Wikipedia + Wiki40b
## Support datasets processing with Apache Beam
Some datasets are too big to be processed on a single machine, for example: wikipedia, wiki40b, etc. Apache Beam allows to process datasets on many execution engines like Dataflow, Spark, Flink, etc.
To process such da... | closed | https://github.com/huggingface/datasets/pull/204 | 2020-05-27T12:32:49 | 2020-05-28T08:10:35 | 2020-05-28T08:10:34 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
625,515,488 | 203 | Raise an error if no config name for datasets like glue | Some datasets like glue (see #130) and scientific_papers (see #197) have many configs.
For example for glue there are cola, sst2, mrpc etc.
Currently if a user does `load_dataset('glue')`, then Cola is loaded by default and it can be confusing. Instead, we should raise an error to let the user know that he has to p... | closed | https://github.com/huggingface/datasets/pull/203 | 2020-05-27T09:03:58 | 2020-05-27T16:40:39 | 2020-05-27T16:40:38 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
625,493,983 | 202 | Mistaken `_KWARGS_DESCRIPTION` for XNLI metric | Hi!
The [`_KWARGS_DESCRIPTION`](https://github.com/huggingface/nlp/blob/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56/metrics/xnli/xnli.py#L45) for the XNLI metric uses `Args` and `Returns` text from [BLEU](https://github.com/huggingface/nlp/blob/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56/metrics/bleu/bleu.py#L58) metric:
... | closed | https://github.com/huggingface/datasets/issues/202 | 2020-05-27T08:34:42 | 2020-05-28T13:22:36 | 2020-05-28T13:22:36 | {
"login": "phiyodr",
"id": 33572125,
"type": "User"
} | [] | false | [] |
625,235,430 | 201 | Fix typo in README | closed | https://github.com/huggingface/datasets/pull/201 | 2020-05-26T22:18:21 | 2020-05-26T23:40:31 | 2020-05-26T23:00:56 | {
"login": "LysandreJik",
"id": 30755778,
"type": "User"
} | [] | true | [] | |
625,226,638 | 200 | [ArrowWriter] Set schema at first write example | Right now if the schema was not specified when instantiating `ArrowWriter`, then it could be set with the first `write_table` for example (it calls `self._build_writer()` to do so).
I noticed that it was not done if the first example is added via `.write`, so I added it for coherence. | closed | https://github.com/huggingface/datasets/pull/200 | 2020-05-26T21:59:48 | 2020-05-27T09:07:54 | 2020-05-27T09:07:53 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
625,217,440 | 199 | Fix GermEval 2014 dataset infos | Hi,
this PR just removes the `dataset_info.json` file and adds a newly generated `dataset_infos.json` file. | closed | https://github.com/huggingface/datasets/pull/199 | 2020-05-26T21:41:44 | 2020-05-26T21:50:24 | 2020-05-26T21:50:24 | {
"login": "stefan-it",
"id": 20651387,
"type": "User"
} | [] | true | [] |
625,200,627 | 198 | Index outside of table length | The offset input box warns of numbers larger than a limit (like 2000) but then the errors start at a smaller value than that limit (like 1955).
> ValueError: Index (2000) outside of table length (2000).
> Traceback:
> File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _ru... | closed | https://github.com/huggingface/datasets/issues/198 | 2020-05-26T21:09:40 | 2020-05-26T22:43:49 | 2020-05-26T22:43:49 | {
"login": "casajarm",
"id": 305717,
"type": "User"
} | [] | false | [] |
624,966,904 | 197 | Scientific Papers only downloading Pubmed | Hi!
I have been playing around with this module, and I am a bit confused about the `scientific_papers` dataset. I thought that it would download two separate datasets, arxiv and pubmed. But when I run the following:
```
dataset = nlp.load_dataset('scientific_papers', data_dir='.', cache_dir='.')
Downloading: 10... | closed | https://github.com/huggingface/datasets/issues/197 | 2020-05-26T15:18:47 | 2020-05-28T08:19:28 | 2020-05-28T08:19:28 | {
"login": "antmarakis",
"id": 17463361,
"type": "User"
} | [] | false | [] |
624,901,266 | 196 | Check invalid config name | As said in #194, we should raise an error if the config name has bad characters.
Bad characters are those that are not allowed for directory names on windows. | closed | https://github.com/huggingface/datasets/pull/196 | 2020-05-26T13:52:51 | 2020-05-26T21:04:56 | 2020-05-26T21:04:55 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
624,858,686 | 195 | [Dummy data command] add new case to command | Qanta: #194 introduces a case that was not noticed before. This change in code helps community users to have an easier time creating the dummy data. | closed | https://github.com/huggingface/datasets/pull/195 | 2020-05-26T12:50:47 | 2020-05-26T14:38:28 | 2020-05-26T14:38:27 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
624,854,897 | 194 | Add Dataset: Qanta | Fixes dummy data for #169 @EntilZha | closed | https://github.com/huggingface/datasets/pull/194 | 2020-05-26T12:44:35 | 2020-05-26T16:58:17 | 2020-05-26T13:16:20 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
624,655,558 | 193 | [Tensorflow] Use something else than `from_tensor_slices()` | In the example notebook, the TF Dataset is built using `from_tensor_slices()` :
```python
columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions']
train_tf_dataset.set_format(type='tensorflow', columns=columns)
features = {x: train_tf_dataset[x] for x in columns[:3]}
label... | closed | https://github.com/huggingface/datasets/issues/193 | 2020-05-26T07:19:14 | 2020-10-27T15:28:11 | 2020-10-27T15:28:11 | {
"login": "astariul",
"id": 43774355,
"type": "User"
} | [] | false | [] |
624,397,592 | 192 | [Question] Create Apache Arrow dataset from raw text file | Hi guys, I have gathered and preprocessed about 2GB of COVID papers from CORD dataset @ Kggle. I have seen you have a text dataset as "Crime and punishment" in Apache arrow format. Do you have any script to do it from a raw txt file (preprocessed as for BERT like) or any guide?
Is the worth of send it to you and add i... | closed | https://github.com/huggingface/datasets/issues/192 | 2020-05-25T16:42:47 | 2021-12-18T01:45:34 | 2020-10-27T15:20:22 | {
"login": "mrm8488",
"id": 3653789,
"type": "User"
} | [] | false | [] |
624,394,936 | 191 | [Squad es] add dataset_infos | @mariamabarham - was still about to upload this. Should have waited with my comment a bit more :D | closed | https://github.com/huggingface/datasets/pull/191 | 2020-05-25T16:35:52 | 2020-05-25T16:39:59 | 2020-05-25T16:39:58 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
624,124,600 | 190 | add squad Spanish v1 and v2 | This PR add the Spanish Squad versions 1 and 2 datasets.
Fixes #164 | closed | https://github.com/huggingface/datasets/pull/190 | 2020-05-25T08:08:40 | 2020-05-25T16:28:46 | 2020-05-25T16:28:45 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
624,048,881 | 189 | [Question] BERT-style multiple choice formatting | Hello, I am wondering what the equivalent formatting of a dataset should be to allow for multiple-choice answering prediction, BERT-style. Previously, this was done by passing a list of `InputFeatures` to the dataloader instead of a list of `InputFeature`, where `InputFeatures` contained lists of length equal to the nu... | closed | https://github.com/huggingface/datasets/issues/189 | 2020-05-25T05:11:05 | 2020-05-25T18:38:28 | 2020-05-25T18:38:28 | {
"login": "sarahwie",
"id": 8027676,
"type": "User"
} | [] | false | [] |
623,890,430 | 188 | When will the remaining math_dataset modules be added as dataset objects | Currently only the algebra_linear_1d is supported. Is there a timeline for making the other modules supported. If no timeline is established, how can I help? | closed | https://github.com/huggingface/datasets/issues/188 | 2020-05-24T15:46:52 | 2020-05-24T18:53:48 | 2020-05-24T18:53:48 | {
"login": "tylerroost",
"id": 31251196,
"type": "User"
} | [] | false | [] |
623,627,800 | 187 | [Question] How to load wikipedia ? Beam runner ? | When `nlp.load_dataset('wikipedia')`, I got
* `WARNING:nlp.builder:Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided. Please pass a nlp.DownloadConfig(beam_runner=...) object to the builder.download_and_prepare(download_config=...) method. Default values will be ... | closed | https://github.com/huggingface/datasets/issues/187 | 2020-05-23T10:18:52 | 2020-05-25T00:12:02 | 2020-05-25T00:12:02 | {
"login": "richarddwang",
"id": 17963619,
"type": "User"
} | [] | false | [] |
623,595,180 | 186 | Weird-ish: Not creating unique caches for different phases | Sample code:
```python
import nlp
dataset = nlp.load_dataset('boolq')
def func1(x):
    return x
def func2(x):
    return None
train_output = dataset["train"].map(func1)
valid_output = dataset["validation"].map(func1)
print()
print(len(train_output), len(valid_output))
# Output: 9427 9427
```
Th... | closed | https://github.com/huggingface/datasets/issues/186 | 2020-05-23T06:40:58 | 2020-05-23T20:22:18 | 2020-05-23T20:22:17 | {
"login": "zphang",
"id": 1668462,
"type": "User"
} | [] | false | [] |
623,172,484 | 185 | [Commands] In-detail instructions to create dummy data folder | ### Dummy data command
This PR adds a new command `python nlp-cli dummy_data <path_to_dataset_folder>` that gives in-detail instructions on how to add the dummy data files.
It would be great if you can try it out by moving the current dummy_data folder of any dataset in `./datasets` with `mv datasets/<dataset_s... | closed | https://github.com/huggingface/datasets/pull/185 | 2020-05-22T12:26:25 | 2020-05-22T14:06:35 | 2020-05-22T14:06:34 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
623,120,929 | 184 | Use IndexError instead of ValueError when index out of range | **`default __iter__ needs IndexError`**.
When I want to create a wrapper of arrow dataset to adapt to fastai,
I don't know how to initialize it, so I didn't use inheritance but use object composition.
I wrote sth like this.
```
clas HF_dataset():
    def __init__(self, arrow_dataset):
        self.dset = arrow_datas... | closed | https://github.com/huggingface/datasets/pull/184 | 2020-05-22T10:43:42 | 2020-05-28T08:31:18 | 2020-05-28T08:31:18 | {
"login": "richarddwang",
"id": 17963619,
"type": "User"
} | [] | true | [] |
623,054,270 | 183 | [Bug] labels of glue/ax are all -1 | ```
ax = nlp.load_dataset('glue', 'ax')
for i in range(30): print(ax['test'][i]['label'], end=', ')
```
```
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
``` | closed | https://github.com/huggingface/datasets/issues/183 | 2020-05-22T08:43:36 | 2020-05-22T22:14:05 | 2020-05-22T22:14:05 | {
"login": "richarddwang",
"id": 17963619,
"type": "User"
} | [] | false | [] |
622,646,770 | 182 | Update newsroom.py | Updated the URL for Newsroom download so it's more robust to future changes. | closed | https://github.com/huggingface/datasets/pull/182 | 2020-05-21T17:07:43 | 2020-05-22T16:38:23 | 2020-05-22T16:38:23 | {
"login": "yoavartzi",
"id": 3289873,
"type": "User"
} | [] | true | [] |
622,634,420 | 181 | Cannot upload my own dataset | I look into `nlp-cli` and `user.py` to learn how to upload my own data.
It is supposed to work like this
- Register to get username, password at huggingface.co
- `nlp-cli login` and type username, passworld
- I have a single file to upload at `./ttc/ttc_freq_extra.csv`
- `nlp-cli upload ttc/ttc_freq_extra.csv`
... | closed | https://github.com/huggingface/datasets/issues/181 | 2020-05-21T16:45:52 | 2020-06-18T22:14:42 | 2020-06-18T22:14:42 | {
"login": "korakot",
"id": 3155646,
"type": "User"
} | [] | false | [] |
622,556,861 | 180 | Add hall of fame | powered by https://github.com/sourcerer-io/hall-of-fame | closed | https://github.com/huggingface/datasets/pull/180 | 2020-05-21T14:53:48 | 2020-05-22T16:35:16 | 2020-05-22T16:35:14 | {
"login": "clmnt",
"id": 821155,
"type": "User"
} | [] | true | [] |
622,525,410 | 179 | [Feature request] separate split name and split instructions | Currently, the name of an nlp.NamedSplit is parsed in arrow_reader.py and used as the instruction.
This makes it impossible to have several training sets, which can occur when:
- A dataset corresponds to a collection of sub-datasets
- A dataset was built in stages, adding new examples at each stage
Would it be ... | closed | https://github.com/huggingface/datasets/issues/179 | 2020-05-21T14:10:51 | 2020-05-22T13:31:08 | 2020-05-22T13:31:07 | {
"login": "yjernite",
"id": 10469459,
"type": "User"
} | [] | false | [] |
621,979,849 | 178 | [Manual data] improve error message for manual data in general | `nlp.load("xsum")` now leads to the following error message:

I guess the manual download instructions for `xsum` can also be improved. | closed | https://github.com/huggingface/datasets/pull/178 | 2020-05-20T18:10:45 | 2020-05-20T18:18:52 | 2020-05-20T18:18:50 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
621,975,368 | 177 | Xsum manual download instruction | closed | https://github.com/huggingface/datasets/pull/177 | 2020-05-20T18:02:41 | 2020-05-20T18:16:50 | 2020-05-20T18:16:49 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] | |
621,934,638 | 176 | [Tests] Refactor MockDownloadManager | Clean mock download manager class.
The print function was not of much help I think.
We should think about adding a command that creates the dummy folder structure for the user. | closed | https://github.com/huggingface/datasets/pull/176 | 2020-05-20T17:07:36 | 2020-05-20T18:17:19 | 2020-05-20T18:17:18 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
621,929,428 | 175 | [Manual data dir] Error message: nlp.load_dataset('xsum') -> TypeError | v 0.1.0 from pip
```python
import nlp
xsum = nlp.load_dataset('xsum')
```
Issue is `dl_manager.manual_dir`is `None`
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-42-8a32f06... | closed | https://github.com/huggingface/datasets/issues/175 | 2020-05-20T17:00:32 | 2020-05-20T18:18:50 | 2020-05-20T18:18:50 | {
"login": "sshleifer",
"id": 6045025,
"type": "User"
} | [] | false | [] |
621,928,403 | 174 | nlp.load_dataset('xsum') -> TypeError | closed | https://github.com/huggingface/datasets/issues/174 | 2020-05-20T16:59:09 | 2020-05-20T17:43:46 | 2020-05-20T17:43:46 | {
"login": "sshleifer",
"id": 6045025,
"type": "User"
} | [] | false | [] | |
621,764,932 | 173 | Rm extracted test dirs | All the dummy data used for tests were duplicated. For each dataset, we had one zip file but also its extracted directory. I removed all these directories
Furthermore instead of extracting next to the dummy_data.zip file, we extract in the temp `cached_dir` used for tests, so that all the extracted directories get r... | closed | https://github.com/huggingface/datasets/pull/173 | 2020-05-20T13:30:48 | 2020-05-22T16:34:36 | 2020-05-22T16:34:35 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
621,377,386 | 172 | Clone not working on Windows environment | Cloning in a windows environment is not working because of use of special character '?' in folder name ..
Please consider changing the folder name ....
Reference to folder -
nlp/datasets/cnn_dailymail/dummy/3.0.0/3.0.0/dummy_data-zip-extracted/dummy_data/uc?export=download&id=0BwmD_VLjROrfM1BxdkxVaTY2bWs/dailymail/s... | closed | https://github.com/huggingface/datasets/issues/172 | 2020-05-20T00:45:14 | 2020-05-23T12:49:13 | 2020-05-23T11:27:52 | {
"login": "codehunk628",
"id": 51091425,
"type": "User"
} | [] | false | [] |
621,199,128 | 171 | fix squad metric format | The format of the squad metric was wrong.
This should fix #143
I tested with
```python3
predictions = [
    {'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'}
]
references = [
    {'answers': [{'text': 'Denver Broncos'}], 'id': '56be4db0acb8001400a502ec'}
]
``` | closed | https://github.com/huggingface/datasets/pull/171 | 2020-05-19T18:37:36 | 2020-05-22T13:36:50 | 2020-05-22T13:36:48 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
621,119,747 | 170 | Rename anli dataset | What we have now as the `anli` dataset is actually the αNLI dataset from the ART challenge dataset. This name is confusing because `anli` is also the name of adversarial NLI (see [https://github.com/facebookresearch/anli](https://github.com/facebookresearch/anli)).
I renamed the current `anli` dataset by `art`. | closed | https://github.com/huggingface/datasets/pull/170 | 2020-05-19T16:26:57 | 2020-05-20T12:23:09 | 2020-05-20T12:23:08 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
621,099,682 | 169 | Adding Qanta (Quizbowl) Dataset | This PR adds the qanta question answering datasets from [Quizbowl: The Case for Incremental Question Answering](https://arxiv.org/abs/1904.04792) and [Trick Me If You Can: Human-in-the-loop Generation of Adversarial Question Answering Examples](https://www.aclweb.org/anthology/Q19-1029/) (adversarial fold)
This part... | closed | https://github.com/huggingface/datasets/pull/169 | 2020-05-19T16:03:01 | 2020-05-26T12:52:31 | 2020-05-26T12:52:31 | {
"login": "EntilZha",
"id": 1382460,
"type": "User"
} | [] | true | [] |
620,959,819 | 168 | Loading 'wikitext' dataset fails | Loading the 'wikitext' dataset fails with Attribute error:
Code to reproduce (From example notebook):
import nlp
wikitext_dataset = nlp.load_dataset('wikitext')
Error:
---------------------------------------------------------------------------
AttributeError Traceback (most rece... | closed | https://github.com/huggingface/datasets/issues/168 | 2020-05-19T13:04:29 | 2020-05-26T21:46:52 | 2020-05-26T21:46:52 | {
"login": "itay1itzhak",
"id": 25987633,
"type": "User"
} | [] | false | [] |
620,908,786 | 167 | [Tests] refactor tests | This PR separates AWS and Local tests to remove these ugly statements in the script:
```python
if "/" not in dataset_name:
    logging.info("Skip {} because it is a canonical dataset")
    return
```
To run a `aws` test, one should now run the following command:
```python
pytest -s... | closed | https://github.com/huggingface/datasets/pull/167 | 2020-05-19T11:43:32 | 2020-05-19T16:17:12 | 2020-05-19T16:17:10 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
620,850,218 | 166 | Add a method to shuffle a dataset | Could maybe be a `dataset.shuffle(generator=None, seed=None)` signature method.
Also, we could maybe have a clear indication of which method modify in-place and which methods return/cache a modified dataset. I kinda like torch conversion of having an underscore suffix for all the methods which modify a dataset in-pl... | closed | https://github.com/huggingface/datasets/issues/166 | 2020-05-19T10:08:46 | 2020-06-23T15:07:33 | 2020-06-23T15:07:32 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [
{
"name": "generic discussion",
"color": "c5def5"
}
] | false | [] |
620,758,221 | 165 | ANLI | Can I recommend the following:
For ANLI, use https://github.com/facebookresearch/anli. As that paper says, "Our dataset is not
to be confused with abductive NLI (Bhagavatula et al., 2019), which calls itself αNLI, or ART.".
Indeed, the paper cited under what is currently called anli says in the abstract "We int... | closed | https://github.com/huggingface/datasets/issues/165 | 2020-05-19T07:50:57 | 2020-05-20T12:23:07 | 2020-05-20T12:23:07 | {
"login": "douwekiela",
"id": 6024930,
"type": "User"
} | [] | false | [] |
620,540,250 | 164 | Add Spanish POR and NER Datasets | Hi guys,
In order to cover multilingual support a little step could be adding standard Datasets used for Spanish NER and POS tasks.
I can provide it in raw and preprocessed formats. | closed | https://github.com/huggingface/datasets/issues/164 | 2020-05-18T22:18:21 | 2020-05-25T16:28:45 | 2020-05-25T16:28:45 | {
"login": "mrm8488",
"id": 3653789,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
620,534,307 | 163 | [Feature request] Add cos-e v1.0 | I noticed the second release of cos-e (v1.11) is included in this repo. I wanted to request inclusion of v1.0, since this is the version on which results are reported on in [the paper](https://www.aclweb.org/anthology/P19-1487/), and v1.11 has noted [annotation](https://github.com/salesforce/cos-e/issues/2) [issues](ht... | closed | https://github.com/huggingface/datasets/issues/163 | 2020-05-18T22:05:26 | 2020-06-16T23:15:25 | 2020-06-16T18:52:06 | {
"login": "sarahwie",
"id": 8027676,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
620,513,554 | 162 | fix prev files hash in map | Fix the `.map` issue in #160.
This makes sure it takes the previous files when computing the hash. | closed | https://github.com/huggingface/datasets/pull/162 | 2020-05-18T21:20:51 | 2020-05-18T21:36:21 | 2020-05-18T21:36:20 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
620,487,535 | 161 | Discussion on version identifier & MockDataLoaderManager for test data | Hi, I'm working on adding a dataset and ran into an error due to `download` not being defined on `MockDataLoaderManager`, but being defined in `nlp/utils/download_manager.py`. The readme step running this: `RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_localmydatasetname` triggers ... | open | https://github.com/huggingface/datasets/issues/161 | 2020-05-18T20:31:30 | 2020-05-24T18:10:03 | null | {
"login": "EntilZha",
"id": 1382460,
"type": "User"
} | [
{
"name": "generic discussion",
"color": "c5def5"
}
] | false | [] |
620,448,236 | 160 | caching in map causes same result to be returned for train, validation and test | hello,
I am working on a program that uses the `nlp` library with the `SST2` dataset.
The rough outline of the program is:
```
import nlp as nlp_datasets
...
parser.add_argument('--dataset', help='HuggingFace Datasets id', default=['glue', 'sst2'], nargs='+')
...
dataset = nlp_datasets.load_dataset(*args.... | closed | https://github.com/huggingface/datasets/issues/160 | 2020-05-18T19:22:03 | 2020-05-18T21:36:20 | 2020-05-18T21:36:20 | {
"login": "dpressel",
"id": 247881,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
620,420,700 | 159 | How can we add more datasets to nlp library? | closed | https://github.com/huggingface/datasets/issues/159 | 2020-05-18T18:35:31 | 2020-05-18T18:37:08 | 2020-05-18T18:37:07 | {
"login": "Tahsin-Mayeesha",
"id": 17886829,
"type": "User"
} | [] | false | [] | |
620,396,658 | 158 | add Toronto Books Corpus | This PR adds the Toronto Books Corpus.
.
It on consider TMX and plain text files (Moses) defined in the table **Statistics and TMX/Moses Downloads** [here](http://opus.nlpl.eu/Books.php ) | closed | https://github.com/huggingface/datasets/pull/158 | 2020-05-18T17:54:45 | 2020-06-11T07:49:15 | 2020-05-19T07:34:56 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
620,356,542 | 157 | nlp.load_dataset() gives "TypeError: list_() takes exactly one argument (2 given)" | I'm trying to load datasets from nlp but there seems to have error saying
"TypeError: list_() takes exactly one argument (2 given)"
gist can be found here
https://gist.github.com/saahiluppal/c4b878f330b10b9ab9762bc0776c0a6a | closed | https://github.com/huggingface/datasets/issues/157 | 2020-05-18T16:46:38 | 2020-06-05T08:08:58 | 2020-06-05T08:08:58 | {
"login": "saahiluppal",
"id": 47444392,
"type": "User"
} | [] | false | [] |
620,263,687 | 156 | SyntaxError with WMT datasets | The following snippet produces a syntax error:
```
import nlp
dataset = nlp.load_dataset('wmt14')
print(dataset['train'][0])
```
```
Traceback (most recent call last):
File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in run_code
exec(code_obj, self.... | closed | https://github.com/huggingface/datasets/issues/156 | 2020-05-18T14:38:18 | 2020-07-23T16:41:55 | 2020-07-23T16:41:55 | {
"login": "tomhosking",
"id": 9419158,
"type": "User"
} | [] | false | [] |
620,067,946 | 155 | Include more links in README, fix typos | Include more links and fix typos in README | closed | https://github.com/huggingface/datasets/pull/155 | 2020-05-18T09:47:08 | 2020-05-28T08:31:57 | 2020-05-28T08:31:57 | {
"login": "bharatr21",
"id": 13381361,
"type": "User"
} | [] | true | [] |
620,059,066 | 154 | add Ubuntu Dialogs Corpus datasets | This PR adds the Ubuntu Dialog Corpus datasets version 2.0. | closed | https://github.com/huggingface/datasets/pull/154 | 2020-05-18T09:34:48 | 2020-05-18T10:12:28 | 2020-05-18T10:12:27 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
619,972,246 | 153 | Meta-datasets (GLUE/XTREME/...) – Special care to attributions and citations | Meta-datasets are interesting in terms of standardized benchmarks but they also have specific behaviors, in particular in terms of attribution and authorship. It's very important that each specific dataset inside a meta dataset is properly referenced and the citation/specific homepage/etc are very visible and accessibl... | open | https://github.com/huggingface/datasets/issues/153 | 2020-05-18T07:24:22 | 2020-05-18T21:18:16 | null | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [
{
"name": "generic discussion",
"color": "c5def5"
}
] | false | [] |
619,971,900 | 152 | Add GLUE config name check | Fixes #130 by adding a name check to the Glue class | closed | https://github.com/huggingface/datasets/pull/152 | 2020-05-18T07:23:43 | 2020-05-27T22:09:12 | 2020-05-27T22:09:12 | {
"login": "bharatr21",
"id": 13381361,
"type": "User"
} | [] | true | [] |
619,968,480 | 151 | Fix JSON tests. | closed | https://github.com/huggingface/datasets/pull/151 | 2020-05-18T07:17:38 | 2020-05-18T07:21:52 | 2020-05-18T07:21:51 | {
"login": "jplu",
"id": 959590,
"type": "User"
} | [] | true | [] | |
619,809,645 | 150 | Add WNUT 17 NER dataset | Hi,
this PR adds the WNUT 17 dataset to `nlp`.
> Emerging and Rare entity recognition
> This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarisati... | closed | https://github.com/huggingface/datasets/pull/150 | 2020-05-17T22:19:04 | 2020-05-26T20:37:59 | 2020-05-26T20:37:59 | {
"login": "stefan-it",
"id": 20651387,
"type": "User"
} | [] | true | [] |
619,735,739 | 149 | [Feature request] Add Ubuntu Dialogue Corpus dataset | https://github.com/rkadlec/ubuntu-ranking-dataset-creator or http://dataset.cs.mcgill.ca/ubuntu-corpus-1.0/ | closed | https://github.com/huggingface/datasets/issues/149 | 2020-05-17T15:42:39 | 2020-05-18T17:01:46 | 2020-05-18T17:01:46 | {
"login": "danth",
"id": 28959268,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
619,590,555 | 148 | _download_and_prepare() got an unexpected keyword argument 'verify_infos' | # Reproduce
In Colab,
```
%pip install -q nlp
%pip install -q apache_beam mwparserfromhell
dataset = nlp.load_dataset('wikipedia')
```
get
```
Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/w... | closed | https://github.com/huggingface/datasets/issues/148 | 2020-05-17T01:48:53 | 2020-05-18T07:38:33 | 2020-05-18T07:38:33 | {
"login": "richarddwang",
"id": 17963619,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
619,581,907 | 147 | Error with sklearn train_test_split | It would be nice if we could use sklearn `train_test_split` to quickly generate subsets from the dataset objects returned by `nlp.load_dataset`. At the moment the code:
```python
data = nlp.load_dataset('imdb', cache_dir=data_cache)
f_half, s_half = train_test_split(data['train'], test_size=0.5, random_state=seed)... | closed | https://github.com/huggingface/datasets/issues/147 | 2020-05-17T00:28:24 | 2020-06-18T16:23:23 | 2020-06-18T16:23:23 | {
"login": "ClonedOne",
"id": 6853743,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
619,564,653 | 146 | Add BERTScore to metrics | This PR adds [BERTScore](https://arxiv.org/abs/1904.09675) to metrics.
Here is an example of how to use it.
```sh
import nlp
bertscore = nlp.load_metric('metrics/bertscore') # or simply nlp.load_metric('bertscore') after this is added to huggingface's s3 bucket
predictions = ['example', 'fruit']
references = [[... | closed | https://github.com/huggingface/datasets/pull/146 | 2020-05-16T22:09:39 | 2020-05-17T22:22:10 | 2020-05-17T22:22:09 | {
"login": "felixgwu",
"id": 7753366,
"type": "User"
} | [] | true | [] |
619,480,549 | 145 | [AWS Tests] Follow-up PR from #144 | I forgot to add this line in PR #145 . | closed | https://github.com/huggingface/datasets/pull/145 | 2020-05-16T13:53:46 | 2020-05-16T13:54:23 | 2020-05-16T13:54:22 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
619,477,367 | 144 | [AWS tests] AWS test should not run for canonical datasets | AWS tests should in general not run for canonical datasets. Only local tests will run in this case. This way a PR is able to pass when adding a new dataset.
This PR changes to logic to the following:
1) All datasets that are present in `nlp/datasets` are tested only locally. This way when one adds a canonical da... | closed | https://github.com/huggingface/datasets/pull/144 | 2020-05-16T13:39:30 | 2020-05-16T13:44:34 | 2020-05-16T13:44:33 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
619,457,641 | 143 | ArrowTypeError in squad metrics | `squad_metric.compute` is giving following error
```
ArrowTypeError: Could not convert [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}] with type list: was not a dict, tuple, or recognized null value for conversion to struct type
```
This is how my predictions and references lo... | closed | https://github.com/huggingface/datasets/issues/143 | 2020-05-16T12:06:37 | 2020-05-22T13:38:52 | 2020-05-22T13:36:48 | {
"login": "patil-suraj",
"id": 27137566,
"type": "User"
} | [
{
"name": "metric bug",
"color": "25b21e"
}
] | false | [] |
619,450,068 | 142 | [WMT] Add all wmt | This PR adds all wmt datasets scripts. At the moment the script is **not** functional for the language pairs "cs-en", "ru-en", "hi-en" because apparently it takes up to a week to get the manual data for these datasets: see http://ufal.mff.cuni.cz/czeng.
The datasets are fully functional though for the "big" languag... | closed | https://github.com/huggingface/datasets/pull/142 | 2020-05-16T11:28:46 | 2020-05-17T12:18:21 | 2020-05-17T12:18:20 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
619,447,090 | 141 | [Clean up] remove bogus folder | @mariamabarham - I think you accidentally placed it there. | closed | https://github.com/huggingface/datasets/pull/141 | 2020-05-16T11:13:42 | 2020-05-16T13:24:27 | 2020-05-16T13:24:26 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
619,443,613 | 140 | [Tests] run local tests as default | This PR also enables local tests by default
I think it's safer for now to enable both local and aws tests for every commit. The problem currently is that when we do a PR to add a dataset, the dataset is not yet on AWS on therefore not tested on the PR itself. Thus the PR will always be green even if the datasets are... | closed | https://github.com/huggingface/datasets/pull/140 | 2020-05-16T10:56:06 | 2020-05-16T13:21:44 | 2020-05-16T13:21:43 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
619,327,409 | 139 | Add GermEval 2014 NER dataset | Hi,
this PR adds the GermEval 2014 NER dataset 😃
> The GermEval 2014 NER Shared Task builds on a new dataset with German Named Entity annotation [1] with the following properties:
> - The data was sampled from German Wikipedia and News Corpora as a collection of citations.
> - The dataset covers over 31,000... | closed | https://github.com/huggingface/datasets/pull/139 | 2020-05-15T23:42:09 | 2020-05-16T13:56:37 | 2020-05-16T13:56:22 | {
"login": "stefan-it",
"id": 20651387,
"type": "User"
} | [] | true | [] |
619,225,191 | 138 | Consider renaming to nld | Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This... | closed | https://github.com/huggingface/datasets/issues/138 | 2020-05-15T20:23:27 | 2022-09-16T05:18:22 | 2020-09-28T00:08:10 | {
"login": "honnibal",
"id": 8059750,
"type": "User"
} | [
{
"name": "generic discussion",
"color": "c5def5"
}
] | false | [] |
619,211,018 | 136 | Update README.md | small typo | closed | https://github.com/huggingface/datasets/pull/136 | 2020-05-15T20:01:07 | 2020-05-17T12:17:28 | 2020-05-17T12:17:28 | {
"login": "renaud",
"id": 75369,
"type": "User"
} | [] | true | [] |
619,206,708 | 135 | Fix print statement in READ.md | print statement was throwing generator object instead of printing names of available datasets/metrics | closed | https://github.com/huggingface/datasets/pull/135 | 2020-05-15T19:52:23 | 2020-05-17T12:14:06 | 2020-05-17T12:14:05 | {
"login": "codehunk628",
"id": 51091425,
"type": "User"
} | [] | true | [] |
619,112,641 | 134 | Update README.md | closed | https://github.com/huggingface/datasets/pull/134 | 2020-05-15T16:56:14 | 2020-05-28T08:21:49 | 2020-05-28T08:21:49 | {
"login": "pranv",
"id": 8753078,
"type": "User"
} | [] | true | [] | |
619,094,954 | 133 | [Question] Using/adding a local dataset | Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets.
It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not exactly sure how to go about doing this.
... | closed | https://github.com/huggingface/datasets/issues/133 | 2020-05-15T16:26:06 | 2020-07-23T16:44:09 | 2020-07-23T16:44:09 | {
"login": "zphang",
"id": 1668462,
"type": "User"
} | [] | false | [] |
619,077,851 | 132 | [Feature Request] Add the OpenWebText dataset | The OpenWebText dataset is an open clone of OpenAI's WebText dataset. It can be used to train ELECTRA as is specified in the [README](https://www.github.com/google-research/electra).
More information and the download link are available [here](https://skylion007.github.io/OpenWebTextCorpus/). | closed | https://github.com/huggingface/datasets/issues/132 | 2020-05-15T15:57:29 | 2020-10-07T14:22:48 | 2020-10-07T14:22:48 | {
"login": "LysandreJik",
"id": 30755778,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
619,073,731 | 131 | [Feature request] Add Toronto BookCorpus dataset | I know the copyright/distribution of this one is complex, but it would be great to have! That, combined with the existing `wikitext`, would provide a complete dataset for pretraining models like BERT. | closed | https://github.com/huggingface/datasets/issues/131 | 2020-05-15T15:50:44 | 2020-06-28T21:27:31 | 2020-06-28T21:27:31 | {
"login": "jarednielsen",
"id": 4564897,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
619,035,440 | 130 | Loading GLUE dataset loads CoLA by default | If I run:
```python
dataset = nlp.load_dataset('glue')
```
The resultant dataset seems to be CoLA be default, without throwing any error. This is in contrast to calling:
```python
metric = nlp.load_metric("glue")
```
which throws an error telling the user that they need to specify a task in GLUE. Should the... | closed | https://github.com/huggingface/datasets/issues/130 | 2020-05-15T14:55:50 | 2020-05-27T22:08:15 | 2020-05-27T22:08:15 | {
"login": "zphang",
"id": 1668462,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
618,997,725 | 129 | [Feature request] Add Google Natural Question dataset | Would be great to have https://github.com/google-research-datasets/natural-questions as an alternative to SQuAD. | closed | https://github.com/huggingface/datasets/issues/129 | 2020-05-15T14:14:20 | 2020-07-23T13:21:29 | 2020-07-23T13:21:29 | {
"login": "elyase",
"id": 1175888,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
618,951,117 | 128 | Some error inside nlp.load_dataset() | First of all, nice work!
I am going through [this overview notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb)
In simple step `dataset = nlp.load_dataset('squad', split='validation[:10%]')`
I get an error, which is connected with some inner code, I think:
`... | closed | https://github.com/huggingface/datasets/issues/128 | 2020-05-15T13:01:29 | 2020-05-15T13:10:40 | 2020-05-15T13:10:40 | {
"login": "polkaYK",
"id": 18486287,
"type": "User"
} | [] | false | [] |
618,909,042 | 127 | Update Overview.ipynb | update notebook | closed | https://github.com/huggingface/datasets/pull/127 | 2020-05-15T11:46:48 | 2020-05-15T11:47:27 | 2020-05-15T11:47:25 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
618,897,499 | 126 | remove webis | Remove webis from dataset folder.
Our first dataset script that only lives on AWS :-) https://s3.console.aws.amazon.com/s3/buckets/datasets.huggingface.co/nlp/datasets/webis/tl_dr/?region=us-east-1 @julien-c @jplu | closed | https://github.com/huggingface/datasets/pull/126 | 2020-05-15T11:25:20 | 2020-05-15T11:31:24 | 2020-05-15T11:30:26 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
618,869,048 | 125 | [Newsroom] add newsroom | I checked it with the data link of the mail you forwarded @thomwolf => works well! | closed | https://github.com/huggingface/datasets/pull/125 | 2020-05-15T10:34:34 | 2020-05-15T10:37:07 | 2020-05-15T10:37:02 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
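
The rows above follow the thirteen-column schema given in the table header. As a minimal sketch of how such an issues dump could be loaded and filtered with the `datasets` library — the dataset id below is a hypothetical placeholder, and the column names are taken directly from the header above:

```python
from datasets import load_dataset

# Hypothetical repository id for this issues dump; substitute the real one.
issues = load_dataset("my-org/hf-datasets-github-issues", split="train")

# Each record mirrors the columns above: id, number, title, body, state,
# html_url, created_at, updated_at, closed_at, user, labels,
# is_pull_request, comments.
pulls = issues.filter(lambda row: row["is_pull_request"])
print(f"{len(pulls)} pull requests out of {len(issues)} records")
print(pulls[0]["number"], pulls[0]["title"], pulls[0]["html_url"])
```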