id (int64) | number (int64) | title (string) | body (string, nullable) | state (string, 2 classes) | html_url (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s], nullable) | user (dict) | labels (list) | is_pull_request (bool) | comments (list)
|---|---|---|---|---|---|---|---|---|---|---|---|---|
721,073,812 | 730 | Possible caching bug | The following code with `test1.txt` containing just "🤗🤗🤗":
```python
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
```
produc... | closed | https://github.com/huggingface/datasets/issues/730 | 2020-10-14T02:02:34 | 2022-11-22T01:45:54 | 2020-10-29T09:36:01 | {
"login": "ArneBinder",
"id": 3375489,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
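The report above is consistent with a cache key that ignores loading kwargs such as `encoding`. A minimal, hypothetical sketch of why including them avoids the stale result (the `config_id` name and payload are illustrative, not the library's actual implementation):

```python
import hashlib
import json

def config_id(data_files, **loading_kwargs):
    # Hypothetical cache key: if kwargs such as `encoding` were omitted
    # from the payload, the latin_1 and utf-8 loads above would share one
    # cache entry and the second call would return the stale first result.
    payload = json.dumps({"data_files": data_files, **loading_kwargs}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()[:16]

latin = config_id(["test1.txt"], encoding="latin_1")
utf8 = config_id(["test1.txt"], encoding="utf-8")
assert latin != utf8  # distinct cache entries per encoding
```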
719,558,876 | 729 | Better error message when one forgets to call `add_batch` before `compute` | When using metrics, if for some reason a user forgets to call `add_batch` to a metric before `compute` (with no arguments), the error message is a bit cryptic and could probably be made clearer.
## Reproducer
```python
import datasets
import torch
from datasets import Metric
class GatherMetric(Metric):
... | closed | https://github.com/huggingface/datasets/issues/729 | 2020-10-12T17:59:22 | 2020-10-29T15:18:24 | 2020-10-29T15:18:24 | {
"login": "sgugger",
"id": 35901082,
"type": "User"
} | [] | false | [] |
719,555,780 | 728 | Passing `cache_dir` to a metric does not work | When passing `cache_dir` to a custom metric, the folder is concatenated to itself at some point and this results in a FileNotFoundError:
## Reproducer
```python
import datasets
import torch
from datasets import Metric
class GatherMetric(Metric):
def _info(self):
return datasets.MetricInfo(
... | closed | https://github.com/huggingface/datasets/issues/728 | 2020-10-12T17:55:14 | 2020-10-29T09:34:42 | 2020-10-29T09:34:42 | {
"login": "sgugger",
"id": 35901082,
"type": "User"
} | [] | false | [] |
719,386,366 | 727 | Parallel downloads progress bar flickers | When there are parallel downloads using the download manager, the tqdm progress bar flickers since all the progress bars are on the same line.
To fix that we could simply specify `position=i`, for i = 0 to n where n is the number of files to download, when instantiating each tqdm progress bar.
Another way would be to have one "... | open | https://github.com/huggingface/datasets/issues/727 | 2020-10-12T13:36:05 | 2020-10-12T13:36:05 | null | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | false | [] |
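The `position=i` idea can be sketched with a thread pool, assuming `tqdm` is installed; this is illustrative, not the download manager's actual code:

```python
from concurrent.futures import ThreadPoolExecutor
from tqdm import tqdm

def download(i, total_steps=5):
    # `position=i` pins each bar to its own terminal line, so parallel
    # workers stop overwriting the same line (the reported flicker).
    bar = tqdm(total=total_steps, position=i, desc=f"file {i}", leave=False)
    for _ in range(total_steps):
        bar.update(1)
    bar.close()
    return i

with ThreadPoolExecutor(max_workers=3) as ex:
    results = list(ex.map(download, range(3)))
```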
719,313,754 | 726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | Hi,
I have encountered this problem while loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/op... | closed | https://github.com/huggingface/datasets/issues/726 | 2020-10-12T11:45:10 | 2022-02-17T17:53:54 | 2022-02-15T10:38:57 | {
"login": "SparkJiao",
"id": 16469472,
"type": "User"
} | [] | false | [] |
718,985,641 | 725 | pretty print dataset objects | Currently, if I do:
```
from datasets import load_dataset
load_dataset("wikihow", 'all', data_dir="/hf/pegasus-datasets/wikihow/")
```
I get:
```
DatasetDict({'train': Dataset(features: {'text': Value(dtype='string', id=None),
'headline': Value(dtype='string', id=None), 'title': Value(dtype='string',
id=None... | closed | https://github.com/huggingface/datasets/pull/725 | 2020-10-12T02:03:46 | 2020-10-23T16:24:35 | 2020-10-23T09:00:46 | {
"login": "stas00",
"id": 10676103,
"type": "User"
} | [] | true | [] |
718,947,700 | 724 | need to redirect /nlp to /datasets and remove outdated info | It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
also for some reason the new information is slightly borked. If you look at the old one it was nicely formatted and had t... | closed | https://github.com/huggingface/datasets/issues/724 | 2020-10-11T23:12:12 | 2020-10-14T17:00:12 | 2020-10-14T17:00:12 | {
"login": "stas00",
"id": 10676103,
"type": "User"
} | [] | false | [] |
718,926,723 | 723 | Adding pseudo-labels to datasets | I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is ... | closed | https://github.com/huggingface/datasets/issues/723 | 2020-10-11T21:05:45 | 2021-08-03T05:11:51 | 2021-08-03T05:11:51 | {
"login": "sshleifer",
"id": 6045025,
"type": "User"
} | [] | false | [] |
718,689,117 | 722 | datasets(RWTH-PHOENIX-Weather 2014 T): add initial loading script | This is the first sign language dataset in this repo as far as I know.
Following an old issue I opened https://github.com/huggingface/datasets/issues/302.
I added the dataset's official README file, but I see it's not very standard, so it can be removed.
| closed | https://github.com/huggingface/datasets/pull/722 | 2020-10-10T19:44:08 | 2022-09-30T14:53:37 | 2022-09-30T14:53:37 | {
"login": "AmitMY",
"id": 5757359,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
718,647,147 | 721 | feat(dl_manager): add support for ftp downloads | I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | closed | https://github.com/huggingface/datasets/issues/721 | 2020-10-10T15:50:20 | 2022-02-15T10:44:44 | 2022-02-15T10:44:43 | {
"login": "AmitMY",
"id": 5757359,
"type": "User"
} | [] | false | [] |
716,581,266 | 720 | OSError: Cannot find data file when not using the dummy dataset in RAG | ## Environment info
transformers version: 3.3.1
Platform: Linux-4.19
Python version: 3.7.7
PyTorch version (GPU?): 1.6.0
Tensorflow version (GPU?): No
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No
## To reproduce
Steps to reproduce the behaviour... | closed | https://github.com/huggingface/datasets/issues/720 | 2020-10-07T14:27:13 | 2020-12-23T14:04:31 | 2020-12-23T14:04:31 | {
"login": "josemlopez",
"id": 4112135,
"type": "User"
} | [] | false | [] |
716,492,263 | 719 | Fix train_test_split output format | There was an issue in the `transmit_format` wrapper that returned bad formats when using train_test_split.
This was due to `column_names` being handled as a List[str] instead of Dict[str, List[str]] when the dataset transform (train_test_split) returns a DatasetDict (one set of column names per split).
This should ... | closed | https://github.com/huggingface/datasets/pull/719 | 2020-10-07T12:39:01 | 2020-10-07T13:38:08 | 2020-10-07T13:38:06 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
715,694,709 | 718 | Don't use tqdm 4.50.0 | tqdm 4.50.0 introduced permission errors on windows
see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/235/workflows/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369/jobs/1111) for the error details.
For now I just added `<4.50.0` in the setup.py
Hopefully we can find what's wrong with this version soon | closed | https://github.com/huggingface/datasets/pull/718 | 2020-10-06T13:45:53 | 2020-10-06T13:49:24 | 2020-10-06T13:49:22 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
714,959,268 | 717 | Fixes #712 Error in the Overview.ipynb notebook | Fixes #712 Error in the Overview.ipynb notebook by adding `with_details=True` parameter to `list_datasets` function in Cell 3 of **overview** notebook | closed | https://github.com/huggingface/datasets/pull/717 | 2020-10-05T15:50:41 | 2020-10-06T06:31:43 | 2020-10-05T16:25:41 | {
"login": "subhrm",
"id": 850012,
"type": "User"
} | [] | true | [] |
714,952,888 | 716 | Fixes #712 Attribute error in cell 3 of the overview notebook | Fixes the Attribute error in cell 3 of the overview notebook | closed | https://github.com/huggingface/datasets/pull/716 | 2020-10-05T15:42:09 | 2020-10-05T15:46:38 | 2020-10-05T15:46:32 | {
"login": "subhrm",
"id": 850012,
"type": "User"
} | [] | true | [] |
714,690,192 | 715 | Use python read for text dataset | As mentioned in #622 the pandas reader used for text dataset doesn't work properly when there are \r characters in the text file.
Instead I switched to pure python using `open` and `read`.
From my benchmark on a 100MB text file, it's the same speed as the previous pandas reader. | closed | https://github.com/huggingface/datasets/pull/715 | 2020-10-05T09:47:55 | 2020-10-05T13:13:18 | 2020-10-05T13:13:17 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
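The switch described above can be sketched in a few lines; `read_text_rows` is an illustrative helper, not the PR's exact code. Reading in text mode uses Python's universal-newline handling, so bare `\r`, `\r\n`, and `\n` line endings all behave the same:

```python
import os
import tempfile

def read_text_rows(path, encoding="utf-8"):
    # Plain `open`/`read` instead of the pandas reader: universal
    # newlines normalize \r and \r\n on read, then splitlines() yields
    # one row per line without a trailing empty entry.
    with open(path, encoding=encoding) as f:
        return f.read().splitlines()

path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w", newline="") as f:
    f.write("first\rsecond\r\nthird\n")  # mixed line endings
print(read_text_rows(path))  # ['first', 'second', 'third']
```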
714,487,881 | 714 | Add the official dependabot implementation | This will keep dependencies up to date. This will require a pr label `dependencies` being created in order to function correctly. | closed | https://github.com/huggingface/datasets/pull/714 | 2020-10-05T03:49:45 | 2020-10-12T11:49:21 | 2020-10-12T11:49:21 | {
"login": "ALazyMeme",
"id": 12804673,
"type": "User"
} | [] | true | [] |
714,475,732 | 713 | Fix reading text files with carriage return symbols | The new pandas-based text reader isn't able to work properly with files that contain carriage return symbols (`\r`).
It fails with the following error message:
```
...
File "pandas/_libs/parsers.pyx", line 847, in pandas._libs.parsers.TextReader.read
File "pandas/_libs/parsers.pyx", line 874, in pandas._l... | closed | https://github.com/huggingface/datasets/pull/713 | 2020-10-05T03:07:03 | 2020-10-09T05:58:25 | 2020-10-05T13:49:29 | {
"login": "mozharovsky",
"id": 6762769,
"type": "User"
} | [] | true | [] |
714,242,316 | 712 | Error in the notebooks/Overview.ipynb notebook | Hi,
I got the following error in **cell number 3** while exploring the **Overview.ipynb** notebook in google colab. I used the [link ](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) provided in the main README file to open it in colab.
```python
# You can acc... | closed | https://github.com/huggingface/datasets/issues/712 | 2020-10-04T05:58:31 | 2020-10-05T16:25:40 | 2020-10-05T16:25:40 | {
"login": "subhrm",
"id": 850012,
"type": "User"
} | [] | false | [] |
714,236,408 | 711 | New Update bertscore.py | closed | https://github.com/huggingface/datasets/pull/711 | 2020-10-04T05:13:09 | 2020-10-05T16:26:51 | 2020-10-05T16:26:51 | {
"login": "DayasagarRSalian",
"id": 51692618,
"type": "User"
} | [] | true | [] | |
714,186,999 | 710 | fix README typos/ consistency | closed | https://github.com/huggingface/datasets/pull/710 | 2020-10-03T22:20:56 | 2020-10-17T09:52:45 | 2020-10-17T09:52:45 | {
"login": "discdiver",
"id": 7703961,
"type": "User"
} | [] | true | [] | |
714,067,902 | 709 | How to use similarity settings other then "BM25" in Elasticsearch index ? | **QUESTION : How should we use other similarity algorithms supported by Elasticsearch other than "BM25" ?**
**ES Reference**
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html
**HF doc reference:**
https://huggingface.co/docs/datasets/faiss_and_ea.html
**context :**
=... | closed | https://github.com/huggingface/datasets/issues/709 | 2020-10-03T11:18:49 | 2022-10-04T17:19:37 | 2022-10-04T17:19:37 | {
"login": "nsankar",
"id": 431890,
"type": "User"
} | [] | false | [] |
714,020,953 | 708 | Datasets performance slow? - 6.4x slower than in memory dataset | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower.... | closed | https://github.com/huggingface/datasets/issues/708 | 2020-10-03T06:44:07 | 2021-02-12T14:13:28 | 2021-02-12T14:13:28 | {
"login": "eugeneware",
"id": 38154,
"type": "User"
} | [] | false | [] |
713,954,666 | 707 | Requirements should specify pyarrow<1 | I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having installed PyArrow 1.0.1 but there's not pinni... | closed | https://github.com/huggingface/datasets/issues/707 | 2020-10-02T23:39:39 | 2020-12-04T08:22:39 | 2020-10-04T20:50:28 | {
"login": "mathcass",
"id": 918541,
"type": "User"
} | [] | false | [] |
713,721,959 | 706 | Fix config creation for data files with NamedSplit | During config creation, we need to iterate through the data files of all the splits to compute a hash.
To make sure the hash is unique given a certain combination of files/splits, we sort the split names.
However the `NamedSplit` objects can't be passed to `sorted` and currently it raises an error: we need to sort th... | closed | https://github.com/huggingface/datasets/pull/706 | 2020-10-02T15:46:49 | 2020-10-05T08:15:00 | 2020-10-05T08:14:59 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
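Sorting on the string form of the split sidesteps the missing ordering. A toy stand-in for the real `NamedSplit` class (only mimicking that it has a string form but no `__lt__`):

```python
class Split:
    # Hypothetical stand-in for datasets.NamedSplit: it has a string
    # form but defines no ordering, so `sorted(splits)` raises TypeError.
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return self.name

splits = [Split("train"), Split("validation"), Split("test")]
ordered = sorted(splits, key=str)  # sort on the string form instead
print([str(s) for s in ordered])  # ['test', 'train', 'validation']
```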
713,709,100 | 705 | TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit' | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1 (installed from master)
- `datasets` version: 1.0.2 (installed as a dependency from transformers)
... | closed | https://github.com/huggingface/datasets/issues/705 | 2020-10-02T15:27:55 | 2020-10-05T08:14:59 | 2020-10-05T08:14:59 | {
"login": "pvcastro",
"id": 12713359,
"type": "User"
} | [] | false | [] |
713,572,556 | 704 | Fix remote tests for new datasets | When adding a new dataset, the remote tests fail because they try to get the new dataset from the master branch (i.e., where the dataset doesn't exist yet)
To fix that I reverted to the use of the HF API that fetch the available datasets on S3 that is synced with the master branch | closed | https://github.com/huggingface/datasets/pull/704 | 2020-10-02T12:08:04 | 2020-10-02T12:12:02 | 2020-10-02T12:12:01 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
713,559,718 | 703 | Add hotpot QA | Added the [HotpotQA](https://github.com/hotpotqa/hotpot) multi-hop question answering dataset.
| closed | https://github.com/huggingface/datasets/pull/703 | 2020-10-02T11:44:28 | 2020-10-02T12:54:41 | 2020-10-02T12:54:41 | {
"login": "ghomasHudson",
"id": 13795113,
"type": "User"
} | [] | true | [] |
713,499,628 | 702 | Complete rouge kwargs | In #701 we noticed that some kwargs were missing for rouge | closed | https://github.com/huggingface/datasets/pull/702 | 2020-10-02T09:59:01 | 2020-10-02T10:11:04 | 2020-10-02T10:11:03 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
713,485,757 | 701 | Add rouge 2 and rouge Lsum to rouge metric outputs | Continuation of #700
Rouge 2 and Rouge Lsum were missing in Rouge's outputs.
Rouge Lsum is also useful to evaluate Rouge L for sentences with `\n`
Fix #617 | closed | https://github.com/huggingface/datasets/pull/701 | 2020-10-02T09:35:46 | 2020-10-02T09:55:14 | 2020-10-02T09:52:18 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
713,450,295 | 700 | Add rouge-2 in rouge_types for metric calculation | The description of the ROUGE metric says,
```
_KWARGS_DESCRIPTION = """
Calculates average rouge scores for a list of hypotheses and references
Args:
predictions: list of predictions to score. Each predictions
should be a string with tokens separated by spaces.
references: list of reference for ... | closed | https://github.com/huggingface/datasets/pull/700 | 2020-10-02T08:36:45 | 2020-10-02T11:08:49 | 2020-10-02T09:59:05 | {
"login": "Shashi456",
"id": 18056781,
"type": "User"
} | [] | true | [] |
713,395,642 | 699 | XNLI dataset is not loading | `dataset = datasets.load_dataset(path='xnli')`
showing below error
```
/opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
36 if len(bad_urls) > 0:
37 error_msg = "Checksums didn't match" + for_verifi... | closed | https://github.com/huggingface/datasets/issues/699 | 2020-10-02T06:53:16 | 2020-10-03T17:45:52 | 2020-10-03T17:43:37 | {
"login": "imadarsh1001",
"id": 14936525,
"type": "User"
} | [] | false | [] |
712,979,029 | 697 | Update README.md | Hey I was just telling my subscribers to check out your repositories
Thank you | closed | https://github.com/huggingface/datasets/pull/697 | 2020-10-01T16:02:42 | 2020-10-01T16:12:00 | 2020-10-01T16:12:00 | {
"login": "bishug",
"id": 71011306,
"type": "User"
} | [] | true | [] |
712,942,977 | 696 | Elasticsearch index docs | I added the docs for ES indexes.
I also added a `load_elasticsearch_index` method to load an index that has already been built.
I checked the tests for the ES index and we have tests that mock ElasticSearch.
I think this is good for now but at some point it would be cool to have an end-to-end test with a real ES... | closed | https://github.com/huggingface/datasets/pull/696 | 2020-10-01T15:18:58 | 2020-10-02T07:48:19 | 2020-10-02T07:48:18 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
712,843,949 | 695 | Update XNLI download link | The old link isn't working anymore. I updated it with the new official link.
Fix #690 | closed | https://github.com/huggingface/datasets/pull/695 | 2020-10-01T13:27:22 | 2020-10-01T14:01:15 | 2020-10-01T14:01:14 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
712,827,751 | 694 | Use GitHub instead of aws in remote dataset tests | Recently we switched from aws s3 to github to download dataset scripts.
However in the tests, the dummy data were still downloaded from s3.
So I changed that to download them from github instead, in the MockDownloadManager.
Moreover I noticed that `anli`'s dummy data were quite heavy (18MB compressed, i.e. the ent... | closed | https://github.com/huggingface/datasets/pull/694 | 2020-10-01T13:07:50 | 2020-10-02T07:47:28 | 2020-10-02T07:47:27 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
712,822,200 | 693 | Rachel ker add dataset/mlsum | . | closed | https://github.com/huggingface/datasets/pull/693 | 2020-10-01T13:01:10 | 2023-09-24T09:48:23 | 2020-10-01T17:01:13 | {
"login": "pdhg",
"id": 32742136,
"type": "User"
} | [] | true | [] |
712,818,968 | 692 | Update README.md | closed | https://github.com/huggingface/datasets/pull/692 | 2020-10-01T12:57:22 | 2020-10-02T11:01:59 | 2020-10-02T11:01:59 | {
"login": "mayank1897",
"id": 62796466,
"type": "User"
} | [] | true | [] | |
712,389,499 | 691 | Add UI filter to filter datasets based on task | This is great work, so huge shoutout to contributors and huggingface.
The [/nlp/viewer](https://huggingface.co/nlp/viewer/) is great and the [/datasets](https://huggingface.co/datasets) page is great. I was wondering if in both or either places we can have a filter that selects if a dataset is good for the following... | closed | https://github.com/huggingface/datasets/issues/691 | 2020-10-01T00:56:18 | 2022-02-15T10:46:50 | 2022-02-15T10:46:50 | {
"login": "praateekmahajan",
"id": 7589415,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
712,150,321 | 690 | XNLI dataset: NonMatchingChecksumError | Hi,
I tried to download "xnli" dataset in colab using
`xnli = load_dataset(path='xnli')`
but got 'NonMatchingChecksumError' error
`NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-27-a87bedc82eeb> in <module>()
----> 1 xnli = load_dataset(path='xnli')
3 frames
/usr... | closed | https://github.com/huggingface/datasets/issues/690 | 2020-09-30T17:50:03 | 2020-10-01T17:15:08 | 2020-10-01T14:01:14 | {
"login": "xiey1",
"id": 13307358,
"type": "User"
} | [] | false | [] |
712,095,262 | 689 | Switch to pandas reader for text dataset | Following the discussion in #622 , it appears that there's no appropriate ways to use the payrrow csv reader to read text files because of the separator.
In this PR I switched to pandas to read the file.
Moreover pandas allows to read the file by chunk, which means that you can build the arrow dataset from a text... | closed | https://github.com/huggingface/datasets/pull/689 | 2020-09-30T16:28:12 | 2020-09-30T16:45:32 | 2020-09-30T16:45:31 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
711,804,828 | 688 | Disable tokenizers parallelism in multiprocessed map | It was reported in #620 that using multiprocessing with a tokenizers shows this message:
```
The current process just got forked. Disabling parallelism to avoid deadlocks...
To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false)
```
This message is shown when TOKENIZERS_PARALLELISM is... | closed | https://github.com/huggingface/datasets/pull/688 | 2020-09-30T09:53:34 | 2020-10-01T08:45:46 | 2020-10-01T08:45:45 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
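The idea from the PR description can be sketched as a one-liner; this is an illustration of the approach, not the library's exact code:

```python
import os

def disable_tokenizers_parallelism():
    # Respect an explicit user choice; otherwise turn the Rust-side
    # thread pool off before worker processes fork, which is what
    # silences the warning quoted above.
    os.environ.setdefault("TOKENIZERS_PARALLELISM", "false")

disable_tokenizers_parallelism()
print(os.environ["TOKENIZERS_PARALLELISM"])
```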
711,664,810 | 687 | `ArrowInvalid` occurs while running `Dataset.map()` function | It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error.
Code:
```python
# train_ds = Dataset(features: {
# 'title': Value(dtype='string', id=None),
# 'score': Value(dtype='float64', id=Non... | closed | https://github.com/huggingface/datasets/issues/687 | 2020-09-30T06:16:50 | 2020-09-30T09:53:03 | 2020-09-30T09:53:03 | {
"login": "peinan",
"id": 5601012,
"type": "User"
} | [] | false | [] |
711,385,739 | 686 | Dataset browser url is still https://huggingface.co/nlp/viewer/ | Might be worth updating to https://huggingface.co/datasets/viewer/ | closed | https://github.com/huggingface/datasets/issues/686 | 2020-09-29T19:21:52 | 2021-01-08T18:29:26 | 2021-01-08T18:29:26 | {
"login": "jarednielsen",
"id": 4564897,
"type": "User"
} | [] | false | [] |
711,182,185 | 685 | Add features parameter to CSV | Add support for the `features` parameter when loading a csv dataset:
```python
from datasets import load_dataset, Features
features = Features({...})
csv_dataset = load_dataset("csv", data_files=["path/to/my/file.csv"], features=features)
```
I added tests to make sure that it is also compatible with the ca... | closed | https://github.com/huggingface/datasets/pull/685 | 2020-09-29T14:43:36 | 2020-09-30T08:39:56 | 2020-09-30T08:39:54 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
711,080,947 | 684 | Fix column order issue in cast | Previously, the order of the columns in the features passes to `cast_` mattered.
However even though features passed to `cast_` had the same order as the dataset features, it could fail because the schema that was built was always in alphabetical order.
This issue was reported by @lewtun in #623
To fix that I fi... | closed | https://github.com/huggingface/datasets/pull/684 | 2020-09-29T12:49:13 | 2020-09-29T15:56:46 | 2020-09-29T15:56:45 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
710,942,704 | 683 | Fix wrong delimiter in text dataset | The delimiter is set to the bell character as it is used nowhere is text files usually.
However in the text dataset the delimiter was set to `\b` which is backspace in python, while the bell character is `\a`.
I replaced \b with \a.
Hopefully it fixes issues mentioned by some users in #622 | closed | https://github.com/huggingface/datasets/pull/683 | 2020-09-29T09:43:24 | 2021-05-05T18:24:31 | 2020-09-29T09:44:06 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
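The two escapes are easy to confuse; checking their code points confirms the fix:

```python
# "\a" is BEL (the bell character, 0x07); "\b" is backspace (0x08).
# The text loader wants the bell character as a delimiter precisely
# because it essentially never appears in real text files.
print(hex(ord("\a")))  # 0x7
print(hex(ord("\b")))  # 0x8
assert ord("\a") == 0x07
assert ord("\b") == 0x08
```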
710,325,399 | 682 | Update navbar chapter titles color | Consistency with the color change that was done in transformers at https://github.com/huggingface/transformers/pull/7423
It makes the background-color of the chapter titles in the docs navbar darker, to differentiate them from the inner sections.
see changes [here](https://691-250213286-gh.circle-artifacts.com/0/do... | closed | https://github.com/huggingface/datasets/pull/682 | 2020-09-28T14:35:17 | 2020-09-28T17:30:13 | 2020-09-28T17:30:12 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
710,075,721 | 681 | Adding missing @property (+2 small flake8 fixes). | Fixes #678 | closed | https://github.com/huggingface/datasets/pull/681 | 2020-09-28T08:53:53 | 2020-09-28T10:26:13 | 2020-09-28T10:26:09 | {
"login": "Narsil",
"id": 204321,
"type": "User"
} | [] | true | [] |
710,066,138 | 680 | Fix bug related to boolean in GAP dataset. | ### Why I did
The value in `row["A-coref"]` and `row["B-coref"]` is `'TRUE'` or `'FALSE'`.
This type is `string`, then `bool('FALSE')` is equal to `True` in Python.
So, both rows are transformed into `True` now.
So, I modified this problem.
### What I did
I modified `bool(row["A-coref"])` and `bool(row["B-cor... | closed | https://github.com/huggingface/datasets/pull/680 | 2020-09-28T08:39:39 | 2020-09-29T15:54:47 | 2020-09-29T15:54:47 | {
"login": "otakumesi",
"id": 14996977,
"type": "User"
} | [] | true | [] |
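The truthiness trap and its fix are easy to demonstrate; `parse_coref` is an illustrative helper, not the PR's exact code:

```python
def parse_coref(value: str) -> bool:
    # bool("FALSE") is True because every non-empty string is truthy;
    # compare against the literal instead.
    return value.strip().upper() == "TRUE"

assert bool("FALSE") is True           # the bug described above
assert parse_coref("FALSE") is False   # the fix
assert parse_coref("TRUE") is True
```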
710,065,838 | 679 | Fix negative ids when slicing with an array | ```python
from datasets import Dataset
d = ds.Dataset.from_dict({"a": range(10)})
print(d[[0, -1]])
# OverflowError
```
raises an error because of the negative id.
This PR fixes that.
Fix #668 | closed | https://github.com/huggingface/datasets/pull/679 | 2020-09-28T08:39:08 | 2020-09-28T14:42:20 | 2020-09-28T14:42:19 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
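One plausible shape of the fix (not necessarily the code merged in this PR): map Python-style negative positions to their positive equivalents before handing them to the arrow-backed lookup, which only accepts unsigned offsets:

```python
def normalize_indices(indices, length):
    # Convert negative indices the way Python lists do, and raise a
    # plain IndexError for out-of-range values instead of letting the
    # unsigned conversion overflow.
    out = []
    for i in indices:
        j = i + length if i < 0 else i
        if not 0 <= j < length:
            raise IndexError(f"index {i} out of range for length {length}")
        out.append(j)
    return out

print(normalize_indices([0, -1], 10))  # [0, 9]
```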
710,060,497 | 678 | The download instructions for c4 datasets are not contained in the error message | The manual download instructions are not clear
```The dataset c4 with config en requires manual data.
Please follow the manual download instructions: <bound method C4.manual_download_instructions of <datasets_modules.datasets.c4.830b0c218bd41fed439812c8dd19dbd4767d2a3faa385eb695cf8666c982b1b3.c4.C4 object at 0x7ff... | closed | https://github.com/huggingface/datasets/issues/678 | 2020-09-28T08:30:54 | 2020-09-28T10:26:09 | 2020-09-28T10:26:09 | {
"login": "Narsil",
"id": 204321,
"type": "User"
} | [] | false | [] |
710,055,239 | 677 | Move cache dir root creation in builder's init | We use lock files in the builder initialization but sometimes the cache directory where they're supposed to be was not created. To fix that I moved the builder's cache dir root creation in the builder's init.
Fix #671 | closed | https://github.com/huggingface/datasets/pull/677 | 2020-09-28T08:22:46 | 2020-09-28T14:42:43 | 2020-09-28T14:42:42 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
710,014,319 | 676 | train_test_split returns empty dataset item | I try to split my dataset by `train_test_split`, but after that the item in `train` and `test` `Dataset` is empty.
The codes:
```
yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp')
print(yelp_data[0])
yelp_data = yelp_data.train_test_split(test_size=0.1)
print(yelp_data)
pri... | closed | https://github.com/huggingface/datasets/issues/676 | 2020-09-28T07:19:33 | 2020-10-07T13:46:33 | 2020-10-07T13:38:06 | {
"login": "mojave-pku",
"id": 26648528,
"type": "User"
} | [] | false | [] |
709,818,725 | 675 | Add custom dataset to NLP? | Is it possible to add a custom dataset such as a .csv to the NLP library?
Thanks. | closed | https://github.com/huggingface/datasets/issues/675 | 2020-09-27T21:22:50 | 2020-10-20T09:08:49 | 2020-10-20T09:08:49 | {
"login": "timpal0l",
"id": 6556710,
"type": "User"
} | [] | false | [] |
709,661,006 | 674 | load_dataset() won't download in Windows | I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've wa... | closed | https://github.com/huggingface/datasets/issues/674 | 2020-09-27T03:56:25 | 2020-10-05T08:28:18 | 2020-10-05T08:28:18 | {
"login": "ThisDavehead",
"id": 34422661,
"type": "User"
} | [] | false | [] |
709,603,989 | 673 | blog_authorship_corpus crashed | This is just to report that When I pick blog_authorship_corpus in
https://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus
I get this:

| closed | https://github.com/huggingface/datasets/issues/673 | 2020-09-26T20:15:28 | 2022-02-15T10:47:58 | 2022-02-15T10:47:58 | {
"login": "Moshiii",
"id": 7553188,
"type": "User"
} | [
{
"name": "nlp-viewer",
"color": "94203D"
}
] | false | [] |
709,575,527 | 672 | Questions about XSUM | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions on that.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | closed | https://github.com/huggingface/datasets/issues/672 | 2020-09-26T17:16:24 | 2022-10-04T17:30:17 | 2022-10-04T17:30:17 | {
"login": "danyaljj",
"id": 2441454,
"type": "User"
} | [] | false | [] |
709,093,151 | 671 | [BUG] No such file or directory | This happens when both
1. Huggingface datasets cache dir does not exist
2. Try to load a local dataset script
builder.py throws an error when trying to create a filelock in a directory (cache/datasets) that does not exist
https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L177
Tested o... | closed | https://github.com/huggingface/datasets/issues/671 | 2020-09-25T16:38:54 | 2020-09-28T14:42:42 | 2020-09-28T14:42:42 | {
"login": "jbragg",
"id": 2238344,
"type": "User"
} | [] | false | [] |
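The fix shipped for this (see the linked builder.py line) amounts to creating the cache root before any file lock is taken inside it; a minimal sketch, with `ensure_cache_root` as an illustrative name:

```python
import os
import tempfile

def ensure_cache_root(cache_dir):
    # Create the cache root (and any missing parents) before taking a
    # file lock inside it; makedirs with exist_ok=True is idempotent,
    # so calling this from the builder's __init__ is always safe.
    os.makedirs(cache_dir, exist_ok=True)
    return cache_dir

root = ensure_cache_root(os.path.join(tempfile.mkdtemp(), "datasets"))
assert os.path.isdir(root)
```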
709,061,231 | 670 | Fix SQuAD metric kwargs description | The `answer_start` field was missing in the kwargs docstring.
This should fix #657
FYI another fix was proposed by @tshrjn in #658, which suggests removing this field.
However IMO `answer_start` is useful to match the squad dataset format for consistency, even though it is not used in the metric computation. I th... | closed | https://github.com/huggingface/datasets/pull/670 | 2020-09-25T16:08:57 | 2020-09-29T15:57:39 | 2020-09-29T15:57:38 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
708,857,595 | 669 | How to skip a example when running dataset.map | in processing func, I process examples and detect some invalid examples, which I did not want it to be added into train dataset. However I did not find how to skip this recognized invalid example when doing dataset.map. | closed | https://github.com/huggingface/datasets/issues/669 | 2020-09-25T11:17:53 | 2022-06-17T21:45:03 | 2020-10-05T16:28:13 | {
"login": "xixiaoyao",
"id": 24541791,
"type": "User"
} | [] | false | [] |
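`map` itself has no way to drop an example, but `Dataset.filter` run before (or after) the mapping step achieves exactly this: with a real `Dataset` it would be `dataset.filter(is_valid).map(process)`. A stand-in with plain lists showing the pattern:

```python
records = [{"text": "ok"}, {"text": ""}, {"text": "fine"}]

def is_valid(example):
    # Whatever check detects the "invalid" examples mentioned above;
    # non-empty text is just a placeholder criterion.
    return len(example["text"]) > 0

kept = [r for r in records if is_valid(r)]
print(len(kept))  # 2
```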
708,310,956 | 668 | OverflowError when slicing with an array containing negative ids | ```python
from datasets import Dataset
d = ds.Dataset.from_dict({"a": range(10)})
print(d[0])
# {'a': 0}
print(d[-1])
# {'a': 9}
print(d[[0, -1]])
# OverflowError
```
results in
```
---------------------------------------------------------------------------
OverflowError ... | closed | https://github.com/huggingface/datasets/issues/668 | 2020-09-24T16:27:14 | 2020-09-28T14:42:19 | 2020-09-28T14:42:19 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | false | [] |
708,258,392 | 667 | Loss not decrease with Datasets and Transformers | HI,
The following script is used to fine-tune a BertForSequenceClassification model on SST2.
The script is adapted from [this colab](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) that presents an example of fine-tuning BertForQuestionAnswering using squad data... | closed | https://github.com/huggingface/datasets/issues/667 | 2020-09-24T15:14:43 | 2021-01-01T20:01:25 | 2021-01-01T20:01:25 | {
"login": "wangcongcong123",
"id": 23032865,
"type": "User"
} | [] | false | [] |
707,608,578 | 666 | Does both 'bookcorpus' and 'wikipedia' belong to the same datasets which Google used for pretraining BERT? | closed | https://github.com/huggingface/datasets/issues/666 | 2020-09-23T19:02:25 | 2020-10-27T15:19:25 | 2020-10-27T15:19:25 | {
"login": "wahab4114",
"id": 31090427,
"type": "User"
} | [] | false | [] | |
707,037,738 | 665 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects | I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [example['question'], example['context']]
encodings = tokenizer.encode... | closed | https://github.com/huggingface/datasets/issues/665 | 2020-09-23T04:28:14 | 2020-10-08T09:32:16 | 2020-10-08T09:32:16 | {
"login": "xixiaoyao",
"id": 24541791,
"type": "User"
} | [] | false | [] |
707,017,791 | 664 | load_dataset from local squad.py, raise error: TypeError: 'NoneType' object is not callable |
version: 1.0.2
```
train_dataset = datasets.load_dataset('squad')
```
The above code can works. However, when I download the squad.py from your server, and saved as `my_squad.py` to local. I run followings raise errors.
```
train_dataset = datasets.load_dataset('./my_squad.py') ... | closed | https://github.com/huggingface/datasets/issues/664 | 2020-09-23T03:53:36 | 2023-04-17T09:31:20 | 2020-10-20T09:06:13 | {
"login": "xixiaoyao",
"id": 24541791,
"type": "User"
} | [] | false | [] |
706,732,636 | 663 | Created dataset card snli.md | First draft of a dataset card using the SNLI corpus as an example.
This is mostly based on the [Google Doc draft](https://docs.google.com/document/d/1dKPGP-dA2W0QoTRGfqQ5eBp0CeSsTy7g2yM8RseHtos/edit), but I added a few sections and moved some things around.
- I moved **Who Was Involved** to follow **Language**, ... | closed | https://github.com/huggingface/datasets/pull/663 | 2020-09-22T22:29:37 | 2020-10-13T17:05:20 | 2020-10-12T20:26:52 | {
"login": "mcmillanmajora",
"id": 26722925,
"type": "User"
} | [
{
"name": "Dataset discussion",
"color": "72f99f"
}
] | true | [] |
706,689,866 | 662 | Created dataset card snli.md | First draft of a dataset card using the SNLI corpus as an example | closed | https://github.com/huggingface/datasets/pull/662 | 2020-09-22T21:00:17 | 2023-09-24T09:50:16 | 2020-09-22T21:26:21 | {
"login": "mcmillanmajora",
"id": 26722925,
"type": "User"
} | [
{
"name": "Dataset discussion",
"color": "72f99f"
}
] | true | [] |
706,465,936 | 661 | Replace pa.OSFile by open | It should fix #643 | closed | https://github.com/huggingface/datasets/pull/661 | 2020-09-22T15:05:59 | 2021-05-05T18:24:36 | 2020-09-22T15:15:25 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
706,324,032 | 660 | add openwebtext | This adds [The OpenWebText Corpus](https://skylion007.github.io/OpenWebTextCorpus/), which is a clean and large text corpus for nlp pretraining. It is an open source effort to reproduce OpenAI’s WebText dataset used by GPT-2, and it is also needed to reproduce ELECTRA.
It solves #132 .
### Besides dataset buildin... | closed | https://github.com/huggingface/datasets/pull/660 | 2020-09-22T12:05:22 | 2020-10-06T09:20:10 | 2020-09-28T09:07:26 | {
"login": "richarddwang",
"id": 17963619,
"type": "User"
} | [] | true | [] |
706,231,506 | 659 | Keep new columns in transmit format | When a dataset is formatted with a list of columns that `__getitem__` should return, then calling `map` to add new columns doesn't add the new columns to this list.
It caused `KeyError` issues in #620
I changed the logic to add those new columns to the list that `__getitem__` should return. | closed | https://github.com/huggingface/datasets/pull/659 | 2020-09-22T09:47:23 | 2020-09-22T10:07:22 | 2020-09-22T10:07:20 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
706,206,247 | 658 | Fix squad metric's Features | Resolves issue [657](https://github.com/huggingface/datasets/issues/657). | closed | https://github.com/huggingface/datasets/pull/658 | 2020-09-22T09:09:52 | 2020-09-29T15:58:30 | 2020-09-29T15:58:30 | {
"login": "tshrjn",
"id": 8372098,
"type": "User"
} | [] | true | [] |
706,204,383 | 657 | Squad Metric Description & Feature Mismatch | The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also not used in the evaluation. | closed | https://github.com/huggingface/datasets/issues/657 | 2020-09-22T09:07:00 | 2020-10-13T02:16:56 | 2020-09-29T15:57:38 | {
"login": "tshrjn",
"id": 8372098,
"type": "User"
} | [] | false | [] |
705,736,319 | 656 | Use multiprocess from pathos for multiprocessing | [Multiprocess](https://github.com/uqfoundation/multiprocess) (from the [pathos](https://github.com/uqfoundation/pathos) project) allows to use lambda functions in multiprocessed map.
It was suggested to use it by @kandorm.
We're already using dill which is its only dependency. | closed | https://github.com/huggingface/datasets/pull/656 | 2020-09-21T16:12:19 | 2020-09-28T14:45:40 | 2020-09-28T14:45:39 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
705,672,208 | 655 | added Winogrande debiased subset | The [Winogrande](https://arxiv.org/abs/1907.10641) paper mentions a `debiased` subset that wasn't in the first release; this PR adds it. | closed | https://github.com/huggingface/datasets/pull/655 | 2020-09-21T14:51:08 | 2020-09-21T16:20:40 | 2020-09-21T16:16:04 | {
"login": "TevenLeScao",
"id": 26709476,
"type": "User"
} | [] | true | [] |
705,511,058 | 654 | Allow empty inputs in metrics | There was an arrow error when trying to compute a metric with empty inputs. The error was occurring when reading the arrow file, before calling metric._compute. | closed | https://github.com/huggingface/datasets/pull/654 | 2020-09-21T11:26:36 | 2020-10-06T03:51:48 | 2020-09-21T16:13:38 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
705,482,391 | 653 | handle data alteration when trying type | Fix #649
The bug came from the type inference that didn't handle a weird case in Pyarrow.
Indeed this code runs without error but alters the data in arrow:
```python
import pyarrow as pa
type = pa.struct({"a": pa.struct({"b": pa.string()})})
array_with_altered_data = pa.array([{"a": {"b": "foo", "c": "bar"}}... | closed | https://github.com/huggingface/datasets/pull/653 | 2020-09-21T10:41:49 | 2020-09-21T16:13:06 | 2020-09-21T16:13:05 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
705,390,850 | 652 | handle connection error in download_prepared_from_hf_gcs | Fix #647 | closed | https://github.com/huggingface/datasets/pull/652 | 2020-09-21T08:21:11 | 2020-09-21T08:28:43 | 2020-09-21T08:28:42 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
705,212,034 | 651 | Problem with JSON dataset format | I have a local json dataset with the following form.
{
'id01234': {'key1': value1, 'key2': value2, 'key3': value3},
'id01235': {'key1': value1, 'key2': value2, 'key3': value3},
.
.
.
'id09999': {'key1': value1, 'key2': value2, 'key3': value3}
}
Note that instead of a list of records i... | open | https://github.com/huggingface/datasets/issues/651 | 2020-09-20T23:57:14 | 2020-09-21T12:14:24 | null | {
"login": "vikigenius",
"id": 12724810,
"type": "User"
} | [] | false | [] |
704,861,844 | 650 | dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators` | Hi, I recently want to add a dataset whose source data is like this
```
openwebtext.tar.xz
|__ openwebtext
|__subset000.xz
| |__ ....txt
| |__ ....txt
| ...
|__ subset001.xz
|
....
```
So I wrote `openwebtext.py` like this
```
d... | closed | https://github.com/huggingface/datasets/issues/650 | 2020-09-19T11:07:03 | 2020-09-22T11:54:10 | 2020-09-22T11:54:09 | {
"login": "richarddwang",
"id": 17963619,
"type": "User"
} | [] | false | [] |
704,838,415 | 649 | Inconsistent behavior in map | I'm observing inconsistent behavior when applying .map(). This happens specifically when I'm incrementally adding onto a feature that is a nested dictionary. Here's a simple example that reproduces the problem.
```python
import datasets
# Dataset with a single feature called 'field' consisting of two examples
d... | closed | https://github.com/huggingface/datasets/issues/649 | 2020-09-19T08:41:12 | 2020-09-21T16:13:05 | 2020-09-21T16:13:05 | {
"login": "krandiash",
"id": 10166085,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
704,753,123 | 648 | offset overflow when multiprocessing batched map on large datasets. | It only happened when "multiprocessing" + "batched" + "large dataset" at the same time.
```
def bprocess(examples):
examples['len'] = []
for text in examples['text']:
examples['len'].append(len(text))
return examples
wiki.map(brpocess, batched=True, num_proc=8)
```
```
----------------------------... | closed | https://github.com/huggingface/datasets/issues/648 | 2020-09-19T02:15:11 | 2025-06-17T12:56:07 | 2020-09-19T16:46:31 | {
"login": "richarddwang",
"id": 17963619,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
704,734,764 | 647 | Cannot download dataset_info.json | I am running my job on a cloud server where does not provide for connections from the standard compute nodes to outside resources. Hence, when I use `dataset.load_dataset()` to load data, I got an error like this:
```
ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/text... | closed | https://github.com/huggingface/datasets/issues/647 | 2020-09-19T01:35:15 | 2020-09-21T08:28:42 | 2020-09-21T08:28:42 | {
"login": "chiyuzhang94",
"id": 33407613,
"type": "User"
} | [] | false | [] |
704,607,371 | 646 | Fix docs typos | This PR fixes few typos in the docs and the error in the code snippet in the set_format section in docs/source/torch_tensorflow.rst. `torch.utils.data.Dataloader` expects padded batches so it throws an error due to not being able to stack the unpadded tensors. If we follow the Quick tour from the docs where they add th... | closed | https://github.com/huggingface/datasets/pull/646 | 2020-09-18T19:32:27 | 2020-09-21T16:30:54 | 2020-09-21T16:14:12 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
704,542,234 | 645 | Don't use take on dataset table in pyarrow 1.0.x | Fix #615 | closed | https://github.com/huggingface/datasets/pull/645 | 2020-09-18T17:31:34 | 2023-09-19T07:59:19 | 2020-09-19T16:46:31 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
704,534,501 | 644 | Better windows support | There are a few differences in the behavior of python and pyarrow on windows.
For example there are restrictions when accessing/deleting files that are open
Fix #590 | closed | https://github.com/huggingface/datasets/pull/644 | 2020-09-18T17:17:36 | 2020-09-25T14:02:30 | 2020-09-25T14:02:28 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
704,477,164 | 643 | Caching processed dataset at wrong folder | Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length')
dataset = ... | closed | https://github.com/huggingface/datasets/issues/643 | 2020-09-18T15:41:26 | 2022-02-16T14:53:29 | 2022-02-16T14:53:29 | {
"login": "mrm8488",
"id": 3653789,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
704,397,499 | 642 | Rename wnut fields | As mentioned in #641 it would be cool to have it follow the naming of the other NER datasets | closed | https://github.com/huggingface/datasets/pull/642 | 2020-09-18T13:51:31 | 2020-09-18T17:18:31 | 2020-09-18T17:18:30 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
704,373,940 | 641 | Add Polyglot-NER Dataset | Adds the [Polyglot-NER dataset](https://sites.google.com/site/rmyeid/projects/polylgot-ner) with named entity tags for 40 languages. I include separate configs for each language as well as a `combined` config which lumps them all together. | closed | https://github.com/huggingface/datasets/pull/641 | 2020-09-18T13:21:44 | 2020-09-20T03:04:43 | 2020-09-20T03:04:43 | {
"login": "joeddav",
"id": 9353833,
"type": "User"
} | [] | true | [] |
704,311,758 | 640 | Make shuffle compatible with temp_seed | This code used to return different dataset at each run
```python
import dataset as ds
dataset = ...
with ds.temp_seed(42):
shuffled = dataset.shuffle()
```
Now it returns the same one since the seed is set | closed | https://github.com/huggingface/datasets/pull/640 | 2020-09-18T11:38:58 | 2020-09-18T11:47:51 | 2020-09-18T11:47:50 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
704,217,963 | 639 | Update glue QQP checksum | Fix #638 | closed | https://github.com/huggingface/datasets/pull/639 | 2020-09-18T09:08:15 | 2020-09-18T11:37:08 | 2020-09-18T11:37:07 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
704,146,956 | 638 | GLUE/QQP dataset: NonMatchingChecksumError | Hi @lhoestq , I know you are busy and there are also other important issues. But if this is easy to be fixed, I am shamelessly wondering if you can give me some help , so I can evaluate my models and restart with my developing cycle asap. 😚
datasets version: editable install of master at 9/17
`datasets.load_data... | closed | https://github.com/huggingface/datasets/issues/638 | 2020-09-18T07:09:10 | 2020-09-18T11:37:07 | 2020-09-18T11:37:07 | {
"login": "richarddwang",
"id": 17963619,
"type": "User"
} | [] | false | [] |
703,539,909 | 637 | Add MATINF | closed | https://github.com/huggingface/datasets/pull/637 | 2020-09-17T12:24:53 | 2020-09-17T13:23:18 | 2020-09-17T13:23:17 | {
"login": "JetRunner",
"id": 22514219,
"type": "User"
} | [] | true | [] | |
702,883,989 | 636 | Consistent ner features | As discussed in #613 , this PR aims at making NER feature names consistent across datasets.
I changed the feature names of LinCE and XTREME/PAN-X | closed | https://github.com/huggingface/datasets/pull/636 | 2020-09-16T15:56:25 | 2020-09-17T09:52:59 | 2020-09-17T09:52:58 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
702,822,439 | 635 | Loglevel | Continuation of #618 | closed | https://github.com/huggingface/datasets/pull/635 | 2020-09-16T14:37:53 | 2020-09-17T09:52:19 | 2020-09-17T09:52:18 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
702,676,041 | 634 | Add ConLL-2000 dataset | Adds ConLL-2000 dataset used for text chunking. See https://www.clips.uantwerpen.be/conll2000/chunking/ for details and [motivation](https://github.com/huggingface/transformers/pull/7041#issuecomment-692710948) behind this PR | closed | https://github.com/huggingface/datasets/pull/634 | 2020-09-16T11:14:11 | 2020-09-17T10:38:10 | 2020-09-17T10:38:10 | {
"login": "vblagoje",
"id": 458335,
"type": "User"
} | [] | true | [] |
702,440,484 | 633 | Load large text file for LM pre-training resulting in OOM | I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this:
```python
from datasets import load_dataset
@dataclass
class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling):
"""
Data collator u... | open | https://github.com/huggingface/datasets/issues/633 | 2020-09-16T04:33:15 | 2021-02-16T12:02:01 | null | {
"login": "leethu2012",
"id": 29704017,
"type": "User"
} | [] | false | [] |
702,358,124 | 632 | Fix typos in the loading datasets docs | This PR fixes two typos in the loading datasets docs, one of them being a broken link to the `load_dataset` function. | closed | https://github.com/huggingface/datasets/pull/632 | 2020-09-16T00:27:41 | 2020-09-21T16:31:11 | 2020-09-16T06:52:44 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
701,711,255 | 631 | Fix text delimiter | I changed the delimiter in the `text` dataset script.
It should fix the `pyarrow.lib.ArrowInvalid: CSV parse error` from #622
I changed the delimiter to an unused ascii character that is not present in text files : `\b` | closed | https://github.com/huggingface/datasets/pull/631 | 2020-09-15T08:08:42 | 2020-09-22T15:03:06 | 2020-09-15T08:26:25 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
701,636,350 | 630 | Text dataset not working with large files | ```
Traceback (most recent call last):
File "examples/language-modeling/run_language_modeling.py", line 333, in <module>
main()
File "examples/language-modeling/run_language_modeling.py", line 262, in main
get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t... | closed | https://github.com/huggingface/datasets/issues/630 | 2020-09-15T06:02:36 | 2020-09-25T22:21:43 | 2020-09-25T22:21:43 | {
"login": "ksjae",
"id": 17930170,
"type": "User"
} | [] | false | [] |