| id (int64) | number (int64) | title (string) | body (string) | state (string) | html_url (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | user (dict) | labels (list) | is_pull_request (bool) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
753,860,095 | 934 | small updates to the "add new dataset" guide | small updates (corrections/typos) to the "add new dataset" guide | closed | https://github.com/huggingface/datasets/pull/934 | 2020-11-30T22:49:10 | 2020-12-01T04:56:22 | 2020-11-30T23:14:00 | {
"login": "VictorSanh",
"id": 16107619,
"type": "User"
} | [] | true | [] |
753,854,272 | 933 | Add NumerSense | Adds the NumerSense dataset
- Webpage/leaderboard: https://inklab.usc.edu/NumerSense/
- Paper: https://arxiv.org/abs/2005.00683
- Description: NumerSense is a new numerical commonsense reasoning probing task, with a diagnostic dataset consisting of 3,145 masked-word-prediction probes. Basically, it's a benchmark to ... | closed | https://github.com/huggingface/datasets/pull/933 | 2020-11-30T22:36:33 | 2020-12-01T20:25:50 | 2020-12-01T19:51:56 | {
"login": "joeddav",
"id": 9353833,
"type": "User"
} | [] | true | [] |
753,840,300 | 932 | adding metooma dataset | closed | https://github.com/huggingface/datasets/pull/932 | 2020-11-30T22:09:49 | 2020-12-02T00:37:54 | 2020-12-02T00:37:54 | {
"login": "akash418",
"id": 23264033,
"type": "User"
} | [] | true | [] | |
753,818,193 | 931 | [WIP] complex_webqa - Error zipfile.BadZipFile: Bad CRC-32 | Have a string `zipfile.BadZipFile: Bad CRC-32 for file 'web_snippets_train.json'` error when downloading the largest file from dropbox: `https://www.dropbox.com/sh/7pkwkrfnwqhsnpo/AABVENv_Q9rFtnM61liyzO0La/web_snippets_train.json.zip?dl=1`
Didn't manage to see how to solve that.
Putting aside for now.
| closed | https://github.com/huggingface/datasets/pull/931 | 2020-11-30T21:30:21 | 2022-10-03T09:40:09 | 2022-10-03T09:40:09 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
753,801,204 | 930 | Lambada | Added LAMBADA dataset.
A couple of points of attention (mostly because I am not sure)
- The training data are compressed in a .tar file inside the main tar.gz file. I had to manually un-tar the training file to access the examples.
- The dev and test splits don't have the `category` field so I put `None` by defaul... | closed | https://github.com/huggingface/datasets/pull/930 | 2020-11-30T21:02:33 | 2020-12-01T00:37:12 | 2020-12-01T00:37:11 | {
"login": "VictorSanh",
"id": 16107619,
"type": "User"
} | [] | true | [] |
753,737,794 | 929 | Add weibo NER dataset | closed | https://github.com/huggingface/datasets/pull/929 | 2020-11-30T19:22:47 | 2020-12-03T13:36:55 | 2020-12-03T13:36:54 | {
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
} | [] | true | [] | |
753,722,324 | 928 | Add the Multilingual Amazon Reviews Corpus | - **Name:** *Multilingual Amazon Reviews Corpus* (`amazon_reviews_multi`)
- **Description:** A collection of Amazon reviews in English, Japanese, German, French, Spanish and Chinese.
- **Paper:** https://arxiv.org/abs/2010.02573
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` us... | closed | https://github.com/huggingface/datasets/pull/928 | 2020-11-30T18:58:06 | 2020-12-01T16:04:30 | 2020-12-01T16:04:27 | {
"login": "joeddav",
"id": 9353833,
"type": "User"
} | [] | true | [] |
753,679,020 | 927 | Hello | closed | https://github.com/huggingface/datasets/issues/927 | 2020-11-30T17:50:05 | 2020-11-30T17:50:30 | 2020-11-30T17:50:30 | {
"login": "k125-ak",
"id": 75259546,
"type": "User"
} | [] | false | [] | |
753,676,069 | 926 | add inquisitive | Adding inquisitive qg dataset
More info: https://github.com/wjko2/INQUISITIVE | closed | https://github.com/huggingface/datasets/pull/926 | 2020-11-30T17:45:22 | 2020-12-02T13:45:22 | 2020-12-02T13:40:13 | {
"login": "patil-suraj",
"id": 27137566,
"type": "User"
} | [] | true | [] |
753,672,661 | 925 | Add Turku NLP Corpus for Finnish NER | closed | https://github.com/huggingface/datasets/pull/925 | 2020-11-30T17:40:19 | 2020-12-03T14:07:11 | 2020-12-03T14:07:10 | {
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
} | [] | true | [] | |
753,631,951 | 924 | Add DART | - **Name:** *DART*
- **Description:** *DART is a large dataset for open-domain structured data record to text generation.*
- **Paper:** *https://arxiv.org/abs/2007.02871*
- **Data:** *https://github.com/Yale-LILY/dart#leaderboard*
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py... | closed | https://github.com/huggingface/datasets/pull/924 | 2020-11-30T16:42:37 | 2020-12-02T03:13:42 | 2020-12-02T03:13:41 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
753,569,220 | 923 | Add CC-100 dataset | Add CC-100.
Close #773 | closed | https://github.com/huggingface/datasets/pull/923 | 2020-11-30T15:23:22 | 2021-04-20T13:34:17 | 2021-04-20T13:34:17 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "wontfix",
"color": "ffffff"
}
] | true | [] |
753,559,130 | 922 | Add XOR QA Dataset | Added XOR Question Answering Dataset. The link to the dataset can be found [here](https://nlp.cs.washington.edu/xorqa/)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | closed | https://github.com/huggingface/datasets/pull/922 | 2020-11-30T15:10:54 | 2020-12-02T03:12:21 | 2020-12-02T03:12:21 | {
"login": "sumanthd17",
"id": 28291870,
"type": "User"
} | [] | true | [] |
753,445,747 | 920 | add dream dataset | Adding Dream: a Dataset and Models for Dialogue-Based Reading Comprehension
More details:
https://dataset.org/dream/
https://github.com/nlpdata/dream | closed | https://github.com/huggingface/datasets/pull/920 | 2020-11-30T12:40:14 | 2020-12-03T16:45:12 | 2020-12-02T15:39:12 | {
"login": "patil-suraj",
"id": 27137566,
"type": "User"
} | [] | true | [] |
753,434,472 | 919 | wrong length with datasets | Hi
I have an MRPC dataset which I convert to seq2seq format; it is then of this format:
`Dataset(features: {'src_texts': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 10)
`
I feed it to a dataloader:
```
dataloader = DataLoader(
train_dataset,
... | closed | https://github.com/huggingface/datasets/issues/919 | 2020-11-30T12:23:39 | 2020-11-30T12:37:27 | 2020-11-30T12:37:26 | {
"login": "rabeehk",
"id": 6278280,
"type": "User"
} | [] | false | [] |
753,397,440 | 918 | Add conll2002 | Adding the Conll2002 dataset for NER.
More info here : https://www.clips.uantwerpen.be/conll2002/ner/
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` ... | closed | https://github.com/huggingface/datasets/pull/918 | 2020-11-30T11:29:35 | 2020-11-30T18:34:30 | 2020-11-30T18:34:29 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
753,391,591 | 917 | Addition of Concode Dataset | ## Overview
Concode Dataset contains pairs of NL queries and the corresponding code (Contextual Code Generation).
Reference Links
Paper Link = https://arxiv.org/pdf/1904.09086.pdf
Github Link =https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/text-to-code | closed | https://github.com/huggingface/datasets/pull/917 | 2020-11-30T11:20:59 | 2020-12-29T02:55:36 | 2020-12-29T02:55:36 | {
"login": "reshinthadithyan",
"id": 36307201,
"type": "User"
} | [] | true | [] |
753,376,643 | 916 | Add Swedish NER Corpus | closed | https://github.com/huggingface/datasets/pull/916 | 2020-11-30T10:59:51 | 2020-12-02T03:10:50 | 2020-12-02T03:10:49 | {
"login": "abhishekkrthakur",
"id": 1183441,
"type": "User"
} | [] | true | [] | |
753,118,481 | 915 | Shall we change the hashing to encoding to reduce potential replicated cache files? | Hi there. For now, we are using `xxhash` to hash the transformations to fingerprint and we will save a copy of the processed dataset to disk if there is a new hash value. However, there are some transformations that are idempotent or commutative to each other. I think that encoding the transformation chain as the finge... | open | https://github.com/huggingface/datasets/issues/915 | 2020-11-30T03:50:46 | 2020-12-24T05:11:49 | null | {
"login": "zhuzilin",
"id": 10428324,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "generic discussion",
"color": "c5def5"
}
] | false | [] |
752,956,106 | 914 | Add list_github_datasets api for retrieving dataset name list in github repo | Thank you for your great effort on unifying data processing for NLP!
This PR is trying to add a new API `list_github_datasets` in the `inspect` module. The reason for it is that the current `list_datasets` API needs to access https://huggingface.co/api/datasets to get a large json. However, this connection can be rea...
"login": "zhuzilin",
"id": 10428324,
"type": "User"
} | [] | true | [] |
752,892,020 | 913 | My new dataset PEC | A new dataset PEC published in EMNLP 2020. | closed | https://github.com/huggingface/datasets/pull/913 | 2020-11-29T11:10:37 | 2020-12-01T10:41:53 | 2020-12-01T10:41:53 | {
"login": "zhongpeixiang",
"id": 11826803,
"type": "User"
} | [] | true | [] |
752,806,215 | 911 | datasets module not found | Currently, running `from datasets import load_dataset` will throw a `ModuleNotFoundError: No module named 'datasets'` error.
| closed | https://github.com/huggingface/datasets/issues/911 | 2020-11-29T01:24:15 | 2020-11-29T14:33:09 | 2020-11-29T14:33:09 | {
"login": "sbassam",
"id": 15836274,
"type": "User"
} | [] | false | [] |
752,772,723 | 910 | Grindr meeting app web.Grindr | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | closed | https://github.com/huggingface/datasets/issues/910 | 2020-11-28T21:36:23 | 2020-11-29T10:11:51 | 2020-11-29T10:11:51 | {
"login": "jackin34",
"id": 75184749,
"type": "User"
} | [] | false | [] |
752,508,299 | 909 | Add FiNER dataset | Hi,
this PR adds "A Finnish News Corpus for Named Entity Recognition" as new `finer` dataset.
The dataset is described in [this paper](https://arxiv.org/abs/1908.04212). The data is publicly available in [this GitHub](https://github.com/mpsilfve/finer-data).
Notice: they provide two testsets. The additional te... | closed | https://github.com/huggingface/datasets/pull/909 | 2020-11-27T23:54:20 | 2020-12-07T16:56:23 | 2020-12-07T16:56:23 | {
"login": "stefan-it",
"id": 20651387,
"type": "User"
} | [] | true | [] |
752,428,652 | 908 | Add dependency on black for tests | Add package 'black' as an installation requirement for tests. | closed | https://github.com/huggingface/datasets/pull/908 | 2020-11-27T19:12:48 | 2020-11-27T21:46:53 | 2020-11-27T21:46:52 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
752,422,351 | 907 | Remove os.path.join from all URLs | Remove `os.path.join` from all URLs in dataset scripts. | closed | https://github.com/huggingface/datasets/pull/907 | 2020-11-27T18:55:30 | 2020-11-29T22:48:20 | 2020-11-29T22:48:19 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
752,403,395 | 906 | Fix url with backslash in windows for blimp and pg19 | Following #903 I also fixed blimp and pg19 which were using the `os.path.join` to create urls
cc @albertvillanova | closed | https://github.com/huggingface/datasets/pull/906 | 2020-11-27T17:59:11 | 2020-11-27T18:19:56 | 2020-11-27T18:19:56 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
752,395,456 | 905 | Disallow backslash in urls | Following #903 @albertvillanova noticed that there are sometimes bad usage of `os.path.join` in datasets scripts to create URLS. However this should be avoided since it doesn't work on windows.
I'm suggesting a test to make sure we that all the urls don't have backslashes in them in the datasets scripts.
The tests ... | closed | https://github.com/huggingface/datasets/pull/905 | 2020-11-27T17:38:28 | 2020-11-29T22:48:37 | 2020-11-29T22:48:36 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
752,372,743 | 904 | Very detailed step-by-step on how to add a dataset | Add very detailed step-by-step instructions to add a new dataset to the library. | closed | https://github.com/huggingface/datasets/pull/904 | 2020-11-27T16:45:21 | 2020-11-30T09:56:27 | 2020-11-30T09:56:26 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [] | true | [] |
752,360,614 | 903 | Fix URL with backslash in Windows | In Windows, `os.path.join` generates URLs containing backslashes, when the first "path" does not end with a slash.
In general, `os.path.join` should be avoided to generate URLs. | closed | https://github.com/huggingface/datasets/pull/903 | 2020-11-27T16:26:24 | 2020-11-27T18:04:46 | 2020-11-27T18:04:46 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
752,345,739 | 902 | Follow cache_dir parameter to gcs downloader | As noticed in #900 the cache_dir parameter was not followed to the downloader in the case of an already processed dataset hosted on our google storage (one of them is natural questions).
Fix #900 | closed | https://github.com/huggingface/datasets/pull/902 | 2020-11-27T16:02:06 | 2020-11-29T22:48:54 | 2020-11-29T22:48:53 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
752,233,851 | 901 | Addition of Nl2Bash Dataset | ## Overview
The NL2Bash data contains over 10,000 instances of linux shell commands and their corresponding natural language descriptions provided by experts, from the Tellina system. The dataset features 100+ commonly used shell utilities.
## Footnotes
The following dataset marks the first ML on source code related... | closed | https://github.com/huggingface/datasets/pull/901 | 2020-11-27T12:53:55 | 2020-11-29T18:09:25 | 2020-11-29T18:08:51 | {
"login": "reshinthadithyan",
"id": 36307201,
"type": "User"
} | [] | true | [] |
752,214,066 | 900 | datasets.load_dataset() custom chaching directory bug | Hello,
I'm having issue with loading a dataset with a custom `cache_dir`. Despite specifying the output dir, it is still downloaded to
`~/.cache`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```p... | closed | https://github.com/huggingface/datasets/issues/900 | 2020-11-27T12:18:53 | 2020-11-29T22:48:53 | 2020-11-29T22:48:53 | {
"login": "SapirWeissbuch",
"id": 44585792,
"type": "User"
} | [] | false | [] |
752,191,227 | 899 | Allow arrow based builder in auto dummy data generation | Following #898 I added support for arrow based builder for the auto dummy data generator | closed | https://github.com/huggingface/datasets/pull/899 | 2020-11-27T11:39:38 | 2020-11-27T13:30:09 | 2020-11-27T13:30:08 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
752,148,284 | 898 | Adding SQA dataset | As discussed in #880
Seems like automatic dummy-data generation doesn't work if the builder is an `ArrowBasedBuilder`, do you think you could take a look @lhoestq ? | closed | https://github.com/huggingface/datasets/pull/898 | 2020-11-27T10:29:18 | 2020-12-15T12:54:40 | 2020-12-15T12:54:19 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [] | true | [] |
752,100,256 | 897 | Dataset viewer issues | I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though:
- the URL is still under `nlp`, perhaps an alias for `datasets` can be made
- when I remove a **feature** (and the feature list is empty), I get an error. T... | closed | https://github.com/huggingface/datasets/issues/897 | 2020-11-27T09:14:34 | 2021-10-31T09:12:01 | 2021-10-31T09:12:01 | {
"login": "BramVanroy",
"id": 2779410,
"type": "User"
} | [
{
"name": "nlp-viewer",
"color": "94203D"
}
] | false | [] |
751,834,265 | 896 | Add template and documentation for dataset card | This PR adds a template for dataset cards, as well as a guide to filling out the template and a completed example for the ELI5 dataset, building on the work of @mcmillanmajora
New pull requests adding datasets should now have a README.md file which serves both to hold the tags we will have to index the datasets and... | closed | https://github.com/huggingface/datasets/pull/896 | 2020-11-26T21:30:25 | 2020-11-28T01:10:15 | 2020-11-28T01:10:15 | {
"login": "yjernite",
"id": 10469459,
"type": "User"
} | [] | true | [] |
751,782,295 | 895 | Better messages regarding split naming | I made explicit the error message when a bad split name is used.
Also I wanted to allow the `-` symbol for split names but actually this symbol is used to name the arrow files `{dataset_name}-{dataset_split}.arrow` so we should probably keep it this way, i.e. not allowing the `-` symbol in split names. Moreover in t... | closed | https://github.com/huggingface/datasets/pull/895 | 2020-11-26T18:55:46 | 2020-11-27T13:31:00 | 2020-11-27T13:30:59 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
751,734,905 | 894 | Allow several tags sets | Hi !
Currently we have three dataset cards : snli, cnn_dailymail and allocine.
For each one of those datasets a set of tags is defined. The set of tags contains fields like `multilinguality`, `task_ids`, `licenses` etc.
For certain datasets like `glue` for example, there exist several configurations: `sst2`, `mnl... | closed | https://github.com/huggingface/datasets/pull/894 | 2020-11-26T17:04:13 | 2021-05-05T18:24:17 | 2020-11-27T20:15:49 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
751,703,696 | 893 | add metrec: arabic poetry dataset | closed | https://github.com/huggingface/datasets/pull/893 | 2020-11-26T16:10:16 | 2020-12-01T16:24:55 | 2020-12-01T15:15:07 | {
"login": "zaidalyafeai",
"id": 15667714,
"type": "User"
} | [] | true | [] | |
751,658,262 | 892 | Add a few datasets of reference in the documentation | I started making a small list of various datasets of reference in the documentation.
Since many datasets share a lot in common I think it's good to have a list of datasets scripts to get some inspiration from.
Let me know what you think, and if you have ideas of other datasets that we may add to this list, please l... | closed | https://github.com/huggingface/datasets/pull/892 | 2020-11-26T15:02:39 | 2020-11-27T18:08:45 | 2020-11-27T18:08:44 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
751,576,869 | 891 | gitignore .python-version | ignore `.python-version` added by `pyenv` | closed | https://github.com/huggingface/datasets/pull/891 | 2020-11-26T13:05:58 | 2020-11-26T13:28:27 | 2020-11-26T13:28:26 | {
"login": "patil-suraj",
"id": 27137566,
"type": "User"
} | [] | true | [] |
751,534,050 | 890 | Add LER | closed | https://github.com/huggingface/datasets/pull/890 | 2020-11-26T11:58:23 | 2020-12-01T13:33:35 | 2020-12-01T13:26:16 | {
"login": "JoelNiklaus",
"id": 3775944,
"type": "User"
} | [] | true | [] | |
751,115,691 | 889 | Optional per-dataset default config name | This PR adds a `DEFAULT_CONFIG_NAME` class attribute to `DatasetBuilder`. This allows a dataset to have a specified default config name when a dataset has more than one config but the user does not specify it. For example, after defining `DEFAULT_CONFIG_NAME = "combined"` in PolyglotNER, a user can now do the following... | closed | https://github.com/huggingface/datasets/pull/889 | 2020-11-25T21:02:30 | 2020-11-30T17:27:33 | 2020-11-30T17:27:27 | {
"login": "joeddav",
"id": 9353833,
"type": "User"
} | [] | true | [] |
750,944,422 | 888 | Nested lists are zipped unexpectedly | I might misunderstand something, but I expect that if I define:
```python
"top": datasets.features.Sequence({
"middle": datasets.features.Sequence({
"bottom": datasets.Value("int32")
})
})
```
And I then create an example:
```python
yield 1, {
"top": [{
"middle": [
{"bottom": 1},
... | closed | https://github.com/huggingface/datasets/issues/888 | 2020-11-25T16:07:46 | 2020-11-25T17:30:39 | 2020-11-25T17:30:39 | {
"login": "AmitMY",
"id": 5757359,
"type": "User"
} | [] | false | [] |
750,868,831 | 887 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and ... | open | https://github.com/huggingface/datasets/issues/887 | 2020-11-25T14:32:21 | 2021-09-09T17:03:40 | null | {
"login": "AmitMY",
"id": 5757359,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
750,829,314 | 886 | Fix wikipedia custom config | It should be possible to use the wikipedia dataset with any `language` and `date`.
However it was not working as noticed in #784 . Indeed the custom wikipedia configurations were not enabled for some reason.
I fixed that and was able to run
```python
from datasets import load_dataset
load_dataset("./datasets/wi... | closed | https://github.com/huggingface/datasets/pull/886 | 2020-11-25T13:44:12 | 2021-06-25T05:24:16 | 2020-11-25T15:42:13 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
750,789,052 | 885 | Very slow cold-start | Hi,
I expect when importing `datasets` that nothing major happens in the background, and so the import should be insignificant.
When I load a metric, or a dataset, its fine that it takes time.
The following ranges from 3 to 9 seconds:
```
python -m timeit -n 1 -r 1 'from datasets import load_dataset'
```
edi... | closed | https://github.com/huggingface/datasets/issues/885 | 2020-11-25T12:47:58 | 2021-01-13T11:31:25 | 2021-01-13T11:31:25 | {
"login": "AmitMY",
"id": 5757359,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
749,862,034 | 884 | Auto generate dummy data | When adding a new dataset to the library, dummy data creation can take some time.
To make things easier I added a command line tool that automatically generates dummy data when possible.
The tool only supports certain data files types: txt, csv, tsv, jsonl, json and xml.
Here are some examples:
```
python data... | closed | https://github.com/huggingface/datasets/pull/884 | 2020-11-24T16:31:34 | 2020-11-26T14:18:47 | 2020-11-26T14:18:46 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
749,750,801 | 883 | Downloading/caching only a part of a datasets' dataset. | Hi,
I want to use the validation data *only* (of natural question).
I don't want to have the whole dataset cached in my machine, just the dev set.
Is this possible? I can't find a way to do it in the docs.
Thank you,
Sapir | open | https://github.com/huggingface/datasets/issues/883 | 2020-11-24T14:25:18 | 2020-11-27T13:51:55 | null | {
"login": "SapirWeissbuch",
"id": 44585792,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "question",
"color": "d876e3"
}
] | false | [] |
749,662,188 | 882 | Update README.md | "no label" is "-" in the original dataset but "-1" in Huggingface distribution. | closed | https://github.com/huggingface/datasets/pull/882 | 2020-11-24T12:23:52 | 2021-01-29T10:41:07 | 2021-01-29T10:41:07 | {
"login": "vaibhavad",
"id": 32997732,
"type": "User"
} | [] | true | [] |
749,548,107 | 881 | Use GCP download url instead of tensorflow custom download for boolq | BoolQ is a dataset that used tf.io.gfile.copy to download the file from a GCP bucket.
It prevented the dataset from being downloaded twice because of a FileAlreadyExistsError.
Even though the error could be fixed by providing `overwrite=True` to the tf.io.gfile.copy call, I changed the script to use GCP download urls and ... | closed | https://github.com/huggingface/datasets/pull/881 | 2020-11-24T09:47:11 | 2020-11-24T10:12:34 | 2020-11-24T10:12:33 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
748,949,606 | 880 | Add SQA | ## Adding a Dataset
- **Name:** SQA (Sequential Question Answering) by Microsoft.
- **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total.
- **Paper:** https://www.microsoft.com/en-us/r... | closed | https://github.com/huggingface/datasets/issues/880 | 2020-11-23T16:31:55 | 2020-12-23T13:58:24 | 2020-12-23T13:58:23 | {
"login": "NielsRogge",
"id": 48327001,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
748,848,847 | 879 | boolq does not load | Hi
I am getting these errors trying to load boolq thanks
Traceback (most recent call last):
File "test.py", line 5, in <module>
data = AutoTask().get("boolq").get_dataset("train", n_obs=10)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset
d... | closed | https://github.com/huggingface/datasets/issues/879 | 2020-11-23T14:28:28 | 2022-10-05T12:23:32 | 2022-10-05T12:23:32 | {
"login": "rabeehk",
"id": 6278280,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
748,621,981 | 878 | Loading Data From S3 Path in Sagemaker | In SageMaker I'm trying to load the dataset from an S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files... | open | https://github.com/huggingface/datasets/issues/878 | 2020-11-23T09:17:22 | 2020-12-23T09:53:08 | null | {
"login": "mahesh1amour",
"id": 42795522,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "question",
"color": "d876e3"
}
] | false | [] |
748,234,438 | 877 | DataLoader(datasets) become more and more slowly within iterations | Hello, when I loop over my dataloader, the loading speed becomes slower and slower!
```
dataset = load_from_disk(dataset_path) # around 21,000,000 lines
lineloader = tqdm(DataLoader(dataset, batch_size=1))
for idx, line in enumerate(lineloader):
# do some thing for each line
```
In the beginning, th... | closed | https://github.com/huggingface/datasets/issues/877 | 2020-11-22T12:41:10 | 2024-11-22T03:02:53 | 2020-11-29T15:45:12 | {
"login": "shexuan",
"id": 25664170,
"type": "User"
} | [] | false | [] |
748,195,104 | 876 | imdb dataset cannot be loaded | Hi
I am trying to load the imdb train dataset
`dataset = datasets.load_dataset("imdb", split="train")`
getting the following errors, thanks for your help
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/... | closed | https://github.com/huggingface/datasets/issues/876 | 2020-11-22T08:24:43 | 2024-05-10T03:03:29 | 2020-12-24T17:38:47 | {
"login": "rabeehk",
"id": 6278280,
"type": "User"
} | [] | false | [] |
748,194,311 | 875 | bug in boolq dataset loading | Hi
I am trying to load boolq dataset:
```
import datasets
datasets.load_dataset("boolq")
```
I am getting the following errors, thanks for your help
```
>>> import datasets
2020-11-22 09:16:30.070332: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda... | closed | https://github.com/huggingface/datasets/issues/875 | 2020-11-22T08:18:34 | 2020-11-24T10:12:33 | 2020-11-24T10:12:33 | {
"login": "rabeehk",
"id": 6278280,
"type": "User"
} | [] | false | [] |
748,193,140 | 874 | trec dataset unavailable | Hi
when I try to load the trec dataset I am getting these errors, thanks for your help
`datasets.load_dataset("trec", split="train")
`
```
File "<stdin>", line 1, in <module>
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
... | closed | https://github.com/huggingface/datasets/issues/874 | 2020-11-22T08:09:36 | 2020-11-27T13:56:42 | 2020-11-27T13:56:42 | {
"login": "rabeehk",
"id": 6278280,
"type": "User"
} | [] | false | [] |
747,959,523 | 873 | load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error | ```
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0')
```
Stack trace:
```
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-6-2e06a8332652> in <module>()
... | closed | https://github.com/huggingface/datasets/issues/873 | 2020-11-21T06:30:45 | 2023-08-03T12:07:03 | 2020-11-22T12:18:05 | {
"login": "vishal-burman",
"id": 19861874,
"type": "User"
} | [] | false | [] |
747,653,697 | 872 | Add IndicGLUE dataset and Metrics | Added IndicGLUE benchmark for evaluating models on 11 Indian Languages. The descriptions of the tasks and the corresponding paper can be found [here](https://indicnlp.ai4bharat.org/indic-glue/)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | closed | https://github.com/huggingface/datasets/pull/872 | 2020-11-20T17:09:34 | 2020-11-25T17:01:11 | 2020-11-25T15:26:07 | {
"login": "sumanthd17",
"id": 28291870,
"type": "User"
} | [] | true | [] |
747,470,136 | 871 | terminate called after throwing an instance of 'google::protobuf::FatalException' | Hi
I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo could you assist me please? thanks
100%|█████████████████████████████████████████████████████████████████████████████████████████████... | closed | https://github.com/huggingface/datasets/issues/871 | 2020-11-20T12:56:24 | 2020-12-12T21:16:32 | 2020-12-12T21:16:32 | {
"login": "rabeehk",
"id": 6278280,
"type": "User"
} | [] | false | [] |
747,021,996 | 870 | [Feature Request] Add optional parameter in text loading script to preserve linebreaks | I'm working on a project about rhyming verse using phonetic poetry and song lyrics, and line breaks are a vital part of the data.
I recently switched over to use the datasets library when my various corpora grew larger than my computer's memory. And so far, it is SO great.
But the first time I processed all of ... | closed | https://github.com/huggingface/datasets/issues/870 | 2020-11-19T23:51:31 | 2022-06-01T15:25:53 | 2022-06-01T15:25:52 | {
"login": "jncasey",
"id": 31020859,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
746,495,711 | 869 | Update ner datasets infos | Update the dataset_infos.json files for changes made in #850 regarding the ner datasets feature types (and the change to ClassLabel)
I also fixed the ner types of conll2003 | closed | https://github.com/huggingface/datasets/pull/869 | 2020-11-19T11:28:03 | 2020-11-19T14:14:18 | 2020-11-19T14:14:17 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
745,889,882 | 868 | Consistent metric outputs | To automate the use of metrics, they should return consistent outputs.
In particular I'm working on adding a conversion of metrics to keras metrics.
To achieve this we need two things:
- have each metric return dictionaries of string -> floats since each keras metric should return one float
- define in the metric ... | closed | https://github.com/huggingface/datasets/pull/868 | 2020-11-18T18:05:59 | 2023-09-24T09:50:25 | 2023-07-11T09:35:52 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "transfer-to-evaluate",
"color": "E3165C"
}
] | true | [] |
745,773,955 | 867 | Fix some metrics feature types | Replace `int` feature type to `int32` since `int` is not a pyarrow dtype in those metrics:
- accuracy
- precision
- recall
- f1
I also added the sklearn citation and used keyword arguments to remove future warnings | closed | https://github.com/huggingface/datasets/pull/867 | 2020-11-18T15:46:11 | 2020-11-19T17:35:58 | 2020-11-19T17:35:57 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
745,719,222 | 866 | OSCAR from Inria group | ## Adding a Dataset
- **Name:** *OSCAR* (Open Super-large Crawled ALMAnaCH coRpus), multilingual parsing of Common Crawl (separate crawls for many different languages), [here](https://oscar-corpus.com/).
- **Description:** *OSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by la... | closed | https://github.com/huggingface/datasets/issues/866 | 2020-11-18T14:40:54 | 2020-11-18T15:01:30 | 2020-11-18T15:01:30 | {
"login": "jchwenger",
"id": 34098722,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
745,430,497 | 865 | Have Trouble importing `datasets` | I'm failing to import transformers (v4.0.0-dev), and tracing the cause seems to be failing to import datasets.
I cloned the newest version of datasets (master branch), and do `pip install -e .`.
Then, `import datasets` causes the error below.
```
~/workspace/Clone/datasets/src/datasets/utils/file_utils.py in ... | closed | https://github.com/huggingface/datasets/issues/865 | 2020-11-18T08:04:41 | 2020-11-18T08:16:35 | 2020-11-18T08:16:35 | {
"login": "forest1988",
"id": 2755894,
"type": "User"
} | [] | false | [] |
745,322,357 | 864 | Unable to download cnn_dailymail dataset | ### Script to reproduce the error
```
from datasets import load_dataset
train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%')
valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]")
```
### Error
```
-------------------------------------------------------------... | closed | https://github.com/huggingface/datasets/issues/864 | 2020-11-18T04:38:02 | 2020-11-20T05:22:11 | 2020-11-20T05:22:10 | {
"login": "rohitashwa1907",
"id": 46031058,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
744,954,534 | 863 | Add clear_cache parameter in the test command | For certain datasets like OSCAR #348 there are lots of different configurations and each one of them can take a lot of disk space.
I added a `--clear_cache` flag to the `datasets-cli test` command to be able to clear the cache after each configuration test to avoid filling up the disk. It should enable an easier gen... | closed | https://github.com/huggingface/datasets/pull/863 | 2020-11-17T17:52:29 | 2020-11-18T14:44:25 | 2020-11-18T14:44:24 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
744,906,131 | 862 | Update head requests | Get requests and Head requests didn't have the same parameters. | closed | https://github.com/huggingface/datasets/pull/862 | 2020-11-17T16:49:06 | 2020-11-18T14:43:53 | 2020-11-18T14:43:50 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
744,753,458 | 861 | Possible Bug: Small training/dataset file creates gigantic output | Hey guys,
I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB r...
"login": "NebelAI",
"id": 7240417,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "question",
"color": "d876e3"
}
] | false | [] |
744,750,691 | 860 | wmt16 cs-en does not donwload | Hi
I am trying with wmt16, cs-en pair, thanks for the help, perhaps similar to the ro-en issue. thanks
split="train", n_obs=data_args.n_train) for task in data_args.task}
File "finetune_t5_trainer.py", line 109, in <dictcomp>
split="train", n_obs=data_args.n_train) for task in data_args.task}
File "/hom... | closed | https://github.com/huggingface/datasets/issues/860 | 2020-11-17T13:45:35 | 2022-10-05T12:27:00 | 2022-10-05T12:26:59 | {
"login": "rabeehk",
"id": 6278280,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
743,917,091 | 859 | Integrate file_lock inside the lib for better logging control | Previously the locking system of the lib was based on the file_lock package. However as noticed in #812 there were too many logs printed even when the datasets logging was set to warnings or errors.
For example
```python
import logging
logging.basicConfig(level=logging.INFO)
import datasets
datasets.set_verbo... | closed | https://github.com/huggingface/datasets/pull/859 | 2020-11-16T15:13:39 | 2020-11-16T17:06:44 | 2020-11-16T17:06:42 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
743,904,516 | 858 | Add SemEval-2010 task 8 | Hi,
I don't know how to add dummy data, since I create the validation set out of the last 1000 examples of the train set. If you have a suggestion, I am happy to implement it.
Cheers,
Joel | closed | https://github.com/huggingface/datasets/pull/858 | 2020-11-16T14:57:57 | 2020-11-26T17:28:55 | 2020-11-26T17:28:55 | {
"login": "JoelNiklaus",
"id": 3775944,
"type": "User"
} | [] | true | [] |
743,863,214 | 857 | Use pandas reader in csv | The pyarrow CSV reader has issues that the pandas one doesn't (see #836 ).
To fix that I switched to the pandas csv reader.
The new reader is compatible with all the pandas parameters to read csv files.
Moreover it reads csv by chunk in order to save RAM, while the pyarrow one loads everything in memory.
Fix #836... | closed | https://github.com/huggingface/datasets/pull/857 | 2020-11-16T14:05:45 | 2020-11-19T17:35:40 | 2020-11-19T17:35:38 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
743,799,239 | 856 | Add open book corpus | Adds book corpus based on Shawn Presser's [work](https://github.com/soskek/bookcorpus/issues/27) @richarddwang, the author of the original BookCorpus dataset, suggested it should be named [OpenBookCorpus](https://github.com/huggingface/datasets/issues/486). I named it BookCorpusOpen to be easily located alphabetically... | closed | https://github.com/huggingface/datasets/pull/856 | 2020-11-16T12:30:02 | 2024-01-04T13:20:51 | 2020-11-17T15:22:18 | {
"login": "vblagoje",
"id": 458335,
"type": "User"
} | [] | true | [] |
743,690,839 | 855 | Fix kor nli csv reader | The kor_nli dataset had an issue with the csv reader that was not able to parse the lines correctly. Some lines were merged together for some reason.
I fixed that by iterating through the lines directly instead of using a csv reader.
I also changed the feature names to match the other NLI datasets (i.e. use "premise"... | closed | https://github.com/huggingface/datasets/pull/855 | 2020-11-16T09:53:41 | 2020-11-16T13:59:14 | 2020-11-16T13:59:12 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
743,675,376 | 854 | wmt16 does not download | Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/... | closed | https://github.com/huggingface/datasets/issues/854 | 2020-11-16T09:31:51 | 2022-10-05T12:27:42 | 2022-10-05T12:27:42 | {
"login": "rabeehk",
"id": 6278280,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
743,426,583 | 853 | concatenate_datasets support axis=0 or 1 ? | I want to achieve the following result

| closed | https://github.com/huggingface/datasets/issues/853 | 2020-11-16T02:46:23 | 2021-04-19T16:07:18 | 2021-04-19T16:07:18 | {
"login": "renqingcolin",
"id": 12437751,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "help wanted",
"color": "008672"
},
{
"name": "question",
"color": "d876e3"
}
] | false | [] |
743,396,240 | 852 | wmt cannot be downloaded | Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/... | closed | https://github.com/huggingface/datasets/issues/852 | 2020-11-16T01:04:41 | 2020-11-16T09:31:58 | 2020-11-16T09:31:58 | {
"login": "rabeehk",
"id": 6278280,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
742,369,419 | 850 | Create ClassLabel for labelling tasks datasets | This PR adds a specific `ClassLabel` for the datasets that are about a labelling task such as POS, NER or Chunking. | closed | https://github.com/huggingface/datasets/pull/850 | 2020-11-13T11:07:22 | 2020-11-16T10:32:05 | 2020-11-16T10:31:58 | {
"login": "jplu",
"id": 959590,
"type": "User"
} | [] | true | [] |
742,263,333 | 849 | Load amazon dataset | Hi,
I was going through the amazon_us_reviews dataset and found that the example API usage given on the website is different from the API usage when loading the dataset.
Eg. what API usage is on the [website](https://huggingface.co/datasets/amazon_us_reviews)
```
from datasets import load_dataset
dataset = load_dataset("amaz... | closed | https://github.com/huggingface/datasets/issues/849 | 2020-11-13T08:34:24 | 2020-11-17T07:22:59 | 2020-11-17T07:22:59 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [] | false | [] |
742,240,942 | 848 | Error when concatenate_datasets | Hello, when I concatenate two datasets loaded from disk, I encounter a problem:
```
test_dataset = load_from_disk('data/test_dataset')
trn_dataset = load_from_disk('data/train_dataset')
train_dataset = concatenate_datasets([trn_dataset, test_dataset])
```
And it reported ValueError blow:
```
--------------... | closed | https://github.com/huggingface/datasets/issues/848 | 2020-11-13T07:56:02 | 2020-11-13T17:40:59 | 2020-11-13T15:55:10 | {
"login": "shexuan",
"id": 25664170,
"type": "User"
} | [] | false | [] |
742,179,495 | 847 | multiprocessing in dataset map "can only test a child process" | Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text'])
```
```
-------------------------... | closed | https://github.com/huggingface/datasets/issues/847 | 2020-11-13T06:01:04 | 2022-10-05T12:22:51 | 2022-10-05T12:22:51 | {
"login": "timothyjlaurent",
"id": 2000204,
"type": "User"
} | [] | false | [] |
741,885,174 | 846 | Add HoVer multi-hop fact verification dataset | ## Adding a Dataset
- **Name:** HoVer
- **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples
- **Paper:** https://arxiv.org/abs/2011.03088
- **Data:** https://hover-nlp.github.io/
- **Motivation:** There are still few multi-hop information extraction... | closed | https://github.com/huggingface/datasets/issues/846 | 2020-11-12T19:55:46 | 2020-12-10T21:47:33 | 2020-12-10T21:47:33 | {
"login": "yjernite",
"id": 10469459,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
741,841,350 | 845 | amazon description fields as bullets | One more minor formatting change to amazon reviews's description (in addition to #844). Just reformatting the fields to display as a bulleted list in markdown. | closed | https://github.com/huggingface/datasets/pull/845 | 2020-11-12T18:50:41 | 2020-11-12T18:50:54 | 2020-11-12T18:50:54 | {
"login": "joeddav",
"id": 9353833,
"type": "User"
} | [] | true | [] |
741,835,661 | 844 | add newlines to amazon desc | Just a quick formatting fix to hopefully make it render nicer on Viewer | closed | https://github.com/huggingface/datasets/pull/844 | 2020-11-12T18:41:20 | 2020-11-12T18:42:25 | 2020-11-12T18:42:21 | {
"login": "joeddav",
"id": 9353833,
"type": "User"
} | [] | true | [] |
741,531,121 | 843 | use_custom_baseline still produces errors for bertscore | `metric = load_metric('bertscore')`
`a1 = "random sentences"`
`b1 = "random sentences"`
`metric.compute(predictions = [a1], references = [b1], lang = 'en')`
`Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py"... | closed | https://github.com/huggingface/datasets/issues/843 | 2020-11-12T11:44:32 | 2024-05-28T16:30:17 | 2021-02-09T14:21:48 | {
"login": "penatbater",
"id": 37921244,
"type": "User"
} | [
{
"name": "metric bug",
"color": "25b21e"
}
] | false | [] |
741,208,428 | 842 | How to enable `.map()` pre-processing pipelines to support multi-node parallelism? | Hi,
Currently, multiprocessing can be enabled for the `.map()` stages on a single node. However, in the case of multi-node training, (since more than one node would be available) I'm wondering if it's possible to extend the parallel processing among nodes, instead of only 1 node running the `.map()` while the other ... | open | https://github.com/huggingface/datasets/issues/842 | 2020-11-12T02:04:38 | 2025-03-26T09:10:22 | null | {
"login": "shangw-nvidia",
"id": 66387198,
"type": "User"
} | [] | false | [] |
740,737,448 | 841 | Can not reuse datasets already downloaded | Hello,
I need to connect to a frontal node (with http proxy, no gpu) before connecting to a gpu node (but no http proxy, so I cannot use wget and so on).
I successfully downloaded and reused the wikipedia datasets on a frontal node.
When I connect to the gpu node, I am supposed to use the downloaded datasets from the cache, but...
"login": "jc-hou",
"id": 30210529,
"type": "User"
} | [] | false | [] |
740,632,771 | 840 | Update squad_v2.py | Change lines 100 and 102 to prevent overwriting ```predictions``` variable. | closed | https://github.com/huggingface/datasets/pull/840 | 2020-11-11T09:58:41 | 2020-11-11T15:29:34 | 2020-11-11T15:26:35 | {
"login": "Javier-Jimenez99",
"id": 38747614,
"type": "User"
} | [] | true | [] |
740,355,270 | 839 | XSum dataset missing spaces between sentences | I noticed that the XSum dataset has no space between sentences. This could lead to worse results for anyone training or testing on it. Here's an example (0th entry in the test set):
`The London trio are up for best UK act and best album, as well as getting two nominations in the best song category."We got told like ... | open | https://github.com/huggingface/datasets/issues/839 | 2020-11-11T00:34:43 | 2020-11-11T00:34:43 | null | {
"login": "loganlebanoff",
"id": 10007282,
"type": "User"
} | [] | false | [] |
740,328,382 | 838 | CNN/Dailymail Dataset Card | Link to the card page: https://github.com/mcmillanmajora/datasets/tree/cnn_dailymail_card/datasets/cnn_dailymail
One of the questions this dataset brings up is how we want to handle versioning of the cards to mirror versions of the dataset. The different versions of this dataset are used for different tasks (which may... | closed | https://github.com/huggingface/datasets/pull/838 | 2020-11-10T23:56:43 | 2020-11-25T21:09:51 | 2020-11-25T21:09:50 | {
"login": "mcmillanmajora",
"id": 26722925,
"type": "User"
} | [] | true | [] |
740,250,215 | 837 | AlloCiné dataset card | Link to the card page: https://github.com/mcmillanmajora/datasets/blob/allocine_card/datasets/allocine/README.md
There wasn't as much information available for this dataset, so I'm wondering what's the best way to address open questions about the dataset. For example, where did the list of films that the dataset creat... | closed | https://github.com/huggingface/datasets/pull/837 | 2020-11-10T21:19:53 | 2020-11-25T21:56:27 | 2020-11-25T21:56:27 | {
"login": "mcmillanmajora",
"id": 26722925,
"type": "User"
} | [] | true | [] |
740,187,613 | 836 | load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas | Hi All
I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly:
dataset = load_dataset('csv', data_files=files)
When I run it I get:
Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-... | closed | https://github.com/huggingface/datasets/issues/836 | 2020-11-10T19:35:40 | 2021-11-24T16:59:19 | 2020-11-19T17:35:38 | {
"login": "randubin",
"id": 8919490,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
740,102,210 | 835 | Wikipedia postprocessing | Hi, thanks for this library!
Running this code:
```py
import datasets
wikipedia = datasets.load_dataset("wikipedia", "20200501.de")
print(wikipedia['train']['text'][0])
```
I get:
```
mini|Ricardo Flores Magón
mini|Mexikanische Revolutionäre, Magón in der Mitte anführend, gegen die Diktatur von Porfir... | closed | https://github.com/huggingface/datasets/issues/835 | 2020-11-10T17:26:38 | 2020-11-10T18:23:20 | 2020-11-10T17:49:21 | {
"login": "bminixhofer",
"id": 13353204,
"type": "User"
} | [] | false | [] |
740,082,890 | 834 | [GEM] add WikiLingua cross-lingual abstractive summarization dataset | ## Adding a Dataset
- **Name:** WikiLingua
- **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images that are used to describe each how-to step in an article.
- **Paper:** h... | closed | https://github.com/huggingface/datasets/issues/834 | 2020-11-10T17:00:43 | 2021-04-15T12:04:09 | 2021-04-15T12:01:38 | {
"login": "yjernite",
"id": 10469459,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
740,079,692 | 833 | [GEM] add ASSET text simplification dataset | ## Adding a Dataset
- **Name:** ASSET
- **Description:** ASSET is a crowdsourced
multi-reference corpus for assessing sentence simplification in English where each simplification was produced by executing several rewriting transformations.
- **Paper:** https://www.aclweb.org/anthology/2020.acl-main.424.pdf
- **Dat... | closed | https://github.com/huggingface/datasets/issues/833 | 2020-11-10T16:56:30 | 2020-12-03T13:38:15 | 2020-12-03T13:38:15 | {
"login": "yjernite",
"id": 10469459,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
740,077,228 | 832 | [GEM] add WikiAuto text simplification dataset | ## Adding a Dataset
- **Name:** WikiAuto
- **Description:** Sentences in English Wikipedia and their corresponding sentences in Simple English Wikipedia that are written with simpler grammar and word choices. A lot of lexical and syntactic paraphrasing.
- **Paper:** https://www.aclweb.org/anthology/2020.acl-main.70... | closed | https://github.com/huggingface/datasets/issues/832 | 2020-11-10T16:53:23 | 2020-12-03T13:38:08 | 2020-12-03T13:38:08 | {
"login": "yjernite",
"id": 10469459,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
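The header row above lists the columns each record carries (issue metadata, the nested `user` dict, labels, and the `is_pull_request` flag). As a rough sketch of how a dump like this could be consumed with the `datasets` library — assuming the table has been exported locally as JSON Lines; the file name below is a placeholder — one might do:

```python
from datasets import load_dataset

# Hypothetical local export of the table above as JSON Lines (one record per line).
issues = load_dataset("json", data_files="github_issues.jsonl", split="train")

# Keep only closed pull requests, using the `state` and `is_pull_request` columns.
closed_prs = issues.filter(lambda row: row["state"] == "closed" and row["is_pull_request"])

# Each `user` cell is a dict with `login`, `id` and `type`, as in the rows above.
print(closed_prs[0]["number"], closed_prs[0]["title"], closed_prs[0]["user"]["login"])
```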