id (int64) | number (int64) | title (string) | body (string, nullable) | state (string, 2 classes) | html_url (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s], nullable) | user (dict) | labels (list, 0–4 items) | is_pull_request (bool) | comments (list)
|---|---|---|---|---|---|---|---|---|---|---|---|---|
803,557,521 | 1,838 | Add tedlium | ## Adding a Dataset
- **Name:** *tedlium*
- **Description:** *The TED-LIUM 1-3 corpus is English-language TED talks, with transcriptions, sampled at 16kHz. It contains about 118 hours of speech.*
- **Paper:** Homepage: http://www.openslr.org/7/, https://lium.univ-lemans.fr/en/ted-lium2/ &, https://www.openslr.org/51... | closed | https://github.com/huggingface/datasets/issues/1838 | 2021-02-08T13:17:52 | 2022-10-04T14:34:12 | 2022-10-04T14:34:12 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "speech",
"color": "d93f0b"
}
] | false | [] |
803,555,650 | 1,837 | Add VCTK | ## Adding a Dataset
- **Name:** *VCTK*
- **Description:** *This CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads out about 400 sentences, which were selected from a newspaper, the rainbow passage and an elicitation paragraph used for the speech accent arch... | closed | https://github.com/huggingface/datasets/issues/1837 | 2021-02-08T13:15:28 | 2021-12-28T15:05:08 | 2021-12-28T15:05:08 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "speech",
"color": "d93f0b"
}
] | false | [] |
803,531,837 | 1,836 | test.json has been removed from the limit dataset repo (breaks dataset) | https://github.com/huggingface/datasets/blob/16042b233dbff2a7585110134e969204c69322c3/datasets/limit/limit.py#L51
The URL is not valid anymore since test.json has been removed in master for some reason. Directly referencing the last commit works:
`https://raw.githubusercontent.com/ilmgut/limit_dataset/0707d3989cd... | closed | https://github.com/huggingface/datasets/issues/1836 | 2021-02-08T12:45:53 | 2021-02-10T16:14:58 | 2021-02-10T16:14:58 | {
"login": "Paethon",
"id": 237550,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
803,524,790 | 1,835 | Add CHiME4 dataset | ## Adding a Dataset
- **Name:** Chime4
- **Description:** Chime4 is a dataset for automatic speech recognition. It is especially useful for evaluating models in a noisy environment and for multi-channel ASR
- **Paper:** Dataset comes from a challenge: http://spandh.dcs.shef.ac.uk/chime_challenge/CHiME4/ . Results pape... | open | https://github.com/huggingface/datasets/issues/1835 | 2021-02-08T12:36:38 | 2025-01-26T16:18:59 | null | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "speech",
"color": "d93f0b"
}
] | false | [] |
803,517,094 | 1,834 | Fixes base_url of limit dataset | `test.json` is not available in the master branch of the repository anymore. Linking to a specific commit. | closed | https://github.com/huggingface/datasets/pull/1834 | 2021-02-08T12:26:35 | 2021-02-08T12:42:50 | 2021-02-08T12:42:50 | {
"login": "Paethon",
"id": 237550,
"type": "User"
} | [] | true | [] |
803,120,978 | 1,833 | Add OSCAR dataset card | I added more information and completed the dataset card for OSCAR which was started by @lhoestq in his previous [PR](https://github.com/huggingface/datasets/pull/1824). | closed | https://github.com/huggingface/datasets/pull/1833 | 2021-02-08T01:39:49 | 2021-02-12T14:09:25 | 2021-02-12T14:08:24 | {
"login": "pjox",
"id": 635220,
"type": "User"
} | [] | true | [] |
802,880,897 | 1,832 | Looks like nokogumbo is up-to-date now, so this is no longer needed. | Looks like nokogumbo is up-to-date now, so this is no longer needed.
__Originally posted by @dependabot in https://github.com/discourse/discourse/pull/11373#issuecomment-738993432__ | closed | https://github.com/huggingface/datasets/issues/1832 | 2021-02-07T06:52:07 | 2021-02-08T17:27:29 | 2021-02-08T17:27:29 | {
"login": "JimmyJim1",
"id": 68724553,
"type": "User"
} | [] | false | [] |
802,868,854 | 1,831 | Some question about raw dataset download info in the project . | Hi , i review the code in
https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py
in the _split_generators function is the truly logic of download raw datasets with dl_manager
and use Conll2003 cls by use import_main_class in load_dataset function
My question is that , with this logic i... | closed | https://github.com/huggingface/datasets/issues/1831 | 2021-02-07T05:33:36 | 2021-02-25T14:10:18 | 2021-02-25T14:10:18 | {
"login": "svjack",
"id": 27874014,
"type": "User"
} | [] | false | [] |
802,790,075 | 1,830 | using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? | This could totally relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer and saved it to disk (note I'm only showing snippets but I can share more), and the map function ran much slower:
````
def save_tokenizer(original_tokenizer,text,path="simpledata/tokenizer"):
words_u... | open | https://github.com/huggingface/datasets/issues/1830 | 2021-02-06T21:00:26 | 2021-02-24T21:56:14 | null | {
"login": "wumpusman",
"id": 7662740,
"type": "User"
} | [] | false | [] |
802,693,600 | 1,829 | Add Tweet Eval Dataset | Closes Draft PR #1407.
Notes:
1. I have excluded `mapping.txt` from the dataset as it only contained the name mappings, which are already present in the ClassLabels.
2. I have also excluded the textual names for the emojis mentioned in the [mapping](https://github.com/cardiffnlp/tweeteval/blob/main/datasets/emoji/... | closed | https://github.com/huggingface/datasets/pull/1829 | 2021-02-06T12:36:25 | 2021-02-08T13:17:54 | 2021-02-08T13:17:53 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | true | [] |
802,449,234 | 1,828 | Add CelebA Dataset | Trying to add CelebA Dataset.
Need help with testing. Loading examples takes a lot of time so I am unable to generate the `dataset_infos.json` and unable to test. Also, need help with creating `dummy_data.zip`.
Additionally, trying to load a few examples using `load_dataset('./datasets/celeb_a',split='train[10:20]... | closed | https://github.com/huggingface/datasets/pull/1828 | 2021-02-05T20:20:55 | 2021-02-18T14:17:07 | 2021-02-18T14:17:07 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | true | [] |
802,353,974 | 1,827 | Regarding On-the-fly Data Loading | Hi,
I was wondering if it is possible to load images/texts as a batch during the training process, without loading the entire dataset into RAM at any given point.
Thanks,
Gunjan | closed | https://github.com/huggingface/datasets/issues/1827 | 2021-02-05T17:43:48 | 2021-02-18T13:55:16 | 2021-02-18T13:55:16 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | false | [] |
802,074,744 | 1,826 | Print error message with filename when malformed CSV | Print error message specifying filename when malformed CSV file.
Close #1821 | closed | https://github.com/huggingface/datasets/pull/1826 | 2021-02-05T11:07:59 | 2021-02-09T17:39:27 | 2021-02-09T17:39:27 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
802,073,925 | 1,825 | Datasets library not suitable for huge text datasets. | Hi,
I'm trying to use the datasets library to load a 187GB dataset of pure text, with the intention of building a Language Model. The problem is that the 187GB grows to several TB when processed by Datasets. First of all, I think the pre-tokenizing step (with tokenizer.map()) is not really thought for datasets this ...
"login": "avacaondata",
"id": 35173563,
"type": "User"
} | [] | false | [] |
802,048,281 | 1,824 | Add OSCAR dataset card | I started adding the dataset card for OSCAR !
For now it's just basic info for all the different configurations in `Dataset Structure`.
In particular the Data Splits section tells how many samples there are for each config. The Data Instances section shows an example for each config, and it also shows the size in MB.... | closed | https://github.com/huggingface/datasets/pull/1824 | 2021-02-05T10:30:26 | 2021-05-05T18:24:14 | 2021-02-08T11:30:33 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
802,042,181 | 1,823 | Add FewRel Dataset | Hi,
This PR closes this [Card](https://github.com/huggingface/datasets/projects/1#card-53285184) and Issue #1757.
I wasn't sure how to add `pid2name` along with the dataset so I added it as a separate configuration. For each (head, tail, tokens) triplet, I have created one example. I have added the dictionary key... | closed | https://github.com/huggingface/datasets/pull/1823 | 2021-02-05T10:22:03 | 2021-03-01T11:56:20 | 2021-03-01T10:21:39 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | true | [] |
802,003,835 | 1,822 | Add Hindi Discourse Analysis Natural Language Inference Dataset | # Dataset Card for Hindi Discourse Analysis Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#dat... | closed | https://github.com/huggingface/datasets/pull/1822 | 2021-02-05T09:30:54 | 2021-02-15T09:57:39 | 2021-02-15T09:57:39 | {
"login": "avinsit123",
"id": 33565881,
"type": "User"
} | [] | true | [] |
801,747,647 | 1,821 | Provide better exception message when one of many files results in an exception | I find when I process many files, i.e.
```
train_files = glob.glob('train*.csv')
validation_files = glob.glob('validation*.csv')
datasets = load_dataset("csv", data_files=dict(train=train_files, validation=validation_files))
```
I sometimes encounter an error due to one of the files being misformed (i.e. no dat... | closed | https://github.com/huggingface/datasets/issues/1821 | 2021-02-05T00:49:03 | 2021-02-09T17:39:27 | 2021-02-09T17:39:27 | {
"login": "david-waterworth",
"id": 5028974,
"type": "User"
} | [] | false | [] |
801,529,936 | 1,820 | Add metrics usage examples and tests | All metrics finally have usage examples and proper fast + slow tests :)
I added examples of usage for every metric, and I use doctest to make sure they all work as expected.
For "slow" metrics such as bert_score or bleurt which require to download + run a transformer model, the download + forward pass are only do... | closed | https://github.com/huggingface/datasets/pull/1820 | 2021-02-04T18:23:50 | 2021-02-05T14:00:01 | 2021-02-05T14:00:00 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
801,448,670 | 1,819 | Fixed spelling `S3Fileystem` to `S3FileSystem` | Fixed documentation spelling errors.
Wrong `S3Fileystem`
Right `S3FileSystem` | closed | https://github.com/huggingface/datasets/pull/1819 | 2021-02-04T16:36:46 | 2021-02-04T16:52:27 | 2021-02-04T16:52:26 | {
"login": "philschmid",
"id": 32632186,
"type": "User"
} | [] | true | [] |
800,958,776 | 1,818 | Loading local dataset raise requests.exceptions.ConnectTimeout | Load local dataset:
```
dataset = load_dataset('json', data_files=["../../data/json.json"])
train = dataset["train"]
print(train.features)
train1 = train.map(lambda x: {"labels": 1})
print(train1[:2])
```
but it raised requests.exceptions.ConnectTimeout:
```
/Users/littlely/myvirtual/tf2/bin/python3.7 /Us... | closed | https://github.com/huggingface/datasets/issues/1818 | 2021-02-04T05:55:23 | 2022-06-01T15:38:42 | 2022-06-01T15:38:42 | {
"login": "Alxe1",
"id": 15032072,
"type": "User"
} | [] | false | [] |
800,870,652 | 1,817 | pyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 599 but got length 1500 | I am trying to preprocess any dataset in this package with GPT-2 tokenizer, so I need to structure the datasets as long sequences of text without padding. I've been following a couple of your tutorials and here you can find the script that is failing right at the end
https://github.com/LuCeHe/GenericTools/blob/maste... | closed | https://github.com/huggingface/datasets/issues/1817 | 2021-02-04T02:30:23 | 2022-10-05T12:42:57 | 2022-10-05T12:42:57 | {
"login": "LuCeHe",
"id": 9610770,
"type": "User"
} | [] | false | [] |
800,660,995 | 1,816 | Doc2dial rc update to latest version | closed | https://github.com/huggingface/datasets/pull/1816 | 2021-02-03T20:08:54 | 2021-02-15T15:15:24 | 2021-02-15T15:04:33 | {
"login": "songfeng",
"id": 2062185,
"type": "User"
} | [] | true | [] | |
800,610,017 | 1,815 | Add CCAligned Multilingual Dataset | Hello,
I'm trying to add [CCAligned Multilingual Dataset](http://www.statmt.org/cc-aligned/). This has the potential to close #1756.
This dataset has two types - Document-Pairs, and Sentence-Pairs.
The datasets are huge, so I won't be able to test all of them. At the same time, a user might only want to downlo... | closed | https://github.com/huggingface/datasets/pull/1815 | 2021-02-03T18:59:52 | 2021-03-01T12:33:03 | 2021-03-01T10:36:21 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | true | [] |
800,516,236 | 1,814 | Add Freebase QA Dataset | Closes PR #1435. Fixed issues with PR #1809.
Requesting @lhoestq to review. | closed | https://github.com/huggingface/datasets/pull/1814 | 2021-02-03T16:57:49 | 2021-02-04T19:47:51 | 2021-02-04T16:21:48 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | true | [] |
800,435,973 | 1,813 | Support future datasets | If a dataset is available at the version of the local installation of `datasets` (e.g. 1.2.0), then loading this dataset means loading the script at this version.
However when trying to load a dataset that is only available on master, currently users have to specify `script_version="master"` in `load_dataset` to mak... | closed | https://github.com/huggingface/datasets/pull/1813 | 2021-02-03T15:26:49 | 2021-02-05T10:33:48 | 2021-02-05T10:33:47 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
799,379,178 | 1,812 | Add CIFAR-100 Dataset | Adding CIFAR-100 Dataset. | closed | https://github.com/huggingface/datasets/pull/1812 | 2021-02-02T15:22:59 | 2021-02-08T11:10:18 | 2021-02-08T10:39:06 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | true | [] |
799,211,060 | 1,811 | Unable to add Multi-label Datasets | I am trying to add [CIFAR-100](https://www.cs.toronto.edu/~kriz/cifar.html) dataset. The dataset contains two labels per image - `fine label` and `coarse label`. Using just one label in supervised keys as
`supervised_keys=("img", "fine_label")` raises no issue. But trying `supervised_keys=("img", "fine_label","coarse... | closed | https://github.com/huggingface/datasets/issues/1811 | 2021-02-02T11:50:56 | 2021-02-18T14:16:31 | 2021-02-18T14:16:31 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | false | [] |
799,168,650 | 1,810 | Add Hateful Memes Dataset | ## Add Hateful Memes Dataset
- **Name:** Hateful Memes
- **Description:** [https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set]( https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set)
- **Paper:** [https://arxiv.org/pdf/2005.04790.pdf](https://arxiv.org/pdf/2005.04790.pdf)
- **Data:** [Thi... | open | https://github.com/huggingface/datasets/issues/1810 | 2021-02-02T10:53:59 | 2021-12-08T12:03:59 | null | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "vision",
"color": "bfdadc"
}
] | false | [] |
799,059,141 | 1,809 | Add FreebaseQA dataset | Adding FreebaseQA dataset suggested in PR #1435 with minor edits. Also closes that PR.
Requesting @lhoestq to review. | closed | https://github.com/huggingface/datasets/pull/1809 | 2021-02-02T08:35:53 | 2021-02-03T17:15:05 | 2021-02-03T16:43:06 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | true | [] |
798,879,180 | 1,808 | writing Datasets in a human readable format | Hi
I see there is a save_to_disk function to save data, but this is not a human-readable format. Is there a way I could save a Dataset object in a human-readable format to a file like JSON? thanks @lhoestq | closed | https://github.com/huggingface/datasets/issues/1808 | 2021-02-02T02:55:40 | 2022-06-01T15:38:13 | 2022-06-01T15:38:13 | {
"login": "ghost",
"id": 10137,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "question",
"color": "d876e3"
}
] | false | [] |
798,823,591 | 1,807 | Adding an aggregated dataset for the GEM benchmark | This dataset gathers modified versions of several other conditional text generation datasets which together make up the shared task for the Generation Evaluation and Metrics workshop (think GLUE for text generation)
The changes from the original datasets are detailed in the Dataset Cards on the GEM website, which ar... | closed | https://github.com/huggingface/datasets/pull/1807 | 2021-02-02T00:39:53 | 2021-02-02T22:48:41 | 2021-02-02T18:06:58 | {
"login": "yjernite",
"id": 10469459,
"type": "User"
} | [] | true | [] |
798,607,869 | 1,806 | Update details to MLSUM dataset | Update details to MLSUM dataset | closed | https://github.com/huggingface/datasets/pull/1806 | 2021-02-01T18:35:12 | 2021-02-01T18:46:28 | 2021-02-01T18:46:21 | {
"login": "padipadou",
"id": 15138872,
"type": "User"
} | [] | true | [] |
798,498,053 | 1,805 | can't pickle SwigPyObject objects when calling dataset.get_nearest_examples from FAISS index | So, I have the following instances in my dataset
```
{'question': 'An astronomer observes that a planet rotates faster after a meteorite impact. Which is the most likely effect of
this increase in rotation?',
'answer': 'C',
'example_id': 'ARCCH_Mercury_7175875',
'options':[{'option_context': 'One effect of ... | closed | https://github.com/huggingface/datasets/issues/1805 | 2021-02-01T16:14:17 | 2021-03-06T14:32:46 | 2021-03-06T14:32:46 | {
"login": "abarbosa94",
"id": 6608232,
"type": "User"
} | [] | false | [] |
798,483,881 | 1,804 | Add SICK dataset | Adds the SICK dataset (http://marcobaroni.org/composes/sick.html).
Closes #1772.
Edit: also closes #1632, which is the original issue requesting the dataset. The newer one is a duplicate. | closed | https://github.com/huggingface/datasets/pull/1804 | 2021-02-01T15:57:44 | 2021-02-05T17:46:28 | 2021-02-05T15:49:25 | {
"login": "calpt",
"id": 36051308,
"type": "User"
} | [] | true | [] |
798,243,904 | 1,803 | Querying examples from big datasets is slower than small datasets | After some experiments with bookcorpus I noticed that querying examples from big datasets is slower than from small datasets.
For example
```python
from datasets import load_dataset
b1 = load_dataset("bookcorpus", split="train[:1%]")
b50 = load_dataset("bookcorpus", split="train[:50%]")
b100 = load_dataset("bookcorp... | closed | https://github.com/huggingface/datasets/issues/1803 | 2021-02-01T11:08:23 | 2021-08-04T18:11:01 | 2021-08-04T18:10:42 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | false | [] |
797,924,468 | 1,802 | add github of contributors | This PR will add contributors GitHub id at the end of every dataset cards. | closed | https://github.com/huggingface/datasets/pull/1802 | 2021-02-01T03:49:19 | 2021-02-03T10:09:52 | 2021-02-03T10:06:30 | {
"login": "thevasudevgupta",
"id": 53136577,
"type": "User"
} | [] | true | [] |
797,814,275 | 1,801 | [GEM] Updated the source link of the data to update correct tokenized version. | closed | https://github.com/huggingface/datasets/pull/1801 | 2021-01-31T21:17:19 | 2021-02-02T13:17:38 | 2021-02-02T13:17:28 | {
"login": "mounicam",
"id": 11708999,
"type": "User"
} | [] | true | [] | |
797,798,689 | 1,800 | Add DuoRC Dataset | Hi,
DuoRC SelfRC is one type of the [DuoRC Dataset](https://duorc.github.io/). DuoRC SelfRC is a crowdsourced Abstractive/Extractive Question-Answering dataset based on Wikipedia movie plots. It contains examples that may have answers in the movie plot, synthesized answers which are not present in the movie plot, or... | closed | https://github.com/huggingface/datasets/pull/1800 | 2021-01-31T20:01:59 | 2021-02-03T05:01:45 | 2021-02-02T22:49:26 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | true | [] |
797,789,439 | 1,799 | Update: SWDA - Fixed code to use all metadata features. Added comments and cleaned c… | This is a dataset I currently use my research and I realized some features are not being returned.
Previous code was not using all available metadata and was kind of messy
I fixed code to use all metadata and made some modification to be more efficient and better formatted.
Please let me know if I need to ma... | closed | https://github.com/huggingface/datasets/pull/1799 | 2021-01-31T19:18:55 | 2021-02-09T22:06:13 | 2021-02-09T15:49:58 | {
"login": "gmihaila",
"id": 22454783,
"type": "User"
} | [] | true | [] |
797,766,818 | 1,798 | Add Arabic sarcasm dataset | This MIT-licensed dataset: https://github.com/iabufarha/ArSarcasm
Via https://sites.google.com/view/ar-sarcasm-sentiment-detection/ | closed | https://github.com/huggingface/datasets/pull/1798 | 2021-01-31T17:38:55 | 2021-02-10T20:39:13 | 2021-02-03T10:35:54 | {
"login": "mapmeld",
"id": 643918,
"type": "User"
} | [] | true | [] |
797,357,901 | 1,797 | Connection error | Hi
I am hitting this error, please help me, thanks.
`train_data = datasets.load_dataset("xsum", split="train")`
`ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/xsum/xsum.py` | closed | https://github.com/huggingface/datasets/issues/1797 | 2021-01-30T07:32:45 | 2021-08-04T18:09:37 | 2021-08-04T18:09:37 | {
"login": "smile0925",
"id": 46243662,
"type": "User"
} | [] | false | [] |
797,329,905 | 1,796 | Filter on dataset too much slowww | I have a dataset with 50M rows.
For pre-processing, I need to tokenize this and filter out rows with overly long sequences.
My tokenization took roughly 12 mins. I used `map()` with batch size 1024 and multiprocessing with 96 processes.
When I applied the `filter()` function it took too much time. I need to filter se... | open | https://github.com/huggingface/datasets/issues/1796 | 2021-01-30T04:09:19 | 2025-05-15T13:19:55 | null | {
"login": "ayubSubhaniya",
"id": 20911334,
"type": "User"
} | [] | false | [] |
797,021,730 | 1,795 | Custom formatting for lazy map + arrow data extraction refactor | Hi !
This PR refactors the way data are extracted from pyarrow tables to extend it to the use of custom formatting functions.
While the internal storage of the dataset is always the Apache Arrow format, by setting a specific format on a dataset, you can cast the output of `datasets.Dataset.__getitem__` in NumPy/p... | closed | https://github.com/huggingface/datasets/pull/1795 | 2021-01-29T16:35:53 | 2022-07-30T09:50:11 | 2021-02-05T09:54:06 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
796,975,588 | 1,794 | Move silicone directory | The dataset was added in #1761 but not in the right directory. I'm moving it to /datasets | closed | https://github.com/huggingface/datasets/pull/1794 | 2021-01-29T15:33:15 | 2021-01-29T16:31:39 | 2021-01-29T16:31:38 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
796,940,299 | 1,793 | Minor fix the docstring of load_metric | Minor fix:
- duplicated attributes
- format fix | closed | https://github.com/huggingface/datasets/pull/1793 | 2021-01-29T14:47:35 | 2021-01-29T16:53:32 | 2021-01-29T16:53:32 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
796,934,627 | 1,792 | Allow loading dataset in-memory | Allow loading datasets either from:
- memory-mapped file (current implementation)
- from file descriptor, copying data to physical memory
Close #708 | closed | https://github.com/huggingface/datasets/pull/1792 | 2021-01-29T14:39:50 | 2021-02-12T14:13:28 | 2021-02-12T14:13:28 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
796,924,519 | 1,791 | Small fix with corrected logging of train vectors | Now you can set `train_size` to the whole dataset size via `train_size = -1`, and the logging writes not `Training the index with the first -1 vectors` but (for example) `Training the index with the first 16123 vectors`, even when `train_size` exceeds the dataset length. The logging will be correct | closed | https://github.com/huggingface/datasets/pull/1791 | 2021-01-29T14:26:06 | 2021-01-29T18:51:10 | 2021-01-29T17:05:07 | {
"login": "TezRomacH",
"id": 7549587,
"type": "User"
} | [] | true | [] |
796,678,157 | 1,790 | ModuleNotFoundError: No module named 'apache_beam', when specific languages. | ```py
import datasets
wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets')
```
then `ModuleNotFoundError: No module named 'apache_beam'` happend.
The error doesn't appear when it's '20200501.en'.
I don't know Apache Beam, but according to #498 it isn't necessary when it's saved to lo... | open | https://github.com/huggingface/datasets/issues/1790 | 2021-01-29T08:17:24 | 2021-03-25T12:10:51 | null | {
"login": "miyamonz",
"id": 6331508,
"type": "User"
} | [] | false | [] |
796,229,721 | 1,789 | [BUG FIX] typo in the import path for metrics | This tiny PR fixes a typo introduced in https://github.com/huggingface/datasets/pull/1726 which prevents loading new metrics | closed | https://github.com/huggingface/datasets/pull/1789 | 2021-01-28T18:01:37 | 2021-01-28T18:13:56 | 2021-01-28T18:13:56 | {
"login": "yjernite",
"id": 10469459,
"type": "User"
} | [] | true | [] |
795,544,422 | 1,788 | Doc2dial rc | closed | https://github.com/huggingface/datasets/pull/1788 | 2021-01-27T23:51:00 | 2021-01-28T18:46:13 | 2021-01-28T18:46:13 | {
"login": "songfeng",
"id": 2062185,
"type": "User"
} | [] | true | [] | |
795,485,842 | 1,787 | Update the CommonGen citation information | closed | https://github.com/huggingface/datasets/pull/1787 | 2021-01-27T22:12:47 | 2021-01-28T13:56:29 | 2021-01-28T13:56:29 | {
"login": "yuchenlin",
"id": 10104354,
"type": "User"
} | [] | true | [] | |
795,462,816 | 1,786 | How to use split dataset | 
Hey,
I want to split the lambada dataset into corpus, test, train and valid txt files (like penn treebank) but I am not able to achieve this. What I am doing is, executing the lambada.py file in my pro... | closed | https://github.com/huggingface/datasets/issues/1786 | 2021-01-27T21:37:47 | 2021-04-23T15:17:39 | 2021-04-23T15:17:39 | {
"login": "kkhan188",
"id": 78090287,
"type": "User"
} | [
{
"name": "question",
"color": "d876e3"
}
] | false | [] |
795,458,856 | 1,785 | Not enough disk space (Needed: Unknown size) when caching on a cluster | I'm running some experiments where I'm caching datasets on a cluster and accessing it through multiple compute nodes. However, I get an error when loading the cached dataset from the shared disk.
The exact error thrown:
```bash
>>> load_dataset(dataset, cache_dir="/path/to/cluster/shared/path")
OSError: Not eno... | closed | https://github.com/huggingface/datasets/issues/1785 | 2021-01-27T21:30:59 | 2024-12-04T02:57:00 | 2021-01-30T01:07:56 | {
"login": "olinguyen",
"id": 4341867,
"type": "User"
} | [] | false | [] |
794,659,174 | 1,784 | JSONDecodeError on JSON with multiple lines | Hello :),
I have been trying to load data using a JSON file. Based on the [docs](https://huggingface.co/docs/datasets/loading_datasets.html#json-files), the following format is supported:
```json
{"key1":11, "key2":12, "key3":13}
{"key1":21, "key2":22, "key3":23}
```
But, when I try loading a dataset with th... | closed | https://github.com/huggingface/datasets/issues/1784 | 2021-01-27T00:19:22 | 2021-01-31T08:47:18 | 2021-01-31T08:47:18 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | false | [] |
794,544,495 | 1,783 | Dataset Examples Explorer | In the Older version of the Dataset, there are a useful Dataset Explorer that allow user to visualize the examples (training, test and validation) of a particular dataset, it is no longer there in current version.
Hope HuggingFace can re-enable the feature that at least allow viewing of the first 20 examples of a ... | closed | https://github.com/huggingface/datasets/issues/1783 | 2021-01-26T20:39:02 | 2021-02-01T13:58:44 | 2021-02-01T13:58:44 | {
"login": "ChewKokWah",
"id": 30875246,
"type": "User"
} | [] | false | [] |
794,167,920 | 1,782 | Update pyarrow import warning | Update the minimum version to >=0.17.1 in the pyarrow version check and update the message.
I also moved the check to the top of the __init__.py | closed | https://github.com/huggingface/datasets/pull/1782 | 2021-01-26T11:47:11 | 2021-01-26T13:50:50 | 2021-01-26T13:50:49 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
793,914,556 | 1,781 | AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' during import | I'm using Colab. And suddenly this morning, there is this error. Have a look below!

| closed | https://github.com/huggingface/datasets/issues/1781 | 2021-01-26T04:18:35 | 2024-07-07T17:55:12 | 2022-10-05T12:37:06 | {
"login": "PalaashAgrawal",
"id": 45964869,
"type": "User"
} | [] | false | [] |
793,882,132 | 1,780 | Update SciFact URL | Hi,
I'm following up this [issue](https://github.com/huggingface/datasets/issues/1717). I'm the SciFact dataset creator, and I'm trying to update the SciFact data url in your repo. Thanks again for adding the dataset!
Basically, I'd just like to change the `_URL` to `"https://scifact.s3-us-west-2.amazonaws.com/re... | closed | https://github.com/huggingface/datasets/pull/1780 | 2021-01-26T02:49:06 | 2021-01-28T18:48:00 | 2021-01-28T10:19:45 | {
"login": "dwadden",
"id": 3091916,
"type": "User"
} | [] | true | [] |
793,539,703 | 1,779 | Ignore definition line number of functions for caching | As noticed in #1718 , when a function used for processing with `map` is moved inside its python file, then the change of line number causes the caching mechanism to consider it as a different function. Therefore in this case, it recomputes everything.
This is because we were not ignoring the line number definition f... | closed | https://github.com/huggingface/datasets/pull/1779 | 2021-01-25T16:42:29 | 2021-01-26T10:20:20 | 2021-01-26T10:20:19 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
793,474,507 | 1,778 | Narrative QA Manual | Submitting the manual version of Narrative QA script which requires a manual download from the original repository | closed | https://github.com/huggingface/datasets/pull/1778 | 2021-01-25T15:22:31 | 2021-01-29T09:35:14 | 2021-01-29T09:34:51 | {
"login": "rsanjaykamath",
"id": 18527321,
"type": "User"
} | [] | true | [] |
793,273,770 | 1,777 | GPT2 MNLI training using run_glue.py | Edit: I'm closing this because I actually meant to post this in `transformers `not `datasets`
Running this on Google Colab,
```
!python run_glue.py \
--model_name_or_path gpt2 \
--task_name mnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_gpu_train_batch_size 10 \
--gradient_accu... | closed | https://github.com/huggingface/datasets/issues/1777 | 2021-01-25T10:53:52 | 2021-01-25T11:12:53 | 2021-01-25T11:12:53 | {
"login": "nlp-student",
"id": 76427077,
"type": "User"
} | [] | false | [] |
792,755,249 | 1,776 | [Question & Bug Report] Can we preprocess a dataset on the fly? | I know we can use `Datasets.map` to preprocess a dataset, but I'm using it with a very large corpus, which generates a huge cache file (several TB of cache from a 400 GB text file). I have no disk large enough to save it. Can we preprocess a dataset on the fly without generating a cache?
BTW, I tried raising `writer_batch_si... | closed | https://github.com/huggingface/datasets/issues/1776 | 2021-01-24T09:28:24 | 2021-05-20T04:15:58 | 2021-05-20T04:15:58 | {
"login": "shuaihuaiyi",
"id": 14048129,
"type": "User"
} | [] | false | [] |
792,742,120 | 1,775 | Efficient ways to iterate the dataset | For a large dataset that does not fits the memory, how can I select only a subset of features from each example?
If I iterate over the dataset and then select the subset of features one by one, the resulted memory usage will be huge. Any ways to solve this?
Thanks | closed | https://github.com/huggingface/datasets/issues/1775 | 2021-01-24T07:54:31 | 2021-01-24T09:50:39 | 2021-01-24T09:50:39 | {
"login": "zhongpeixiang",
"id": 11826803,
"type": "User"
} | [] | false | [] |
792,730,559 | 1,774 | is it possible to make slice to be more compatible like python list and numpy? | Hi,
see below error:
```
AssertionError: Requested slice [:10000000000000000] incompatible with 20 examples.
``` | closed | https://github.com/huggingface/datasets/issues/1774 | 2021-01-24T06:15:52 | 2024-01-31T15:54:18 | 2024-01-31T15:54:18 | {
"login": "world2vec",
"id": 7607120,
"type": "User"
} | [] | false | [] |
792,708,160 | 1,773 | bug in loading datasets | Hi,
I need to load a dataset, I use these commands:
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files={'train': 'sick/train.csv',
'test': 'sick/test.csv',
'validation': 'sick/validation.csv'})
prin... | closed | https://github.com/huggingface/datasets/issues/1773 | 2021-01-24T02:53:45 | 2021-09-06T08:54:46 | 2021-08-04T18:13:01 | {
"login": "ghost",
"id": 10137,
"type": "User"
} | [] | false | [] |
792,703,797 | 1,772 | Adding SICK dataset | Hi
It would be great to include the SICK dataset.
## Adding a Dataset
- **Name:** SICK
- **Description:** a well known entailment dataset
- **Paper:** http://marcobaroni.org/composes/sick.html
- **Data:** http://marcobaroni.org/composes/sick.html
- **Motivation:** this is an important NLI benchmark
Instruction... | closed | https://github.com/huggingface/datasets/issues/1772 | 2021-01-24T02:15:31 | 2021-02-05T15:49:25 | 2021-02-05T15:49:25 | {
"login": "ghost",
"id": 10137,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
792,701,276 | 1,771 | Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.1/datasets/csv/csv.py | Hi,
When I load_dataset from local csv files, the error below happened; it looks like raw.githubusercontent.com was blocked by the Chinese government. But why does it need to download csv.py? Shouldn't it be included when pip installing `datasets`?
```
Traceback (most recent call last):
File "/home/tom/pyenv/pystory/lib/python3.6/site-p... | closed | https://github.com/huggingface/datasets/issues/1771 | 2021-01-24T01:53:52 | 2021-01-24T23:06:29 | 2021-01-24T23:06:29 | {
"login": "world2vec",
"id": 7607120,
"type": "User"
} | [] | false | [] |
792,698,148 | 1,770 | how can I combine 2 dataset with different/same features? | to combine 2 datasets by a one-to-one map like ds = zip(ds1, ds2):
ds1: {'text'}, ds2: {'text'}, combined ds: {'src', 'tgt'}
or with different features:
ds1: {'src'}, ds2: {'tgt'}, combined ds: {'src', 'tgt'} | closed | https://github.com/huggingface/datasets/issues/1770 | 2021-01-24T01:26:06 | 2022-06-01T15:43:15 | 2022-06-01T15:43:15 | {
"login": "world2vec",
"id": 7607120,
"type": "User"
} | [] | false | [] |
792,523,284 | 1,769 | _pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union when calling datasets.map with num_proc=2 | It may be a bug in multiprocessing with Datasets; when I disable multiprocessing by setting num_proc to None, everything works fine.
The script I use is https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm.py
Script args:
```
--model_name_or_path
../../../model/chine... | closed | https://github.com/huggingface/datasets/issues/1769 | 2021-01-23T10:13:00 | 2022-10-05T12:38:51 | 2022-10-05T12:38:51 | {
"login": "shuaihuaiyi",
"id": 14048129,
"type": "User"
} | [] | false | [] |
792,150,745 | 1,768 | Mention kwargs in the Dataset Formatting docs | Hi,
This was discussed in Issue #1762 where the docs didn't mention that keyword arguments to `datasets.Dataset.set_format()` are allowed.
To prevent people from having to check the code/method docs, I just added a couple of lines in the docs.
Please let me know your thoughts on this.
Thanks,
Gunjan
@lho... | closed | https://github.com/huggingface/datasets/pull/1768 | 2021-01-22T16:43:20 | 2021-01-31T12:33:10 | 2021-01-25T09:14:59 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | true | [] |
792,068,497 | 1,767 | Add Librispeech ASR | This PR adds the librispeech asr dataset: https://www.tensorflow.org/datasets/catalog/librispeech
There are 2 configs: "clean" and "other" whereas there are two "train" datasets for "clean", hence the name "train.100" and "train.360".
As suggested by @lhoestq, due to the enormous size of the dataset in `.arrow` f... | closed | https://github.com/huggingface/datasets/pull/1767 | 2021-01-22T14:54:37 | 2021-01-25T20:38:07 | 2021-01-25T20:37:42 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
792,044,105 | 1,766 | Issues when run two programs compute the same metrics | I got the following error when running two different programs that both compute sacrebleu metrics. It seems that both read and write to the same location (.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow) where it caches the batches:
```
File "train_matching_min.py", line 160, in <module>ch... | closed | https://github.com/huggingface/datasets/issues/1766 | 2021-01-22T14:22:55 | 2021-02-02T10:38:06 | 2021-02-02T10:38:06 | {
"login": "lamthuy",
"id": 8089862,
"type": "User"
} | [] | false | [] |
791,553,065 | 1,765 | Error iterating over Dataset with DataLoader | I have a Dataset that I've mapped a tokenizer over:
```
encoded_dataset.set_format(type='torch',columns=['attention_mask','input_ids','token_type_ids'])
encoded_dataset[:1]
```
```
{'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]),
'input_ids': tensor([[ 101, 178, 1198, 1400, 1714, 22233, 2... | closed | https://github.com/huggingface/datasets/issues/1765 | 2021-01-21T22:56:45 | 2022-10-28T02:16:38 | 2021-01-23T03:44:14 | {
"login": "EvanZ",
"id": 1295082,
"type": "User"
} | [] | false | [] |
791,486,860 | 1,764 | Connection Issues | Today, I am getting connection issues while loading a dataset and the metric.
```
Traceback (most recent call last):
File "src/train.py", line 180, in <module>
train_dataset, dev_dataset, test_dataset = create_race_dataset()
File "src/train.py", line 130, in create_race_dataset
train_dataset = load_da... | closed | https://github.com/huggingface/datasets/issues/1764 | 2021-01-21T20:56:09 | 2021-01-21T21:00:19 | 2021-01-21T21:00:02 | {
"login": "SaeedNajafi",
"id": 12455298,
"type": "User"
} | [] | false | [] |
791,389,763 | 1,763 | PAWS-X: Fix csv Dictreader splitting data on quotes |
```python
from datasets import load_dataset
# load english paws-x dataset
datasets = load_dataset('paws-x', 'en')
print(len(datasets['train'])) # outputs 49202 but official dataset has 49401 pairs
print(datasets['train'].unique('label')) # outputs [1, 0, -1] but labels are binary [0,1]
... | closed | https://github.com/huggingface/datasets/pull/1763 | 2021-01-21T18:21:01 | 2021-01-22T10:14:33 | 2021-01-22T10:13:45 | {
"login": "gowtham1997",
"id": 9641196,
"type": "User"
} | [] | true | [] |
791,226,007 | 1,762 | Unable to format dataset to CUDA Tensors | Hi,
I came across this [link](https://huggingface.co/docs/datasets/torch_tensorflow.html) where the docs show how to convert a dataset to a particular format. I see that there is an option to convert it to tensors, but I don't see any option to convert it to CUDA tensors.
I tried this, but Dataset doesn't suppor... | closed | https://github.com/huggingface/datasets/issues/1762 | 2021-01-21T15:31:23 | 2021-02-02T07:13:22 | 2021-02-02T07:13:22 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | false | [] |
791,150,858 | 1,761 | Add SILICONE benchmark | My collaborators and I within the Affective Computing team at Telecom Paris would like to re-submit our spoken dialogue dataset for publication.
This is a new pull request relative to the [previously closed request](https://github.com/huggingface/datasets/pull/1712) which was reviewed by @lhoestq.
| closed | https://github.com/huggingface/datasets/pull/1761 | 2021-01-21T14:29:12 | 2021-02-04T14:32:48 | 2021-01-26T13:50:31 | {
"login": "eusip",
"id": 1551356,
"type": "User"
} | [] | true | [] |
791,110,857 | 1,760 | More tags | Since the hub v2 is going to be released soon I figured it would be great to add the missing tags at least for some of the datasets of reference listed [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#write-the-loadingprocessing-code) | closed | https://github.com/huggingface/datasets/pull/1760 | 2021-01-21T13:50:10 | 2021-01-22T09:40:01 | 2021-01-22T09:40:00 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
790,992,226 | 1,759 | wikipedia dataset incomplete | Hey guys,
I am using the https://github.com/huggingface/datasets/tree/master/datasets/wikipedia dataset.
Unfortunately, I found out that the German dataset is incomplete.
For reasons unknown to me, the number of inhabitants has been removed from many pages:
Thorey-sur-Ouche has 128 inhabitants a... | closed | https://github.com/huggingface/datasets/issues/1759 | 2021-01-21T11:47:15 | 2021-01-21T17:22:11 | 2021-01-21T17:21:06 | {
"login": "ChrisDelClea",
"id": 19912393,
"type": "User"
} | [] | false | [] |
790,626,116 | 1,758 | dataset.search() (elastic) cannot reliably retrieve search results | I am trying to use elastic search to retrieve the indices of items in the dataset in their precise order, given shuffled training indices.
The problem I have is that I cannot retrieve reliable results with my data on my first search. I have to run the search **twice** to get the right answer.
I am indexing data t... | closed | https://github.com/huggingface/datasets/issues/1758 | 2021-01-21T02:26:37 | 2021-01-22T00:25:50 | 2021-01-22T00:25:50 | {
"login": "afogarty85",
"id": 49048309,
"type": "User"
} | [] | false | [] |
790,466,509 | 1,757 | FewRel | ## Adding a Dataset
- **Name:** FewRel
- **Description:** Large-Scale Supervised Few-Shot Relation Classification Dataset
- **Paper:** @inproceedings{han2018fewrel,
title={FewRel:A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation},
auth... | closed | https://github.com/huggingface/datasets/issues/1757 | 2021-01-20T23:56:03 | 2021-03-09T02:52:05 | 2021-03-08T14:34:52 | {
"login": "dspoka",
"id": 6183050,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
790,380,028 | 1,756 | Ccaligned multilingual translation dataset | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- CCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language ... | closed | https://github.com/huggingface/datasets/issues/1756 | 2021-01-20T22:18:44 | 2021-03-01T10:36:21 | 2021-03-01T10:36:21 | {
"login": "flozi00",
"id": 47894090,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
790,324,734 | 1,755 | Using select/reordering datasets slows operations down immensely | I am using portions of HF's helpful work in preparing / scoring the SQuAD 2.0 data. The problem I have is that after using `select` to re-order the dataset, computations slow down immensely: the total scoring process on 131k training examples, which would take maybe 3 minutes, now takes over an hour.
The below examp... | closed | https://github.com/huggingface/datasets/issues/1755 | 2021-01-20T21:12:12 | 2021-01-20T22:03:39 | 2021-01-20T22:03:39 | {
"login": "afogarty85",
"id": 49048309,
"type": "User"
} | [] | false | [] |
789,881,730 | 1,754 | Use a config id in the cache directory names for custom configs | As noticed by @JetRunner there were some issues when trying to generate a dataset using a custom config that is based on an existing config.
For example in the following code the `mnli_custom` would reuse the cache used to create `mnli` instead of generating a new dataset with the new label classes:
```python
from ... | closed | https://github.com/huggingface/datasets/pull/1754 | 2021-01-20T11:11:00 | 2021-01-25T09:12:07 | 2021-01-25T09:12:06 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
789,867,685 | 1,753 | fix comet citations | I realized COMET citations were not showing on the Hugging Face metrics page:
<img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105164848-8b9da900-5b0d-11eb-9e20-a38f559d2037.png">
This pull request is intended to fix that.
Thanks! | closed | https://github.com/huggingface/datasets/pull/1753 | 2021-01-20T10:52:38 | 2021-01-20T14:39:30 | 2021-01-20T14:39:30 | {
"login": "ricardorei",
"id": 17256847,
"type": "User"
} | [] | true | [] |
789,822,459 | 1,752 | COMET metric citation | In my last pull request to add the COMET metric, the citations were not following the usual "format". Because of that they were not correctly displayed on the website:
<img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105158000-686efb80-5b05-11eb-8bb0-9c8... | closed | https://github.com/huggingface/datasets/pull/1752 | 2021-01-20T09:54:43 | 2021-01-20T10:27:07 | 2021-01-20T10:25:02 | {
"login": "ricardorei",
"id": 17256847,
"type": "User"
} | [] | true | [] |
789,232,980 | 1,751 | Updated README for the Social Bias Frames dataset | See the updated card at https://github.com/mcmillanmajora/datasets/tree/add-SBIC-card/datasets/social_bias_frames. I incorporated information from the [SBIC data statement](https://homes.cs.washington.edu/~msap/social-bias-frames/DATASTATEMENT.html), paper, and the corpus README file included with the dataset download. | closed | https://github.com/huggingface/datasets/pull/1751 | 2021-01-19T17:53:00 | 2021-01-20T14:56:52 | 2021-01-20T14:56:52 | {
"login": "mcmillanmajora",
"id": 26722925,
"type": "User"
} | [] | true | [] |
788,668,085 | 1,750 | Fix typo in README.md of cnn_dailymail | When I read the README.md of `CNN/DailyMail Dataset`, there seems to be a typo `CCN`.
I am afraid this is a trivial matter, but I would like to make a suggestion for revision. | closed | https://github.com/huggingface/datasets/pull/1750 | 2021-01-19T03:06:05 | 2021-01-19T11:07:29 | 2021-01-19T09:48:43 | {
"login": "forest1988",
"id": 2755894,
"type": "User"
} | [] | true | [] |
788,476,639 | 1,749 | Added metadata and correct splits for swda. | Switchboard Dialog Act Corpus
I made some changes following @bhavitvyamalik recommendation in #1678:
* Contains all metadata.
* Used official implementation from the [/swda](https://github.com/cgpotts/swda) repo.
* Add official train and test splits used in [Stolcke et al. (2000)](https://web.stanford.edu/~jur... | closed | https://github.com/huggingface/datasets/pull/1749 | 2021-01-18T18:36:32 | 2021-01-29T19:35:52 | 2021-01-29T18:38:08 | {
"login": "gmihaila",
"id": 22454783,
"type": "User"
} | [] | true | [] |
788,431,642 | 1,748 | add Stuctured Argument Extraction for Korean dataset | closed | https://github.com/huggingface/datasets/pull/1748 | 2021-01-18T17:14:19 | 2021-09-17T16:53:18 | 2021-01-19T11:26:58 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [] | true | [] | |
788,299,775 | 1,747 | datasets slicing with seed | Hi
I need to slice a dataset with a random seed; I looked into the documentation here: https://huggingface.co/docs/datasets/splits.html
I could not find a seed option. Could you please tell me how I can get a slice for different seeds?
thank you.
@lhoestq | closed | https://github.com/huggingface/datasets/issues/1747 | 2021-01-18T14:08:55 | 2022-10-05T12:37:27 | 2022-10-05T12:37:27 | {
"login": "ghost",
"id": 10137,
"type": "User"
} | [] | false | [] |
788,188,184 | 1,746 | Fix release conda worflow | The current workflow yaml file is not valid according to https://github.com/huggingface/datasets/actions/runs/487638110 | closed | https://github.com/huggingface/datasets/pull/1746 | 2021-01-18T11:29:10 | 2021-01-18T11:31:24 | 2021-01-18T11:31:23 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
787,838,256 | 1,745 | difference between wsc and wsc.fixed for superglue | Hi
I see two versions of wsc in superglue, and I am not sure what the differences are and which one is the original. Could you help clarify the differences? thanks @lhoestq | closed | https://github.com/huggingface/datasets/issues/1745 | 2021-01-18T00:50:19 | 2021-01-18T11:02:43 | 2021-01-18T00:59:34 | {
"login": "ghost",
"id": 10137,
"type": "User"
} | [] | false | [] |
787,649,811 | 1,744 | Add missing "brief" entries to reuters | This brings the number of examples for ModApte to match the stated `Training set (9,603 docs)...Test Set (3,299 docs)` | closed | https://github.com/huggingface/datasets/pull/1744 | 2021-01-17T07:58:49 | 2021-01-18T11:26:09 | 2021-01-18T11:26:09 | {
"login": "jbragg",
"id": 2238344,
"type": "User"
} | [] | true | [] |
787,631,412 | 1,743 | Issue while Creating Custom Metric | Hi Team,
I am trying to create a custom metric for my training as follows, where f1 is my own metric:
```python
def _info(self):
# TODO: Specifies the datasets.MetricInfo object
return datasets.MetricInfo(
# This is the description that will appear on the metrics page.
... | closed | https://github.com/huggingface/datasets/issues/1743 | 2021-01-17T07:01:14 | 2022-06-01T15:49:34 | 2022-06-01T15:49:34 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | false | [] |
787,623,640 | 1,742 | Add GLUE Compat (compatible with transformers<3.5.0) | Link to our discussion on Slack (HF internal)
https://huggingface.slack.com/archives/C014N4749J9/p1609668119337400
The next step is to add a compatible option in the new `run_glue.py`
I duplicated `glue` and made the following changes:
1. Change the name to `glue_compat`.
2. Change the label assignments for MN... | closed | https://github.com/huggingface/datasets/pull/1742 | 2021-01-17T05:54:25 | 2023-09-24T09:52:12 | 2021-03-29T12:43:30 | {
"login": "JetRunner",
"id": 22514219,
"type": "User"
} | [] | true | [] |
787,327,060 | 1,741 | error when run fine_tuning on text_classification | dataset:sem_eval_2014_task_1
pretrained_model:bert-base-uncased
error description:
When I use these resources to fine-tune a text classification model on sem_eval_2014_task_1, there is always some problem (when I use other datasets, the error exists too). And I followed the colab code (url:https://colab.researc... | closed | https://github.com/huggingface/datasets/issues/1741 | 2021-01-16T02:23:19 | 2021-01-16T02:39:28 | 2021-01-16T02:39:18 | {
"login": "XiaoYang66",
"id": 43234824,
"type": "User"
} | [] | false | [] |
787,264,605 | 1,740 | add id_liputan6 dataset | id_liputan6 is a large-scale Indonesian summarization dataset. The articles were harvested from an online news portal, yielding 215,827 document-summary pairs: https://arxiv.org/abs/2011.00679 | closed | https://github.com/huggingface/datasets/pull/1740 | 2021-01-15T22:58:34 | 2021-01-20T13:41:26 | 2021-01-20T13:41:26 | {
"login": "cahya-wirawan",
"id": 7669893,
"type": "User"
} | [] | true | [] |
787,219,138 | 1,739 | fixes and improvements for the WebNLG loader | - fixes test sets loading in v3.0
- adds additional fields for v3.0_ru
- adds info to the WebNLG data card | closed | https://github.com/huggingface/datasets/pull/1739 | 2021-01-15T21:45:23 | 2021-01-29T14:34:06 | 2021-01-29T10:53:03 | {
"login": "Shimorina",
"id": 9607332,
"type": "User"
} | [] | true | [] |
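
Every row above follows the schema in the header (id, number, title, body, state, html_url, created_at, updated_at, closed_at, user, labels, is_pull_request, comments). As a minimal sketch of how such rows could be consumed with the `datasets` library, assuming the dump is exported as a JSON Lines file with one issue per line — the path `issues.jsonl` and the filter criteria are assumptions for illustration, not part of the dump:

```python
from datasets import load_dataset

# Minimal sketch: load the dump from a hypothetical JSON Lines export.
issues = load_dataset("json", data_files="issues.jsonl", split="train")

# Keep genuine issues (not pull requests) tagged "dataset request",
# matching the label name seen in the rows above.
def is_dataset_request(row):
    label_names = [label["name"] for label in row["labels"]]
    return not row["is_pull_request"] and "dataset request" in label_names

requests = issues.filter(is_dataset_request)
print(requests.num_rows)
```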