| id (int64, 599M to 3.26B) | number (int64, 1 to 7.7k) | title (string, 1 to 290 chars) | body (string, 0 to 228k chars, nullable ⌀) | state (string, 2 classes) | html_url (string, 46 to 51 chars) | created_at (timestamp[s], 2020-04-14 10:18:02 to 2025-07-23 08:04:53) | updated_at (timestamp[s], 2020-04-27 16:04:17 to 2025-07-23 18:53:44) | closed_at (timestamp[s], 2020-04-14 12:01:40 to 2025-07-23 16:44:42, nullable ⌀) | user (dict) | labels (list, 0 to 4 items) | is_pull_request (bool) | comments (list, always empty) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
844,673,244 | 2,146 | Dataset file size on disk is very large with 3D Array | Hi,
I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8.
The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`.
`{
"description": "",
"citation": ""... | open | https://github.com/huggingface/datasets/issues/2146 | 2021-03-30T14:46:09 | 2021-04-16T13:07:02 | null | {
"login": "jblemoine",
"id": 22685854,
"type": "User"
} | [] | false | [] |
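For a dataset like the one described above, a compact on-disk layout can usually be obtained by declaring an explicit fixed-shape `Array3D` feature, instead of letting the builder infer nested int64 lists. A minimal sketch with a hypothetical image shape:

```python
import numpy as np
import datasets

# Hypothetical 224x224 RGB images stored as uint8 tensors rather than the
# nested lists of int64 values that get inferred by default.
features = datasets.Features(
    {"image": datasets.Array3D(shape=(224, 224, 3), dtype="uint8")}
)
ds = datasets.Dataset.from_dict(
    {"image": [np.zeros((224, 224, 3), dtype=np.uint8)]},
    features=features,
)
print(ds.features["image"].dtype)  # uint8
```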
844,603,518 | 2,145 | Implement Dataset add_column | Implement `Dataset.add_column`.
Close #1954. | closed | https://github.com/huggingface/datasets/pull/2145 | 2021-03-30T14:02:14 | 2021-04-29T14:50:44 | 2021-04-29T14:50:43 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | true | [] |
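A usage sketch of the method this PR introduces; the column values are illustrative:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["good movie", "bad movie"]})
# add_column appends a new column whose length must match the dataset
ds = ds.add_column("label", [1, 0])
print(ds.column_names)  # ['text', 'label']
```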
844,352,067 | 2,144 | Loading wikipedia 20200501.en throws pyarrow related error | **Problem description**
I am getting the following error when trying to load wikipedia/20200501.en dataset.
**Error log**
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikiped... | open | https://github.com/huggingface/datasets/issues/2144 | 2021-03-30T10:38:31 | 2021-04-01T09:21:17 | null | {
"login": "TomPyonsuke",
"id": 26637405,
"type": "User"
} | [] | false | [] |
844,313,228 | 2,143 | task casting via load_dataset | wip
Not satisfied with the API: it means that, as a dataset implementer, I need to write a function with boilerplate and write classes for each `<dataset><task>` "facet". | closed | https://github.com/huggingface/datasets/pull/2143 | 2021-03-30T10:00:42 | 2021-06-11T13:20:41 | 2021-06-11T13:20:36 | {
"login": "theo-m",
"id": 17948980,
"type": "User"
} | [] | true | [] |
843,919,420 | 2,142 | Gem V1.1 | This branch updates the GEM benchmark to its 1.1 version which includes:
- challenge sets for most tasks
- detokenized TurkCorpus to match the rest of the text simplification subtasks
- fixed inputs for TurkCorpus and ASSET test sets
- 18 languages in WikiLingua
cc @sebastianGehrmann | closed | https://github.com/huggingface/datasets/pull/2142 | 2021-03-29T23:47:02 | 2021-03-30T00:10:02 | 2021-03-30T00:10:02 | {
"login": "yjernite",
"id": 10469459,
"type": "User"
} | [] | true | [] |
843,914,790 | 2,141 | added spans field for the wikiann datasets | Hi @lhoestq
I tried to add spans to the wikiann datasets.
Thanks a lot for kindly having a look.
This addresses https://github.com/huggingface/datasets/issues/2130.
Best regards
Rabeeh | closed | https://github.com/huggingface/datasets/pull/2141 | 2021-03-29T23:38:26 | 2021-03-31T13:27:50 | 2021-03-31T13:27:50 | {
"login": "rabeehk",
"id": 6278280,
"type": "User"
} | [] | true | [] |
843,830,451 | 2,140 | add banking77 dataset | Intent classification/detection dataset from banking category with 77 unique intents. | closed | https://github.com/huggingface/datasets/pull/2140 | 2021-03-29T21:32:23 | 2021-04-09T09:32:18 | 2021-04-09T09:32:18 | {
"login": "dkajtoch",
"id": 32985207,
"type": "User"
} | [] | true | [] |
843,662,613 | 2,139 | TypeError when using save_to_disk in a dataset loaded with ReadInstruction split | Hi,
Loading a dataset with `load_dataset` using a split defined via `ReadInstruction` and then saving it to disk results in the following error: `TypeError: Object of type ReadInstruction is not JSON serializable`.
Here is the minimal reproducible example:
```python
from datasets import load_dataset
from dat... | closed | https://github.com/huggingface/datasets/issues/2139 | 2021-03-29T18:23:54 | 2021-03-30T09:12:53 | 2021-03-30T09:12:53 | {
"login": "PedroMLF",
"id": 22480495,
"type": "User"
} | [] | false | [] |
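The reproduction in the body above is truncated; a minimal sketch of the failing pattern, where "imdb" is a placeholder dataset:

```python
from datasets import ReadInstruction, load_dataset

# any dataset loaded via a ReadInstruction split hits this
ds = load_dataset("imdb", split=ReadInstruction("train", to=100, unit="abs"))

# On the affected versions this raised:
# TypeError: Object of type ReadInstruction is not JSON serializable
ds.save_to_disk("./imdb_first100")
```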
843,508,402 | 2,138 | Add CER metric | Add Character Error Rate (CER) metric that is used in evaluation in ASR. I also have written unittests (hopefully thorough enough) but I'm not sure how to integrate them into the existed codebase.
```python
from cer import CER
cer = CER()
class TestCER(unittest.TestCase):
def test_cer_case_senstive(self)... | closed | https://github.com/huggingface/datasets/pull/2138 | 2021-03-29T15:52:27 | 2021-04-06T16:16:11 | 2021-04-06T07:14:38 | {
"login": "chutaklee",
"id": 6931004,
"type": "User"
} | [] | true | [] |
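For context, character error rate is the character-level edit distance between prediction and reference, divided by the reference length. A self-contained sketch of that computation, not the PR's actual implementation:

```python
def cer(prediction: str, reference: str) -> float:
    """Character error rate: Levenshtein distance over reference length."""
    m, n = len(prediction), len(reference)
    # dp[j] holds the edit distance between the current prediction prefix
    # and reference[:j]; one rolling row keeps memory at O(n)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            cost = 0 if prediction[i - 1] == reference[j - 1] else 1
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1, prev + cost)
            prev = cur
    return dp[n] / max(n, 1)

print(cer("hlelo world", "hello world"))  # 2 edits / 11 chars ~= 0.18
```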
843,502,835 | 2,137 | Fix missing infos from concurrent dataset loading | This should fix issue #2131
When calling `load_dataset` at the same time from 2 workers, one of the workers could have missing split infos when reloading the dataset from the cache.
| closed | https://github.com/huggingface/datasets/pull/2137 | 2021-03-29T15:46:12 | 2021-03-31T10:35:56 | 2021-03-31T10:35:55 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
843,492,015 | 2,136 | fix dialogue action slot name and value | fix #2128 | closed | https://github.com/huggingface/datasets/pull/2136 | 2021-03-29T15:34:13 | 2021-03-31T12:48:02 | 2021-03-31T12:48:01 | {
"login": "adamlin120",
"id": 31605305,
"type": "User"
} | [] | true | [] |
843,246,344 | 2,135 | en language data from MLQA dataset is missing | Hi
I need mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. could you have a look please? @lhoestq thank you for your help to fix this issue. | closed | https://github.com/huggingface/datasets/issues/2135 | 2021-03-29T10:47:50 | 2021-03-30T10:20:23 | 2021-03-30T10:20:23 | {
"login": "rabeehk",
"id": 6278280,
"type": "User"
} | [] | false | [] |
843,242,849 | 2,134 | Saving large in-memory datasets with save_to_disk crashes because of pickling | Using Datasets 1.5.0 on Python 3.7.
Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively faster when done in memory, and I have the ability to requisition a lot of RAM, so... | closed | https://github.com/huggingface/datasets/issues/2134 | 2021-03-29T10:43:15 | 2021-05-03T17:59:21 | 2021-05-03T17:59:21 | {
"login": "prokopCerny",
"id": 5815801,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
843,149,680 | 2,133 | bug in mlqa dataset | Hi
Looking into the MLQA dataset for language "ar":
```
"question": [
"\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?",
"\u0643\u0... | closed | https://github.com/huggingface/datasets/issues/2133 | 2021-03-29T09:03:09 | 2021-03-30T17:40:57 | 2021-03-30T17:40:57 | {
"login": "dorost1234",
"id": 79165106,
"type": "User"
} | [] | false | [] |
843,142,822 | 2,132 | TydiQA dataset is mixed and is not split per language | Hi @lhoestq
Currently TydiQA is mixed and users can only access the whole training set of all languages:
https://www.tensorflow.org/datasets/catalog/tydi_qa
To use this dataset, one needs to train/evaluate on each language separately, and having them mixed makes it hard to use. This is much convenien... | open | https://github.com/huggingface/datasets/issues/2132 | 2021-03-29T08:56:21 | 2021-04-04T09:57:15 | null | {
"login": "dorost1234",
"id": 79165106,
"type": "User"
} | [] | false | [] |
843,133,112 | 2,131 | When training with Multi-Node Multi-GPU the worker 2 has TypeError: 'NoneType' object | version: 1.5.0
I met a very strange error. I am training a large-scale language model and need to train on 2 machines (workers).
And sometimes I will get this error `TypeError: 'NoneType' object is not iterable`
This is traceback
```
71 | | Traceback (most recent call last):
-- | -- | --
72 | | File "run_gpt.py"... | closed | https://github.com/huggingface/datasets/issues/2131 | 2021-03-29T08:45:58 | 2021-04-10T11:08:55 | 2021-04-10T11:08:55 | {
"login": "andy-yangz",
"id": 23011317,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
843,111,936 | 2,130 | wikiann dataset is missing columns | Hi
The Wikiann dataset needs to have a "spans" column, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets, could you please have a look? thank you @lhoestq | closed | https://github.com/huggingface/datasets/issues/2130 | 2021-03-29T08:23:00 | 2021-08-27T14:44:18 | 2021-08-27T14:44:18 | {
"login": "dorost1234",
"id": 79165106,
"type": "User"
} | [
{
"name": "good first issue",
"color": "7057ff"
}
] | false | [] |
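The missing column can be derived from the existing `tokens` and IOB-style `ner_tags` fields. A hedged sketch of such a helper; the name and the "TYPE: surface form" output format are assumptions, not the merged implementation:

```python
def get_spans(tokens, tags):
    """Collect 'TYPE: surface form' spans from IOB-tagged tokens (hypothetical helper)."""
    spans, current, current_type = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-") or (tag.startswith("I-") and current_type != tag[2:]):
            if current:
                spans.append(f"{current_type}: {' '.join(current)}")
            current, current_type = [token], tag[2:]
        elif tag.startswith("I-"):
            current.append(token)
        else:  # "O" tag ends any open span
            if current:
                spans.append(f"{current_type}: {' '.join(current)}")
            current, current_type = [], None
    if current:
        spans.append(f"{current_type}: {' '.join(current)}")
    return spans

print(get_spans(["John", "lives", "in", "New", "York"],
                ["B-PER", "O", "O", "B-LOC", "I-LOC"]))
# ['PER: John', 'LOC: New York']
```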
843,033,656 | 2,129 | How to train BERT model with next sentence prediction? | Hello.
I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction,
like `TextDatasetForNextSentencePrediction` in `huggingface/transformers`?
| closed | https://github.com/huggingface/datasets/issues/2129 | 2021-03-29T06:48:03 | 2021-04-01T04:58:40 | 2021-04-01T04:58:40 | {
"login": "jnishi",
"id": 836541,
"type": "User"
} | [] | false | [] |
843,023,910 | 2,128 | Dialogue action slot name and value are reversed in MultiWoZ 2.2 | Hi @yjernite, thank you for adding MultiWoZ 2.2 in the huggingface datasets platform. It is beneficial!
I spot an error that the order of Dialogue action slot names and values are reversed.
https://github.com/huggingface/datasets/blob/649b2c469779bc4221e1b6969aa2496d63eb5953/datasets/multi_woz_v22/multi_woz_v22.p... | closed | https://github.com/huggingface/datasets/issues/2128 | 2021-03-29T06:34:02 | 2021-03-31T12:48:01 | 2021-03-31T12:48:01 | {
"login": "adamlin120",
"id": 31605305,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
843,017,199 | 2,127 | make documentation more clear to use different cloud storage | This PR extends the cloud storage documentation. To show you can use a different `fsspec` implementation. | closed | https://github.com/huggingface/datasets/pull/2127 | 2021-03-29T06:24:06 | 2021-03-29T12:16:24 | 2021-03-29T12:16:24 | {
"login": "philschmid",
"id": 32632186,
"type": "User"
} | [] | true | [] |
842,779,966 | 2,126 | Replace legacy torch.Tensor constructor with torch.tensor | The title says it all (motivated by [this issue](https://github.com/pytorch/pytorch/issues/53146) in the pytorch repo). | closed | https://github.com/huggingface/datasets/pull/2126 | 2021-03-28T16:57:30 | 2021-03-29T09:27:14 | 2021-03-29T09:27:13 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
842,690,570 | 2,125 | Is dataset timit_asr broken? | Using `timit_asr` dataset, I saw all records are the same.
``` python
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_example... | closed | https://github.com/huggingface/datasets/issues/2125 | 2021-03-28T08:30:18 | 2021-03-28T12:29:25 | 2021-03-28T12:29:25 | {
"login": "kosuke-kitahara",
"id": 42398050,
"type": "User"
} | [] | false | [] |
842,627,729 | 2,124 | Adding ScaNN library to do MIPS? | @lhoestq Hi I am thinking of adding this new google library to do the MIPS similar to **add_faiss_idex**. As the paper suggests, it is really fast when it comes to retrieving the nearest neighbors.
https://github.com/google-research/google-research/tree/master/scann

d... | closed | https://github.com/huggingface/datasets/issues/2123 | 2021-03-27T18:41:28 | 2021-05-12T16:15:18 | 2021-05-12T16:15:17 | {
"login": "mille-s",
"id": 29705940,
"type": "User"
} | [] | false | [] |
842,194,588 | 2,122 | Fast table queries with interpolation search | ## Intro
This should fix issue #1803
Currently querying examples in a dataset is O(n) because of the underlying pyarrow ChunkedArrays implementation.
To fix this I implemented interpolation search that is pretty effective since datasets usually verifies the condition of evenly distributed chunks (the default ch... | closed | https://github.com/huggingface/datasets/pull/2122 | 2021-03-26T18:09:20 | 2021-08-04T18:11:59 | 2021-04-06T14:33:01 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
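The idea behind the speedup: when chunks are roughly evenly sized, the chunk containing row `i` can be located by probing the linearly interpolated position rather than bisecting. A sketch of the technique, not the repository's exact code:

```python
def interpolation_search(offsets, i):
    """Return k such that offsets[k] <= i < offsets[k + 1].

    offsets is a sorted list of cumulative row offsets, one per chunk;
    with evenly sized chunks this converges in O(log log n) probes.
    """
    lo, hi = 0, len(offsets) - 2
    while lo <= hi:
        # probe position estimated by linear interpolation instead of midpoint
        k = lo + ((i - offsets[lo]) * (hi - lo)) // max(offsets[hi + 1] - offsets[lo], 1)
        k = min(max(k, lo), hi)
        if offsets[k] <= i < offsets[k + 1]:
            return k
        if i < offsets[k]:
            hi = k - 1
        else:
            lo = k + 1
    raise IndexError(f"row {i} out of range")

print(interpolation_search([0, 100, 200, 300], 150))  # chunk 1
```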
842,148,633 | 2,121 | Add Validation For README | Hi @lhoestq, @yjernite
This is a simple Readme parser. All classes specific to different sections can inherit `Section` class, and we can define more attributes in each.
Let me know if this is going in the right direction :)
Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:
... | closed | https://github.com/huggingface/datasets/pull/2121 | 2021-03-26T17:02:17 | 2021-05-10T13:17:18 | 2021-05-10T09:41:41 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | true | [] |
841,954,521 | 2,120 | dataset viewer does not work anymore | Hi
I normally use this link to see all datasets and how I can load them
https://huggingface.co/datasets/viewer/
Now I am getting
502 Bad Gateway
nginx/1.18.0 (Ubuntu)
could you bring this webpage back ? this was very helpful @lhoestq
thanks for your help | closed | https://github.com/huggingface/datasets/issues/2120 | 2021-03-26T13:22:13 | 2021-03-26T15:52:22 | 2021-03-26T15:52:22 | {
"login": "dorost1234",
"id": 79165106,
"type": "User"
} | [
{
"name": "nlp-viewer",
"color": "94203D"
}
] | false | [] |
841,567,199 | 2,119 | copy.deepcopy os.environ instead of copy | Fixes: https://github.com/huggingface/datasets/issues/2115
- bug fix: using environ.copy() returns a dict.
- using deepcopy(environ) returns an `_environ` object
- Changing the datatype of the _environ object can break code, if subsequent libraries perform operations using apis exclusive to the environ object, lik... | closed | https://github.com/huggingface/datasets/pull/2119 | 2021-03-26T03:58:38 | 2021-03-26T15:13:52 | 2021-03-26T15:13:52 | {
"login": "NihalHarish",
"id": 5506053,
"type": "User"
} | [] | true | [] |
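A small sketch of the type difference the bullet points above describe:

```python
import copy
import os

shallow = os.environ.copy()       # plain dict: loses os._Environ behavior
deep = copy.deepcopy(os.environ)  # still an os._Environ object

print(type(shallow).__name__)  # dict
print(type(deep).__name__)     # _Environ
```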
841,563,329 | 2,118 | Remove os.environ.copy in Dataset.map | Replace `os.environ.copy` with in-place modification
Fixes #2115 | closed | https://github.com/huggingface/datasets/pull/2118 | 2021-03-26T03:48:17 | 2021-03-26T12:03:23 | 2021-03-26T12:00:05 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
841,535,283 | 2,117 | load_metric from local "glue.py" meet error 'NoneType' object is not callable | actual_task = "mnli" if task == "mnli-mm" else task
dataset = load_dataset(path='/home/glue.py', name=actual_task)
metric = load_metric(path='/home/glue.py', name=actual_task)
---------------------------------------------------------------------------
TypeError Traceback (most recent... | closed | https://github.com/huggingface/datasets/issues/2117 | 2021-03-26T02:35:22 | 2021-08-25T21:44:05 | 2021-03-26T02:40:26 | {
"login": "Frankie123421",
"id": 54012361,
"type": "User"
} | [] | false | [] |
841,481,292 | 2,116 | Creating custom dataset results in error while calling the map() function | calling `map()` of `datasets` library results into an error while defining a Custom dataset.
Reproducible example:
```
import datasets
class MyDataset(datasets.Dataset):
def __init__(self, sentences):
"Initialization"
self.samples = sentences
def __len__(self):
"Denotes the ... | closed | https://github.com/huggingface/datasets/issues/2116 | 2021-03-26T00:37:46 | 2021-03-31T14:30:32 | 2021-03-31T14:30:32 | {
"login": "GeetDsa",
"id": 13940397,
"type": "User"
} | [] | false | [] |
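`datasets.Dataset` is not designed to be subclassed with a custom `__init__`; the supported path is to build one from plain Python data and then call `map`. A minimal sketch:

```python
import datasets

# instead of subclassing datasets.Dataset, build one from plain Python data
sentences = ["first sentence", "second sentence"]
ds = datasets.Dataset.from_dict({"text": sentences})

# map() now works as documented
ds = ds.map(lambda ex: {"n_chars": len(ex["text"])})
print(ds[0])  # {'text': 'first sentence', 'n_chars': 14}
```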
841,283,974 | 2,115 | The datasets.map() implementation modifies the datatype of os.environ object | In our testing, we noticed that the datasets.map() implementation is modifying the datatype of python os.environ object from '_Environ' to 'dict'.
This causes following function calls to fail as follows:
`
x = os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default=None)
TypeError: get() takes... | closed | https://github.com/huggingface/datasets/issues/2115 | 2021-03-25T20:29:19 | 2021-03-26T15:13:52 | 2021-03-26T15:13:52 | {
"login": "leleamol",
"id": 19983848,
"type": "User"
} | [] | false | [] |
841,207,878 | 2,114 | Support for legal NLP datasets (EURLEX, ECtHR cases and EU-REG-IR) | Add support for two legal NLP datasets:
- EURLEX (https://www.aclweb.org/anthology/P19-1636/)
- ECtHR cases (https://arxiv.org/abs/2103.13084)
- EU-REG-IR (https://arxiv.org/abs/2101.10726) | closed | https://github.com/huggingface/datasets/pull/2114 | 2021-03-25T18:40:17 | 2021-03-31T10:38:50 | 2021-03-31T10:38:50 | {
"login": "iliaschalkidis",
"id": 1626984,
"type": "User"
} | [] | true | [] |
841,191,303 | 2,113 | Implement Dataset as context manager | When used as context manager, it would be safely deleted if some exception is raised.
This will avoid
> During handling of the above exception, another exception occurred: | closed | https://github.com/huggingface/datasets/pull/2113 | 2021-03-25T18:18:30 | 2021-03-31T11:30:14 | 2021-03-31T08:30:11 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
841,098,008 | 2,112 | Support for legal NLP datasets (EURLEX and ECtHR cases) | Add support for two legal NLP datasets:
- EURLEX (https://www.aclweb.org/anthology/P19-1636/)
- ECtHR cases (https://arxiv.org/abs/2103.13084) | closed | https://github.com/huggingface/datasets/pull/2112 | 2021-03-25T16:24:17 | 2021-03-25T18:39:31 | 2021-03-25T18:34:31 | {
"login": "iliaschalkidis",
"id": 1626984,
"type": "User"
} | [] | true | [] |
841,082,087 | 2,111 | Compute WER metric iteratively | Compute WER metric iteratively to avoid MemoryError.
Fix #2078. | closed | https://github.com/huggingface/datasets/pull/2111 | 2021-03-25T16:06:48 | 2021-04-06T07:20:43 | 2021-04-06T07:20:43 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
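A sketch of the iterative idea, assuming jiwer exposes per-pair counts via `compute_measures`; an approximation of the approach, not the merged code:

```python
import jiwer

def iterative_wer(predictions, references):
    # accumulate error and reference-word counts one pair at a time,
    # instead of materializing one huge computation over all pairs at once
    errors, total = 0, 0
    for prediction, reference in zip(predictions, references):
        measures = jiwer.compute_measures(reference, prediction)
        errors += measures["substitutions"] + measures["deletions"] + measures["insertions"]
        total += measures["substitutions"] + measures["deletions"] + measures["hits"]
    return errors / total
```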
840,794,995 | 2,110 | Fix incorrect assertion in builder.py | Fix incorrect num_examples comparison assertion in builder.py | closed | https://github.com/huggingface/datasets/pull/2110 | 2021-03-25T10:39:20 | 2021-04-12T13:33:03 | 2021-04-12T13:33:03 | {
"login": "dreamgonfly",
"id": 2340721,
"type": "User"
} | [] | true | [] |
840,746,598 | 2,109 | Add more issue templates and customize issue template chooser | When opening an issue, it is not evident for the users how to choose a blank issue template. There is a link at the bottom of all the other issue templates (`Don’t see your issue here? Open a blank issue.`), but this is not very visible for users. This is the reason why many users finally chose the `add-dataset` templa... | closed | https://github.com/huggingface/datasets/pull/2109 | 2021-03-25T09:41:53 | 2021-04-19T06:20:11 | 2021-04-19T06:20:11 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
840,181,055 | 2,108 | Is there a way to use a GPU only when training an Index in the process of add_faisis_index? | Motivation - Some FAISS indexes like IVF consist of the training step that clusters the dataset into a given number of indexes. It would be nice if we can use a GPU to do the training step and covert the index back to CPU as mention in [this faiss example](https://gist.github.com/mdouze/46d6bbbaabca0b9778fca37ed2bcccf6... | open | https://github.com/huggingface/datasets/issues/2108 | 2021-03-24T21:32:16 | 2021-03-25T06:31:43 | null | {
"login": "shamanez",
"id": 16892570,
"type": "User"
} | [
{
"name": "question",
"color": "d876e3"
}
] | false | [] |
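One way to do this today is to train the index with faiss's GPU utilities, move it back to CPU, and hand it to `add_faiss_index` via `custom_index`. A sketch under those assumptions; the dimension, factory string, and training vectors are placeholders, and `dataset` stands for a loaded `Dataset` with an "embeddings" column:

```python
import faiss
import numpy as np

d = 768  # embedding dimension (placeholder)
cpu_index = faiss.index_factory(d, "IVF256,Flat")

# do only the expensive clustering/training step on the GPU
res = faiss.StandardGpuResources()
gpu_index = faiss.index_cpu_to_gpu(res, 0, cpu_index)
gpu_index.train(np.random.rand(100_000, d).astype("float32"))  # training vectors
cpu_index = faiss.index_gpu_to_cpu(gpu_index)

# hand the pre-trained index to datasets; vectors are added on CPU
dataset.add_faiss_index(column="embeddings", custom_index=cpu_index)
```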
839,495,825 | 2,107 | Metadata validation | - `pydantic` metadata schema with dedicated validators against our taxonomy
- ci script to validate new changes against this schema and start a virtuous loop
- soft validation on tasks ids since we expect the taxonomy to undergo some changes in the near future
for reference with the current validation we have ~365... | closed | https://github.com/huggingface/datasets/pull/2107 | 2021-03-24T08:52:41 | 2021-04-26T08:27:14 | 2021-04-26T08:27:13 | {
"login": "theo-m",
"id": 17948980,
"type": "User"
} | [] | true | [] |
839,084,264 | 2,106 | WMT19 Dataset for Kazakh-English is not formatted correctly | In addition to the bug of languages being switched from Issue @415, there are incorrect translations in the dataset because the English-Kazakh translations have a one off formatting error.
The News Commentary v14 parallel data set for kk-en from http://www.statmt.org/wmt19/translation-task.html has a bug here:
> ... | open | https://github.com/huggingface/datasets/issues/2106 | 2021-03-23T20:14:47 | 2021-03-25T21:36:20 | null | {
"login": "trina731",
"id": 22580542,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
839,059,226 | 2,105 | Request to remove S2ORC dataset | Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work out those hurdles! Thanks! | open | https://github.com/huggingface/datasets/issues/2105 | 2021-03-23T19:43:06 | 2021-08-04T19:18:02 | null | {
"login": "kyleclo",
"id": 13603748,
"type": "User"
} | [] | false | [] |
839,027,834 | 2,104 | Trouble loading wiki_movies | Hello,
I am trying to load_dataset("wiki_movies") and it gives me this error -
`FileNotFoundError: Couldn't find file locally at wiki_movies/wiki_movies.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/wiki_movies/wiki_movies.py or https://s3.amazonaws.com/datasets.huggingfa... | closed | https://github.com/huggingface/datasets/issues/2104 | 2021-03-23T18:59:54 | 2022-03-30T08:22:58 | 2022-03-30T08:22:58 | {
"login": "adityaarunsinghal",
"id": 35391599,
"type": "User"
} | [] | false | [] |
838,946,916 | 2,103 | citation, homepage, and license fields of `dataset_info.json` are duplicated many times | This happens after a `map` operation when `num_proc` is set to `>1`. I tested this by cleaning up the json before running the `map` op on the dataset so it's unlikely it's coming from an earlier concatenation.
Example result:
```
"citation": "@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {... | closed | https://github.com/huggingface/datasets/issues/2103 | 2021-03-23T17:18:09 | 2021-04-06T14:39:59 | 2021-04-06T14:39:59 | {
"login": "samsontmr",
"id": 15007950,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "good first issue",
"color": "7057ff"
}
] | false | [] |
838,794,090 | 2,102 | Move Dataset.to_csv to csv module | Move the implementation of `Dataset.to_csv` to module `datasets.io.csv`. | closed | https://github.com/huggingface/datasets/pull/2102 | 2021-03-23T14:35:46 | 2021-03-24T14:07:35 | 2021-03-24T14:07:34 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "refactoring",
"color": "B67A40"
}
] | true | [] |
838,586,184 | 2,101 | MIAM dataset - new citation details | Hi @lhoestq, I have updated the citations to reference an OpenReview preprint. | closed | https://github.com/huggingface/datasets/pull/2101 | 2021-03-23T10:41:23 | 2021-03-23T18:08:10 | 2021-03-23T18:08:10 | {
"login": "eusip",
"id": 1551356,
"type": "User"
} | [] | true | [] |
838,574,631 | 2,100 | Fix deprecated warning message and docstring | Fix deprecated warnings:
- Use deprecated Sphinx directive in docstring
- Fix format of deprecated message
- Raise FutureWarning | closed | https://github.com/huggingface/datasets/pull/2100 | 2021-03-23T10:27:52 | 2021-03-24T08:19:41 | 2021-03-23T18:03:49 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
838,523,819 | 2,099 | load_from_disk takes a long time to load local dataset | I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helpin... | closed | https://github.com/huggingface/datasets/issues/2099 | 2021-03-23T09:28:37 | 2021-03-23T17:12:16 | 2021-03-23T17:12:16 | {
"login": "samsontmr",
"id": 15007950,
"type": "User"
} | [] | false | [] |
838,447,959 | 2,098 | SQuAD version | Hi~
I want to train on the squad dataset. What's the version of the squad? Is it 1.1 or 1.0? I'm new to QA and I can't find a description of it. | closed | https://github.com/huggingface/datasets/issues/2098 | 2021-03-23T07:47:54 | 2021-03-26T09:48:54 | 2021-03-26T09:48:54 | {
"login": "h-peng17",
"id": 39556019,
"type": "User"
} | [] | false | [] |
838,105,289 | 2,097 | fixes issue #1110 by descending further if `obj["_type"]` is a dict | Check metrics | closed | https://github.com/huggingface/datasets/pull/2097 | 2021-03-22T21:00:55 | 2021-03-22T21:01:11 | 2021-03-22T21:01:11 | {
"login": "dcfidalgo",
"id": 15979778,
"type": "User"
} | [] | true | [] |
838,038,379 | 2,096 | CoNLL 2003 dataset not including German | Hello, thanks for all the work on developing and maintaining this amazing platform, which I am enjoying working with!
I was wondering if there is a reason why the German CoNLL 2003 dataset is not included in the [repository](https://github.com/huggingface/datasets/tree/master/datasets/conll2003), since a copy of it ... | closed | https://github.com/huggingface/datasets/issues/2096 | 2021-03-22T19:23:56 | 2023-07-25T16:49:07 | 2023-07-25T16:49:07 | {
"login": "rxian",
"id": 8406802,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
837,209,211 | 2,093 | Fix: Allows a feature to be named "_type" | This PR tries to fix issue #1110. Sorry for taking so long to come back to this.
It's a simple fix, but I am not sure if it works for all possible types of `obj`. Let me know what you think @lhoestq | closed | https://github.com/huggingface/datasets/pull/2093 | 2021-03-21T23:21:57 | 2021-03-25T14:35:54 | 2021-03-25T14:35:54 | {
"login": "dcfidalgo",
"id": 15979778,
"type": "User"
} | [] | true | [] |
836,984,043 | 2,092 | How to disable making arrow tables in load_dataset ? | Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ? | closed | https://github.com/huggingface/datasets/issues/2092 | 2021-03-21T04:50:07 | 2022-06-01T16:49:52 | 2022-06-01T16:49:52 | {
"login": "Jeevesh8",
"id": 48825663,
"type": "User"
} | [] | false | [] |
836,831,403 | 2,091 | Fix copy snippet in docs | With this change the lines starting with `...` in the code blocks can be properly copied to clipboard. | closed | https://github.com/huggingface/datasets/pull/2091 | 2021-03-20T15:08:22 | 2021-03-24T08:20:50 | 2021-03-23T17:18:31 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
836,807,498 | 2,090 | Add machine translated multilingual STS benchmark dataset | also see here https://github.com/PhilipMay/stsb-multi-mt | closed | https://github.com/huggingface/datasets/pull/2090 | 2021-03-20T13:28:07 | 2021-03-29T13:24:42 | 2021-03-29T13:00:15 | {
"login": "PhilipMay",
"id": 229382,
"type": "User"
} | [] | true | [] |
836,788,019 | 2,089 | Add documentaton for dataset README.md files | Hi,
the dataset README files have special headers.
Somehow documentation of the allowed values and tags is missing.
Could you add that?
Just to give some concrete questions that should be answered imo:
- which values can be passed to multilinguality?
- what should be passed to language_creators?
- which valu... | closed | https://github.com/huggingface/datasets/issues/2089 | 2021-03-20T11:44:38 | 2023-07-25T16:45:38 | 2023-07-25T16:45:37 | {
"login": "PhilipMay",
"id": 229382,
"type": "User"
} | [] | false | [] |
836,763,733 | 2,088 | change bibtex template to author instead of authors | Hi,
IMO when using BibTeX, author should be used instead of authors.
See here: http://www.bibtex.org/Using/de/
Thanks
Philip | closed | https://github.com/huggingface/datasets/pull/2088 | 2021-03-20T09:23:44 | 2021-03-23T15:40:12 | 2021-03-23T15:40:12 | {
"login": "PhilipMay",
"id": 229382,
"type": "User"
} | [] | true | [] |
836,587,392 | 2,087 | Update metadata if dataset features are modified | This PR adds a decorator that updates the dataset metadata if a previously executed transform modifies its features.
Fixes #2083
| closed | https://github.com/huggingface/datasets/pull/2087 | 2021-03-20T02:05:23 | 2021-04-09T09:25:33 | 2021-04-09T09:25:33 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
836,249,587 | 2,086 | change user permissions to -rw-r--r-- | Fix for #2065 | closed | https://github.com/huggingface/datasets/pull/2086 | 2021-03-19T18:14:56 | 2021-03-24T13:59:04 | 2021-03-24T13:59:04 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [] | true | [] |
835,870,994 | 2,085 | Fix max_wait_time in requests | it was handled as a min time, not max cc @SBrandeis | closed | https://github.com/huggingface/datasets/pull/2085 | 2021-03-19T11:22:26 | 2021-03-23T15:36:38 | 2021-03-23T15:36:37 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
835,750,671 | 2,084 | CUAD - Contract Understanding Atticus Dataset | ## Adding a Dataset
- **Name:** CUAD - Contract Understanding Atticus Dataset
- **Description:** As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community.
- **Paper:** https://arxiv.org/abs/2103.06268
- **Data:** http... | closed | https://github.com/huggingface/datasets/issues/2084 | 2021-03-19T09:27:43 | 2021-04-16T08:50:44 | 2021-04-16T08:50:44 | {
"login": "theo-m",
"id": 17948980,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
835,695,425 | 2,083 | `concatenate_datasets` throws error when changing the order of datasets to concatenate | Hey,
I played around with the `concatenate_datasets(...)` function: https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate_datasets#datasets.concatenate_datasets
and noticed that when the order in which the datasets are concatenated changes an error is thrown where it shou... | closed | https://github.com/huggingface/datasets/issues/2083 | 2021-03-19T08:29:48 | 2021-04-09T09:25:33 | 2021-04-09T09:25:33 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | false | [] |
835,401,555 | 2,082 | Updated card using information from data statement and datasheet | I updated and clarified the REFreSD [data card](https://github.com/mcmillanmajora/datasets/blob/refresd_card/datasets/refresd/README.md) with information from the Eleftheria's [website](https://elbria.github.io/post/refresd/). I added brief descriptions where the initial card referred to the paper, and I also recreated... | closed | https://github.com/huggingface/datasets/pull/2082 | 2021-03-19T00:39:38 | 2021-03-19T14:29:09 | 2021-03-19T14:29:09 | {
"login": "mcmillanmajora",
"id": 26722925,
"type": "User"
} | [] | true | [] |
835,112,968 | 2,081 | Fix docstrings issues | Fix docstring issues. | closed | https://github.com/huggingface/datasets/pull/2081 | 2021-03-18T18:11:01 | 2021-04-07T14:37:43 | 2021-04-07T14:37:43 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
835,023,000 | 2,080 | Multidimensional arrays in a Dataset | Hi,
I'm trying to put together a `datasets.Dataset` to be used with LayoutLM which is available in `transformers`. This model requires as input the bounding boxes of each of the token of a sequence. This is when I realized that `Dataset` does not support multi-dimensional arrays as a value for a column in a row.
... | closed | https://github.com/huggingface/datasets/issues/2080 | 2021-03-18T16:29:14 | 2021-03-25T12:46:53 | 2021-03-25T12:46:53 | {
"login": "vermouthmjl",
"id": 3142085,
"type": "User"
} | [] | false | [] |
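One workable encoding for per-token bounding boxes is a nested `Sequence` feature: the number of tokens varies per example while each box has a fixed length of 4. A minimal sketch:

```python
import datasets

features = datasets.Features({
    "tokens": datasets.Sequence(datasets.Value("string")),
    # one [x0, y0, x1, y1] box per token: a nested sequence works where a
    # fixed-shape tensor type would not, since the token count varies per row
    "bboxes": datasets.Sequence(datasets.Sequence(datasets.Value("int64"), length=4)),
})
ds = datasets.Dataset.from_dict(
    {"tokens": [["hello", "world"]], "bboxes": [[[0, 0, 10, 10], [12, 0, 30, 10]]]},
    features=features,
)
print(ds[0]["bboxes"])
```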
834,920,493 | 2,079 | Refactorize Metric.compute signature to force keyword arguments only | Minor refactoring of Metric.compute signature to force the use of keyword arguments, by using the single star syntax. | closed | https://github.com/huggingface/datasets/pull/2079 | 2021-03-18T15:05:50 | 2021-03-23T15:31:44 | 2021-03-23T15:31:44 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
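The single-star syntax in isolation, on an illustrative class rather than the real `Metric`:

```python
class Metric:
    # the bare * makes every parameter after it keyword-only
    def compute(self, *, predictions=None, references=None):
        return sum(p == r for p, r in zip(predictions, references)) / len(references)

m = Metric()
print(m.compute(predictions=[1, 0], references=[1, 1]))  # 0.5
# m.compute([1, 0], [1, 1]) would raise: TypeError, takes 1 positional argument
```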
834,694,819 | 2,078 | MemoryError when computing WER metric | Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation:
```
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
However, I receive the following exception:
`Traceback (most recent call last):
File ... | closed | https://github.com/huggingface/datasets/issues/2078 | 2021-03-18T11:30:05 | 2021-05-01T08:31:49 | 2021-04-06T07:20:43 | {
"login": "diego-fustes",
"id": 5707233,
"type": "User"
} | [
{
"name": "metric bug",
"color": "25b21e"
}
] | false | [] |
834,649,536 | 2,077 | Bump huggingface_hub version | `0.0.2 => 0.0.6` | closed | https://github.com/huggingface/datasets/pull/2077 | 2021-03-18T10:54:34 | 2021-03-18T11:33:26 | 2021-03-18T11:33:26 | {
"login": "SBrandeis",
"id": 33657802,
"type": "User"
} | [] | true | [] |
834,445,296 | 2,076 | Issue: Dataset download error | The download link in `iwslt2017.py` file does not seem to work anymore.
For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz`
Would be nice if we could modify its script and use the new downloadable link? | open | https://github.com/huggingface/datasets/issues/2076 | 2021-03-18T06:36:06 | 2021-03-22T11:52:31 | null | {
"login": "XuhuiZhou",
"id": 20436061,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
834,301,246 | 2,075 | ConnectionError: Couldn't reach common_voice.py | When I run:
from datasets import load_dataset, load_metric
common_voice_train = load_dataset("common_voice", "zh-CN", split="train+validation")
common_voice_test = load_dataset("common_voice", "zh-CN", split="test")
Got:
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/ma... | closed | https://github.com/huggingface/datasets/issues/2075 | 2021-03-18T01:19:06 | 2021-03-20T10:29:41 | 2021-03-20T10:29:41 | {
"login": "LifaSun",
"id": 6188893,
"type": "User"
} | [] | false | [] |
834,268,463 | 2,074 | Fix size categories in YAML Tags | This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.
This PR also adds a couple of infos that I found missing.
The code for generating this:
```python
for datas... | closed | https://github.com/huggingface/datasets/pull/2074 | 2021-03-18T00:02:36 | 2021-03-23T17:11:10 | 2021-03-23T17:11:10 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | true | [] |
834,192,501 | 2,073 | Fixes check of TF_AVAILABLE and TORCH_AVAILABLE | # What is this PR doing
This PR implements the checks for whether `Tensorflow` and `Pytorch` are available in the same way as `transformers` does. I added the additional checks for the different `Tensorflow` and `torch` versions. #2068 | closed | https://github.com/huggingface/datasets/pull/2073 | 2021-03-17T21:28:53 | 2021-03-18T09:09:25 | 2021-03-18T09:09:24 | {
"login": "philschmid",
"id": 32632186,
"type": "User"
} | [] | true | [] |
834,054,837 | 2,072 | Fix docstring issues | Fix docstring issues. | closed | https://github.com/huggingface/datasets/pull/2072 | 2021-03-17T18:13:44 | 2021-03-24T08:20:57 | 2021-03-18T12:41:21 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
833,950,824 | 2,071 | Multiprocessing is slower than single process | ```python
# benchmark_filter.py
import logging
import sys
import time
from datasets import load_dataset, set_caching_enabled
if __name__ == "__main__":
set_caching_enabled(False)
logging.basicConfig(level=logging.DEBUG)
bc = load_dataset("bookcorpus")
now = time.time()
try:
... | closed | https://github.com/huggingface/datasets/issues/2071 | 2021-03-17T16:08:58 | 2021-03-18T09:10:23 | 2021-03-18T09:10:23 | {
"login": "theo-m",
"id": 17948980,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
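For a predicate this cheap, batching usually helps more than extra processes, since it amortizes the per-example Python overhead. A hedged variant of the benchmarked call:

```python
from datasets import load_dataset

bc = load_dataset("bookcorpus")
# batched=True passes a dict of lists and expects one boolean per example
bc = bc.filter(lambda batch: [len(t) < 64 for t in batch["text"]], batched=True)
```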
833,799,035 | 2,070 | ArrowInvalid issue for squad v2 dataset | Hello, I am using the huggingface official question answering example notebook (https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb).
In the prepare_validation_features function, I made some modifications to tokenize a new set of questions with the original co... | closed | https://github.com/huggingface/datasets/issues/2070 | 2021-03-17T13:51:49 | 2021-08-04T17:57:16 | 2021-08-04T17:57:16 | {
"login": "MichaelYxWang",
"id": 29818977,
"type": "User"
} | [] | false | [] |
833,768,926 | 2,069 | Add and fix docstring for NamedSplit | Add and fix docstring for `NamedSplit`, which was missing. | closed | https://github.com/huggingface/datasets/pull/2069 | 2021-03-17T13:19:28 | 2021-03-18T10:27:40 | 2021-03-18T10:27:40 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
833,602,832 | 2,068 | PyTorch not available error on SageMaker GPU docker though it is installed | I get en error when running data loading using SageMaker SDK
```
File "main.py", line 34, in <module>
run_training()
File "main.py", line 25, in run_training
dm.setup('fit')
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn
return fn(*a... | closed | https://github.com/huggingface/datasets/issues/2068 | 2021-03-17T10:04:27 | 2021-06-14T04:47:30 | 2021-06-14T04:47:30 | {
"login": "sivakhno",
"id": 1651457,
"type": "User"
} | [] | false | [] |
833,559,940 | 2,067 | Multiprocessing windows error | As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop.
For example at the map_to_array part.
An error occurs because the cache file already exists and Windows throws an error. After this the log c... | closed | https://github.com/huggingface/datasets/issues/2067 | 2021-03-17T09:12:28 | 2021-08-04T17:59:08 | 2021-08-04T17:59:08 | {
"login": "flozi00",
"id": 47894090,
"type": "User"
} | [] | false | [] |
833,480,551 | 2,066 | Fix docstring rendering of Dataset/DatasetDict.from_csv args | Fix the docstring rendering of Dataset/DatasetDict.from_csv args. | closed | https://github.com/huggingface/datasets/pull/2066 | 2021-03-17T07:23:10 | 2021-03-17T09:21:21 | 2021-03-17T09:21:21 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
833,291,432 | 2,065 | Only user permission of saved cache files, not group | Hello,
It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you kno... | closed | https://github.com/huggingface/datasets/issues/2065 | 2021-03-17T00:20:22 | 2023-03-31T12:17:06 | 2021-05-10T06:45:29 | {
"login": "lorr1",
"id": 57237365,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "good first issue",
"color": "7057ff"
}
] | false | [] |
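Before the eventual fix, one post-hoc workaround was to relax the permission bits on cache files after each preprocessing run. A sketch assuming the default cache location:

```python
import os
import stat

# open up group/other read bits on everything under the default cache dir
cache_dir = os.path.expanduser("~/.cache/huggingface/datasets")
for root, _, files in os.walk(cache_dir):
    for name in files:
        path = os.path.join(root, name)
        os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)  # 0o644
```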
833,002,360 | 2,064 | Fix ted_talks_iwslt version error | This PR fixes the bug where the version argument would be passed twice if the dataset configuration was created on the fly.
Fixes #2059 | closed | https://github.com/huggingface/datasets/pull/2064 | 2021-03-16T16:43:45 | 2021-03-16T18:00:08 | 2021-03-16T18:00:08 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
832,993,705 | 2,063 | [Common Voice] Adapt dataset script so that no manual data download is actually needed | This PR changes the dataset script so that no manual data dir is needed anymore. | closed | https://github.com/huggingface/datasets/pull/2063 | 2021-03-16T16:33:44 | 2021-03-17T09:42:52 | 2021-03-17T09:42:37 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
832,625,483 | 2,062 | docs: fix missing quotation | The JSON code is missing a quote. | closed | https://github.com/huggingface/datasets/pull/2062 | 2021-03-16T10:07:54 | 2021-03-17T09:21:57 | 2021-03-17T09:21:57 | {
"login": "neal2018",
"id": 46561493,
"type": "User"
} | [] | true | [] |
832,596,228 | 2,061 | Cannot load udpos subsets from xtreme dataset using load_dataset() | Hello,
I am trying to load the udpos English subset from the xtreme dataset, but it fails with an error during loading. I am using datasets v1.4.1, installed via pip. I have tried other udpos languages, which also fail, though loading a different subset altogether (such as XNLI) has no issue. I have also tried on Colab and ... | closed | https://github.com/huggingface/datasets/issues/2061 | 2021-03-16T09:32:13 | 2021-06-18T11:54:11 | 2021-06-18T11:54:10 | {
"login": "adzcodez",
"id": 55791365,
"type": "User"
} | [
{
"name": "good first issue",
"color": "7057ff"
}
] | false | [] |
832,588,591 | 2,060 | Filtering refactor | fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
t... | closed | https://github.com/huggingface/datasets/pull/2060 | 2021-03-16T09:23:30 | 2023-09-24T09:52:57 | 2021-10-13T09:09:03 | {
"login": "theo-m",
"id": 17948980,
"type": "User"
} | [] | true | [] |
832,579,156 | 2,059 | Error while following docs to load the `ted_talks_iwslt` dataset | I am currently trying to load the `ted_talks_iwslt` dataset into google colab.
The [docs](https://huggingface.co/datasets/ted_talks_iwslt) mention the following way of doing so.
```python
dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014")
```
Executing it results in the error ... | closed | https://github.com/huggingface/datasets/issues/2059 | 2021-03-16T09:12:19 | 2021-03-16T18:00:31 | 2021-03-16T18:00:07 | {
"login": "ekdnam",
"id": 40426312,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
832,159,844 | 2,058 | Is it possible to convert a `tfds` to HuggingFace `dataset`? | I was having some weird bugs with `C4`dataset version of HuggingFace, so I decided to try to download `C4`from `tfds`. I would like to know if it is possible to convert a tfds dataset to HuggingFace dataset format :)
I can also open a new issue reporting the bug I'm receiving with `datasets.load_dataset('c4','en')` ... | closed | https://github.com/huggingface/datasets/issues/2058 | 2021-03-15T20:18:47 | 2023-07-25T16:47:40 | 2023-07-25T16:47:40 | {
"login": "abarbosa94",
"id": 6608232,
"type": "User"
} | [] | false | [] |
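One direction that works with recent versions of `datasets` is streaming tfds examples into `Dataset.from_generator`; shown with a small dataset, and the generator's field names are assumptions about the tfds schema:

```python
import tensorflow_datasets as tfds
from datasets import Dataset

def gen():
    # tfds.as_numpy yields plain dicts of numpy values
    for ex in tfds.as_numpy(tfds.load("mnist", split="train")):
        yield {"image": ex["image"], "label": int(ex["label"])}

hf_ds = Dataset.from_generator(gen)
print(hf_ds.features)
```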
832,120,522 | 2,057 | update link to ZEST dataset | Updating the link as the original one is no longer working. | closed | https://github.com/huggingface/datasets/pull/2057 | 2021-03-15T19:22:57 | 2021-03-16T17:06:28 | 2021-03-16T17:06:28 | {
"login": "matt-peters",
"id": 619844,
"type": "User"
} | [] | true | [] |
831,718,397 | 2,056 | issue with opus100/en-fr dataset | Hi
I am running the run_mlm.py code of the huggingface repo with the opus100/fr-en pair and I am getting this error. Note that this error occurs only for this pair and not the other pairs. Any idea why this is occurring, and how I can solve this?
Thanks a lot @lhoestq for your help in advance.
`
thread '<unnamed>' panicked... | closed | https://github.com/huggingface/datasets/issues/2056 | 2021-03-15T11:32:42 | 2021-03-16T15:49:00 | 2021-03-16T15:48:59 | {
"login": "dorost1234",
"id": 79165106,
"type": "User"
} | [] | false | [] |
831,684,312 | 2,055 | is there a way to override a dataset object saved with save_to_disk? | At the moment when I use save_to_disk, it uses an arbitrary name for the arrow file. Is there a way to override such an object? | closed | https://github.com/huggingface/datasets/issues/2055 | 2021-03-15T10:50:53 | 2021-03-22T04:06:17 | 2021-03-22T04:06:17 | {
"login": "shamanez",
"id": 16892570,
"type": "User"
} | [] | false | [] |
831,597,665 | 2,054 | Could not find file for ZEST dataset | I am trying to use zest dataset from Allen AI using below code in colab,
```
!pip install -q datasets
from datasets import load_dataset
dataset = load_dataset("zest")
```
I am getting the following error,
```
Using custom data configuration default
Downloading and preparing dataset zest/default (download: ... | closed | https://github.com/huggingface/datasets/issues/2054 | 2021-03-15T09:11:58 | 2021-05-03T09:30:24 | 2021-05-03T09:30:24 | {
"login": "bhadreshpsavani",
"id": 26653468,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
831,151,728 | 2,053 | Add bAbI QA tasks | - **Name:** *The (20) QA bAbI tasks*
- **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many mor... | closed | https://github.com/huggingface/datasets/pull/2053 | 2021-03-14T13:04:39 | 2021-03-29T12:41:48 | 2021-03-29T12:41:48 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | true | [] |
831,135,704 | 2,052 | Timit_asr dataset repeats examples | Summary
When loading the timit_asr dataset on datasets 1.4+, every row in the dataset is the same.
Steps to reproduce
As an example, this code prints the text from the training part:
Code snippet:
```
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
timit['train']['text']... | closed | https://github.com/huggingface/datasets/issues/2052 | 2021-03-14T11:43:43 | 2021-03-15T10:37:16 | 2021-03-15T10:37:16 | {
"login": "fermaat",
"id": 7583522,
"type": "User"
} | [] | false | [] |
831,027,021 | 2,051 | Add MDD Dataset | - **Name:** *MDD Dataset*
- **Description:** The Movie Dialog dataset (MDD) is designed to measure how well models can perform at goal and non-goal orientated dialog centered around the topic of movies (question answering, recommendation and discussion), from various movie reviews sources such as MovieLens and OMDb.
... | closed | https://github.com/huggingface/datasets/pull/2051 | 2021-03-14T00:01:05 | 2021-03-19T11:15:44 | 2021-03-19T10:31:59 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | true | [] |
831,006,551 | 2,050 | Build custom dataset to fine-tune Wav2Vec2 | Thank you for your recent tutorial on how to finetune Wav2Vec2 on a custom dataset. The example you gave here (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) was on the CommonVoice dataset. However, what if I want to load my own dataset? I have a manifest (transcript and their audio files) in a JSON file.
| closed | https://github.com/huggingface/datasets/issues/2050 | 2021-03-13T22:01:10 | 2021-03-15T09:27:28 | 2021-03-15T09:27:28 | {
"login": "Omarnabk",
"id": 72882909,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
830,978,687 | 2,049 | Fix text-classification tags | There are different tags for text classification right now: `text-classification` and `text_classification`.
This PR fixes it.
| closed | https://github.com/huggingface/datasets/pull/2049 | 2021-03-13T19:51:42 | 2021-03-16T15:47:46 | 2021-03-16T15:47:46 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | true | [] |
830,953,431 | 2,048 | github is not always available - probably need a back up | Yesterday morning github wasn't working:
```
:/tmp$ wget https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
--2021-03-12 18:35:59--  https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
Resolving raw.githubusercontent.com (raw.githubuser... | closed | https://github.com/huggingface/datasets/issues/2048 | 2021-03-13T18:03:32 | 2022-04-01T15:27:10 | 2022-04-01T15:27:10 | {
"login": "stas00",
"id": 10676103,
"type": "User"
} | [] | false | [] |
830,626,430 | 2,047 | Multilingual dIalogAct benchMark (miam) | My collaborators (@EmileChapuis, @PierreColombo) and I within the Affective Computing team at Telecom Paris would like to anonymously publish the miam dataset. It is associated with a publication currently under review. We will update the dataset with full citations once the review period is over. | closed | https://github.com/huggingface/datasets/pull/2047 | 2021-03-12T23:02:55 | 2021-03-23T10:36:34 | 2021-03-19T10:47:13 | {
"login": "eusip",
"id": 1551356,
"type": "User"
} | [] | true | [] |
830,423,033 | 2,046 | add_faiss_index gets very slow when doing it iteratively | As the code below suggests, I want to run add_faiss_index at every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process, similar to the script given in rag/use_own_knowledge_dataset.py). Now, this usually takes 5hrs. Is this normal? Any ... | closed | https://github.com/huggingface/datasets/issues/2046 | 2021-03-12T20:27:18 | 2021-03-24T22:29:11 | 2021-03-24T22:29:11 | {
"login": "shamanez",
"id": 16892570,
"type": "User"
} | [] | false | [] |
830,351,527 | 2,045 | Preserve column ordering in Dataset.rename_column | Currently `Dataset.rename_column` doesn't necessarily preserve the order of the columns:
```python
>>> from datasets import Dataset
>>> d = Dataset.from_dict({'sentences': ["s1", "s2"], 'label': [0, 1]})
>>> d
Dataset({
features: ['sentences', 'label'],
num_rows: 2
})
>>> d.rename_column('sentences', '... | closed | https://github.com/huggingface/datasets/pull/2045 | 2021-03-12T18:26:47 | 2021-03-16T14:48:05 | 2021-03-16T14:35:05 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |