| id (int64) | number (int64) | title (string) | body (string) | state (string) | html_url (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | user (dict) | labels (list) | is_pull_request (bool) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,053,698,898 | 3,275 | Force data files extraction if download_mode='force_redownload' | Avoids weird issues when redownloading a dataset due to cached data not being fully updated.
With this change, issues #3122 and https://github.com/huggingface/datasets/issues/2956 can be worked around (not fully fixed) as follows:
```python
dset = load_dataset(..., download_mode="force_redownload")
``` | closed | https://github.com/huggingface/datasets/pull/3275 | 2021-11-15T14:00:24 | 2021-11-15T14:45:23 | 2021-11-15T14:45:23 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,053,689,140 | 3,274 | Fix some contact information formats | As reported in https://github.com/huggingface/datasets/issues/3188, some contact information is not displayed correctly.
This PR fixes this for CoNLL-2002 and some other datasets with the same issue. | closed | https://github.com/huggingface/datasets/pull/3274 | 2021-11-15T13:50:34 | 2021-11-15T14:43:55 | 2021-11-15T14:43:54 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,053,554,038 | 3,273 | Respect row ordering when concatenating datasets along axis=1 | Currently, there is a bug when concatenating datasets along `axis=1` if more than one dataset has the `_indices` attribute defined. In that scenario, all indices mappings except the first one get ignored.
A minimal reproducible example:
```python
>>> from datasets import Dataset, concatenate_datasets
>>> a = Data... | closed | https://github.com/huggingface/datasets/issues/3273 | 2021-11-15T11:27:14 | 2021-11-17T15:41:11 | 2021-11-17T15:41:11 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
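The `_indices` mapping described in #3273 can be sketched in plain Python. This is only an illustration of the idea (the names here are hypothetical, not the library's internals): each dataset keeps its rows in storage order plus a list of indices defining the visible order, and an `axis=1` concatenation must resolve *both* mappings rather than only the first.

```python
# Hypothetical illustration of an indices mapping. Each "dataset" stores
# its column in arrival order; a separate index list defines the visible
# row order (e.g. after select/shuffle).
table_a = {"col_a": [10, 20, 30]}
table_b = {"col_b": ["x", "y", "z"]}
indices_a = [2, 0, 1]  # dataset a was reordered
indices_b = [1, 2, 0]  # dataset b was reordered differently

# A correct axis=1 concatenation resolves each side's own mapping first:
rows = [
    {"col_a": table_a["col_a"][i], "col_b": table_b["col_b"][j]}
    for i, j in zip(indices_a, indices_b)
]
```

The bug in the issue corresponds to ignoring `indices_b` and zipping `indices_a` against storage order instead.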
1,053,516,479 | 3,272 | Make iter_archive work with ZIP files | Currently users can use `dl_manager.iter_archive` in their dataset script to iterate over all the files of a TAR archive.
It would be nice if it could work with ZIP files too! | open | https://github.com/huggingface/datasets/issues/3272 | 2021-11-15T10:50:42 | 2021-11-25T00:08:47 | null | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
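For reference, a TAR-style iteration over a ZIP archive can be sketched with the standard library. This is only a conceptual illustration of what a ZIP backend for `iter_archive` might yield (the helper name is made up, not the library's implementation):

```python
import io
import zipfile

# Build a small in-memory ZIP to stand in for a downloaded archive.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("train/a.txt", "hello")
    zf.writestr("train/b.txt", "world")

def iter_zip_archive(fileobj):
    """Yield (member_path, content_bytes) pairs, mirroring TAR iteration."""
    with zipfile.ZipFile(fileobj) as zf:
        for name in zf.namelist():
            if not name.endswith("/"):  # skip directory entries
                yield name, zf.read(name)

buf.seek(0)
members = dict(iter_zip_archive(buf))
```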
1,053,482,919 | 3,271 | Decode audio from remote | Currently the Audio feature type can only decode local audio files, not remote files.
To fix this, I replaced `open` with our `xopen` function, which is compatible with remote files, in audio.py.
cc @albertvillanova @mariosasko | closed | https://github.com/huggingface/datasets/pull/3271 | 2021-11-15T10:25:56 | 2021-11-16T11:35:58 | 2021-11-16T11:35:58 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,053,465,662 | 3,270 | Add os.listdir for streaming | Extend `os.listdir` to support streaming data from remote files. This is often used to navigate in remote ZIP files for example | closed | https://github.com/huggingface/datasets/pull/3270 | 2021-11-15T10:14:04 | 2021-11-15T10:27:03 | 2021-11-15T10:27:03 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,053,218,769 | 3,269 | coqa NonMatchingChecksumError | ```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s] ... | closed | https://github.com/huggingface/datasets/issues/3269 | 2021-11-15T05:04:07 | 2022-01-19T13:58:19 | 2022-01-19T13:58:19 | {
"login": "ZhaofengWu",
"id": 11954789,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,052,992,681 | 3,268 | Dataset viewer issue for 'liweili/c4_200m' | ## Dataset viewer issue for '*liweili/c4_200m*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/liweili/c4_200m)*
*Server Error*
```
Status code: 404
Exception: Status404Error
Message: Not found. Maybe the cache is missing, or maybe the ressource does not exist.
```
... | closed | https://github.com/huggingface/datasets/issues/3268 | 2021-11-14T17:18:46 | 2021-12-21T10:25:20 | 2021-12-21T10:24:51 | {
"login": "liliwei25",
"id": 22389228,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,052,750,084 | 3,267 | Replacing .format() and % by f-strings | **Fix #3257**
Replaced _.format()_ and _%_ with f-strings in the following modules:
- [x] **tests**
- [x] **metrics**
- [x] **benchmarks**
- [x] **utils**
- [x] **templates**
The remaining modules will follow in the next PR:
- [ ] **src**
Module **datasets** will not be edited, as requested by @mariosasko
PS... | closed | https://github.com/huggingface/datasets/pull/3267 | 2021-11-13T19:12:02 | 2021-11-16T21:00:26 | 2021-11-16T14:55:43 | {
"login": "Mehdi2402",
"id": 56029953,
"type": "User"
} | [] | true | [] |
1,052,700,155 | 3,266 | Fix URLs for WikiAuto Manual, jeopardy and definite_pronoun_resolution | [#3264](https://github.com/huggingface/datasets/issues/3264) | closed | https://github.com/huggingface/datasets/pull/3266 | 2021-11-13T15:01:34 | 2021-12-06T11:16:31 | 2021-12-06T11:16:31 | {
"login": "LashaO",
"id": 28014149,
"type": "User"
} | [] | true | [] |
1,052,666,558 | 3,265 | Checksum error for kilt_task_wow | ## Describe the bug
Checksum verification failed when downloading kilt_tasks_wow. See error output for details.
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('kilt_tasks', 'wow')
```
## Expected results
Download successful
## Actual results
```
Downloading and preparing dataset kilt_ta... | closed | https://github.com/huggingface/datasets/issues/3265 | 2021-11-13T12:04:17 | 2021-11-16T11:23:53 | 2021-11-16T11:21:58 | {
"login": "slyviacassell",
"id": 22296717,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,052,663,513 | 3,264 | Downloading URL change for WikiAuto Manual, jeopardy and definite_pronoun_resolution | ## Describe the bug
- WikiAuto Manual
The original manual dataset with the following download URL in this [repository](https://github.com/chaojiang06/wiki-auto) was [deleted](https://github.com/chaojiang06/wiki-auto/commit/0af9b066f2b4e02726fb8a9be49283c0ad25367f) by the author.
```
https://github.com/chaoj... | closed | https://github.com/huggingface/datasets/issues/3264 | 2021-11-13T11:47:12 | 2022-06-01T17:38:16 | 2022-06-01T17:38:16 | {
"login": "slyviacassell",
"id": 22296717,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,052,552,516 | 3,263 | FET DATA | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | closed | https://github.com/huggingface/datasets/issues/3263 | 2021-11-13T05:46:06 | 2021-11-13T13:31:47 | 2021-11-13T13:31:47 | {
"login": "FStell01",
"id": 90987031,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,052,455,082 | 3,262 | asserts replaced with exception for image classification task, csv, json | Fixes for csv, json in io module and image_classification task with tests referenced in https://github.com/huggingface/datasets/issues/3171 | closed | https://github.com/huggingface/datasets/pull/3262 | 2021-11-12T22:34:59 | 2021-11-15T11:08:37 | 2021-11-15T11:08:37 | {
"login": "manisnesan",
"id": 153142,
"type": "User"
} | [] | true | [] |
1,052,346,381 | 3,261 | Scifi_TV_Shows: Having trouble getting viewer to find appropriate files | ## Dataset viewer issue for '*Science Fiction TV Show Plots Corpus (Scifi_TV_Shows)*'
**Link:** [link](https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows)
I tried adding both a script (https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/blob/main/Scifi_TV_Shows.py) and some dummy examples (https:/... | closed | https://github.com/huggingface/datasets/issues/3261 | 2021-11-12T19:25:19 | 2021-12-21T10:24:10 | 2021-12-21T10:24:10 | {
"login": "lara-martin",
"id": 37913218,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,052,247,373 | 3,260 | Fix ConnectionError in Scielo dataset | This PR:
* allows a 403 status code in HEAD requests to S3 buckets to fix the connection error in the Scielo dataset (instead of `url`, it uses `response.url` to check the URL of the final endpoint)
* makes the Scielo dataset streamable
Fixes #3255. | closed | https://github.com/huggingface/datasets/pull/3260 | 2021-11-12T18:02:37 | 2021-11-16T18:18:17 | 2021-11-16T17:55:22 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,052,189,775 | 3,259 | Updating details of IRC disentanglement data | I was pleasantly surprised to find that someone had already added my dataset to the huggingface library, but some details were missing or incorrect. This PR fixes the documentation. | closed | https://github.com/huggingface/datasets/pull/3259 | 2021-11-12T17:16:58 | 2021-11-18T17:19:33 | 2021-11-18T17:19:33 | {
"login": "jkkummerfeld",
"id": 1298052,
"type": "User"
} | [] | true | [] |
1,052,188,195 | 3,258 | Reload dataset that was already downloaded with `load_from_disk` from cloud storage | `load_from_disk` downloads the dataset to a temporary directory without checking if the dataset has already been downloaded once.
It would be nice to have some sort of caching for datasets downloaded this way. This could leverage the fingerprint of the dataset that was saved in the `state.json` file. | open | https://github.com/huggingface/datasets/issues/3258 | 2021-11-12T17:14:59 | 2021-11-12T17:14:59 | null | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,052,118,365 | 3,257 | Use f-strings for string formatting | f-strings offer better readability/performance than `str.format` and `%`, so we should use them in all places in our codebase unless there is good reason to keep the older syntax.
> **NOTE FOR CONTRIBUTORS**: To avoid large PRs and possible merge conflicts, do 1-3 modules per PR. Also, feel free to ignore the files ... | closed | https://github.com/huggingface/datasets/issues/3257 | 2021-11-12T16:02:15 | 2021-11-17T16:18:38 | 2021-11-17T16:18:38 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [
{
"name": "good first issue",
"color": "7057ff"
}
] | false | [] |
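The three formatting styles targeted by this migration are equivalent for simple substitutions, which is what makes the refactor mechanical; a minimal sketch:

```python
name = "datasets"
count = 2

# Old %-style formatting
old_percent = "%s has %d issues" % (name, count)
# Old str.format
old_format = "{} has {} issues".format(name, count)
# f-string, the preferred form
new_fstring = f"{name} has {count} issues"

# All three render the same string.
assert old_percent == old_format == new_fstring
```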
1,052,000,613 | 3,256 | asserts replaced by exception for text classification task with test. | I have replaced only a single assert in text_classification.py along with a unit test to verify an exception is raised based on https://github.com/huggingface/datasets/issues/3171 .
I would like to first understand the code contribution workflow. So keeping the change to a single file rather than making too many ch... | closed | https://github.com/huggingface/datasets/pull/3256 | 2021-11-12T14:05:36 | 2021-11-12T15:09:33 | 2021-11-12T14:59:32 | {
"login": "manisnesan",
"id": 153142,
"type": "User"
} | [] | true | [] |
1,051,783,129 | 3,255 | SciELO dataset ConnectionError | ## Describe the bug
I get `ConnectionError` when I am trying to load the SciELO dataset.
When I try the URL with `requests` I get:
```
>>> requests.head("https://ndownloader.figstatic.com/files/14019287")
<Response [302]>
```
And as far as I understand redirections in `datasets` are not supported for downlo... | closed | https://github.com/huggingface/datasets/issues/3255 | 2021-11-12T09:57:14 | 2021-11-16T17:55:22 | 2021-11-16T17:55:22 | {
"login": "WojciechKusa",
"id": 2575047,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,051,351,172 | 3,254 | Update xcopa dataset (fix checksum issues + add translated data) | This PR updates the checksums (as reported [here](https://discuss.huggingface.co/t/how-to-load-dataset-locally/11601/2)) of the `xcopa` dataset. Additionally, it adds new configs that hold the translated data of the original set of configs. This data was not available at the time of adding this dataset to the lib. | closed | https://github.com/huggingface/datasets/pull/3254 | 2021-11-11T20:51:33 | 2021-11-12T10:30:58 | 2021-11-12T10:30:57 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,051,308,972 | 3,253 | `GeneratorBasedBuilder` does not support `None` values | ## Describe the bug
`GeneratorBasedBuilder` does not support `None` values.
## Steps to reproduce the bug
See [this repository](https://github.com/pavel-lexyr/huggingface-datasets-bug-reproduction) for minimal reproduction.
## Expected results
Dataset is initialized with a `None` value in the `value` column.
... | closed | https://github.com/huggingface/datasets/issues/3253 | 2021-11-11T19:51:21 | 2021-12-09T14:26:58 | 2021-12-09T14:26:58 | {
"login": "pavel-lexyr",
"id": 69010336,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,051,124,749 | 3,252 | Fix failing CER metric test in CI after update | Fixes the [failing CER metric test](https://app.circleci.com/pipelines/github/huggingface/datasets/8644/workflows/79816553-fa2f-4756-b022-d5937f00bf7b/jobs/53298) in CI by adding support for `jiwer==2.3.0`, which was released yesterday. Also, I verified that all the tests in `metrics/cer/test_cer.py` pass after the cha... | closed | https://github.com/huggingface/datasets/pull/3252 | 2021-11-11T15:57:16 | 2021-11-12T14:06:44 | 2021-11-12T14:06:43 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,050,541,348 | 3,250 | Add ETHICS dataset | This PR adds the ETHICS dataset, including all 5 sub-datasets.
From https://arxiv.org/abs/2008.02275 | closed | https://github.com/huggingface/datasets/pull/3250 | 2021-11-11T03:45:34 | 2022-10-03T09:37:25 | 2022-10-03T09:37:25 | {
"login": "ssss1029",
"id": 7088559,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,050,193,138 | 3,249 | Fix streaming for id_newspapers_2018 | To be compatible with streaming, this dataset must use `dl_manager.iter_archive` since the data are in a .tgz file | closed | https://github.com/huggingface/datasets/pull/3249 | 2021-11-10T18:55:30 | 2021-11-12T14:01:32 | 2021-11-12T14:01:31 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,050,171,082 | 3,248 | Stream from Google Drive and other hosts | Streaming from Google Drive is a bit more challenging than the other hosts we've been supporting:
- the download URL must be updated to add the confirm token obtained by HEAD request
- it requires to use cookies to keep the connection alive
- the URL doesn't tell any information about whether the file is compressed o... | closed | https://github.com/huggingface/datasets/pull/3248 | 2021-11-10T18:32:32 | 2021-11-30T16:03:43 | 2021-11-12T17:18:11 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,049,699,088 | 3,247 | Loading big json dataset raises pyarrow.lib.ArrowNotImplementedError | ## Describe the bug
When trying to create a dataset from a json file with around 25MB, the following error is raised `pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct`
Splitting the big file into smaller ones and then loading it with the `lo... | closed | https://github.com/huggingface/datasets/issues/3247 | 2021-11-10T11:17:59 | 2022-04-10T14:05:57 | 2022-04-10T14:05:57 | {
"login": "maxzirps",
"id": 29249513,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
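The cast error in #3247 typically stems from different blocks of the file yielding different inferred struct types. The underlying inconsistency can be shown with the standard library alone (a simplified illustration; pyarrow itself is not needed to see it):

```python
import json

# Two records whose nested objects expose different key sets. When a large
# JSON file is parsed in blocks, each block's struct type is inferred
# separately, and pyarrow may then be asked to cast one struct to the other.
record_1 = json.loads('{"a": {"b": 1}}')
record_2 = json.loads('{"a": {"b": 2, "c": 3}}')

keys_1 = set(record_1["a"])
keys_2 = set(record_2["a"])
# struct<b: int64> vs struct<b: int64, c: int64> -- the shapes disagree.
assert keys_1 != keys_2
```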
1,049,662,746 | 3,246 | [tiny] fix typo in stream docs | null | closed | https://github.com/huggingface/datasets/pull/3246 | 2021-11-10T10:40:02 | 2021-11-10T11:10:39 | 2021-11-10T11:10:39 | {
"login": "verbiiyo",
"id": 26421036,
"type": "User"
} | [] | true | [] |
1,048,726,062 | 3,245 | Fix load_from_disk temporary directory | `load_from_disk` uses `tempfile.TemporaryDirectory()` instead of our `get_temporary_cache_files_directory()` function. This can cause the temporary directory to be deleted before the dataset object is garbage collected.
In practice, it prevents anyone from using methods like `shuffle` on a dataset loaded this way, b... | closed | https://github.com/huggingface/datasets/pull/3245 | 2021-11-09T15:15:15 | 2021-11-09T15:30:52 | 2021-11-09T15:30:51 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
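The hazard described in #3245 can be reproduced with the standard library alone: files created inside a `tempfile.TemporaryDirectory()` vanish as soon as the directory is cleaned up, so any object still pointing at them breaks.

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp_dir:
    cache_file = os.path.join(tmp_dir, "cache.arrow")
    with open(cache_file, "wb") as f:
        f.write(b"\x00" * 16)
    assert os.path.exists(cache_file)

# After cleanup, a dataset object still referencing cache_file would fail,
# e.g. when shuffle tries to write a new cache file next to it.
assert not os.path.exists(cache_file)
```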
1,048,675,741 | 3,244 | Fix filter method for batched=True | null | closed | https://github.com/huggingface/datasets/pull/3244 | 2021-11-09T14:30:59 | 2021-11-09T15:52:58 | 2021-11-09T15:52:57 | {
"login": "thomasw21",
"id": 24695242,
"type": "User"
} | [] | true | [] |
1,048,630,754 | 3,243 | Remove redundant isort module placement | `isort` can place modules by itself from [version 5.0.0](https://pycqa.github.io/isort/docs/upgrade_guides/5.0.0.html#module-placement-changes-known_third_party-known_first_party-default_section-etc) onwards, making the `known_first_party` and `known_third_party` fields in `setup.cfg` redundant (this is why our CI work... | closed | https://github.com/huggingface/datasets/pull/3243 | 2021-11-09T13:50:30 | 2021-11-12T14:02:45 | 2021-11-12T14:02:45 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,048,527,232 | 3,242 | Adding ANERcorp-CAMeLLab dataset | null | open | https://github.com/huggingface/datasets/issues/3242 | 2021-11-09T12:04:04 | 2021-11-09T12:41:15 | null | {
"login": "vitalyshalumov",
"id": 33824221,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,048,461,852 | 3,241 | Swap descriptions of v1 and raw-v1 configs of WikiText dataset and fix metadata | Fix #3237, fix #795. | closed | https://github.com/huggingface/datasets/pull/3241 | 2021-11-09T10:54:15 | 2022-02-14T15:46:00 | 2021-11-09T13:49:28 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,048,376,021 | 3,240 | Couldn't reach data file for disaster_response_messages | ## Describe the bug
The following command gives a ConnectionError.
## Steps to reproduce the bug
```python
disaster = load_dataset('disaster_response_messages')
```
## Error
```
ConnectionError: Couldn't reach https://datasets.appen.com/appen_datasets/disaster_response_data/disaster_response_messages_training.... | closed | https://github.com/huggingface/datasets/issues/3240 | 2021-11-09T09:26:42 | 2021-12-14T14:38:29 | 2021-12-14T14:38:29 | {
"login": "pandya6988",
"id": 81331791,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
1,048,360,232 | 3,239 | Inconsistent performance of the "arabic_billion_words" dataset | ## Describe the bug
When downloaded from machine 1, the dataset is downloaded and parsed correctly.
When downloaded from machine 2 (which has a different cache directory),
the following script:
import datasets
from datasets import load_dataset
raw_dataset_elkhair_1 = load_dataset('arabic_billion_words', 'Alitti... | open | https://github.com/huggingface/datasets/issues/3239 | 2021-11-09T09:11:00 | 2021-11-09T09:11:00 | null | {
"login": "vitalyshalumov",
"id": 33824221,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,048,226,086 | 3,238 | Reuters21578 Couldn't reach | ## Adding a Dataset
- **Name:** *Reuters21578*
- **Description:** *ConnectionError: Couldn't reach https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz*
- **Data:** *https://huggingface.co/datasets/reuters21578*
`from datasets import load_dataset`
`dataset = load_dataset("reuters21578", 'ModLewis... | closed | https://github.com/huggingface/datasets/issues/3238 | 2021-11-09T06:08:56 | 2021-11-11T00:02:57 | 2021-11-11T00:02:57 | {
"login": "TingNLP",
"id": 54096137,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
1,048,165,525 | 3,237 | wikitext description wrong | ## Describe the bug
Descriptions of the wikitext datasets are wrong.
## Steps to reproduce the bug
Please see: https://github.com/huggingface/datasets/blob/f6dcafce996f39b6a4bbe3a9833287346f4a4b68/datasets/wikitext/wikitext.py#L50
## Expected results
The descriptions for raw-v1 and v1 should be switched. | closed | https://github.com/huggingface/datasets/issues/3237 | 2021-11-09T04:06:52 | 2022-02-14T15:45:11 | 2021-11-09T13:49:28 | {
"login": "hongyuanmei",
"id": 19693633,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,048,026,358 | 3,236 | Loading of datasets changed in #3110 returns no examples | ## Describe the bug
Loading of datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples:
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 0
})
validation: Dataset({
features: ['id',... | closed | https://github.com/huggingface/datasets/issues/3236 | 2021-11-08T23:29:46 | 2021-11-09T16:46:05 | 2021-11-09T16:45:47 | {
"login": "eladsegal",
"id": 13485709,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,047,808,263 | 3,235 | Add options to use updated bleurt checkpoints | Adds options to use the newer recommended checkpoint (as of 2021/10/8), bleurt-20, and its distilled versions.
Updated checkpoints are described in https://github.com/google-research/bleurt/blob/master/checkpoints.md#the-recommended-checkpoint-bleurt-20
This change won't affect the default behavior of metrics/bleurt. ... | closed | https://github.com/huggingface/datasets/pull/3235 | 2021-11-08T18:53:54 | 2021-11-12T14:05:28 | 2021-11-12T14:05:28 | {
"login": "jaehlee",
"id": 11873078,
"type": "User"
} | [] | true | [] |
1,047,634,236 | 3,234 | Avoid PyArrow type optimization if it fails | Adds a new variable, `DISABLE_PYARROW_TYPES_OPTIMIZATION`, to `config.py` for easier control of the Arrow type optimization.
Fix #2206 | closed | https://github.com/huggingface/datasets/pull/3234 | 2021-11-08T16:10:27 | 2021-11-10T12:04:29 | 2021-11-10T12:04:28 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,047,474,931 | 3,233 | Improve repository structure docs | Continuation of the documentation started in https://github.com/huggingface/datasets/pull/3221, taking into account @stevhliu 's comments | closed | https://github.com/huggingface/datasets/pull/3233 | 2021-11-08T13:51:35 | 2021-11-09T10:02:18 | 2021-11-09T10:02:17 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,047,361,573 | 3,232 | The Xsum dataset cannot be downloaded. | ## Describe the bug
The download link of the Xsum dataset provided in the repository is [this link](http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz). It does not seem to be downloadable.
## Steps to reproduce the bug
```python
load_dataset('xsum')
```
## Actual results
``` python
r... | closed | https://github.com/huggingface/datasets/issues/3232 | 2021-11-08T11:58:54 | 2021-11-09T15:07:16 | 2021-11-09T15:07:16 | {
"login": "FYYFU",
"id": 37999885,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,047,170,906 | 3,231 | Group tests in multiprocessing workers by test file | By grouping tests by test file, we make sure that all the tests in `test_load.py` are sent to the same worker.
Therefore, the fixture `hf_token` will be called only once (and from the same worker).
Related to: #3200.
Fix #3219. | closed | https://github.com/huggingface/datasets/pull/3231 | 2021-11-08T08:46:03 | 2021-11-08T13:19:18 | 2021-11-08T08:59:44 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,047,135,583 | 3,230 | Add full tagset to conll2003 README | Even though it is possible to manually get the tagset list with
```python
dset.features[field_name].feature.names
```
I think it is useful to have an overview of the used tagset on the dataset card. This is particularly useful in light of the **dataset viewer**: the tags are encoded, so it is not immediately ob... | closed | https://github.com/huggingface/datasets/pull/3230 | 2021-11-08T08:06:04 | 2021-11-09T10:48:38 | 2021-11-09T10:40:58 | {
"login": "BramVanroy",
"id": 2779410,
"type": "User"
} | [] | true | [] |
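Conceptually, the encoding that makes the tagset hard to read in the viewer maps each tag string to an integer id. A minimal sketch of that round trip (illustrative names, not the conll2003 loader itself):

```python
# A tiny stand-in for a ClassLabel-style tagset.
names = ["O", "B-PER", "I-PER", "B-LOC"]
str2int = {name: i for i, name in enumerate(names)}

# What the dataset stores (encoded) vs. what the reader wants to see.
encoded = [str2int[t] for t in ["B-PER", "I-PER", "O"]]
decoded = [names[i] for i in encoded]
```

Listing `names` on the dataset card saves viewers from having to run this decoding themselves.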
1,046,706,425 | 3,229 | Fix URL in CITATION file | Currently the BibTeX citation parsed from the CITATION file has wrong URL (it shows the repo URL instead of the proceedings paper URL):
```
@inproceedings{Lhoest_Datasets_A_Community_2021,
author = {Lhoest, Quentin and Villanova del Moral, Albert and von Platen, Patrick and Wolf, Thomas and Šaško, Mario and Jernite,... | closed | https://github.com/huggingface/datasets/pull/3229 | 2021-11-07T10:04:35 | 2021-11-07T10:04:46 | 2021-11-07T10:04:45 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,046,702,143 | 3,228 | Add CITATION file | Add CITATION file. | closed | https://github.com/huggingface/datasets/pull/3228 | 2021-11-07T09:40:19 | 2021-11-07T09:51:47 | 2021-11-07T09:51:46 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,046,667,845 | 3,227 | Error in `Json(datasets.ArrowBasedBuilder)` class | ## Describe the bug
When a json file contains a `text` field that is larger than the block_size, the JSON dataset builder fails.
## Steps to reproduce the bug
Create a folder that contains the following:
```
.
├── testdata
│ └── mydata.json
└── test.py
```
Please download [this file](https://github.com/... | closed | https://github.com/huggingface/datasets/issues/3227 | 2021-11-07T05:50:32 | 2021-11-09T19:09:15 | 2021-11-09T19:09:15 | {
"login": "JunShern",
"id": 7796965,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,046,584,518 | 3,226 | Fix paper BibTeX citation with proceedings reference | Fix paper BibTeX citation with proceedings reference. | closed | https://github.com/huggingface/datasets/pull/3226 | 2021-11-06T19:52:59 | 2021-11-07T07:05:28 | 2021-11-07T07:05:27 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,046,530,493 | 3,225 | Update tatoeba to v2021-07-22 | Tatoeba's latest version is v2021-07-22 | closed | https://github.com/huggingface/datasets/pull/3225 | 2021-11-06T15:14:31 | 2021-11-12T11:13:13 | 2021-11-12T11:13:13 | {
"login": "KoichiYasuoka",
"id": 15098598,
"type": "User"
} | [] | true | [] |
1,046,495,831 | 3,224 | User-pickling with dynamic sub-classing | This is a continuation of the now closed PR in https://github.com/huggingface/datasets/pull/3206. The discussion there has shaped a new approach to do this.
In this PR, behavior of `pklregister` and `Pickler` is extended. Earlier, users were already able to register custom pickle functions. That is useful if they ha... | closed | https://github.com/huggingface/datasets/pull/3224 | 2021-11-06T12:08:24 | 2025-03-26T19:45:37 | 2025-03-26T19:45:36 | {
"login": "BramVanroy",
"id": 2779410,
"type": "User"
} | [] | true | [] |
1,046,445,507 | 3,223 | Update BibTeX entry | Update BibTeX entry. | closed | https://github.com/huggingface/datasets/pull/3223 | 2021-11-06T06:41:52 | 2021-11-06T07:06:38 | 2021-11-06T07:06:38 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,046,299,725 | 3,222 | Add docs for audio processing | This PR adds documentation for the `Audio` feature. It describes:
- The difference between loading `path` and `audio`, as well as use-cases/best practices for each of them.
- Resampling audio files with `cast_column`, and then calling `ds[0]["audio"]` to automatically decode and resample to the desired sampling rat... | closed | https://github.com/huggingface/datasets/pull/3222 | 2021-11-05T23:07:59 | 2021-11-24T16:32:08 | 2021-11-24T15:35:52 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
1,045,890,512 | 3,221 | Resolve data_files by split name | As discussed in https://github.com/huggingface/datasets/issues/3027 we should automatically infer what file is supposed to go to what split automatically, based on filenames.
I added the support for different kinds of patterns, for both dataset repositories and local directories:
```
Input structure:
... | closed | https://github.com/huggingface/datasets/pull/3221 | 2021-11-05T14:07:35 | 2021-11-08T13:52:20 | 2021-11-05T17:49:58 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,045,549,029 | 3,220 | Add documentation about dataset viewer feature | Add to the docs more details about the dataset viewer feature in the Hub.
CC: @julien-c
| open | https://github.com/huggingface/datasets/issues/3220 | 2021-11-05T08:11:19 | 2023-09-25T11:48:38 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,045,095,000 | 3,219 | Eventual Invalid Token Error at setup of private datasets | ## Describe the bug
From time to time, Invalid Token errors appear with private datasets:
- https://app.circleci.com/pipelines/github/huggingface/datasets/8520/workflows/d44629f2-4749-40f8-a657-50931d0b3434/jobs/52534
```
____________ ERROR at setup of test_load_streaming_private_dataset _____________
... | closed | https://github.com/huggingface/datasets/issues/3219 | 2021-11-04T18:50:45 | 2021-11-08T13:23:06 | 2021-11-08T08:59:43 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,045,032,313 | 3,218 | Fix code quality in riddle_sense dataset | Fix trailing whitespace.
Fix #3217. | closed | https://github.com/huggingface/datasets/pull/3218 | 2021-11-04T17:43:20 | 2021-11-04T17:50:03 | 2021-11-04T17:50:02 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,045,029,710 | 3,217 | Fix code quality bug in riddle_sense dataset | ## Describe the bug
```
datasets/riddle_sense/riddle_sense.py:36:21: W291 trailing whitespace
``` | closed | https://github.com/huggingface/datasets/issues/3217 | 2021-11-04T17:40:32 | 2021-11-04T17:50:02 | 2021-11-04T17:50:02 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,045,027,733 | 3,216 | Pin version exclusion for tensorflow incompatible with keras | Once `tensorflow` version 2.6.2 is released:
- https://github.com/tensorflow/tensorflow/commit/c1867f3bfdd1042f694df7a9870be51ba80543cb
- https://pypi.org/project/tensorflow/2.6.2/
with the patch:
- tensorflow/tensorflow#52927
we can remove the temporary fix we introduced in:
- #3208
Fix #3209. | closed | https://github.com/huggingface/datasets/pull/3216 | 2021-11-04T17:38:06 | 2021-11-05T10:57:38 | 2021-11-05T10:57:37 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,045,011,207 | 3,215 | Small updates to to_tf_dataset documentation | I added a little more description about `to_tf_dataset` compared to just setting the format | closed | https://github.com/huggingface/datasets/pull/3215 | 2021-11-04T17:22:01 | 2021-11-04T18:55:38 | 2021-11-04T18:55:37 | {
"login": "Rocketknight1",
"id": 12866554,
"type": "User"
} | [] | true | [] |
1,044,924,050 | 3,214 | Add ACAV100M Dataset | ## Adding a Dataset
- **Name:** *ACAV100M*
- **Description:** *contains 100 million videos with high audio-visual correspondence, ideal for self-supervised video representation learning.*
- **Paper:** *https://arxiv.org/abs/2101.10803*
- **Data:** *https://github.com/sangho-vision/acav100m*
- **Motivation:** *The ... | open | https://github.com/huggingface/datasets/issues/3214 | 2021-11-04T15:59:58 | 2021-12-08T12:00:30 | null | {
"login": "nateraw",
"id": 32437151,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "vision",
"color": "bfdadc"
}
] | false | [] |
1,044,745,313 | 3,213 | Fix tuple_ie download url | Fix #3204 | closed | https://github.com/huggingface/datasets/pull/3213 | 2021-11-04T13:09:07 | 2021-11-05T14:16:06 | 2021-11-05T14:16:05 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,044,640,967 | 3,212 | Sort files before loading | When loading a dataset that consists of several files (e.g. `my_data/data_001.json`, `my_data/data_002.json` etc.) they are not loaded in order when using `load_dataset("my_data")`.
This could lead to counter-intuitive results if, for example, the data files are sorted by date or similar since they would appear in d... | closed | https://github.com/huggingface/datasets/issues/3212 | 2021-11-04T11:08:31 | 2021-11-05T17:49:58 | 2021-11-05T17:49:58 | {
"login": "lvwerra",
"id": 8264887,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
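The ordering fix requested above can be sketched without any `datasets` internals; `resolve_data_files` is a hypothetical helper, not part of the library's API, that simply sorts the matched shard names (zero-padded names like `data_001.json` sort correctly lexicographically):

```python
def resolve_data_files(matches):
    # Filesystem globbing order is platform-dependent; sorting makes the
    # shard order deterministic and reproducible across runs.
    return sorted(matches)

files = ["my_data/data_002.json", "my_data/data_010.json", "my_data/data_001.json"]
assert resolve_data_files(files) == [
    "my_data/data_001.json",
    "my_data/data_002.json",
    "my_data/data_010.json",
]
```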
1,044,617,913 | 3,211 | Fix disable_nullable default value to False | Currently the `disable_nullable` parameter is not consistent across all dataset transforms. For example it is `False` in `map` but `True` in `flatten_indices`.
This creates unexpected behaviors like this
```python
from datasets import Dataset, concatenate_datasets
d1 = Dataset.from_dict({"a": [0, 1, 2, 3]})
d2... | closed | https://github.com/huggingface/datasets/pull/3211 | 2021-11-04T10:52:06 | 2021-11-04T11:08:21 | 2021-11-04T11:08:20 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,044,611,471 | 3,210 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py | when I use python examples/pytorch/translation/run_translation.py --model_name_or_path examples/pytorch/translation/opus-mt-en-ro --do_train --do_eval --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config_name ro-en --output_dir /tmp/tst-translation --per_device_tra... | closed | https://github.com/huggingface/datasets/issues/3210 | 2021-11-04T10:47:26 | 2022-03-30T08:26:35 | 2022-03-30T08:26:35 | {
"login": "xiuzhilu",
"id": 28184983,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
1,044,505,771 | 3,209 | Unpin keras once TF fixes its release | Related to:
- #3208 | closed | https://github.com/huggingface/datasets/issues/3209 | 2021-11-04T09:15:32 | 2021-11-05T10:57:37 | 2021-11-05T10:57:37 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | false | [] |
1,044,504,093 | 3,208 | Pin keras version until TF fixes its release | Fix #3207. | closed | https://github.com/huggingface/datasets/pull/3208 | 2021-11-04T09:13:32 | 2021-11-04T09:30:55 | 2021-11-04T09:30:54 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,044,496,389 | 3,207 | CI error: Another metric with the same name already exists in Keras 2.7.0 | ## Describe the bug
Release of TensorFlow 2.7.0 contains an incompatibility with Keras. See:
- keras-team/keras#15579
This breaks our CI test suite: https://app.circleci.com/pipelines/github/huggingface/datasets/8493/workflows/055c7ae2-43bc-49b4-9f11-8fc71f35a25c/jobs/52363
| closed | https://github.com/huggingface/datasets/issues/3207 | 2021-11-04T09:04:11 | 2021-11-04T09:30:54 | 2021-11-04T09:30:54 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,044,216,270 | 3,206 | [WIP] Allow user-defined hash functions via a registry | Inspired by the discussion on hashing in https://github.com/huggingface/datasets/issues/3178#issuecomment-959016329, @lhoestq suggested that it would be neat to allow users more control over the hashing process. Specifically, it would be great if users can specify specific hashing functions depending on the **class** o... | closed | https://github.com/huggingface/datasets/pull/3206 | 2021-11-03T23:25:42 | 2021-11-05T12:38:11 | 2021-11-05T12:38:04 | {
"login": "BramVanroy",
"id": 2779410,
"type": "User"
} | [] | true | [] |
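The registry proposed above parallels the stdlib `copyreg` mechanism, which lets callers override how a given class is reduced for pickling. A rough sketch under that analogy (the `SpacyLike` class and its lambda attribute are illustrative stand-ins, not spaCy):

```python
import copyreg
import pickle

class SpacyLike:
    """Stand-in for an object whose default pickling is unstable or fails."""
    def __init__(self, config):
        self.config = config
        self.scratch = lambda: None  # lambdas are not picklable by default

def reduce_spacy_like(obj):
    # Reconstruct from the stable config only, ignoring the scratch state
    return (SpacyLike, (obj.config,))

# Register the custom reducer; pickle consults copyreg's dispatch table
copyreg.pickle(SpacyLike, reduce_spacy_like)

payload = pickle.dumps(SpacyLike("en_core_web_sm"))
assert pickle.loads(payload).config == "en_core_web_sm"
# Equal configs now serialize to identical bytes, so a hash of the
# pickled bytes would be stable across equivalent objects.
assert pickle.dumps(SpacyLike("x")) == pickle.dumps(SpacyLike("x"))
```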
1,044,099,561 | 3,205 | Add Multidoc2dial Dataset | This PR adds the MultiDoc2Dial dataset introduced in this [paper](https://arxiv.org/pdf/2109.12595v1.pdf ) | closed | https://github.com/huggingface/datasets/pull/3205 | 2021-11-03T20:48:31 | 2021-11-24T17:32:49 | 2021-11-24T16:55:08 | {
"login": "sivasankalpp",
"id": 7344617,
"type": "User"
} | [] | true | [] |
1,043,707,307 | 3,204 | FileNotFoundError for TupleIE dataset | Hi,
`dataset = datasets.load_dataset('tuple_ie', 'all')`
returns a FileNotFound error. Is the data not available?
Many thanks.
| closed | https://github.com/huggingface/datasets/issues/3204 | 2021-11-03T14:56:55 | 2021-11-05T15:51:15 | 2021-11-05T14:16:05 | {
"login": "arda-vianai",
"id": 75334917,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,043,552,766 | 3,203 | Updated: DaNE - updated URL for download | It seems that DaNLP has updated their download URLs and it therefore also needs to be updated in here... | closed | https://github.com/huggingface/datasets/pull/3203 | 2021-11-03T12:55:13 | 2021-11-04T13:14:36 | 2021-11-04T11:46:43 | {
"login": "MalteHB",
"id": 47593213,
"type": "User"
} | [] | true | [] |
1,043,213,660 | 3,202 | Add mIoU metric | **Is your feature request related to a problem? Please describe.**
Recently, some semantic segmentation models were added to HuggingFace Transformers, including [SegFormer](https://huggingface.co/transformers/model_doc/segformer.html) and [BEiT](https://huggingface.co/transformers/model_doc/beit.html).
Semantic seg... | closed | https://github.com/huggingface/datasets/issues/3202 | 2021-11-03T08:42:32 | 2022-06-01T17:39:05 | 2022-06-01T17:39:04 | {
"login": "NielsRogge",
"id": 48327001,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,043,209,142 | 3,201 | Add GSM8K dataset | ## Adding a Dataset
- **Name:** GSM8K (short for Grade School Math 8k)
- **Description:** GSM8K is a dataset of 8.5K high quality linguistically diverse grade school math word problems created by human problem writers.
- **Paper:** https://openai.com/blog/grade-school-math/
- **Data:** https://github.com/openai/gra... | closed | https://github.com/huggingface/datasets/issues/3201 | 2021-11-03T08:36:44 | 2022-04-13T11:56:12 | 2022-04-13T11:56:11 | {
"login": "NielsRogge",
"id": 48327001,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,042,887,291 | 3,200 | Catch token invalid error in CI | The staging back end sometimes returns invalid token errors when trying to delete a repo.
I modified the fixture in the test that uses staging to ignore this error | closed | https://github.com/huggingface/datasets/pull/3200 | 2021-11-02T21:56:26 | 2021-11-03T09:41:08 | 2021-11-03T09:41:08 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,042,860,935 | 3,199 | Bump huggingface_hub | huggingface_hub just released its first minor version, so we need to update the dependency
It was supposed to be part of 1.15.0 but I'm adding it for 1.15.1 | closed | https://github.com/huggingface/datasets/pull/3199 | 2021-11-02T21:29:10 | 2021-11-14T01:48:11 | 2021-11-02T21:41:40 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,042,679,548 | 3,198 | Add Multi-Lingual LibriSpeech | Add https://www.openslr.org/94/ | closed | https://github.com/huggingface/datasets/pull/3198 | 2021-11-02T18:23:59 | 2021-11-04T17:09:22 | 2021-11-04T17:09:22 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
1,042,541,127 | 3,197 | Fix optimized encoding for arrays | Hi !
#3124 introduced a regression that made the benchmarks CI fail because of a bad array comparison when checking the first encoded element. This PR fixes this by making sure that encoding is applied on all sequence types except lists.
cc @eladsegal fyi (no big deal) | closed | https://github.com/huggingface/datasets/pull/3197 | 2021-11-02T15:55:53 | 2021-11-02T19:12:24 | 2021-11-02T19:12:23 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,042,223,913 | 3,196 | QOL improvements: auto-flatten_indices and desc in map calls | This PR:
* automatically calls `flatten_indices` where needed: in `unique` and `save_to_disk` to avoid saving the indices file
* adds descriptions to the map calls
Fix #3040 | closed | https://github.com/huggingface/datasets/pull/3196 | 2021-11-02T11:28:50 | 2021-11-02T15:41:09 | 2021-11-02T15:41:08 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,042,204,044 | 3,195 | More robust `None` handling | PyArrow has explicit support for `null` values, so it makes sense to support Nones on our side as well.
[Colab Notebook with examples](https://colab.research.google.com/drive/1zcK8BnZYnRe3Ao2271u1T19ag9zLEiy3?usp=sharing)
Changes:
* allow None for the features types with special encoding (`ClassLabel, Translatio... | closed | https://github.com/huggingface/datasets/pull/3195 | 2021-11-02T11:15:10 | 2021-12-09T14:27:00 | 2021-12-09T14:26:58 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,041,999,535 | 3,194 | Update link to Datasets Tagging app in Spaces | Fix #3193. | closed | https://github.com/huggingface/datasets/pull/3194 | 2021-11-02T08:13:50 | 2021-11-08T10:36:23 | 2021-11-08T10:36:22 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,041,971,117 | 3,193 | Update link to datasets-tagging app | Once datasets-tagging has been transferred to Spaces:
- huggingface/datasets-tagging#22
We should update the link in Datasets. | closed | https://github.com/huggingface/datasets/issues/3193 | 2021-11-02T07:39:59 | 2021-11-08T10:36:22 | 2021-11-08T10:36:22 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | false | [] |
1,041,308,086 | 3,192 | Multiprocessing filter/map (tests) not working on Windows | While running the tests, I found that the multiprocessing examples fail on Windows, or rather they do not complete: they cause a deadlock. I haven't dug deep into it, but they do not seem to work as-is. I currently have no time to tests this in detail but at least the tests seem not to run correctly (deadlocking).
#... | open | https://github.com/huggingface/datasets/issues/3192 | 2021-11-01T15:36:08 | 2021-11-01T15:57:03 | null | {
"login": "BramVanroy",
"id": 2779410,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,041,225,111 | 3,191 | Dataset viewer issue for '*compguesswhat*' | ## Dataset viewer issue for '*compguesswhat*'
**Link:** https://huggingface.co/datasets/compguesswhat
File not found
Am I the one who added this dataset ? No
| closed | https://github.com/huggingface/datasets/issues/3191 | 2021-11-01T14:16:49 | 2022-09-12T08:02:29 | 2022-09-12T08:02:29 | {
"login": "benotti",
"id": 2545336,
"type": "User"
} | [
{
"name": "streaming",
"color": "fef2c0"
}
] | false | [] |
1,041,153,631 | 3,190 | combination of shuffle and filter results in a bug | ## Describe the bug
Hi,
I would like to shuffle a dataset, then filter it based on each existing label. However, the combination of `filter` and `shuffle` seems to result in a bug. In the minimal example below, as you see in the filtered results, the filtered labels are not unique, meaning filter has not worked. Any su... | closed | https://github.com/huggingface/datasets/issues/3190 | 2021-11-01T13:07:29 | 2021-11-02T10:50:49 | 2021-11-02T10:50:49 | {
"login": "rabeehk",
"id": 6278280,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
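The symptom above is consistent with an indices mapping being dropped. A pure-Python analogy of how a lazy shuffle mapping and a subsequent filter should compose (this mimics the idea, not the actual `datasets` internals):

```python
import random

data = ["a0", "b1", "a2", "b3", "a4", "b5"]  # label is the first character

# A lazy shuffle only builds an indices mapping over the base table
perm = list(range(len(data)))
random.Random(42).shuffle(perm)

# A correct filter composes with that mapping, selecting from the
# shuffled view; ignoring `perm` here is the kind of bug reported above.
kept = [i for i in perm if data[i].startswith("a")]
filtered = [data[i] for i in kept]

assert sorted(filtered) == ["a0", "a2", "a4"]
```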
1,041,044,986 | 3,189 | conll2003 incorrect label explanation | In the [conll2003](https://huggingface.co/datasets/conll2003#data-fields) README, the labels are described as follows
> - `id`: a `string` feature.
> - `tokens`: a `list` of `string` features.
> - `pos_tags`: a `list` of classification labels, with possible values including `"` (0), `''` (1), `#` (2), `$` (3), `(`... | closed | https://github.com/huggingface/datasets/issues/3189 | 2021-11-01T11:03:30 | 2021-11-09T10:40:58 | 2021-11-09T10:40:58 | {
"login": "BramVanroy",
"id": 2779410,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,040,980,712 | 3,188 | conll2002 issues | **Link:** https://huggingface.co/datasets/conll2002
The dataset viewer throws a server error when trying to preview the dataset.
```
Message: Extraction protocol 'train' for file at 'https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/esp.train' is not implemented yet
```
I... | closed | https://github.com/huggingface/datasets/issues/3188 | 2021-11-01T09:49:24 | 2021-11-15T13:50:59 | 2021-11-12T17:18:11 | {
"login": "BramVanroy",
"id": 2779410,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,040,412,869 | 3,187 | Add ChrF(++) (as implemented in sacrebleu) | Similar to my [PR for TER](https://github.com/huggingface/datasets/pull/3153), it feels only right to also include ChrF and friends. These are present in Sacrebleu and are therefore very similar to implement as TER and sacrebleu. I tested the implementation with sacrebleu's tests to verify. You can try this below for y... | closed | https://github.com/huggingface/datasets/pull/3187 | 2021-10-31T08:53:58 | 2021-11-02T14:50:50 | 2021-11-02T14:31:26 | {
"login": "BramVanroy",
"id": 2779410,
"type": "User"
} | [] | true | [] |
1,040,369,397 | 3,186 | Dataset viewer for nli_tr | ## Dataset viewer issue for '*nli_tr*'
**Link:** https://huggingface.co/datasets/nli_tr
Hello,
Thank you for the new dataset preview feature that will help the users to view the datasets online.
We just noticed that the dataset viewer widget in the `nli_tr` dataset shows the error below. The error must be d... | closed | https://github.com/huggingface/datasets/issues/3186 | 2021-10-31T03:56:33 | 2022-09-12T09:15:34 | 2022-09-12T08:43:09 | {
"login": "e-budur",
"id": 2246791,
"type": "User"
} | [
{
"name": "streaming",
"color": "fef2c0"
}
] | false | [] |
1,040,291,961 | 3,185 | 7z dataset preview not implemented? | ## Dataset viewer issue for dataset 'samsum'
**Link:** https://huggingface.co/datasets/samsum
Server Error
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol '7z' for file at 'https://arxiv.org/src/1911.12237v2/anc/corpus.7z' is not implemented yet
| closed | https://github.com/huggingface/datasets/issues/3185 | 2021-10-30T20:18:27 | 2022-04-12T11:48:16 | 2022-04-12T11:48:07 | {
"login": "Kirili4ik",
"id": 30757466,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,040,114,102 | 3,184 | RONEC v2 | Hi, as we've recently finished with the new RONEC (Romanian Named Entity Corpus), we'd like to update the dataset here as well. It's actually essential as links to V1 are no longer valid.
In reality we'd like to replace completely v1, as v2 is a full re-annotation of v1 with additional data (up to 2x size vs v1).
... | closed | https://github.com/huggingface/datasets/pull/3184 | 2021-10-30T10:50:03 | 2021-11-02T16:02:23 | 2021-11-02T16:02:22 | {
"login": "dumitrescustefan",
"id": 22746816,
"type": "User"
} | [] | true | [] |
1,039,761,120 | 3,183 | Add missing docstring to DownloadConfig | Document the `use_etag` and `num_proc` attributes in `DownloadConig`. | closed | https://github.com/huggingface/datasets/pull/3183 | 2021-10-29T16:56:35 | 2021-11-02T10:25:38 | 2021-11-02T10:25:37 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,039,739,606 | 3,182 | Don't memoize strings when hashing since two identical strings may have different python ids | When hashing an object that contains the same string several times, the hash could differ depending on whether or not the identical strings share the same python `id()`.
Here is an example code that shows how the issue can affect the caching:
```python
import json
import pyarrow as pa
from datasets.features import ... | closed | https://github.com/huggingface/datasets/pull/3182 | 2021-10-29T16:26:17 | 2021-11-02T09:35:38 | 2021-11-02T09:35:37 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
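The root cause described above can be reproduced with plain `pickle`, whose memo is keyed on object identity rather than equality: two equal containers serialize to different bytes when their equal strings are distinct objects. A minimal stdlib demonstration:

```python
import pickle

n = 64
a = "x" * n  # built at runtime, so CPython does not intern it
b = "x" * n  # equal to `a` but a distinct object
assert a == b and a is not b

same_obj = pickle.dumps([a, a])  # second element is a memo back-reference
diff_obj = pickle.dumps([a, b])  # both strings are written out in full

assert [a, a] == [a, b]      # logically identical payloads...
assert same_obj != diff_obj  # ...yet the bytes depend on object identity
```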
1,039,682,097 | 3,181 | `None` converted to `"None"` when loading a dataset | ## Describe the bug
When loading a dataset, `None` values (of type `NoneType`) are converted to `'None'` (of type `str`).
## Steps to reproduce the bug
```python
from datasets import load_dataset
qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists")
print(qasper[60]["full_text... | closed | https://github.com/huggingface/datasets/issues/3181 | 2021-10-29T15:23:53 | 2021-12-11T01:16:40 | 2021-12-09T14:26:57 | {
"login": "eladsegal",
"id": 13485709,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
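A toy illustration of the coercion reported above: an eager cast to `str` turns `None` into the literal string `'None'`, while a null-aware cast preserves it (this mimics the symptom only, not the Arrow internals):

```python
column = ["first", None, "third"]

# Coercive cast: None silently becomes the string 'None'
coerced = [str(x) for x in column]

# Null-aware cast: nulls survive the conversion
null_aware = [str(x) if x is not None else None for x in column]

assert coerced == ["first", "None", "third"]
assert null_aware == ["first", None, "third"]
```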
1,039,641,316 | 3,180 | fix label mapping | Fixing label mapping for hlgd.
0 corresponds to same event and 1 corresponds to different event
<img width="642" alt="Capture d’écran 2021-10-29 à 10 39 58 AM" src="https://user-images.githubusercontent.com/16107619/139454810-1f225e3d-ad48-44a8-b8b1-9205c9533839.png">
<img width="638" alt="Capture d’écran 2021-10-... | closed | https://github.com/huggingface/datasets/pull/3180 | 2021-10-29T14:42:24 | 2021-11-02T13:41:07 | 2021-11-02T10:37:12 | {
"login": "VictorSanh",
"id": 16107619,
"type": "User"
} | [] | true | [] |
1,039,571,928 | 3,179 | Cannot load dataset when the config name is "special" | ## Describe the bug
After https://github.com/huggingface/datasets/pull/3159, we can get the config name of "Check/region_1", which is "Check___region_1".
But now we cannot load the dataset (not sure it's related to the above PR though). It's the case for all the similar datasets, listed in https://github.com/hugg... | closed | https://github.com/huggingface/datasets/issues/3179 | 2021-10-29T13:30:47 | 2021-10-29T13:35:21 | 2021-10-29T13:35:21 | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
},
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,039,539,076 | 3,178 | "Property couldn't be hashed properly" even though fully picklable | ## Describe the bug
I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable.
... | closed | https://github.com/huggingface/datasets/issues/3178 | 2021-10-29T12:56:09 | 2024-08-19T13:03:49 | 2022-11-02T17:18:43 | {
"login": "BramVanroy",
"id": 2779410,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,039,487,780 | 3,177 | More control over TQDM when using map/filter with multiple processes | It would help with the clutter in my terminal if tqdm is only shown for rank 0 when using `num_proc>0` in the map and filter methods of datasets.
```python
dataset.map(lambda examples: tokenize(examples["text"]), batched=True, num_proc=6)
```
The above snippet leads to a lot of TQDM bars and depending on your... | closed | https://github.com/huggingface/datasets/issues/3177 | 2021-10-29T11:56:16 | 2023-02-13T20:16:40 | 2023-02-13T20:16:40 | {
"login": "BramVanroy",
"id": 2779410,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
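A common pattern for the request above is to let only the main worker draw a bar via `tqdm`'s `disable` flag; `progress_kwargs` is a hypothetical helper sketched here with no dependency on `datasets`:

```python
def progress_kwargs(rank, main_rank=0):
    """tqdm keyword arguments so only one worker renders a progress bar."""
    return {"disable": rank != main_rank, "position": rank}

# Worker 0 draws a bar; workers 1..5 stay silent
assert progress_kwargs(0) == {"disable": False, "position": 0}
assert all(progress_kwargs(r)["disable"] for r in range(1, 6))
```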
1,039,068,312 | 3,176 | OpenSLR dataset: update generate_examples to properly extract data for SLR83 | Fixed #3168.
The SLR83 indices are CSV files and there wasn't any code in openslr.py to process these files properly. The end result was an empty table.
I've added code to properly process these CSV files. | closed | https://github.com/huggingface/datasets/pull/3176 | 2021-10-29T00:59:27 | 2021-11-04T16:20:45 | 2021-10-29T10:04:09 | {
"login": "tyrius02",
"id": 4561309,
"type": "User"
} | [] | true | [] |
1,038,945,271 | 3,175 | Add docs for `to_tf_dataset` | This PR adds some documentation for new features released in v1.13.0, with the main addition being `to_tf_dataset`:
- Show how to use `to_tf_dataset` in the tutorial, and move `set_format(type='tensorflow'...)` to the Process section (let me know if I'm missing anything @Rocketknight1 😅).
- Add an example for load... | closed | https://github.com/huggingface/datasets/pull/3175 | 2021-10-28T20:55:22 | 2021-11-03T15:39:36 | 2021-11-03T10:07:23 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |