id | number | title | body | state | html_url | created_at | updated_at | closed_at | user | labels | is_pull_request | comments
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,244,839,185 | 4,391 | Refactor column mappings for question answering datasets | This PR tweaks the keys in the metadata that are used to define the column mapping for question answering datasets. This is needed in order to faithfully reconstruct column names like `answers.text` and `answers.answer_start` from the keys in AutoTrain.
As observed in https://github.com/huggingface/datasets/pull/436... | closed | https://github.com/huggingface/datasets/pull/4391 | 2022-05-23T09:13:14 | 2022-05-24T12:57:00 | 2022-05-24T12:48:48 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | true | [] |
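The nested-to-dotted reconstruction this PR describes can be sketched as follows (a hypothetical illustration of the idea, not the actual AutoTrain code):

```python
def flatten_feature_names(features, prefix=""):
    """Recursively turn nested feature dicts into dotted column names."""
    names = []
    for key, value in features.items():
        full_name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            names.extend(flatten_feature_names(value, full_name))
        else:
            names.append(full_name)
    return names

# A SQuAD-style schema yields the dotted names the PR needs to reconstruct:
qa_features = {"question": "string", "answers": {"text": "list", "answer_start": "list"}}
print(flatten_feature_names(qa_features))
# ['question', 'answers.text', 'answers.answer_start']
```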
1,244,835,877 | 4,390 | Fix metadata validation | Since Python 3.8, the typing module:
- raises an AttributeError when trying to access `__args__` on any type, e.g.: `List.__args__`
- provides the `get_args` function instead: `get_args(List)`
This PR implements a fix for Python >=3.8 while maintaining backward compatibility. | closed | https://github.com/huggingface/datasets/pull/4390 | 2022-05-23T09:11:20 | 2022-06-01T09:27:52 | 2022-06-01T09:19:25 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
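A minimal sketch of the version-compatible accessor this PR describes (the helper name is hypothetical; the real fix lives in the datasets metadata-validation code):

```python
import typing

def get_type_args(tp):
    """Return the type arguments of a parametrized typing construct.

    Python >= 3.8 provides typing.get_args(); on older versions we fall
    back to the __args__ attribute the PR description mentions.
    """
    if hasattr(typing, "get_args"):  # Python >= 3.8
        return typing.get_args(tp)
    return getattr(tp, "__args__", ())

print(get_type_args(typing.List[int]))       # (<class 'int'>,)
print(get_type_args(typing.Dict[str, int]))  # (<class 'str'>, <class 'int'>)
```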
1,244,693,690 | 4,389 | Fix bug in gem dataset for wiki_auto_asset_turk config | This PR fixes some URLs.
Fix #4386. | closed | https://github.com/huggingface/datasets/pull/4389 | 2022-05-23T07:19:49 | 2022-05-23T10:38:26 | 2022-05-23T10:29:55 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,244,645,158 | 4,388 | Set builder name from module instead of class | Now the builder name attribute is set from from the builder class name.
This PR sets the builder name attribute from the module name instead. Some motivating reasons:
- The dataset ID is relevant and unique among all datasets and this is directly related to the repository name, i.e., the name of the directory conta... | closed | https://github.com/huggingface/datasets/pull/4388 | 2022-05-23T06:26:35 | 2022-05-25T05:24:43 | 2022-05-25T05:16:15 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
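The two naming strategies can be contrasted with a small sketch (helper names are hypothetical; the real builder uses its own attribute plumbing):

```python
import re

class Mtop:  # hypothetical builder class shared by several dataset repos
    pass

def name_from_class(builder_cls):
    """Previous behaviour (sketch): snake_case the builder class name."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", builder_cls.__name__).lower()

def name_from_module(module_name):
    """Proposed behaviour (sketch): use the dataset script's module name."""
    return module_name.split(".")[-1]

print(name_from_class(Mtop))                     # mtop (same for every repo)
print(name_from_module("datasets.mtop_intent"))  # mtop_intent (unique per repo)
```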
1,244,147,817 | 4,387 | device/google/accessory/adk2012 - Git at Google | "git clone https://android.googlesource.com/device/google/accessory/adk2012"
https://android.googlesource.com/device/google/accessory/adk2012/#:~:text=git%20clone%20https%3A//android.googlesource.com/device/google/accessory/adk2012 | closed | https://github.com/huggingface/datasets/issues/4387 | 2022-05-22T04:57:19 | 2022-05-23T06:36:27 | 2022-05-23T06:36:27 | {
"login": "Aeckard45",
"id": 87345839,
"type": "User"
} | [] | false | [] |
1,243,965,532 | 4,386 | Bug for wiki_auto_asset_turk from GEM | ## Describe the bug
The script of wiki_auto_asset_turk for GEM may be out of date.
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('gem', 'wiki_auto_asset_turk')
```
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/... | closed | https://github.com/huggingface/datasets/issues/4386 | 2022-05-21T12:31:30 | 2022-05-24T05:55:52 | 2022-05-23T10:29:55 | {
"login": "StevenTang1998",
"id": 37647985,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,243,921,287 | 4,385 | Test dill | Regression test for future releases of `dill`.
Related to #4379. | closed | https://github.com/huggingface/datasets/pull/4385 | 2022-05-21T08:57:43 | 2022-05-25T08:30:13 | 2022-05-25T08:21:48 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,243,919,748 | 4,384 | Refactor download | This PR performs a refactoring of the download functionalities, by proposing a modular solution and moving them to their own package "download". Some motivating arguments:
- understandability: from a logical partitioning of the library, it makes sense to have all download functionalities grouped together instead of sc... | closed | https://github.com/huggingface/datasets/pull/4384 | 2022-05-21T08:49:24 | 2022-05-25T10:52:02 | 2022-05-25T10:43:43 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,243,856,981 | 4,383 | L | ## Describe the L
L
## Expected L
A clear and concise lmll
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version: | closed | https://github.com/huggingface/datasets/issues/4383 | 2022-05-21T03:47:58 | 2022-05-21T19:20:13 | 2022-05-21T19:20:13 | {
"login": "AronCodes21",
"id": 99847861,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,243,839,783 | 4,382 | First time trying | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | closed | https://github.com/huggingface/datasets/issues/4382 | 2022-05-21T02:15:18 | 2022-05-21T19:20:44 | 2022-05-21T19:20:44 | {
"login": "Aeckard45",
"id": 87345839,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,243,478,863 | 4,381 | Bug in caching 2 datasets both with the same builder class name | ## Describe the bug
The two datasets `mteb/mtop_intent` and `mteb/mtop_domain` both use the same cache folder `.cache/huggingface/datasets/mteb___mtop`. So if you first load `mteb/mtop_intent`, then datasets will not load `mteb/mtop_domain`.
If you delete this cache folder and flip the order in which you load the two datas... | closed | https://github.com/huggingface/datasets/issues/4381 | 2022-05-20T18:18:03 | 2022-06-02T08:18:37 | 2022-05-25T05:16:15 | {
"login": "NouamaneTazi",
"id": 29777165,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
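The collision can be illustrated with a sketch of a class-derived cache path (a hypothetical layout mirroring the `.cache/huggingface/datasets/mteb___mtop` folder from the report):

```python
import posixpath

def cache_dir_for(namespace, builder_name):
    # Sketch: cache folder derived from the namespace plus the builder name.
    return posixpath.join(".cache", "huggingface", "datasets", f"{namespace}___{builder_name}")

# Both repos use a builder class named "Mtop", so a class-derived name collides:
intent_dir = cache_dir_for("mteb", "mtop")  # for mteb/mtop_intent
domain_dir = cache_dir_for("mteb", "mtop")  # for mteb/mtop_domain
print(intent_dir == domain_dir)  # True -- the second load reuses the first cache

# Deriving the name from the repo/module instead keeps the paths distinct:
print(cache_dir_for("mteb", "mtop_intent") == cache_dir_for("mteb", "mtop_domain"))  # False
```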
1,243,183,054 | 4,380 | Pin dill | Hotfix #4379.
CC: @sgugger | closed | https://github.com/huggingface/datasets/pull/4380 | 2022-05-20T13:54:19 | 2022-06-13T10:03:52 | 2022-05-20T16:33:04 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
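A hotfix pin like this is typically expressed as an upper bound in the package requirements; a sketch (the exact version bound here is illustrative, not necessarily the one from the PR):

```python
# setup.py (sketch) -- cap dill below the release that broke pickling
install_requires = [
    "dill<0.3.5",  # hypothetical upper bound; see #4379
]
print(install_requires)
```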
1,243,175,854 | 4,379 | Latest dill release raises exception | ## Describe the bug
As reported by @sgugger, the latest `dill` release is breaking things with Datasets.
```
______________ ExamplesTests.test_run_speech_recognition_seq2seq _______________
self = <multiprocess.pool.ApplyResult object at 0x7fa5981a1cd0>, timeout = None
def get(self, timeout=None):
s... | closed | https://github.com/huggingface/datasets/issues/4379 | 2022-05-20T13:48:36 | 2022-05-21T15:53:26 | 2022-05-20T17:06:27 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,242,935,373 | 4,378 | Tidy up license metadata for google_wellformed_query, newspop, sick | Amend three licenses on datasets to fit naming convention (lower case, cc licenses include sub-version number). I think that's it - everything else on datasets looks great & super-searchable now! | closed | https://github.com/huggingface/datasets/pull/4378 | 2022-05-20T10:16:12 | 2022-05-24T13:50:23 | 2022-05-24T13:10:27 | {
"login": "leondz",
"id": 121934,
"type": "User"
} | [] | true | [] |
1,242,746,186 | 4,377 | Fix checksum and bug in irc_disentangle dataset | There was a bug in filepath segment:
- wrong: `jkkummerfeld-irc-disentanglement-fd379e9`
- right: `jkkummerfeld-irc-disentanglement-35f0a40`
Also there was a bug in the checksum of the downloaded file.
This PR fixes these issues.
Fix partially #4376.
| closed | https://github.com/huggingface/datasets/pull/4377 | 2022-05-20T07:29:28 | 2022-05-20T09:34:36 | 2022-05-20T09:26:32 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,242,218,144 | 4,376 | irc_disentagle viewer error | the dataviewer shows this message for "ubuntu" - "train", "test", and "validation" splits:
```
Server error
Status code: 400
Exception: ValueError
Message: Cannot seek streaming HTTP file
```
It appears to give the same message for the "channel_two" data as well.
I get a Checksums error when usi... | closed | https://github.com/huggingface/datasets/issues/4376 | 2022-05-19T19:15:16 | 2023-01-12T16:56:13 | 2022-06-02T08:20:00 | {
"login": "labouz",
"id": 25671683,
"type": "User"
} | [] | false | [] |
1,241,921,147 | 4,375 | Support DataLoader with num_workers > 0 in streaming mode | ### Issue
It's currently not possible to properly stream a dataset using multiple `torch.utils.data.DataLoader` workers:
- the `TorchIterableDataset` can't be pickled and passed to the subprocesses: https://github.com/huggingface/datasets/issues/3950
- streaming extension is failing: https://github.com/huggingfa... | closed | https://github.com/huggingface/datasets/pull/4375 | 2022-05-19T15:00:31 | 2022-07-04T16:05:14 | 2022-06-10T20:47:27 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
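The per-worker sharding such a fix typically relies on can be sketched without torch (in the real DataLoader case, `torch.utils.data.get_worker_info()` supplies the worker `id` and `num_workers`):

```python
import itertools

def shard_for_worker(stream, worker_id, num_workers):
    """Give each DataLoader worker every num_workers-th example of a stream."""
    return itertools.islice(stream, worker_id, None, num_workers)

# With 2 workers, the streamed examples are split without duplication:
print(list(shard_for_worker(range(7), 0, 2)))  # [0, 2, 4, 6]
print(list(shard_for_worker(range(7), 1, 2)))  # [1, 3, 5]
```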
1,241,860,535 | 4,374 | extremely slow processing when using a custom dataset | ## processing a custom dataset loaded as .txt file is extremely slow, compared to a dataset of similar volume from the hub
I have a large .txt file of 22 GB which I load into an HF dataset
`lang_dataset = datasets.load_dataset("text", data_files="hi.txt")`
Further, I use a pre-processing function to clean the d... | closed | https://github.com/huggingface/datasets/issues/4374 | 2022-05-19T14:18:05 | 2023-07-25T15:07:17 | 2023-07-25T15:07:16 | {
"login": "StephennFernandes",
"id": 32235549,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
},
{
"name": "question",
"color": "d876e3"
}
] | false | [] |
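A common mitigation for this kind of slowdown is batched, multi-process mapping; a sketch (the cleaning function is a stand-in, not the reporter's actual code):

```python
def clean_batch(batch):
    """Process whole batches at once to cut per-example Python overhead."""
    return {"text": [t.strip().lower() for t in batch["text"]]}

# With a datasets.Dataset this would typically be applied as:
# lang_dataset = lang_dataset.map(clean_batch, batched=True,
#                                 batch_size=10_000, num_proc=8)

print(clean_batch({"text": ["  Hello ", "WORLD"]}))  # {'text': ['hello', 'world']}
```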
1,241,769,310 | 4,373 | Remove links in docs to old dataset viewer | Remove the links in the docs to the no longer maintained dataset viewer. | closed | https://github.com/huggingface/datasets/pull/4373 | 2022-05-19T13:24:39 | 2022-05-20T15:24:28 | 2022-05-20T15:16:05 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,241,703,826 | 4,372 | Check if dataset features match before push in `DatasetDict.push_to_hub` | Fix #4211 | closed | https://github.com/huggingface/datasets/pull/4372 | 2022-05-19T12:32:30 | 2022-05-20T15:23:36 | 2022-05-20T15:15:30 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,241,500,906 | 4,371 | Add missing language tags for udhr dataset | Related to #4362. | closed | https://github.com/huggingface/datasets/pull/4371 | 2022-05-19T09:34:10 | 2022-06-08T12:03:24 | 2022-05-20T09:43:10 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,240,245,642 | 4,369 | Add redirect to dataset script in the repo structure page | Following https://github.com/huggingface/hub-docs/pull/146 I added a redirection to the dataset scripts documentation in the repository structure page. | closed | https://github.com/huggingface/datasets/pull/4369 | 2022-05-18T17:05:33 | 2022-05-19T08:19:01 | 2022-05-19T08:10:51 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,240,064,860 | 4,368 | Add long answer candidates to natural questions dataset | This is a modification of the Natural Questions dataset to include missing information specifically related to long answer candidates. (See here: https://github.com/google-research-datasets/natural-questions#long-answer-candidates). This information is important to ensure consistent comparison with prior work. It does ... | closed | https://github.com/huggingface/datasets/pull/4368 | 2022-05-18T14:35:42 | 2022-07-26T20:30:41 | 2022-07-26T20:18:42 | {
"login": "seirasto",
"id": 4257308,
"type": "User"
} | [] | true | [] |
1,240,011,602 | 4,367 | Remove config names as yaml keys | Many datasets have dots in their config names. However it causes issues with the YAML tags of the dataset cards since we can't have dots in YAML keys.
To fix this, I removed the per-config-name tag separation completely and use a single flat YAML for all configurations. Dataset search doesn't use this info anywa... | closed | https://github.com/huggingface/datasets/pull/4367 | 2022-05-18T13:59:24 | 2022-05-20T09:35:26 | 2022-05-20T09:27:19 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,239,534,165 | 4,366 | TypeError: __init__() missing 1 required positional argument: 'scheme' | "name" : "node-1",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "",
"version" : {
"number" : "7.5.0",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "",
"build_date" : "2019-11-26T01:06:52.518245Z",
"build_snapshot" : false,
"lucene_version" : "8.3.0... | closed | https://github.com/huggingface/datasets/issues/4366 | 2022-05-18T07:17:29 | 2022-05-18T16:36:22 | 2022-05-18T16:36:21 | {
"login": "jffgitt",
"id": 99231535,
"type": "User"
} | [
{
"name": "duplicate",
"color": "cfd3d7"
}
] | false | [] |
1,239,109,943 | 4,365 | Remove dots in config names | 20+ datasets have dots in their config names. However it causes issues with the YAML tags of the dataset cards since we can't have dots in YAML keys.
This is related to https://github.com/huggingface/datasets/pull/2362 (internal https://github.com/huggingface/moon-landing/issues/946).
Also removing the dots in th... | closed | https://github.com/huggingface/datasets/pull/4365 | 2022-05-17T20:12:57 | 2023-09-24T10:02:53 | 2022-05-18T13:59:41 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,238,976,106 | 4,364 | Support complex feature types as `features` in packaged loaders | This PR adds `table_cast` to the packaged loaders to fix casting to the `Image`/`Audio`, `ArrayND` and `ClassLabel` types. If these types are not present in the `builder.config.features` dictionary, the built-in `pa.Table.cast` is used for better performance. Additionally, this PR adds `cast_storage` to `ClassLabel` to... | closed | https://github.com/huggingface/datasets/pull/4364 | 2022-05-17T17:53:23 | 2022-05-31T12:26:23 | 2022-05-31T12:16:32 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,238,897,652 | 4,363 | The dataset preview is not available for this split. | I have uploaded the corpus developed by our lab in the speech domain to huggingface [datasets](https://huggingface.co/datasets/Roh/ryanspeech). You can read about the companion paper accepted in interspeech 2021 [here](https://arxiv.org/abs/2106.08468). The dataset works fine but I can't make the dataset preview work. ... | closed | https://github.com/huggingface/datasets/issues/4363 | 2022-05-17T16:34:43 | 2022-06-08T12:32:10 | 2022-06-08T09:26:56 | {
"login": "roholazandie",
"id": 7584674,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,238,680,112 | 4,362 | Update dataset_infos for UDHN/udhr dataset | Checksum update to `udhr` for issue #4361 | closed | https://github.com/huggingface/datasets/pull/4362 | 2022-05-17T13:52:59 | 2022-06-08T19:20:11 | 2022-06-08T19:11:21 | {
"login": "leondz",
"id": 121934,
"type": "User"
} | [] | true | [] |
1,238,671,931 | 4,361 | `udhr` doesn't load, dataset checksum mismatch | ## Describe the bug
Loading `udhr` fails due to a checksum mismatch for some source files. Looks like both of the source files on unicode.org have changed:
size + checksum in datasets repo:
```
(hfdev) leon@blade:~/datasets/datasets/udhr$ jq .default.download_checksums < dataset_infos.json
{
"https://unicode... | closed | https://github.com/huggingface/datasets/issues/4361 | 2022-05-17T13:47:09 | 2022-06-08T19:11:21 | 2022-06-08T19:11:21 | {
"login": "leondz",
"id": 121934,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,237,239,096 | 4,360 | Fix example in opus_ubuntu, Add license info | This PR
* fixes a typo in the example for the `opus_ubuntu` dataset where it's mistakenly referred to as `ubuntu`
* adds the declared license info for this corpus' origin
* adds an example instance
* updates the data origin type | closed | https://github.com/huggingface/datasets/pull/4360 | 2022-05-16T14:22:28 | 2022-06-01T13:06:07 | 2022-06-01T12:57:09 | {
"login": "leondz",
"id": 121934,
"type": "User"
} | [] | true | [] |
1,237,149,578 | 4,359 | Fix Version equality | I think `Version` equality should align with other similar cases in Python, like:
```python
In [1]: "a" == 5, "a" == None
Out[1]: (False, False)
In [2]: "a" != 5, "a" != None
Out[2]: (True, True)
```
With this PR, we will get:
```python
In [3]: Version("1.0.0") == 5, Version("1.0.0") == None
Out[3]: (Fals... | closed | https://github.com/huggingface/datasets/pull/4359 | 2022-05-16T13:19:26 | 2022-05-24T16:25:37 | 2022-05-24T16:17:14 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
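The intended semantics can be sketched with a minimal stand-in class (the real `datasets` `Version` carries more fields and parsing logic):

```python
class Version:
    """Minimal sketch of version equality aligned with Python semantics."""

    def __init__(self, version_str):
        self.version_str = version_str

    def __eq__(self, other):
        # Comparing against a non-Version yields False rather than raising.
        if not isinstance(other, Version):
            return False
        return self.version_str == other.version_str

    def __ne__(self, other):
        return not self.__eq__(other)

print(Version("1.0.0") == 5, Version("1.0.0") == None)  # False False
print(Version("1.0.0") != 5, Version("1.0.0") != None)  # True True
print(Version("1.0.0") == Version("1.0.0"))             # True
```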
1,237,147,692 | 4,358 | Missing dataset tags and sections in some dataset cards | Summary of CircleCI errors for different dataset metadata:
- **BoolQ**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
- **Conllpp**: expected some content in section `Citati... | open | https://github.com/huggingface/datasets/issues/4358 | 2022-05-16T13:18:16 | 2022-05-30T15:36:52 | null | {
"login": "sashavor",
"id": 14205986,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,237,037,069 | 4,357 | Fix warning in push_to_hub | Fix warning:
```
FutureWarning: 'shard_size' was renamed to 'max_shard_size' in version 2.1.1 and will be removed in 2.4.0.
``` | closed | https://github.com/huggingface/datasets/pull/4357 | 2022-05-16T11:50:17 | 2022-05-16T15:18:49 | 2022-05-16T15:10:41 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,236,846,308 | 4,356 | Fix dataset builder default version | Currently, when using a custom config (subclass of `BuilderConfig`), default version set at the builder level is ignored: we must set default version in the custom config class.
However, when loading a dataset with `config_kwargs` (for a configuration not present in `BUILDER_CONFIGS`), the default version set in the... | closed | https://github.com/huggingface/datasets/pull/4356 | 2022-05-16T09:05:10 | 2022-05-30T13:56:58 | 2022-05-30T13:47:54 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,236,797,490 | 4,355 | Fix warning in upload_file | Fix warning:
```
FutureWarning: Pass path_or_fileobj='...' as keyword args. From version 0.7 passing these as positional arguments will result in an error
``` | closed | https://github.com/huggingface/datasets/pull/4355 | 2022-05-16T08:21:31 | 2022-05-16T11:28:02 | 2022-05-16T11:19:57 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,236,404,383 | 4,354 | Problems with WMT dataset | ## Describe the bug
I am trying to load WMT15 dataset and to define which data-sources to use for train/validation/test splits, but unfortunately it seems that the official documentation at [https://huggingface.co/datasets/wmt15#:~:text=Versions%20exists%20for,wmt_translate%22%2C%20config%3Dconfig)](https://huggingfac... | closed | https://github.com/huggingface/datasets/issues/4354 | 2022-05-15T20:58:26 | 2022-07-11T14:54:02 | 2022-07-11T14:54:01 | {
"login": "eldarkurtic",
"id": 8884008,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
},
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
1,236,092,176 | 4,353 | Don't strip proceeding hyphen | Closes #4320. | closed | https://github.com/huggingface/datasets/pull/4353 | 2022-05-14T18:25:29 | 2022-05-16T18:51:38 | 2022-05-16T13:52:11 | {
"login": "JohnGiorgi",
"id": 8917831,
"type": "User"
} | [] | true | [] |
1,236,086,170 | 4,352 | When using `dataset.map()` if passed `Features` types do not match what is returned from the mapped function, execution does not except in an obvious way | ## Describe the bug
Recently I was trying to use `.map()` to preprocess a dataset. I defined the expected Features and passed them into `.map()` like `dataset.map(preprocess_data, features=features)`. My expected `Features` keys matched what came out of `preprocess_data`, but the types I had defined for them did not... | open | https://github.com/huggingface/datasets/issues/4352 | 2022-05-14T17:55:15 | 2022-05-16T15:09:17 | null | {
"login": "plamb-viso",
"id": 99206017,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,235,950,209 | 4,351 | Add optional progress bar for .save_to_disk(..) and .load_from_disk(..) when working with remote filesystems | **Is your feature request related to a problem? Please describe.**
When working with large datasets stored on remote filesystems (such as s3), the process of uploading a dataset could take a really long time. For instance: I was uploading a re-processed version of wmt17 en-ru to my s3 bucket and it took like 35 minutes (a... | closed | https://github.com/huggingface/datasets/issues/4351 | 2022-05-14T11:30:42 | 2022-12-14T18:22:59 | 2022-12-14T18:22:59 | {
"login": "Rexhaif",
"id": 5154447,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
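One way to surface such progress is an optional callback around the shard uploads; a rough sketch (function and parameter names here are hypothetical, not the eventual `datasets` API):

```python
def save_shards(shards, write_fn, progress_callback=None):
    """Write shards one by one, reporting progress if a callback is given."""
    total = len(shards)
    for done, shard in enumerate(shards, start=1):
        write_fn(shard)
        if progress_callback is not None:
            progress_callback(done, total)

written = []
save_shards(["s0", "s1", "s2"], written.append,
            progress_callback=lambda d, t: print(f"uploaded {d}/{t}"))
# uploaded 1/3
# uploaded 2/3
# uploaded 3/3
```

In practice the callback slot would be filled by a `tqdm` progress bar wrapping the remote writes.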
1,235,505,104 | 4,350 | Add a new metric: CTC_Consistency | Add CTC_Consistency metric
Do I also need to modify the `test_metric_common.py` file to make it run on test? | closed | https://github.com/huggingface/datasets/pull/4350 | 2022-05-13T17:31:19 | 2022-05-19T10:23:04 | 2022-05-19T10:23:03 | {
"login": "YEdenZ",
"id": 92551194,
"type": "User"
} | [] | true | [] |
1,235,474,765 | 4,349 | Dataset.map() fails at any value of parameter writer_batch_size | ## Describe the bug
If the value of `writer_batch_size` is less than the total number of instances in the dataset, it will fail at that same number of instances. If it is greater than the total number of instances, it fails on the last instance.
Context:
I am attempting to fine-tune a pre-trained HuggingFace tr... | closed | https://github.com/huggingface/datasets/issues/4349 | 2022-05-13T16:55:12 | 2022-06-02T12:51:11 | 2022-05-14T15:08:08 | {
"login": "plamb-viso",
"id": 99206017,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,235,432,976 | 4,348 | `inspect` functions can't fetch dataset script from the Hub | The `inspect_dataset` and `inspect_metric` functions are unable to retrieve a dataset or metric script from the Hub and store it locally at the specified `local_path`:
```py
>>> from datasets import inspect_dataset
>>> inspect_dataset('rotten_tomatoes', local_path='path/to/my/local/folder')
FileNotFoundError: C... | closed | https://github.com/huggingface/datasets/issues/4348 | 2022-05-13T16:08:26 | 2022-06-09T10:26:06 | 2022-06-09T10:26:06 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,235,318,064 | 4,347 | Support remote cache_dir | This PR implements complete support for remote `cache_dir`. Before, the support was just partial.
This is useful to create datasets using Apache Beam (parallel data processing) builder with `cache_dir` in a remote bucket, e.g., for Wikipedia dataset. | closed | https://github.com/huggingface/datasets/pull/4347 | 2022-05-13T14:26:35 | 2022-05-25T16:35:23 | 2022-05-25T16:27:03 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,235,067,062 | 4,346 | GH Action to build documentation never ends | ## Describe the bug
See: https://github.com/huggingface/datasets/runs/6418035586?check_suite_focus=true
I finally had to force-cancel the workflow.
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,235,062,787 | 4,345 | Fix never ending GH Action to build documentation | There was an unclosed code block introduced by:
- #4313
https://github.com/huggingface/datasets/pull/4313/files#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R538
This causes the "Make documentation" step in the "Build documentation" workflow to never finish.
- I think this issue should... | closed | https://github.com/huggingface/datasets/pull/4345 | 2022-05-13T10:40:10 | 2022-05-13T11:29:43 | 2022-05-13T11:22:00 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,234,882,542 | 4,344 | Fix docstring in DatasetDict::shuffle | I think due to #1626, the docstring contained this error ever since `seed` was added. | closed | https://github.com/huggingface/datasets/pull/4344 | 2022-05-13T08:06:00 | 2022-05-25T09:23:43 | 2022-05-24T15:35:21 | {
"login": "felixdivo",
"id": 4403130,
"type": "User"
} | [] | true | [] |
1,234,864,168 | 4,343 | Metrics documentation is not accessible in the datasets doc UI | **Is your feature request related to a problem? Please describe.**
Search for a metric name like "seqeval" yields no results on https://huggingface.co/docs/datasets/master/en/index . One needs to go look in `datasets/metrics/README.md` to find the doc. Even in the `README.md`, it can be hard to understand what the met... | closed | https://github.com/huggingface/datasets/issues/4343 | 2022-05-13T07:46:30 | 2022-06-03T08:50:25 | 2022-06-03T08:50:25 | {
"login": "fxmarty",
"id": 9808326,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "Metric discussion",
"color": "d722e8"
}
] | false | [] |
1,234,743,765 | 4,342 | Fix failing CI on Windows for sari and wiki_split metrics | This PR adds `sacremoses` as explicit tests dependency (required by sari and wiki_split metrics).
Before, this library was installed as a third-party dependency, but this is no longer the case for Windows.
Fix #4341. | closed | https://github.com/huggingface/datasets/pull/4342 | 2022-05-13T05:03:38 | 2022-05-13T05:47:42 | 2022-05-13T05:47:42 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,234,739,703 | 4,341 | Failing CI on Windows for sari and wiki_split metrics | ## Describe the bug
Our CI has been failing since yesterday on Windows for two metrics: sari and wiki_split
```
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_sari - ...
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_wiki_split
```
See: https://app.circleci.com/pipelines/githu... | closed | https://github.com/huggingface/datasets/issues/4341 | 2022-05-13T04:55:17 | 2022-05-13T05:47:41 | 2022-05-13T05:47:41 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,234,671,025 | 4,340 | Fix irc_disentangle dataset script | updated extracted dataset's repo's latest commit hash (included in tarball's name), and updated the related data_infos.json | closed | https://github.com/huggingface/datasets/pull/4340 | 2022-05-13T02:37:57 | 2022-05-24T15:37:30 | 2022-05-24T15:37:29 | {
"login": "i-am-pad",
"id": 32005017,
"type": "User"
} | [] | true | [] |
1,234,496,289 | 4,339 | Dataset loader for the MSLR2022 shared task | This PR adds a dataset loader for the [MSLR2022 Shared Task](https://github.com/allenai/mslr-shared-task). Both the MS^2 and Cochrane datasets can be loaded with this dataloader:
```python
from datasets import load_dataset
ms2 = load_dataset("mslr2022", "ms2")
cochrane = load_dataset("mslr2022", "cochrane")
``... | closed | https://github.com/huggingface/datasets/pull/4339 | 2022-05-12T21:23:41 | 2022-07-18T17:19:27 | 2022-07-18T16:58:34 | {
"login": "JohnGiorgi",
"id": 8917831,
"type": "User"
} | [] | true | [] |
1,234,478,851 | 4,338 | Eval metadata Batch 4: Tweet Eval, Tweets Hate Speech Detection, VCTK, Weibo NER, Wisesight Sentiment, XSum, Yahoo Answers Topics, Yelp Polarity, Yelp Review Full | Adding evaluation metadata for:
- Tweet Eval
- Tweets Hate Speech Detection
- VCTK
- Weibo NER
- Wisesight Sentiment
- XSum
- Yahoo Answers Topics
- Yelp Polarity
- Yelp Review Full | closed | https://github.com/huggingface/datasets/pull/4338 | 2022-05-12T21:02:08 | 2022-05-16T15:51:02 | 2022-05-16T15:42:59 | {
"login": "sashavor",
"id": 14205986,
"type": "User"
} | [] | true | [] |
1,234,470,083 | 4,337 | Eval metadata batch 3: Reddit, Rotten Tomatoes, SemEval 2010, Sentiment 140, SMS Spam, Snips, SQuAD, SQuAD v2, Timit ASR | Adding evaluation metadata for:
- Reddit
- Rotten Tomatoes
- SemEval 2010
- Sentiment 140
- SMS Spam
- Snips
- SQuAD
- SQuAD v2
- Timit ASR | closed | https://github.com/huggingface/datasets/pull/4337 | 2022-05-12T20:52:02 | 2022-05-16T16:26:19 | 2022-05-16T16:18:30 | {
"login": "sashavor",
"id": 14205986,
"type": "User"
} | [] | true | [] |
1,234,446,174 | 4,336 | Eval metadata batch 2 : Health Fact, Jigsaw Toxicity, LIAR, LJ Speech, MSRA NER, Multi News, NCBI Disease, Poem Sentiment | Adding evaluation metadata for :
- Health Fact
- Jigsaw Toxicity
- LIAR
- LJ Speech
- MSRA NER
- Multi News
- NCBI Disease
- Poem Sentiment | closed | https://github.com/huggingface/datasets/pull/4336 | 2022-05-12T20:24:45 | 2022-05-16T16:25:00 | 2022-05-16T16:24:59 | {
"login": "sashavor",
"id": 14205986,
"type": "User"
} | [] | true | [] |
1,234,157,123 | 4,335 | Eval metadata batch 1: BillSum, CoNLL2003, CoNLLPP, CUAD, Emotion, GigaWord, GLUE, Hate Speech 18, Hate Speech | Adding evaluation metadata for:
- BillSum
- CoNLL2003
- CoNLLPP
- CUAD
- Emotion
- GigaWord
- GLUE
- Hate Speech 18
- Hate Speech Offensive | closed | https://github.com/huggingface/datasets/pull/4335 | 2022-05-12T15:28:16 | 2022-05-16T16:31:10 | 2022-05-16T16:23:09 | {
"login": "sashavor",
"id": 14205986,
"type": "User"
} | [] | true | [] |
1,234,103,477 | 4,334 | Adding eval metadata for billsum | Adding eval metadata for billsum | closed | https://github.com/huggingface/datasets/pull/4334 | 2022-05-12T14:49:08 | 2023-09-24T10:02:46 | 2022-05-12T14:49:24 | {
"login": "sashavor",
"id": 14205986,
"type": "User"
} | [] | true | [] |
1,234,038,705 | 4,333 | Adding eval metadata for Banking 77 | Adding eval metadata for Banking 77 | closed | https://github.com/huggingface/datasets/pull/4333 | 2022-05-12T14:05:05 | 2022-05-12T21:03:32 | 2022-05-12T21:03:31 | {
"login": "sashavor",
"id": 14205986,
"type": "User"
} | [] | true | [] |
1,234,021,188 | 4,332 | Adding eval metadata for arabic speech corpus | Adding eval metadata for arabic speech corpus | closed | https://github.com/huggingface/datasets/pull/4332 | 2022-05-12T13:51:38 | 2022-05-12T21:03:21 | 2022-05-12T21:03:20 | {
"login": "sashavor",
"id": 14205986,
"type": "User"
} | [] | true | [] |
1,234,016,110 | 4,331 | Adding eval metadata to Amazon Polarity | Adding eval metadata to Amazon Polarity | closed | https://github.com/huggingface/datasets/pull/4331 | 2022-05-12T13:47:59 | 2022-05-12T21:03:14 | 2022-05-12T21:03:13 | {
"login": "sashavor",
"id": 14205986,
"type": "User"
} | [] | true | [] |
1,233,992,681 | 4,330 | Adding eval metadata to Allociné dataset | Adding eval metadata to Allociné dataset | closed | https://github.com/huggingface/datasets/pull/4330 | 2022-05-12T13:31:39 | 2022-05-12T21:03:05 | 2022-05-12T21:03:05 | {
"login": "sashavor",
"id": 14205986,
"type": "User"
} | [] | true | [] |
1,233,991,207 | 4,329 | Adding eval metadata for AG News | Adding eval metadata for AG News | closed | https://github.com/huggingface/datasets/pull/4329 | 2022-05-12T13:30:32 | 2022-05-12T21:02:41 | 2022-05-12T21:02:40 | {
"login": "sashavor",
"id": 14205986,
"type": "User"
} | [] | true | [] |
1,233,856,690 | 4,328 | Fix and clean Apache Beam functionality | null | closed | https://github.com/huggingface/datasets/pull/4328 | 2022-05-12T11:41:07 | 2022-05-24T13:43:11 | 2022-05-24T13:34:32 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,233,840,020 | 4,327 | `wikipedia` pre-processed datasets | ## Describe the bug
[Wikipedia](https://huggingface.co/datasets/wikipedia) dataset README says that certain subsets are preprocessed. However, it seems they are not available. When I try to load them, it takes a really long time, and it appears to still be processing them.
## Steps to reproduce the bug
```python
f... | closed | https://github.com/huggingface/datasets/issues/4327 | 2022-05-12T11:25:42 | 2022-08-31T08:26:57 | 2022-08-31T08:26:57 | {
"login": "vpj",
"id": 81152,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,233,818,489 | 4,326 | Fix type hint and documentation for `new_fingerprint` | Currently, there are neither type hints nor `Optional` for the argument `new_fingerprint` in several methods of `datasets.arrow_dataset.Dataset`.
There was some documentation missing as well.
Note that pylance is happy with the type hints, but pyright does not detect that `new_fingerprint` is set within the decorator.... | closed | https://github.com/huggingface/datasets/pull/4326 | 2022-05-12T11:05:08 | 2022-06-01T13:04:45 | 2022-06-01T12:56:18 | {
"login": "fxmarty",
"id": 9808326,
"type": "User"
} | [] | true | [] |
1,233,812,191 | 4,325 | Dataset Viewer issue for strombergnlp/offenseval_2020, strombergnlp/polstance | ### Link
https://huggingface.co/datasets/strombergnlp/offenseval_2020/viewer/ar/train
### Description
The viewer isn't running for these two datasets. I left it overnight because a wait sometimes helps things get loaded, and the error messages have all gone, but the datasets are still turning up blank in viewer. May... | closed | https://github.com/huggingface/datasets/issues/4325 | 2022-05-12T10:59:08 | 2022-05-13T10:57:15 | 2022-05-13T10:57:02 | {
"login": "leondz",
"id": 121934,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,233,780,870 | 4,324 | Support >1 PWC dataset per dataset card | **Is your feature request related to a problem? Please describe.**
Some datasets cover more than one dataset on PapersWithCode. For example, the OffensEval 2020 challenge involved five languages, and there's one dataset to cover all five datasets, [`strombergnlp/offenseval_2020`](https://huggingface.co/datasets/stromb... | open | https://github.com/huggingface/datasets/issues/4324 | 2022-05-12T10:29:07 | 2022-05-13T11:25:29 | null | {
"login": "leondz",
"id": 121934,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,233,634,928 | 4,323 | Audio can not find value["bytes"] | ## Describe the bug
I wrote down _generate_examples like:

but where is the bytes?

## ... | closed | https://github.com/huggingface/datasets/issues/4323 | 2022-05-12T08:31:58 | 2022-07-07T13:16:08 | 2022-07-07T13:16:08 | {
"login": "YooSungHyun",
"id": 34292279,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,233,596,947 | 4,322 | Added stratify option to train_test_split function. | This PR adds a `stratify` option to the `train_test_split` method. I used scikit-learn's `StratifiedShuffleSplit` class as a reference for implementing the stratified split and integrated the changes suggested by @lhoestq.
It fixes #3452.
@lhoestq Please review and let me know, if any changes are required.
| closed | https://github.com/huggingface/datasets/pull/4322 | 2022-05-12T08:00:31 | 2022-11-22T14:53:55 | 2022-05-25T20:43:51 | {
"login": "nandwalritik",
"id": 48522685,
"type": "User"
} | [] | true | [] |
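The stratified behaviour this PR describes can be sketched independently of the `datasets` API. The following stand-alone Python sketch (the function name and signature are invented for illustration, not the PR's actual code) splits each label's examples in the same train/test proportion, which is the essence of what scikit-learn's `StratifiedShuffleSplit` does:

```python
import random
from collections import defaultdict

def stratified_split(labels, test_size=0.25, seed=0):
    """Return (train_indices, test_indices) with per-label proportions preserved."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for idx, label in enumerate(labels):
        by_label[label].append(idx)
    train_idx, test_idx = [], []
    for indices in by_label.values():
        rng.shuffle(indices)  # shuffle within each label group
        n_test = round(len(indices) * test_size)
        test_idx.extend(indices[:n_test])
        train_idx.extend(indices[n_test:])
    return sorted(train_idx), sorted(test_idx)
```

With 8 "a" examples, 4 "b" examples, and `test_size=0.25`, the test set gets 2 "a"s and 1 "b", preserving the 2:1 label ratio.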
1,233,273,351 | 4,321 | Adding dataset enwik8 | Because I regularly work with enwik8, I would like to contribute the dataset loader 🤗 | closed | https://github.com/huggingface/datasets/pull/4321 | 2022-05-11T23:25:02 | 2022-06-01T14:27:30 | 2022-06-01T14:04:06 | {
"login": "HallerPatrick",
"id": 22773355,
"type": "User"
} | [] | true | [] |
1,233,208,864 | 4,320 | Multi-news dataset loader attempts to strip wrong character from beginning of summaries | ## Describe the bug
The `multi_news.py` data loader has [a line which attempts to strip `"- "` from the beginning of summaries](https://github.com/huggingface/datasets/blob/aa743886221d76afb409d263e1b136e7a71fe2b4/datasets/multi_news/multi_news.py#L97). The actual character in the multi-news dataset, however, is `"–... | closed | https://github.com/huggingface/datasets/issues/4320 | 2022-05-11T21:36:41 | 2022-05-16T13:52:10 | 2022-05-16T13:52:10 | {
"login": "JohnGiorgi",
"id": 8917831,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
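A minimal sketch of the fix this issue implies — assuming the loader should target the en dash prefix (`"– "`) rather than the hyphen prefix (`"- "`); the helper name here is invented for illustration:

```python
def clean_summary(summary: str) -> str:
    # str.removeprefix (Python 3.9+) is a no-op when the prefix is absent,
    # so summaries without a leading "– " pass through unchanged.
    return summary.removeprefix("\u2013 ")  # "\u2013" is the en dash "–"
```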
1,232,982,023 | 4,319 | Adding eval metadata for ade v2 | Adding metadata to allow evaluation | closed | https://github.com/huggingface/datasets/pull/4319 | 2022-05-11T17:36:20 | 2022-05-12T13:29:51 | 2022-05-12T13:22:19 | {
"login": "sashavor",
"id": 14205986,
"type": "User"
} | [] | true | [] |
1,232,905,488 | 4,318 | Don't check f.loc in _get_extraction_protocol_with_magic_number | `f.loc` doesn't always exist for file-like objects in Python. I removed it since it was not necessary anyway (we always seek the file to 0 after reading the magic number)
Fix https://github.com/huggingface/datasets/issues/4310 | closed | https://github.com/huggingface/datasets/pull/4318 | 2022-05-11T16:27:09 | 2022-05-11T16:57:02 | 2022-05-11T16:46:31 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
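The approach this PR describes relies only on reading bytes and seeking back to 0 — no file-object attribute such as `loc` is needed. A rough stand-alone sketch (not the actual `datasets` implementation; the magic-number table is abbreviated):

```python
import io
from typing import Optional

MAGIC_NUMBERS = {
    b"\x1f\x8b": "gzip",
    b"PK\x03\x04": "zip",
    b"\x28\xb5\x2f\xfd": "zstd",
}

def guess_protocol(f) -> Optional[str]:
    magic = f.read(4)
    f.seek(0)  # rewind so later readers see the full stream
    for prefix, protocol in MAGIC_NUMBERS.items():
        if magic.startswith(prefix):
            return protocol
    return None
```

Because the stream is always rewound, any file-like object that supports `read` and `seek` works, including `io.BufferedReader`.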
1,232,737,401 | 4,317 | Fix cnn_dailymail (dm stories were ignored) | https://github.com/huggingface/datasets/pull/4188 introduced a bug in `datasets` 2.2.0: DailyMail stories are ignored when generating the dataset.
I fixed that, and removed the google drive link (it has annoying quota limitations issues)
We can do a patch release after this is merged | closed | https://github.com/huggingface/datasets/pull/4317 | 2022-05-11T14:25:25 | 2022-05-11T16:00:09 | 2022-05-11T15:52:37 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,232,681,207 | 4,316 | Support passing config_kwargs to CLI run_beam | This PR supports passing `config_kwargs` to CLI run_beam, so that for example for "wikipedia" dataset, we can pass:
```
--date 20220501 --language ca
``` | closed | https://github.com/huggingface/datasets/pull/4316 | 2022-05-11T13:53:37 | 2022-05-11T14:36:49 | 2022-05-11T14:28:31 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,232,549,330 | 4,315 | Fix CLI run_beam namespace | Currently, it raises TypeError:
```
TypeError: __init__() got an unexpected keyword argument 'namespace'
``` | closed | https://github.com/huggingface/datasets/pull/4315 | 2022-05-11T12:21:00 | 2022-05-11T13:13:00 | 2022-05-11T13:05:08 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,232,326,726 | 4,314 | Catch pull error when mirroring | Catch pull errors when mirroring so that the script continues to update the other datasets.
The error will still be printed at the end of the job. In this case the job also fails, and asks to manually update the datasets that failed. | closed | https://github.com/huggingface/datasets/pull/4314 | 2022-05-11T09:38:35 | 2022-05-11T12:54:07 | 2022-05-11T12:46:42 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,231,764,100 | 4,313 | Add API code examples for Builder classes | This PR adds API code examples for the Builder classes. | closed | https://github.com/huggingface/datasets/pull/4313 | 2022-05-10T22:22:32 | 2022-05-12T17:02:43 | 2022-05-12T12:36:57 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
1,231,662,775 | 4,312 | added TR-News dataset | null | closed | https://github.com/huggingface/datasets/pull/4312 | 2022-05-10T20:33:00 | 2022-10-03T09:36:45 | 2022-10-03T09:36:45 | {
"login": "batubayk",
"id": 25901065,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,231,369,438 | 4,311 | [Imagefolder] Docs + Don't infer labels from file names when there are metadata + Error messages when metadata and images aren't linked correctly | I updated the `docs/source/image_process.mdx` documentation and added an example for image captioning and object detection using `ImageFolder`.
While doing so I also improved a few aspects:
- we don't need to infer labels from file names when there are metadata - they can just be in the metadata if necessary
- rai... | closed | https://github.com/huggingface/datasets/pull/4311 | 2022-05-10T15:52:15 | 2022-05-10T17:19:42 | 2022-05-10T17:11:47 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,231,319,815 | 4,310 | Loading dataset with streaming: '_io.BufferedReader' object has no attribute 'loc' | ## Describe the bug
Loading a dataset with `load_dataset` and `streaming=True` returns `AttributeError: '_io.BufferedReader' object has no attribute 'loc'`. Notice that loading with `streaming=False` works fine.
In the following steps we load parquet files but the same happens with pickle files. The problem seems ... | closed | https://github.com/huggingface/datasets/issues/4310 | 2022-05-10T15:12:53 | 2022-05-11T16:46:31 | 2022-05-11T16:46:31 | {
"login": "milmin",
"id": 72745467,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,231,232,935 | 4,309 | [WIP] Add TEDLIUM dataset | Adds the TED-LIUM dataset https://www.tensorflow.org/datasets/catalog/tedlium#tedliumrelease3
TODO:
- [x] Port `tedium.py` from TF datasets using `convert_dataset.sh` script
- [x] Make `load_dataset` work
- [ ] ~~Run `datasets-cli` command to generate `dataset_infos.json`~~
- [ ] ~~Create dummy data for conti... | closed | https://github.com/huggingface/datasets/pull/4309 | 2022-05-10T14:12:47 | 2022-06-17T12:54:40 | 2022-06-17T11:44:01 | {
"login": "sanchit-gandhi",
"id": 93869735,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "speech",
"color": "d93f0b"
}
] | true | [] |
1,231,217,783 | 4,308 | Remove unused multiprocessing args from test CLI | Multiprocessing is not used in the test CLI. | closed | https://github.com/huggingface/datasets/pull/4308 | 2022-05-10T14:02:15 | 2022-05-11T12:58:25 | 2022-05-11T12:50:43 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,231,175,639 | 4,307 | Add packaged builder configs to the documentation | Adding the packaged builders' configurations to the docs reference is useful for showing the list of all parameters one can use when loading data in many formats: CSV, JSON, etc. | closed | https://github.com/huggingface/datasets/pull/4307 | 2022-05-10T13:34:19 | 2022-05-10T14:03:50 | 2022-05-10T13:55:54 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,231,137,204 | 4,306 | `load_dataset` does not work with certain filename. | ## Describe the bug
This is a weird bug that took me some time to find out.
I have a JSON dataset that I want to load with `load_dataset` like this:
```
data_files = dict(train="train.json.zip", val="val.json.zip")
dataset = load_dataset("json", data_files=data_files, field="data")
```
## Expected results
... | closed | https://github.com/huggingface/datasets/issues/4306 | 2022-05-10T13:14:04 | 2022-05-10T18:58:36 | 2022-05-10T18:58:09 | {
"login": "whatever60",
"id": 57242693,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,231,099,934 | 4,305 | Fixes FrugalScore | There are two minor modifications in this PR:
1) `predictions` and `references` are swapped. Basically, FrugalScore is commutative; however, some tiny differences can occur if we swap the references and the predictions. I decided to swap them just to obtain the exact results reported in the paper.
2) I switched to d... | open | https://github.com/huggingface/datasets/pull/4305 | 2022-05-10T12:44:06 | 2022-09-22T16:42:06 | null | {
"login": "moussaKam",
"id": 28675016,
"type": "User"
} | [
{
"name": "transfer-to-evaluate",
"color": "E3165C"
}
] | true | [] |
1,231,047,051 | 4,304 | Language code search does direct matches | ## Describe the bug
Hi. Searching for bcp47 tags that are just the language prefix (e.g. `sq` or `da`) excludes datasets that have added extra information in their language metadata (e.g. `sq-AL` or `da-bornholm`). The example codes given in the [tagging app](https://huggingface.co/spaces/huggingface/datasets-taggin... | open | https://github.com/huggingface/datasets/issues/4304 | 2022-05-10T11:59:16 | 2022-05-10T12:38:42 | null | {
"login": "leondz",
"id": 121934,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,230,867,728 | 4,303 | Fix: Add missing comma | null | closed | https://github.com/huggingface/datasets/pull/4303 | 2022-05-10T09:21:38 | 2022-05-11T08:50:15 | 2022-05-11T08:50:14 | {
"login": "mrm8488",
"id": 3653789,
"type": "User"
} | [] | true | [] |
1,230,651,117 | 4,302 | Remove hacking license tags when mirroring datasets on the Hub | Currently, when mirroring datasets on the Hub, the license tags are hacked: stripped of the characters "." and "$". In contrast, this hacking is not applied to community datasets on the Hub. This generates multiple variants of the same tag on the Hub.
I guess this hacking is no longer necessary:
- it is not applied... | closed | https://github.com/huggingface/datasets/pull/4302 | 2022-05-10T05:52:46 | 2022-05-20T09:48:30 | 2022-05-20T09:40:20 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,230,401,256 | 4,301 | Add ImageNet-Sketch dataset | This PR adds the ImageNet-Sketch dataset and resolves #3953 . | closed | https://github.com/huggingface/datasets/pull/4301 | 2022-05-09T23:38:45 | 2022-05-23T18:14:14 | 2022-05-23T18:05:29 | {
"login": "nateraw",
"id": 32437151,
"type": "User"
} | [] | true | [] |
1,230,272,761 | 4,300 | Add API code examples for loading methods | This PR adds API code examples for loading methods, let me know if I've missed any important parameters we should showcase :)
I was a bit confused about `inspect_dataset` and `inspect_metric`. The `path` parameter says it will accept a dataset identifier from the Hub. But when I try the identifier `rotten_tomatoes`,... | closed | https://github.com/huggingface/datasets/pull/4300 | 2022-05-09T21:30:26 | 2022-05-25T16:23:15 | 2022-05-25T09:20:13 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
1,230,236,782 | 4,299 | Remove manual download from imagenet-1k | Remove the manual download code from `imagenet-1k` to make it a regular dataset. | closed | https://github.com/huggingface/datasets/pull/4299 | 2022-05-09T20:49:18 | 2022-05-25T14:54:59 | 2022-05-25T14:46:16 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,229,748,006 | 4,298 | Normalise license names | **Is your feature request related to a problem? Please describe.**
When browsing datasets, the Licenses tag cloud (bottom left of e.g. https://huggingface.co/datasets) has multiple variants of the same license. This means the options exclude datasets arbitrarily, giving users artificially low recall. The cause of the ... | closed | https://github.com/huggingface/datasets/issues/4298 | 2022-05-09T13:51:32 | 2022-05-20T09:51:50 | 2022-05-20T09:51:50 | {
"login": "leondz",
"id": 121934,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
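One hedged sketch of the normalisation this issue asks for — the lookup table and helper name are invented for illustration; a real fix would map every variant to a canonical SPDX-style identifier:

```python
CANONICAL_LICENSES = {
    "cc by 4.0": "cc-by-4.0",
    "cc_by_4.0": "cc-by-4.0",
    "apache 2.0": "apache-2.0",
    "apache license 2.0": "apache-2.0",
}

def normalize_license(tag: str) -> str:
    key = tag.strip().lower()
    # fall back to simple separator normalisation for unknown spellings
    return CANONICAL_LICENSES.get(key, key.replace(" ", "-").replace("_", "-"))
```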
1,229,735,498 | 4,297 | Datasets YAML tagging space is down | ## Describe the bug
The neat HF Spaces app for generating YAML tags for dataset `README.md`s is down
## Steps to reproduce the bug
1. Visit https://huggingface.co/spaces/huggingface/datasets-tagging
## Expected results
There'll be a HF spaces web app for generating dataset metadata YAML
## Actual results
T... | closed | https://github.com/huggingface/datasets/issues/4297 | 2022-05-09T13:45:05 | 2022-05-09T14:44:25 | 2022-05-09T14:44:25 | {
"login": "leondz",
"id": 121934,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,229,554,645 | 4,296 | Fix URL query parameters in compression hop path when streaming | Fix #3488. | open | https://github.com/huggingface/datasets/pull/4296 | 2022-05-09T11:18:22 | 2022-07-06T15:19:53 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,229,527,283 | 4,295 | Fix missing lz4 dependency for tests | Currently, `lz4` is not defined as a dependency for tests. Therefore, all tests marked with `@require_lz4` are skipped. | closed | https://github.com/huggingface/datasets/pull/4295 | 2022-05-09T10:53:20 | 2022-05-09T11:21:22 | 2022-05-09T11:13:44 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,229,455,582 | 4,294 | Fix CLI run_beam save_infos | Currently, it raises TypeError:
```
TypeError: _download_and_prepare() got an unexpected keyword argument 'save_infos'
``` | closed | https://github.com/huggingface/datasets/pull/4294 | 2022-05-09T09:47:43 | 2022-05-10T07:04:04 | 2022-05-10T06:56:10 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,228,815,477 | 4,293 | Fix wrong map parameter name in cache docs | The `load_from_cache` parameter of `map` should be `load_from_cache_file`. | closed | https://github.com/huggingface/datasets/pull/4293 | 2022-05-08T07:27:46 | 2022-06-14T16:49:00 | 2022-06-14T16:07:00 | {
"login": "h4iku",
"id": 3812788,
"type": "User"
} | [] | true | [] |
1,228,216,788 | 4,292 | Add API code examples for remaining main classes | This PR adds API code examples for the remaining functions in the Main classes. I wasn't too familiar with some of the functions (`decode_batch`, `decode_column`, `decode_example`, etc.) so please feel free to add an example of usage and I can fill in the rest :) | closed | https://github.com/huggingface/datasets/pull/4292 | 2022-05-06T18:15:31 | 2022-05-25T18:05:13 | 2022-05-25T17:56:36 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
1,227,777,500 | 4,291 | Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message | ### Link
https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train
### Description
The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. What did I miss?
### Owner
Yes | closed | https://github.com/huggingface/datasets/issues/4291 | 2022-05-06T12:03:27 | 2022-05-09T08:25:58 | 2022-05-09T08:25:58 | {
"login": "leondz",
"id": 121934,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |