| id (int64) | number (int64) | title (string) | body (string) | state (string, 2 classes) | html_url (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | user (dict) | labels (list) | is_pull_request (bool) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,070,454,913 | 3,375 | Support streaming zipped dataset repo by passing only repo name | Proposed solution:
- I have added the method `iter_files` to DownloadManager and StreamingDownloadManager
- I use this in modules: "csv", "json", "text"
- I test for CSV/JSONL/TXT zipped (and non-zipped) files, both in streaming and non-streaming modes
Fix #3373. | closed | https://github.com/huggingface/datasets/pull/3375 | 2021-12-03T10:43:05 | 2021-12-16T18:03:32 | 2021-12-16T18:03:31 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,070,426,462 | 3,374 | NonMatchingChecksumError for the CLUE:cluewsc2020, chid, c3 and tnews | Hi, it seems like there are updates in cluewsc2020, chid, c3 and tnews, since I could not load them due to a checksum error. | closed | https://github.com/huggingface/datasets/issues/3374 | 2021-12-03T10:10:54 | 2021-12-08T14:14:41 | 2021-12-08T14:14:41 | {
"login": "Namco0816",
"id": 34687537,
"type": "User"
} | [] | false | [] |
1,070,406,391 | 3,373 | Support streaming zipped CSV dataset repo by passing only repo name | Given a community 🤗 dataset repository containing only a zipped CSV file (only raw data, no loading script), I would like to load it in streaming mode without passing `data_files`:
```
ds_name = "bigscience-catalogue-data/vietnamese_poetry_from_fsoft_ai_lab"
ds = load_dataset(ds_name, split="train", streaming=True,... | closed | https://github.com/huggingface/datasets/issues/3373 | 2021-12-03T09:48:24 | 2021-12-16T18:03:31 | 2021-12-16T18:03:31 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
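Below is a hedged reconstruction of what the truncated snippet above is doing; everything past `streaming=True` in the original call is elided there, so treat the exact keywords as an assumption:

```python
from datasets import load_dataset

# The feature request: stream a community repo that contains only a zipped
# CSV file (raw data, no loading script) without spelling out `data_files`.
ds_name = "bigscience-catalogue-data/vietnamese_poetry_from_fsoft_ai_lab"
ds = load_dataset(ds_name, split="train", streaming=True)
print(next(iter(ds)))  # inspect the first streamed row
```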
1,069,948,178 | 3,372 | [SEO improvement] Add Dataset Metadata to make datasets indexable | Some people who host datasets on github seem to include a table of metadata at the end of their README.md to make the dataset indexable by [Google Dataset Search](https://datasetsearch.research.google.com/) (See [here](https://github.com/google-research/google-research/tree/master/goemotions#dataset-metadata) and [here... | closed | https://github.com/huggingface/datasets/issues/3372 | 2021-12-02T20:21:07 | 2022-03-18T09:36:48 | 2022-03-18T09:36:48 | {
"login": "cakiki",
"id": 3664563,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,069,821,335 | 3,371 | New: Americas NLI dataset | This PR adds the [Americas NLI](https://arxiv.org/abs/2104.08726) dataset, an extension of XNLI to 10 low-resource indigenous languages spoken in the Americas: Ashaninka, Aymara, Bribri, Guarani, Nahuatl, Otomi, Quechua, Raramuri, Shipibo-Konibo, and Wixarika.
One odd thing (not sure) is that I had to set
`datasets-... | closed | https://github.com/huggingface/datasets/pull/3371 | 2021-12-02T17:44:59 | 2021-12-08T13:58:12 | 2021-12-08T13:58:11 | {
"login": "fdschmidt93",
"id": 39233597,
"type": "User"
} | [] | true | [] |
1,069,735,423 | 3,370 | Document a training loop for streaming dataset | I added some docs about streaming datasets. In particular, I added two subsections:
- one on how to use `map` for preprocessing
- one on how to use a streaming dataset in a pytorch training loop
cc @patrickvonplaten @stevhliu if you have some comments
cc @Rocketknight1 later we can add the one for TF and I might ne... | closed | https://github.com/huggingface/datasets/pull/3370 | 2021-12-02T16:17:00 | 2021-12-03T13:34:35 | 2021-12-03T13:34:34 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,069,587,674 | 3,369 | [Audio] Allow resampling for audio datasets in streaming mode | Many audio datasets like Common Voice always need to be resampled. This can very easily be done in non-streaming mode as follows:
```python
from datasets import load_dataset
ds = load_dataset("common_voice", "ab", split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```
However in strea... | closed | https://github.com/huggingface/datasets/issues/3369 | 2021-12-02T14:04:57 | 2021-12-16T15:55:19 | 2021-12-16T15:55:19 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
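For comparison, a sketch of the requested streaming-mode equivalent, assuming `cast_column` is exposed on streaming datasets the same way it is in the non-streaming snippet above:

```python
from datasets import Audio, load_dataset

# Same resampling pattern, but with streaming=True so the full dataset
# never has to be downloaded up front.
ds = load_dataset("common_voice", "ab", split="test", streaming=True)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
sample = next(iter(ds))  # audio is decoded and resampled on the fly
```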
1,069,403,624 | 3,368 | Fix dict source_datasets tagset validator | Currently, the `source_datasets` tag validation does not support passing a dict with configuration keys.
This PR:
- Extends `tagset_validator` to support regex tags
- Uses `tagset_validator` to validate dict `source_datasets` | closed | https://github.com/huggingface/datasets/pull/3368 | 2021-12-02T10:52:20 | 2021-12-02T15:48:38 | 2021-12-02T15:48:37 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,069,241,274 | 3,367 | Fix typo in other-structured-to-text task tag | Fix typo in task tag:
- `other-stuctured-to-text` (before)
- `other-structured-to-text` (now) | closed | https://github.com/huggingface/datasets/pull/3367 | 2021-12-02T08:02:27 | 2021-12-02T16:07:14 | 2021-12-02T16:07:13 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,069,214,022 | 3,366 | Add multimodal datasets | Epic issue to track the addition of multimodal datasets:
- [ ] #2526
- [x] #1842
- [ ] #1810
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
@VictorSanh feel free to add and sort by priority any interesting dataset. I have added the... | open | https://github.com/huggingface/datasets/issues/3366 | 2021-12-02T07:24:04 | 2023-02-28T16:29:22 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,069,195,887 | 3,365 | Add task tags for multimodal datasets | ## **Is your feature request related to a problem? Please describe.**
Currently, task tags are either exclusively related to text or speech processing:
- https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/tasks.json
## **Describe the solution you'd like**
We should also add tasks... | closed | https://github.com/huggingface/datasets/issues/3365 | 2021-12-02T06:58:20 | 2023-07-25T18:21:33 | 2023-07-25T18:21:32 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,068,851,196 | 3,364 | Use the Audio feature in the AutomaticSpeechRecognition template | This updates the ASR template and all supported datasets to use the `Audio` feature | closed | https://github.com/huggingface/datasets/pull/3364 | 2021-12-01T20:42:26 | 2022-03-24T14:34:09 | 2022-03-24T14:34:08 | {
"login": "anton-l",
"id": 26864830,
"type": "User"
} | [] | true | [] |
1,068,824,340 | 3,363 | Update URL of Jeopardy! dataset | Updates the URL of the Jeopardy! dataset.
Fix #3361 | closed | https://github.com/huggingface/datasets/pull/3363 | 2021-12-01T20:08:10 | 2022-10-06T13:45:49 | 2021-12-03T12:35:01 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,068,809,768 | 3,362 | Adapt image datasets | This PR:
* adapts the ImageClassification template to use the new Image feature
* adapts the following datasets to use the new Image feature:
* beans (+ fixes streaming)
* cats_vs_dogs (+ fixes streaming)
* cifar10
* cifar100
* fashion_mnist
* mnist
* head_qa
cc @nateraw | closed | https://github.com/huggingface/datasets/pull/3362 | 2021-12-01T19:52:01 | 2021-12-09T18:37:42 | 2021-12-09T18:37:41 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,068,736,268 | 3,361 | Jeopardy _URL access denied | ## Describe the bug
http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz returns Access Denied now.
However, https://drive.google.com/file/d/0BwT5wj_P7BKXb2hfM3d2RHU1ckE/view?usp=sharing from the original Reddit post https://www.reddit.com/r/datasets/comments/1uyd0t/200000_jeopardy_questions_in_a_json_f... | closed | https://github.com/huggingface/datasets/issues/3361 | 2021-12-01T18:21:33 | 2021-12-11T12:50:23 | 2021-12-06T11:16:31 | {
"login": "tianjianjiang",
"id": 4812544,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,068,724,697 | 3,360 | Add The Pile USPTO subset | Add:
- USPTO subset of The Pile: "uspto" config
Close bigscience-workshop/data_tooling#297.
CC: @StellaAthena | closed | https://github.com/huggingface/datasets/pull/3360 | 2021-12-01T18:08:05 | 2021-12-03T11:45:29 | 2021-12-03T11:45:28 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,068,638,213 | 3,359 | Add The Pile Free Law subset | Add:
- Free Law subset of The Pile: "free_law" config
Close bigscience-workshop/data_tooling#75.
CC: @StellaAthena | closed | https://github.com/huggingface/datasets/pull/3359 | 2021-12-01T16:46:04 | 2021-12-06T10:12:17 | 2021-12-01T17:30:44 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,068,623,216 | 3,358 | add new field, and get errors | After adding the new field **tokenized_examples["example_id"]**, I get the errors below.
I think it is because the data is converted to tensors, and **tokenized_examples["example_id"]** is a list of strings.
**all fields**
```
***************** train_dataset 1: Dataset({
features: ['attention_mask', 'end_positions', 'example_id', '... | closed | https://github.com/huggingface/datasets/issues/3358 | 2021-12-01T16:35:38 | 2021-12-02T02:26:22 | 2021-12-02T02:26:22 | {
"login": "PatricYan",
"id": 38966558,
"type": "User"
} | [] | false | [] |
1,068,607,382 | 3,357 | Update languages in aeslc dataset card | After having worked a bit with the dataset:
As far as I know, it is solely in English (en-US). There are only a few emails in Spanish, French or German (fewer than a dozen, I would estimate). | closed | https://github.com/huggingface/datasets/pull/3357 | 2021-12-01T16:20:46 | 2022-09-23T13:16:49 | 2022-09-23T13:16:49 | {
"login": "apergo-ai",
"id": 68908804,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,068,503,932 | 3,356 | to_tf_dataset() refactor | This is the promised cleanup to `to_tf_dataset()` now that the course is out of the way! The main changes are:
- A collator is always required (there was way too much hackiness making things like labels work without it)
- Lots of cleanup and a lot of code moved to `_get_output_signature`
- Should now handle it gra... | closed | https://github.com/huggingface/datasets/pull/3356 | 2021-12-01T14:54:30 | 2021-12-09T10:26:53 | 2021-12-09T10:26:53 | {
"login": "Rocketknight1",
"id": 12866554,
"type": "User"
} | [] | true | [] |
1,068,468,573 | 3,355 | Extend support for streaming datasets that use pd.read_excel | This PR fixes error:
```
ValueError: Cannot seek streaming HTTP file
```
CC: @severo | closed | https://github.com/huggingface/datasets/pull/3355 | 2021-12-01T14:22:43 | 2021-12-17T07:24:19 | 2021-12-17T07:24:18 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,068,307,271 | 3,354 | Remove duplicate name from dataset cards | Remove duplicate name from dataset card for:
- ajgt_twitter_ar
- emotone_ar | closed | https://github.com/huggingface/datasets/pull/3354 | 2021-12-01T11:45:40 | 2021-12-01T13:14:30 | 2021-12-01T13:14:29 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,068,173,783 | 3,353 | add one field "example_id", but I can't see it in the "compute_loss" function | Hi, I added one field, **example_id**, but I can't see it in the **compute_loss** function. How can I do this? Below is the information of the inputs:
```
*********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
... | closed | https://github.com/huggingface/datasets/issues/3353 | 2021-12-01T09:35:09 | 2021-12-01T16:02:39 | 2021-12-01T16:02:39 | {
"login": "PatricYan",
"id": 38966558,
"type": "User"
} | [] | false | [] |
1,068,102,994 | 3,352 | Make LABR dataset streamable | Fix LABR dataset to make it streamable.
Related to: #3350. | closed | https://github.com/huggingface/datasets/pull/3352 | 2021-12-01T08:22:27 | 2021-12-01T10:49:02 | 2021-12-01T10:49:01 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,068,094,873 | 3,351 | Add VCTK dataset | Fixes #1837. | closed | https://github.com/huggingface/datasets/pull/3351 | 2021-12-01T08:13:17 | 2022-02-28T09:22:03 | 2021-12-28T15:05:08 | {
"login": "jaketae",
"id": 25360440,
"type": "User"
} | [] | true | [] |
1,068,078,160 | 3,350 | Avoid content-encoding issue while streaming datasets | This PR will fix streaming of datasets served with gzip content-encoding:
```
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
Fix #2918.
CC: @severo | closed | https://github.com/huggingface/datasets/pull/3350 | 2021-12-01T07:56:48 | 2021-12-01T08:15:01 | 2021-12-01T08:15:00 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,067,853,601 | 3,349 | raise exception instead of using assertions. | Fix for the remaining files: https://github.com/huggingface/datasets/issues/3171 | closed | https://github.com/huggingface/datasets/pull/3349 | 2021-12-01T01:37:51 | 2021-12-20T16:07:27 | 2021-12-20T16:07:27 | {
"login": "manisnesan",
"id": 153142,
"type": "User"
} | [] | true | [] |
1,067,831,113 | 3,348 | BLEURT: Match key names to correspond with filename | In order to properly locate downloaded ckpt files, the key name needs to match the filename. This corrects a change introduced in #3235 | closed | https://github.com/huggingface/datasets/pull/3348 | 2021-12-01T01:01:18 | 2021-12-07T16:06:57 | 2021-12-07T16:06:57 | {
"login": "jaehlee",
"id": 11873078,
"type": "User"
} | [] | true | [] |
1,067,738,902 | 3,347 | iter_archive for zip files | * In this PR, I added the option to iterate through zipfiles for `download_manager.py` only.
* Next PR will be the same applied to `streaming_download_manager.py`.
* Related issue #3272.
## Comments:
* There is no `.isreg()` equivalent in the zipfile library to check whether a file is regular, so I used `.is_dir()` instead ... | closed | https://github.com/huggingface/datasets/pull/3347 | 2021-11-30T22:34:17 | 2021-12-04T00:22:22 | 2021-12-04T00:22:11 | {
"login": "Mehdi2402",
"id": 56029953,
"type": "User"
} | [] | true | [] |
1,067,632,365 | 3,346 | Failed to convert `string` with pyarrow for QED since 1.15.0 | ## Describe the bug
Loading QED was fine until 1.15.0.
related: bigscience-workshop/promptsource#659, bigscience-workshop/promptsource#670
Not sure where the root cause is, but here are some candidates:
- #3158
- #3120
- #3196
- #2891
## Steps to reproduce the bug
```python
load_dataset("qed")
```
## ... | closed | https://github.com/huggingface/datasets/issues/3346 | 2021-11-30T20:11:42 | 2021-12-14T14:39:05 | 2021-12-14T14:39:05 | {
"login": "tianjianjiang",
"id": 4812544,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,067,622,951 | 3,345 | Failed to download species_800 from Google Drive zip file | ## Describe the bug
One can manually download the zip file on Google Drive, but `load_dataset()` cannot.
related: #3248
## Steps to reproduce the bug
```shell
> python
Python 3.7.12 (default, Sep 5 2021, 08:34:29)
[Clang 11.0.3 (clang-1103.0.32.62)] on darwin
Type "help", "copyright", "credits" or "license" ... | closed | https://github.com/huggingface/datasets/issues/3345 | 2021-11-30T20:00:28 | 2021-12-01T17:53:15 | 2021-12-01T17:53:15 | {
"login": "tianjianjiang",
"id": 4812544,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,067,567,603 | 3,344 | Add ArrayXD docs | Documents support for dynamic first dimension in `ArrayXD` from #2891, and explains the `ArrayXD` feature in general.
Let me know if I'm missing anything @lhoestq :) | closed | https://github.com/huggingface/datasets/pull/3344 | 2021-11-30T18:53:31 | 2021-12-01T20:16:03 | 2021-12-01T19:35:32 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [] | true | [] |
1,067,505,507 | 3,343 | Better error message when download fails | From our discussions in https://github.com/huggingface/datasets/issues/3269 and https://github.com/huggingface/datasets/issues/3282 it would be nice to have better messages if a download fails.
In particular the error now shows:
- the error from the HEAD request if there's one
- otherwise the response code of the ... | closed | https://github.com/huggingface/datasets/pull/3343 | 2021-11-30T17:38:50 | 2021-12-01T11:27:59 | 2021-12-01T11:27:58 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,067,481,390 | 3,342 | Fix ASSET dataset data URLs | Change the branch name "master" to "main" in the data URLs, since facebookresearch has changed that. | closed | https://github.com/huggingface/datasets/pull/3342 | 2021-11-30T17:13:30 | 2021-12-14T14:50:00 | 2021-12-14T14:50:00 | {
"login": "tianjianjiang",
"id": 4812544,
"type": "User"
} | [] | true | [] |
1,067,449,569 | 3,341 | Mirror the canonical datasets to the Hugging Face Hub | - [ ] create a repo on https://hf.co/datasets for every canonical dataset
- [ ] on every commit related to a dataset, update the hf.co repo
See https://github.com/huggingface/moon-landing/pull/1562
@SBrandeis: I let you edit this description if needed to precise the intent. | closed | https://github.com/huggingface/datasets/issues/3341 | 2021-11-30T16:42:05 | 2022-01-26T14:47:37 | 2022-01-26T14:47:37 | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,067,292,636 | 3,340 | Fix JSON ClassLabel casting for integers | Loading a JSON dataset with ClassLabel feature types currently fails if the JSON data already has integers. Indeed, currently it tries to convert strings to integers without first checking whether the data are already integers.
For example this currently fails:
```python
from datasets import load_dataset, Feature... | closed | https://github.com/huggingface/datasets/pull/3340 | 2021-11-30T14:19:54 | 2021-12-01T11:27:30 | 2021-12-01T11:27:30 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
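A sketch of the failing case the PR describes; the file name and label names below are placeholders:

```python
from datasets import ClassLabel, Features, Value, load_dataset

# "data.json" is assumed to contain records whose "label" field already
# holds integers (0/1). Before this fix, the ClassLabel cast attempted a
# string-to-integer conversion unconditionally and crashed on such data.
features = Features({"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])})
ds = load_dataset("json", data_files="data.json", features=features)
```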
1,066,662,477 | 3,339 | to_tf_dataset fails on TPU | Using `to_tf_dataset` to create a dataset and then putting it in `model.fit` results in an internal error on TPUs. I've only tried on Colab and Kaggle TPUs, not GCP TPUs.
## Steps to reproduce the bug
I made a colab to show the error. https://colab.research.google.com/drive/12x_PFKzGouFxqD4OuWfnycW_1TaT276z?usp=s... | open | https://github.com/huggingface/datasets/issues/3339 | 2021-11-30T00:50:52 | 2021-12-02T14:21:27 | null | {
"login": "nbroad1881",
"id": 24982805,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,066,371,235 | 3,338 | [WIP] Add doctests for tutorials | Opening a PR as discussed with @LysandreJik for some help with doctest issues. The goal is to add doctests for each of the tutorials in the documentation to make sure the code samples work as shown.
### Issues
A doctest has been added in the docstring of the `load_dataset_builder` function in `load.py` to handle ... | closed | https://github.com/huggingface/datasets/pull/3338 | 2021-11-29T18:40:46 | 2023-05-05T17:18:20 | 2023-05-05T17:18:15 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [] | true | [] |
1,066,232,936 | 3,337 | Typing of Dataset.__getitem__ could be improved. | ## Describe the bug
The newly added typing for Dataset.__getitem__ is Union[Dict, List]. This makes tools like mypy a bit awkward to use as we need to check the type manually. We could use type overloading to make this easier. [Documentation](https://docs.python.org/3/library/typing.html#typing.overload)
## Steps... | closed | https://github.com/huggingface/datasets/issues/3337 | 2021-11-29T16:20:11 | 2021-12-14T10:28:54 | 2021-12-14T10:28:54 | {
"login": "Dref360",
"id": 8976546,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
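A minimal sketch of the `typing.overload` approach the issue proposes, with the key types simplified (the real `__getitem__` also accepts slices and other key kinds):

```python
from typing import Dict, List, Union, overload

class Dataset:
    @overload
    def __getitem__(self, key: int) -> Dict: ...  # a single row
    @overload
    def __getitem__(self, key: str) -> List: ...  # a whole column

    def __getitem__(self, key: Union[int, str]) -> Union[Dict, List]:
        # The overloads above exist only so type checkers can narrow the
        # return type from the key type; the runtime dispatch lives here.
        raise NotImplementedError
```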
1,066,208,436 | 3,336 | Add support for multiple dynamic dimensions and to_pandas conversion for dynamic arrays | Add support for multiple dynamic dimensions (e.g. `(None, None, 3)` for arbitrary sized images) and `to_pandas()` conversion for dynamic arrays.
TODOs:
* [ ] Cleaner code
* [ ] Formatting issues (if NumPy doesn't allow broadcasting even though dtype is np.object)
* [ ] Fix some issues with zero-dim tensors
* [ ... | closed | https://github.com/huggingface/datasets/pull/3336 | 2021-11-29T15:58:59 | 2023-09-24T09:53:52 | 2023-05-16T18:24:46 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,066,064,126 | 3,335 | add Speech commands dataset | closes #3283 | closed | https://github.com/huggingface/datasets/pull/3335 | 2021-11-29T13:52:47 | 2021-12-10T10:37:21 | 2021-12-10T10:30:15 | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [] | true | [] |
1,065,983,923 | 3,334 | Integrate Polars library | Check potential integration of the Polars library: https://github.com/pola-rs/polars
- Benchmark: https://h2oai.github.io/db-benchmark/
CC: @thomwolf @lewtun
| closed | https://github.com/huggingface/datasets/issues/3334 | 2021-11-29T12:31:54 | 2024-08-31T05:31:28 | 2024-08-31T05:31:27 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,065,346,919 | 3,333 | load JSON files, get the errors | Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
I changed the dataset loading to JSON by referring to https://huggingface.co/docs/datasets/loading.html
`dataset = ... | closed | https://github.com/huggingface/datasets/issues/3333 | 2021-11-28T14:29:58 | 2021-12-01T09:34:31 | 2021-12-01T03:57:48 | {
"login": "PatricYan",
"id": 38966558,
"type": "User"
} | [] | false | [] |
1,065,345,853 | 3,332 | Fix error message and add extension fallback | Fix the error message raised if `infered_module_name` is `None` in `CommunityDatasetModuleFactoryWithoutScript.get_module` and make `infer_module_for_data_files` more robust.
In the linked issue, `infer_module_for_data_files` returns `None` because `json` is the second most common extension due to the suffix orderi... | closed | https://github.com/huggingface/datasets/pull/3332 | 2021-11-28T14:25:29 | 2021-11-29T13:34:15 | 2021-11-29T13:34:14 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,065,275,896 | 3,331 | AttributeError: 'CommunityDatasetModuleFactoryWithoutScript' object has no attribute 'path' | ## Describe the bug
I added a new question answering dataset to the Hugging Face datasets hub manually. Here is the link: [luozhouyang/question-answering-datasets](https://huggingface.co/datasets/luozhouyang/question-answering-datasets)
But when I load the dataset, an error raised:
```bash
AttributeError: 'CommunityDatas... | closed | https://github.com/huggingface/datasets/issues/3331 | 2021-11-28T08:54:05 | 2021-11-29T13:49:44 | 2021-11-29T13:34:14 | {
"login": "luozhouyang",
"id": 34032031,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,065,176,619 | 3,330 | Change TriviaQA license (#3313) | Fixes (#3313) | closed | https://github.com/huggingface/datasets/pull/3330 | 2021-11-28T03:26:45 | 2021-11-29T11:24:21 | 2021-11-29T11:24:21 | {
"login": "avinashsai",
"id": 22453634,
"type": "User"
} | [] | true | [] |
1,065,096,971 | 3,329 | Map function: Type error on iter #999 | ## Describe the bug
Using the map function, it throws a type error on iteration #999.
Here is the code I am calling:
```python
dataset = datasets.load_dataset('squad')
dataset['validation'].map(text_numbers_to_int, input_columns=['context'], fn_kwargs={'column': 'context'})
```
text_numbers_to_int returns the input text ... | closed | https://github.com/huggingface/datasets/issues/3329 | 2021-11-27T17:53:05 | 2021-11-29T20:40:15 | 2021-11-29T20:40:15 | {
"login": "josephkready666",
"id": 52659318,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,065,015,262 | 3,328 | Quick fix error formatting | While working on a dataset, I got the error
```
TypeError: Provided `function` which is applied to all elements of table returns a `dict` of types {[type(x) for x in processed_inputs.values()]}. When using `batched=True`, make sure provided `function` returns a `dict` of types like `{allowed_batch_return_types}`.
``... | closed | https://github.com/huggingface/datasets/pull/3328 | 2021-11-27T11:47:48 | 2021-11-29T13:32:42 | 2021-11-29T13:32:42 | {
"login": "NouamaneTazi",
"id": 29777165,
"type": "User"
} | [] | true | [] |
1,064,675,888 | 3,327 | "Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)" | ## Describe the bug
Passing a correctly shaped NumPy array to `get_nearest_examples` leads to the exception
"Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)"
Probably the reason for this is a wrongly converted assertion.
1.15.1:
`assert len(query.shape) == 1 or (len(query.shape) == 2... | closed | https://github.com/huggingface/datasets/issues/3327 | 2021-11-26T16:26:36 | 2021-11-26T16:44:11 | 2021-11-26T16:44:11 | {
"login": "eliasws",
"id": 19492473,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
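A sketch of the corrected guard: the condition is taken from the 1.15.1 assert quoted above, with its truncated tail assumed to be `query.shape[0] == 1`, raised as an explicit exception rather than a mis-negated assert:

```python
import numpy as np

def check_query_shape(query: np.ndarray) -> None:
    # Accept a 1D array or a 2D array of shape (1, N); reject anything else.
    if not (len(query.shape) == 1 or (len(query.shape) == 2 and query.shape[0] == 1)):
        raise ValueError(
            "Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)"
        )
```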
1,064,664,479 | 3,326 | Fix import `datasets` on python 3.10 | In python 3.10 it's no longer possible to use `functools.wraps` on a method decorated with `classmethod`.
To fix this I inverted the order of the `inject_arrow_table_documentation` and `classmethod` decorators
Fix #3324 | closed | https://github.com/huggingface/datasets/pull/3326 | 2021-11-26T16:10:00 | 2021-11-26T16:31:23 | 2021-11-26T16:31:23 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
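A sketch of the decorator-order fix, with a stand-in for `inject_arrow_table_documentation` (its real definition is not shown in this record):

```python
import functools

def inject_docs(fn):
    # Stand-in doc-injecting decorator; like the real one, it relies on
    # functools.wraps, which per the PR fails on classmethod objects in 3.10.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

class Table:
    # Fixed order: wrap the plain function first, then apply classmethod,
    # so functools.wraps never has to operate on a classmethod object.
    @classmethod
    @inject_docs
    def from_arrays(cls, *arrays):
        ...
```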
1,064,663,075 | 3,325 | Update conda dependencies | Some dependencies minimum versions were outdated. For example `pyarrow` and `huggingface_hub` | closed | https://github.com/huggingface/datasets/pull/3325 | 2021-11-26T16:08:07 | 2021-11-26T16:20:37 | 2021-11-26T16:20:36 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,064,661,212 | 3,324 | Can't import `datasets` in python 3.10 | When importing `datasets` I'm getting this error in python 3.10:
```python
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/__init__.py", line 34, in <module>
from .arrow_dataset import Dataset, concatenate_datasets
File "/Use... | closed | https://github.com/huggingface/datasets/issues/3324 | 2021-11-26T16:06:14 | 2021-11-26T16:31:23 | 2021-11-26T16:31:23 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | false | [] |
1,064,660,452 | 3,323 | Fix wrongly converted assert | Seems like this assertion was replaced by an exception but the condition got wrongly converted. | closed | https://github.com/huggingface/datasets/pull/3323 | 2021-11-26T16:05:39 | 2021-11-26T16:44:12 | 2021-11-26T16:44:11 | {
"login": "eliasws",
"id": 19492473,
"type": "User"
} | [] | true | [] |
1,064,429,705 | 3,322 | Add missing tags to XTREME | Add missing tags to the XTREME benchmark for better discoverability. | closed | https://github.com/huggingface/datasets/pull/3322 | 2021-11-26T12:37:05 | 2021-11-29T13:40:07 | 2021-11-29T13:40:06 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,063,858,386 | 3,321 | Update URL of tatoeba subset of xtreme | Updates the URL of the tatoeba subset of xtreme. Additionally, replaces `os.path.join` with `xjoin` to correctly join the URL segments on Windows.
Fix #3320 | closed | https://github.com/huggingface/datasets/pull/3321 | 2021-11-25T18:42:31 | 2021-11-26T10:30:30 | 2021-11-26T10:30:30 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,063,531,992 | 3,320 | Can't get tatoeba.rus dataset | ## Describe the bug
It gives an error.
> FileNotFoundError: Couldn't find file at https://github.com/facebookresearch/LASER/raw/master/data/tatoeba/v1/tatoeba.rus-eng.rus
## Steps to reproduce the bug
```python
data = load_dataset("xtreme", "tatoeba.rus", split="validation")
```
## Solution
The library tries... | closed | https://github.com/huggingface/datasets/issues/3320 | 2021-11-25T12:31:11 | 2021-11-26T10:30:29 | 2021-11-26T10:30:29 | {
"login": "mmg10",
"id": 65535131,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,062,749,654 | 3,319 | Add push_to_hub docs | Since #3098 it's now possible to upload a dataset on the Hub directly from python using the `push_to_hub` method.
I just added a section in the "Upload a dataset to the Hub" tutorial.
I kept the section quite simple but let me know if it sounds good to you @LysandreJik @stevhliu :) | closed | https://github.com/huggingface/datasets/pull/3319 | 2021-11-24T18:21:11 | 2021-11-25T14:47:46 | 2021-11-25T14:47:46 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
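The method the new docs section covers, in a minimal sketch; the repo id is a placeholder and the call requires being logged in to the Hub:

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
# Uploads the dataset to the Hub under the given repo id.
ds.push_to_hub("my-username/my-imdb-copy")
```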
1,062,369,717 | 3,318 | Finish transition to PyArrow 3.0.0 | Finish transition to PyArrow 3.0.0 that was started in #3098. | closed | https://github.com/huggingface/datasets/pull/3318 | 2021-11-24T12:30:14 | 2021-11-24T15:35:05 | 2021-11-24T15:35:04 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,062,284,447 | 3,317 | Add desc parameter to Dataset filter method | **Is your feature request related to a problem? Please describe.**
As I was filtering very large datasets, I noticed the `filter` method doesn't have the `desc` parameter that is available in the `map` method. Why don't we add a `desc` parameter to the `filter` method, both for consistency and because it's nice to give some feedback to ... | closed | https://github.com/huggingface/datasets/issues/3317 | 2021-11-24T11:01:36 | 2022-01-05T18:31:24 | 2022-01-05T18:31:24 | {
"login": "vblagoje",
"id": 458335,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
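A sketch of the requested API, mirroring the `desc` parameter that `map` already accepts; the filtering condition is just an illustration:

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
# `desc` would label the progress bar, as it already does for `map`.
ds = ds.filter(lambda ex: len(ex["text"]) > 100, desc="Dropping short reviews")
```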
1,062,185,822 | 3,316 | Add RedCaps dataset | ## Adding a Dataset
- **Name:** RedCaps
- **Description:** Web-curated image-text data created by the people, for the people
- **Paper:** https://arxiv.org/abs/2111.11431
- **Data:** https://redcaps.xyz/
- **Motivation:** Multimodal image-text dataset: 12M+ Image-text pairs
Instructions to add a new dataset c... | closed | https://github.com/huggingface/datasets/issues/3316 | 2021-11-24T09:23:02 | 2022-01-12T14:13:15 | 2022-01-12T14:13:15 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "vision",
"color": "bfdadc"
}
] | false | [] |
1,061,678,452 | 3,315 | Removing query params for dynamic URL caching | The main use case for this is to make dynamically generated private URLs (like the ones returned by CommonVoice API) compatible with the datasets' caching logic.
Usage example:
```python
import datasets
class CommonVoice(datasets.GeneratorBasedBuilder):
def _info(self):
return datasets.DatasetInfo... | closed | https://github.com/huggingface/datasets/pull/3315 | 2021-11-23T20:24:12 | 2021-11-25T14:44:32 | 2021-11-25T14:44:31 | {
"login": "anton-l",
"id": 26864830,
"type": "User"
} | [] | true | [] |
1,061,448,227 | 3,314 | Adding arg to pass process rank to `map` | This PR adds a `with_rank` argument to `map` that gives the user the possibility to pass the rank of each process to their function. This is mostly designed for multi-GPU map (each process can be sent to a different device thanks to the rank). I've also added tests. I'm putting the PR up so you can check the code, I'll... | closed | https://github.com/huggingface/datasets/pull/3314 | 2021-11-23T15:55:21 | 2021-11-24T11:54:13 | 2021-11-24T11:54:13 | {
"login": "TevenLeScao",
"id": 26709476,
"type": "User"
} | [] | true | [] |
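A sketch of the new argument in use; the function body is a placeholder:

```python
from datasets import load_dataset

def embed(batch, rank):
    # With `with_rank=True`, `map` passes each process's rank as an extra
    # argument, so work can be pinned to a per-process GPU, e.g. f"cuda:{rank}".
    device = f"cuda:{rank}"
    # ... run a model on `device` and add its outputs to `batch` ...
    return batch

ds = load_dataset("imdb", split="train")
ds = ds.map(embed, batched=True, with_rank=True, num_proc=2)
```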
1,060,933,392 | 3,313 | TriviaQA License Mismatch | ## Describe the bug
TriviaQA Webpage at http://nlp.cs.washington.edu/triviaqa/ says they do not own the copyright to the data. However, Huggingface datasets at https://huggingface.co/datasets/trivia_qa mentions that the dataset is released under Apache License
Is the License Information on HuggingFace correct? | closed | https://github.com/huggingface/datasets/issues/3313 | 2021-11-23T08:00:15 | 2021-11-29T11:24:21 | 2021-11-29T11:24:21 | {
"login": "akhilkedia",
"id": 16665267,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,060,440,346 | 3,312 | add bl books genre dataset | First of all thanks for the fantastic library/collection of datasets 🤗
This pull request adds a dataset of metadata from digitised (mostly 19th Century) books from the British Library. The [data](https://bl.iro.bl.uk/concern/datasets/1e1ccb46-65b4-4481-b6f8-b8129d5da053) contains various metadata about the books. In... | closed | https://github.com/huggingface/datasets/pull/3312 | 2021-11-22T17:54:50 | 2021-12-02T16:10:29 | 2021-12-02T16:07:47 | {
"login": "davanstrien",
"id": 8995957,
"type": "User"
} | [] | true | [] |
1,060,387,957 | 3,311 | Add WebSRC | ## Adding a Dataset
- **Name:** WebSRC
- **Description:** WebSRC is a novel Web-based Structural Reading Comprehension dataset. It consists of 0.44M question-answer pairs, which are collected from 6.5K web pages with corresponding HTML source code, screenshots and metadata.
- **Paper:** https://arxiv.org/abs/2101.0... | open | https://github.com/huggingface/datasets/issues/3311 | 2021-11-22T16:58:33 | 2021-11-22T16:58:33 | null | {
"login": "NielsRogge",
"id": 48327001,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,060,098,104 | 3,310 | Fatal error condition occurred in aws-c-io | ## Describe the bug
Fatal error when using the library
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
```
## Expected results
No fatal errors
## Actual results
```
Fatal error condition occurred in D:\bld\aws-c-io_1633633258269\work\sou... | closed | https://github.com/huggingface/datasets/issues/3310 | 2021-11-22T12:27:54 | 2023-02-08T10:31:05 | 2021-11-29T22:22:37 | {
"login": "Crabzmatic",
"id": 31850219,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,059,496,154 | 3,309 | fix: files counted twice in inferred structure | Files were counted twice in a structure like:
```
my_dataset_local_path/
├── README.md
└── data/
├── train/
│ ├── shard_0.csv
│ ├── shard_1.csv
│ ├── shard_2.csv
│ └── shard_3.csv
└── valid/
├── shard_0.csv
└── shard_1.csv
```
The reason is that they were ... | closed | https://github.com/huggingface/datasets/pull/3309 | 2021-11-21T21:50:38 | 2021-11-23T17:00:58 | 2021-11-23T17:00:58 | {
"login": "borisdayma",
"id": 715491,
"type": "User"
} | [] | true | [] |
1,059,255,705 | 3,308 | "dataset_infos.json" missing for chr_en and mc4 | ## Describe the bug
In the repository, every dataset has its metadata in a file called `dataset_infos.json`. But this file is missing for two datasets: `chr_en` and `mc4`.
## Steps to reproduce the bug
Check [chr_en](https://github.com/huggingface/datasets/tree/master/datasets/chr_en) and [mc4](https://github.com/... | open | https://github.com/huggingface/datasets/issues/3308 | 2021-11-21T00:07:22 | 2022-01-19T13:55:32 | null | {
"login": "amitness",
"id": 8587189,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
},
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
1,059,226,297 | 3,307 | Add IndoNLI dataset | This PR adds IndoNLI dataset, from https://aclanthology.org/2021.emnlp-main.821/ | closed | https://github.com/huggingface/datasets/pull/3307 | 2021-11-20T20:46:03 | 2021-11-25T14:51:48 | 2021-11-25T14:51:48 | {
"login": "afaji",
"id": 6201626,
"type": "User"
} | [] | true | [] |
1,059,185,860 | 3,306 | nested sequence feature won't encode example if the first item of the outside sequence is an empty list | ## Describe the bug
As the title says, a nested sequence feature won't encode an example if the first item of the outer sequence is an empty list.
## Steps to reproduce the bug
```python
from datasets import Features, Sequence, ClassLabel
features = Features({
'x': Sequence(Sequence(ClassLabel(names=['a', 'b']))),
... | closed | https://github.com/huggingface/datasets/issues/3306 | 2021-11-20T16:57:54 | 2021-12-08T13:02:15 | 2021-12-08T13:02:15 | {
"login": "function2-llx",
"id": 38486514,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
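A self-contained repro in the spirit of the truncated snippet above; the example value passed to `encode_example` is an assumption:

```python
from datasets import ClassLabel, Features, Sequence

features = Features({"x": Sequence(Sequence(ClassLabel(names=["a", "b"])))})
# Per the report, encoding goes wrong when the first inner list is empty:
print(features.encode_example({"x": [[], ["a", "b"]]}))
```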
1,059,161,000 | 3,305 | asserts replaced with exception for ``fingerprint.py``, ``search.py``, ``arrow_writer.py`` and ``metric.py`` | Addresses #3171
Replaces asserts with exceptions in ``fingerprint.py``, ``search.py``, ``arrow_writer.py`` and ``metric.py``, and modifies the tests accordingly.
"login": "Ishan-Kumar2",
"id": 46553104,
"type": "User"
} | [] | true | [] |
1,059,130,494 | 3,304 | Dataset object has no attribute `to_tf_dataset` | I am following the HuggingFace course. I am at "Fine-tuning a model".
Link: https://huggingface.co/course/chapter3/2?fw=tf
I use `Tokenize_function` and `map` as mentioned in the course to process the data:
`# define a tokenize function`
`def Tokenize_function(example):`
` return tokenizer(example['sentence'], truncat... | closed | https://github.com/huggingface/datasets/issues/3304 | 2021-11-20T12:03:59 | 2021-11-21T07:07:25 | 2021-11-21T07:07:25 | {
"login": "RajkumarGalaxy",
"id": 59993678,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,059,129,732 | 3,303 | DataCollatorWithPadding: TypeError | Hi,
I am following the HuggingFace course. I am now at Fine-tuning [https://huggingface.co/course/chapter3/3?fw=tf](https://huggingface.co/course/chapter3/3?fw=tf). When I set up `DataCollatorWithPadding` as follows, I got an error while trying to reproduce the course code in Kaggle. This error occurs with either a ... | closed | https://github.com/huggingface/datasets/issues/3303 | 2021-11-20T11:59:55 | 2021-11-21T07:05:37 | 2021-11-21T07:05:37 | {
"login": "RajkumarGalaxy",
"id": 59993678,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,058,907,168 | 3,302 | fix old_val typo in f-string |
This PR is to correct a typo in #3277 that @Carlosbogo revealed in a comment.
Related closed issue: #3257
Sorry about that 😅. | closed | https://github.com/huggingface/datasets/pull/3302 | 2021-11-19T20:51:08 | 2021-11-25T22:14:43 | 2021-11-22T17:04:19 | {
"login": "Mehdi2402",
"id": 56029953,
"type": "User"
} | [] | true | [] |
1,058,718,957 | 3,301 | Add wikipedia tags | Add the missing tags to the wikipedia dataset card.
I also added the missing language codes to our language codes list.
This should also fix the code snippet that is presented on the Hub to load the dataset: fix https://github.com/huggingface/datasets/issues/3292 | closed | https://github.com/huggingface/datasets/pull/3301 | 2021-11-19T16:39:25 | 2021-11-19T16:49:30 | 2021-11-19T16:49:29 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,058,644,459 | 3,300 | ❓ Dataset loading script from Hugging Face Hub | Hi there,
I am trying to add my custom `ag_news` dataset with its own loading script to the Hugging Face datasets hub. In particular, I would like to test the addition of a second configuration to the existing `ag_news` dataset. Once it works in my hub, I plan to make a PR to the original dataset. However, in trying to do s... | closed | https://github.com/huggingface/datasets/issues/3300 | 2021-11-19T15:20:52 | 2021-12-22T10:57:56 | 2021-12-22T10:57:56 | {
"login": "pietrolesci",
"id": 61748653,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,058,518,213 | 3,299 | Add option to find unique elements in nested sequences when calling `Dataset.unique` | It would be nice to have an option to flatten nested sequences to find unique elements stored in them when calling `Dataset.unique`. ~~Currently, `Dataset.unique` only supports finding unique sequences and not unique elements in that situation.~~ | open | https://github.com/huggingface/datasets/issues/3299 | 2021-11-19T13:16:06 | 2023-05-19T14:45:40 | null | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
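A manual workaround illustrating what the requested option would automate:

```python
from datasets import Dataset

ds = Dataset.from_dict({"tokens": [["a", "b"], ["b", "c"]]})
# `unique` currently treats each sequence as a single value; flattening by
# hand recovers the unique elements stored inside the nested sequences.
unique_tokens = {tok for seq in ds["tokens"] for tok in seq}
print(unique_tokens)  # {'a', 'b', 'c'}
```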
1,058,420,201 | 3,298 | Agnews dataset viewer is not working | ## Dataset viewer issue for '*name of the dataset*'
**Link:** https://huggingface.co/datasets/ag_news
Hi there, the `ag_news` dataset viewer is not working.
Am I the one who added this dataset? No
| closed | https://github.com/huggingface/datasets/issues/3298 | 2021-11-19T11:18:59 | 2021-12-21T16:24:05 | 2021-12-21T16:24:05 | {
"login": "pietrolesci",
"id": 61748653,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,058,263,859 | 3,297 | .map() cache is wrongfully reused - only happens when the mapping function is imported | ## Describe the bug
When `.map` is used with a mapping function that is imported, the cache is reused even if the mapping function has been modified.
The reason for this is that `dill`, which is used for creating the fingerprint, [pickles imported functions by reference](https://stackoverflow.com/a/67851411).
I guess... | open | https://github.com/huggingface/datasets/issues/3297 | 2021-11-19T08:18:36 | 2023-01-30T12:40:17 | null | {
"login": "eladsegal",
"id": 13485709,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
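A tiny demonstration of the pickling behavior the issue points at, using the standard library's `os.path.join` as the imported function (`dill` follows the same by-reference rule as `pickle` for functions importable from a module):

```python
import os.path
import pickle

# Importable functions are pickled *by reference*: the payload is essentially
# the qualified name, not the code, so editing the function's body leaves
# these bytes -- and any fingerprint derived from them -- unchanged.
print(pickle.dumps(os.path.join))  # on POSIX, contains b'posixpath' and b'join'
```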
1,057,970,638 | 3,296 | Fix temporary dataset_path creation for URIs related to remote fs | This aims to close #3295 | closed | https://github.com/huggingface/datasets/pull/3296 | 2021-11-18T23:32:45 | 2021-12-06T10:45:04 | 2021-12-06T10:45:04 | {
"login": "francisco-perez-sorrosal",
"id": 918006,
"type": "User"
} | [] | true | [] |
1,057,954,892 | 3,295 | Temporary dataset_path for remote fs URIs not built properly in arrow_dataset.py::load_from_disk | ## Describe the bug
When trying to build a temporary dataset path from a remote URI in this block of code:
https://github.com/huggingface/datasets/blob/42f6b1d18a4a1b6009b6e62d115491be16dfca22/src/datasets/arrow_dataset.py#L1038-L1042
the result is not the expected one when passing an absolute path in a URI like `h... | closed | https://github.com/huggingface/datasets/issues/3295 | 2021-11-18T23:24:02 | 2021-12-06T10:45:04 | 2021-12-06T10:45:04 | {
"login": "francisco-perez-sorrosal",
"id": 918006,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,057,495,473 | 3,294 | Add Natural Adversarial Objects dataset | ## Adding a Dataset
- **Name:** Natural Adversarial Objects (NAO)
- **Description:** Natural Adversarial Objects (NAO) is a new dataset to evaluate the robustness of object detection models. NAO contains 7,934 images and 9,943 objects that are unmodified and representative of real-world scenarios, but cause state-of-... | open | https://github.com/huggingface/datasets/issues/3294 | 2021-11-18T15:34:44 | 2021-12-08T12:00:02 | null | {
"login": "osanseviero",
"id": 7246357,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "vision",
"color": "bfdadc"
}
] | false | [] |
1,057,004,431 | 3,293 | Pin version exclusion for Markdown | As Markdown version 3.3.5 has a bug, it is better to exclude it in case the users have it previously installed in their environment.
Related to #3289, #3286. | closed | https://github.com/huggingface/datasets/pull/3293 | 2021-11-18T06:56:01 | 2021-11-18T10:28:05 | 2021-11-18T10:28:04 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,056,962,554 | 3,292 | Not able to load 'wikipedia' dataset | ## Describe the bug
I am following the instructions for loading the wikipedia dataset using `datasets`. However, I am getting the error below.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("wikipedia")
```
## Expected results
A clear and concise description of the expected res... | closed | https://github.com/huggingface/datasets/issues/3292 | 2021-11-18T05:41:18 | 2021-11-19T16:49:29 | 2021-11-19T16:49:29 | {
"login": "abhibisht89",
"id": 13541524,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,056,689,876 | 3,291 | Use f-strings in the dataset scripts | Uses f-strings to format the .py files in the dataset folder | closed | https://github.com/huggingface/datasets/pull/3291 | 2021-11-17T22:20:19 | 2021-11-22T16:40:16 | 2021-11-22T16:40:16 | {
"login": "Carlosbogo",
"id": 84228424,
"type": "User"
} | [] | true | [] |
1,056,414,856 | 3,290 | Make several audio datasets streamable | <s>Needs https://github.com/huggingface/datasets/pull/3129 to be merged first</s>
Make those audio datasets streamable:
- [x] common_voice
- [x] openslr
- [x] vivos
- [x] librispeech_asr <s>(still has some issues to read FLAC)</s> *actually it's ok*
- [ ] <s>multilingual_librispeech (yet to be converted)</S> *T... | closed | https://github.com/huggingface/datasets/pull/3290 | 2021-11-17T17:43:41 | 2022-02-01T21:00:52 | 2021-11-19T15:08:57 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,056,323,715 | 3,289 | Unpin markdown for build_docs now that it's fixed | `markdown`'s bug has been fixed, so this PR reverts #3286 | closed | https://github.com/huggingface/datasets/pull/3289 | 2021-11-17T16:22:53 | 2021-11-17T16:23:09 | 2021-11-17T16:23:08 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,056,145,703 | 3,288 | Allow datasets with indices table when concatenating along axis=1 | Calls `flatten_indices` on the datasets with indices table in `concatenate_datasets` to fix issues when concatenating along `axis=1`.
cc @lhoestq: I decided to flatten all the datasets instead of flattening all the datasets except the largest one in the end. The latter approach fails on the following example:
```... | closed | https://github.com/huggingface/datasets/pull/3288 | 2021-11-17T13:41:28 | 2021-11-17T15:41:12 | 2021-11-17T15:41:11 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,056,079,724 | 3,287 | Add The Pile dataset and PubMed Central subset | Add:
- The complete final version of The Pile dataset: "all" config
- PubMed Central subset of The Pile: "pubmed_central" config
Close #1675, close bigscience-workshop/data_tooling#74.
CC: @StellaAthena, @lewtun | closed | https://github.com/huggingface/datasets/pull/3287 | 2021-11-17T12:35:58 | 2021-12-01T15:29:08 | 2021-12-01T15:29:07 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,056,008,586 | 3,286 | Fix build_docs CI | Because of https://github.com/Python-Markdown/markdown/issues/1196 we have to temporarily pin `markdown` to 3.3.4 for the docs to build without issues | closed | https://github.com/huggingface/datasets/pull/3286 | 2021-11-17T11:18:56 | 2021-11-17T11:19:20 | 2021-11-17T11:19:19 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,055,506,730 | 3,285 | Add IEMOCAP dataset | ## Adding a Dataset
- **Name:** IEMOCAP
- **Description:** acted, multimodal and multispeaker database
- **Paper:** https://sail.usc.edu/iemocap/Busso_2008_iemocap.pdf
- **Data:** https://sail.usc.edu/iemocap/index.html
- **Motivation:** Useful multimodal dataset
cc @anton-l
Instructions to add a new datase... | open | https://github.com/huggingface/datasets/issues/3285 | 2021-11-16T22:47:20 | 2023-06-10T08:14:52 | null | {
"login": "osanseviero",
"id": 7246357,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "speech",
"color": "d93f0b"
},
{
"name": "vision",
"color": "bfdadc"
}
] | false | [] |
1,055,502,909 | 3,284 | Add VoxLingua107 dataset | ## Adding a Dataset
- **Name:** VoxLingua107
- **Description:** VoxLingua107 is a speech dataset for training spoken language identification models. The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according the language of the video title and description, with som... | open | https://github.com/huggingface/datasets/issues/3284 | 2021-11-16T22:44:08 | 2021-12-06T09:49:45 | null | {
"login": "osanseviero",
"id": 7246357,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "speech",
"color": "d93f0b"
}
] | false | [] |
1,055,495,874 | 3,283 | Add Speech Commands dataset | ## Adding a Dataset
- **Name:** Speech commands
- **Description:** A Dataset for Limited-Vocabulary Speech Recognition
- **Paper:** https://arxiv.org/abs/1804.03209
- **Data:** https://www.tensorflow.org/datasets/catalog/speech_commands, Available:
http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz
... | closed | https://github.com/huggingface/datasets/issues/3283 | 2021-11-16T22:39:56 | 2021-12-10T10:30:15 | 2021-12-10T10:30:15 | {
"login": "osanseviero",
"id": 7246357,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "speech",
"color": "d93f0b"
}
] | false | [] |
1,055,054,898 | 3,282 | ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py | ## Dataset viewer issue for '*oscar-corpus/OSCAR-2109*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)*
*The datasets library cannot download any language from the oscar-corpus/OSCAR-2109 dataset. By entering the URL in my browser, I can access the file.*
... | closed | https://github.com/huggingface/datasets/issues/3282 | 2021-11-16T16:05:19 | 2022-04-12T11:57:43 | 2022-04-12T11:57:43 | {
"login": "MinionAttack",
"id": 10078549,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,055,018,876 | 3,281 | [Datasets] Improve Covost 2 | It's currently quite confusing to understand the manual data download instruction of Covost and not very user-friendly.
Currenty the user has to:
1. Go on Common Voice website
2. Find the correct dataset which is **not** mentioned in the error message
3. Download it
4. Untar it
5. Create a language id folder ... | closed | https://github.com/huggingface/datasets/pull/3281 | 2021-11-16T15:32:19 | 2022-01-26T16:17:06 | 2021-11-18T10:44:04 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
1,054,766,828 | 3,280 | Fix bookcorpusopen RAM usage | Each document is a full book, so the default arrow writer batch size of 10,000 is too big, and it can fill up RAM quickly before flushing the first batch on disk. I changed its batch size to 256 so that it uses at most ~100MB of memory.
Fix #3167. | closed | https://github.com/huggingface/datasets/pull/3280 | 2021-11-16T11:27:52 | 2021-11-17T15:53:28 | 2021-11-16T13:34:30 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,054,711,852 | 3,279 | Minor Typo Fix - Precision to Recall | null | closed | https://github.com/huggingface/datasets/pull/3279 | 2021-11-16T10:32:22 | 2021-11-16T11:18:03 | 2021-11-16T11:18:02 | {
"login": "SebastinSanty",
"id": 13795788,
"type": "User"
} | [] | true | [] |
1,054,249,463 | 3,278 | Proposed update to the documentation for WER | I wanted to submit a minor update to the description of WER for your consideration.
Because of the possibility of insertions, the numerator in the WER formula can be larger than N, so the value of WER can be greater than 1.0:
```
>>> from datasets import load_metric
>>> metric = load_metric("wer")
>>> metric.... | closed | https://github.com/huggingface/datasets/pull/3278 | 2021-11-15T23:28:31 | 2021-11-16T11:19:37 | 2021-11-16T11:19:37 | {
"login": "wooters",
"id": 2111202,
"type": "User"
} | [] | true | [] |
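For reference, the standard WER definition the snippet above exercises, with $S$ substitutions, $D$ deletions, $I$ insertions, and $N$ words in the reference:

```latex
\mathrm{WER} = \frac{S + D + I}{N}
```

Since the insertion count $I$ is not bounded by $N$, the ratio can exceed 1.0.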
1,054,122,656 | 3,277 | f-string formatting | **Fix #3257**
Replaced _.format()_ and _%_ by f-strings in the following modules :
- [x] **tests**
- [x] **metrics**
- [x] **benchmarks**
- [x] **utils**
- [x] **templates**
- [x] **src/Datasets/\*.py**
Modules in **_src/Datasets/_**:
- [x] **commands**
- [x] **features**
- [x] **formatting**
- [x] **... | closed | https://github.com/huggingface/datasets/pull/3277 | 2021-11-15T21:37:05 | 2021-11-19T20:40:08 | 2021-11-17T16:18:38 | {
"login": "Mehdi2402",
"id": 56029953,
"type": "User"
} | [] | true | [] |
1,053,793,063 | 3,276 | Update KILT metadata JSON | Fix #3265. | closed | https://github.com/huggingface/datasets/pull/3276 | 2021-11-15T15:25:25 | 2021-11-16T11:21:59 | 2021-11-16T11:21:58 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |