| id | number | title | body | state | html_url | created_at | updated_at | closed_at | user | labels | is_pull_request | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,331,337,418 | 4,801 | Fix fine classes in trec dataset | This PR:
- replaces the fine labels, so that there are 50 instead of 47
- once the additional labels are added, all of them (fine and coarse) have been re-ordered, so that they align with the order in: https://cogcomp.seas.upenn.edu/Data/QA/QC/definition.html
- the feature names have been fixed: `fine_label` instead of `label-fi... | closed | https://github.com/huggingface/datasets/pull/4801 | 2022-08-08T05:11:02 | 2022-08-22T16:29:14 | 2022-08-22T16:14:15 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,331,288,128 | 4,800 | support LargeListArray in pyarrow | ```python
import numpy as np
import datasets
a = np.zeros((5000000, 768))
res = datasets.Dataset.from_dict({'embedding': a})
'''
File '/home/wenjiaxin/anaconda3/envs/data/lib/python3.8/site-packages/datasets/arrow_writer.py', line 178, in __arrow_array__
out = numpy_to_pyarrow_listarray(data)
File "/h... | closed | https://github.com/huggingface/datasets/pull/4800 | 2022-08-08T03:58:46 | 2024-09-27T09:54:17 | 2024-08-12T14:43:46 | {
"login": "Jiaxin-Wen",
"id": 48146603,
"type": "User"
} | [] | true | [] |
1,330,889,854 | 4,799 | video dataset loader/parser | you know how you can [use `load_dataset` with any arbitrary csv file](https://huggingface.co/docs/datasets/loading#csv)? and you can also [use it to load a local image dataset](https://huggingface.co/docs/datasets/image_load#local-files)?
could you please add functionality to load a video dataset? it would be really... | closed | https://github.com/huggingface/datasets/issues/4799 | 2022-08-07T01:54:12 | 2023-10-01T00:08:31 | 2022-08-09T16:42:51 | {
"login": "verbiiyo",
"id": 26421036,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,330,699,942 | 4,798 | Shard generator | Hi everyone! I was using Hugging Face datasets to process some very large datasets and found that it would be quite handy to have a feature that would allow "splitting" these large datasets into chunks of equal size. Even better - being able to run through these chunks one by one in a simple and convenient way. So I decided... | closed | https://github.com/huggingface/datasets/pull/4798 | 2022-08-06T09:14:06 | 2022-10-03T15:35:10 | 2022-10-03T15:35:10 | {
"login": "marianna13",
"id": 43296932,
"type": "User"
} | [] | true | [] |
1,330,000,998 | 4,797 | Torgo dataset creation | null | closed | https://github.com/huggingface/datasets/pull/4797 | 2022-08-05T14:18:26 | 2022-08-09T18:46:00 | 2022-08-09T18:46:00 | {
"login": "YingLi001",
"id": 75192317,
"type": "User"
} | [] | true | [] |
1,329,887,810 | 4,796 | ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB when adding image to Dataset | ## Describe the bug
When adding a Pillow image to an existing Dataset on the hub, `add_item` fails due to the Pillow image not being automatically converted into the Image feature.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from PIL import Image
dataset = load_dataset("hf-inte... | open | https://github.com/huggingface/datasets/issues/4796 | 2022-08-05T12:41:19 | 2024-11-29T16:35:17 | null | {
"login": "NielsRogge",
"id": 48327001,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,329,525,732 | 4,795 | Missing MBPP splits | (@albertvillanova)
The [MBPP dataset on the Hub](https://huggingface.co/datasets/mbpp) has only a test split for both its "full" and its "sanitized" subset, while the [paper](https://arxiv.org/abs/2108.07732) states in subsection 2.1 regarding the full split:
> In the experiments described later in the paper, we hold... | closed | https://github.com/huggingface/datasets/issues/4795 | 2022-08-05T06:51:01 | 2022-09-13T12:27:24 | 2022-09-13T12:27:24 | {
"login": "stadlerb",
"id": 2452384,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,328,593,929 | 4,792 | Add DocVQA | ## Adding a Dataset
- **Name:** DocVQA
- **Description:** Document Visual Question Answering (DocVQA) seeks to inspire a “purpose-driven” point of view in Document Analysis and Recognition research, where the document content is extracted and used to respond to high-level tasks defined by the human consumers of this ... | open | https://github.com/huggingface/datasets/issues/4792 | 2022-08-04T13:07:26 | 2022-08-08T05:31:20 | null | {
"login": "NielsRogge",
"id": 48327001,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,328,571,064 | 4,791 | Dataset Viewer issue for Team-PIXEL/rendered-wikipedia-english | ### Link
https://huggingface.co/datasets/Team-PIXEL/rendered-wikipedia-english/viewer/rendered-wikipedia-en/train
### Description
The dataset can be loaded fine but the viewer shows this error:
```
Server Error
Status code: 400
Exception: Status400Error
Message: The dataset does not exist.
```
... | closed | https://github.com/huggingface/datasets/issues/4791 | 2022-08-04T12:49:16 | 2022-08-04T13:43:16 | 2022-08-04T13:43:16 | {
"login": "xplip",
"id": 25847814,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,328,546,904 | 4,790 | Issue with fine classes in trec dataset | ## Describe the bug
According to their paper, the TREC dataset contains 2 kinds of classes:
- 6 coarse classes: TREC-6
- 50 fine classes: TREC-50
However, our implementation only has 47 (instead of 50) fine classes. The reason for this is that we only considered the last segment of the label, which is repeated fo... | closed | https://github.com/huggingface/datasets/issues/4790 | 2022-08-04T12:28:51 | 2022-08-22T16:14:16 | 2022-08-22T16:14:16 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,328,409,253 | 4,789 | Update doc upload_dataset.mdx | null | closed | https://github.com/huggingface/datasets/pull/4789 | 2022-08-04T10:24:00 | 2022-09-09T16:37:10 | 2022-09-09T16:34:58 | {
"login": "mishig25",
"id": 11827707,
"type": "User"
} | [] | true | [] |
1,328,246,021 | 4,788 | Fix NonMatchingChecksumError in mbpp dataset | Fix issue reported on the Hub: https://huggingface.co/datasets/mbpp/discussions/1
Fix #4787. | closed | https://github.com/huggingface/datasets/pull/4788 | 2022-08-04T08:17:40 | 2022-08-04T17:34:00 | 2022-08-04T17:21:01 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,328,243,911 | 4,787 | NonMatchingChecksumError in mbpp dataset | ## Describe the bug
As reported on the Hub [Fix Checksum Mismatch](https://huggingface.co/datasets/mbpp/discussions/1), there is a `NonMatchingChecksumError` when loading mbpp dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset

ds = load_dataset("mbpp", "full")
```
## Expected results
Loading of the dataset with... | closed | https://github.com/huggingface/datasets/issues/4787 | 2022-08-04T08:15:51 | 2022-08-04T17:21:01 | 2022-08-04T17:21:01 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,327,340,828 | 4,786 | .save_to_disk('path', fs=s3) TypeError | The following code:
```python
import datasets
from datasets import load_dataset

train_dataset, test_dataset = load_dataset("imdb", split=["train", "test"])
s3 = datasets.filesystems.S3FileSystem(key=aws_access_key_id, secret=aws_secret_access_key)
train_dataset.save_to_disk("s3://datasets/", fs=s3)
```
produces following traceback:
```she... | closed | https://github.com/huggingface/datasets/issues/4786 | 2022-08-03T14:49:29 | 2022-08-03T15:23:00 | 2022-08-03T15:23:00 | {
"login": "h-k-dev",
"id": 110547763,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,327,225,826 | 4,785 | Require torchaudio<0.12.0 in docs | This PR adds to docs the requirement of torchaudio<0.12.0 to avoid RuntimeError.
Subsequent to PR:
- #4777 | closed | https://github.com/huggingface/datasets/pull/4785 | 2022-08-03T13:32:00 | 2022-08-03T15:07:43 | 2022-08-03T14:52:16 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,326,395,280 | 4,784 | Add Multiface dataset | ## Adding a Dataset
- **Name:** Multiface dataset
- **Description:** High quality recordings of the faces of 13 identities, each captured in a multi-view capture stage performing various facial expressions. An average of 12,200 (v1 scripts) to 23,000 (v2 scripts) frames per subject, with a capture rate of 30 fps
- **... | open | https://github.com/huggingface/datasets/issues/4784 | 2022-08-02T21:00:22 | 2022-08-08T14:42:36 | null | {
"login": "osanseviero",
"id": 7246357,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "vision",
"color": "bfdadc"
}
] | false | [] |
1,326,375,011 | 4,783 | Docs for creating a loading script for image datasets | This PR is a first draft of creating a loading script for image datasets. Feel free to let me know if there are any specificities I'm missing for this. 🙂
To do:
- [x] Document how to create different configurations. | closed | https://github.com/huggingface/datasets/pull/4783 | 2022-08-02T20:36:03 | 2022-09-09T17:08:14 | 2022-09-07T19:07:34 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
1,326,247,158 | 4,782 | pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 2147483648 | ## Describe the bug
Following the example in CodeParrot, I receive an array size limitation error when deduplicating larger datasets.
## Steps to reproduce the bug
```python
from datasets import load_dataset

dataset_name = "the_pile"
ds = load_dataset(dataset_name, split="train")
ds = ds.map(preprocess, num_proc=num_workers)
uniques = set(ds.u... | closed | https://github.com/huggingface/datasets/issues/4782 | 2022-08-02T18:36:05 | 2022-08-22T09:46:28 | 2022-08-20T02:11:53 | {
"login": "conceptofmind",
"id": 25208228,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,326,114,161 | 4,781 | Fix label renaming and add a battery of tests | This PR makes some changes to label renaming in `to_tf_dataset()`, both to fix some issues when users input something we weren't expecting, and also to make it easier to deprecate label renaming in future, if/when we want to move this special-casing logic to a function in `transformers`.
The main changes are:
- Lab... | closed | https://github.com/huggingface/datasets/pull/4781 | 2022-08-02T16:42:07 | 2022-09-12T11:27:06 | 2022-09-12T11:24:45 | {
"login": "Rocketknight1",
"id": 12866554,
"type": "User"
} | [] | true | [] |
1,326,034,767 | 4,780 | Remove apache_beam import from module level in natural_questions dataset | Instead of importing `apache_beam` at the module level, import it in the method `_build_pcollection`.
Fix #4779. | closed | https://github.com/huggingface/datasets/pull/4780 | 2022-08-02T15:34:54 | 2022-08-02T16:16:33 | 2022-08-02T16:03:17 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,325,997,225 | 4,779 | Loading natural_questions requires apache_beam even with existing preprocessed data | ## Describe the bug
When loading "natural_questions", the package "apache_beam" is required:
```
ImportError: To be able to use natural_questions, you need to install the following dependency: apache_beam.
Please install it using 'pip install apache_beam' for instance'
```
This requirement is unnecessary, once ... | closed | https://github.com/huggingface/datasets/issues/4779 | 2022-08-02T15:06:57 | 2022-08-02T16:03:18 | 2022-08-02T16:03:18 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,324,928,750 | 4,778 | Update local loading script docs | This PR clarifies the local loading script section to include how to load a dataset after you've modified the local loading script (closes #4732). | closed | https://github.com/huggingface/datasets/pull/4778 | 2022-08-01T20:21:07 | 2022-08-23T16:32:26 | 2022-08-23T16:32:22 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
1,324,548,784 | 4,777 | Require torchaudio<0.12.0 to avoid RuntimeError | Related to:
- https://github.com/huggingface/transformers/issues/18379
Fix partially #4776. | closed | https://github.com/huggingface/datasets/pull/4777 | 2022-08-01T14:50:50 | 2022-08-02T17:35:14 | 2022-08-02T17:21:39 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,324,493,860 | 4,776 | RuntimeError when using torchaudio 0.12.0 to load MP3 audio file | Current version of `torchaudio` (0.12.0) raises a RuntimeError when trying to use `sox_io` backend but non-Python dependency `sox` is not installed:
https://github.com/pytorch/audio/blob/2e1388401c434011e9f044b40bc8374f2ddfc414/torchaudio/backend/sox_io_backend.py#L21-L29
```python
def _fail_load(
filepath: str... | closed | https://github.com/huggingface/datasets/issues/4776 | 2022-08-01T14:11:23 | 2023-03-02T15:58:16 | 2023-03-02T15:58:15 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | false | [] |
1,324,136,486 | 4,775 | Streaming not supported in Theivaprakasham/wildreceipt | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | closed | https://github.com/huggingface/datasets/issues/4775 | 2022-08-01T09:46:17 | 2022-08-01T10:30:29 | 2022-08-01T10:30:29 | {
"login": "NitishkKarra",
"id": 100361173,
"type": "User"
} | [
{
"name": "streaming",
"color": "fef2c0"
}
] | false | [] |
1,323,375,844 | 4,774 | Training hangs at the end of epoch, with set_transform/with_transform+multiple workers | ## Describe the bug
I use load_dataset() (I tried with [wiki](https://huggingface.co/datasets/wikipedia) and my own json data) and use set_transform/with_transform for preprocessing. But it hangs at the end of the 1st epoch if dataloader_num_workers>=1. No problem with single worker.
## Steps to reproduce the bu... | open | https://github.com/huggingface/datasets/issues/4774 | 2022-07-31T06:32:28 | 2022-07-31T06:36:43 | null | {
"login": "memray",
"id": 4197249,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,322,796,721 | 4,773 | Document loading from relative path | This PR describes loading a dataset from the Hub by specifying a relative path in `data_dir` or `data_files` in `load_dataset` (see #4757). | closed | https://github.com/huggingface/datasets/pull/4773 | 2022-07-29T23:32:21 | 2022-08-25T18:36:45 | 2022-08-25T18:34:23 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
1,322,693,123 | 4,772 | AssertionError when using label_cols in to_tf_dataset | ## Describe the bug
An incorrect `AssertionError` is raised when using `label_cols` in `to_tf_dataset` and the label's key name is `label`.
The assertion is in this line:
https://github.com/huggingface/datasets/blob/2.4.0/src/datasets/arrow_dataset.py#L475
## Steps to reproduce the bug
```python
from datasets... | closed | https://github.com/huggingface/datasets/issues/4772 | 2022-07-29T21:32:12 | 2022-09-12T11:24:46 | 2022-09-12T11:24:46 | {
"login": "lehrig",
"id": 9555494,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,322,600,725 | 4,771 | Remove dummy data generation docs | This PR removes instructions to generate dummy data since that is no longer necessary for datasets that are uploaded to the Hub instead of our GitHub repo.
Close #4744 | closed | https://github.com/huggingface/datasets/pull/4771 | 2022-07-29T19:20:46 | 2022-08-03T00:04:01 | 2022-08-02T23:50:29 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
1,322,147,855 | 4,770 | fix typo | By defaul -> By default | closed | https://github.com/huggingface/datasets/pull/4770 | 2022-07-29T11:46:12 | 2022-07-29T16:02:07 | 2022-07-29T16:02:07 | {
"login": "Jiaxin-Wen",
"id": 48146603,
"type": "User"
} | [] | true | [] |
1,322,121,554 | 4,769 | Fail to process SQuADv1.1 datasets with max_seq_length=128, doc_stride=96. | ## Describe the bug
`datasets` fails to process SQuADv1.1 with max_seq_length=128, doc_stride=96 when calling datasets["train"].train_dataset.map().
## Steps to reproduce the bug
I used huggingface[ TF2 question-answering examples](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-a... | open | https://github.com/huggingface/datasets/issues/4769 | 2022-07-29T11:18:24 | 2022-07-29T11:18:24 | null | {
"login": "zhuango",
"id": 5491519,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,321,913,645 | 4,768 | Unpin rouge_score test dependency | Once `rouge-score` has made the 0.1.2 release to fix their issue https://github.com/google-research/google-research/issues/1212, we can unpin it.
Related to:
- #4735 | closed | https://github.com/huggingface/datasets/pull/4768 | 2022-07-29T08:17:40 | 2022-07-29T16:42:28 | 2022-07-29T16:29:17 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,321,843,538 | 4,767 | Add 2.4.0 version added to docstrings | null | closed | https://github.com/huggingface/datasets/pull/4767 | 2022-07-29T07:01:56 | 2022-07-29T11:16:49 | 2022-07-29T11:03:58 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,321,787,428 | 4,765 | Fix version in map_nested docstring | After latest release, `map_nested` docstring needs being updated with the right version for versionchanged and versionadded. | closed | https://github.com/huggingface/datasets/pull/4765 | 2022-07-29T05:44:32 | 2022-07-29T11:51:25 | 2022-07-29T11:38:36 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,321,295,961 | 4,764 | Update CI badge | Replace the old CircleCI badge with a new one for GH Actions. | closed | https://github.com/huggingface/datasets/pull/4764 | 2022-07-28T18:04:20 | 2022-07-29T11:36:37 | 2022-07-29T11:23:51 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,321,295,876 | 4,763 | More rigorous shape inference in to_tf_dataset | `tf.data` needs to know the shape of tensors emitted from a `tf.data.Dataset`. Although `None` dimensions are possible, overusing them can cause problems - Keras uses the dataset tensor spec at compile-time, and so saying that a dimension is `None` when it's actually constant can hurt performance, or even cause trainin... | closed | https://github.com/huggingface/datasets/pull/4763 | 2022-07-28T18:04:15 | 2022-09-08T19:17:54 | 2022-09-08T19:15:41 | {
"login": "Rocketknight1",
"id": 12866554,
"type": "User"
} | [] | true | [] |
1,321,261,733 | 4,762 | Improve features resolution in streaming | `IterableDataset._resolve_features` was returning the features sorted alphabetically by column name, which is not consistent with non-streaming. I changed this and used the order of columns from the data themselves. It was causing some inconsistencies in the dataset viewer as well.
I also fixed `interleave_datasets`... | closed | https://github.com/huggingface/datasets/pull/4762 | 2022-07-28T17:28:11 | 2022-09-09T17:17:39 | 2022-09-09T17:15:30 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,321,068,411 | 4,761 | parallel searching in multi-gpu setting using faiss | While I notice that `add_faiss_index` supports assigning multiple GPUs, I am still confused about how it works.
Does the `search-batch` function automatically parallelize the input queries to different GPUs? https://github.com/huggingface/datasets/blob/d76599bdd4d186b2e7c4f468b05766016055a0a5/src/datasets/sea... | open | https://github.com/huggingface/datasets/issues/4761 | 2022-07-28T14:57:03 | 2023-07-21T02:07:10 | null | {
"login": "Jiaxin-Wen",
"id": 48146603,
"type": "User"
} | [] | false | [] |
1,320,878,223 | 4,760 | Issue with offline mode | ## Describe the bug
I can't retrieve a cached dataset with offline mode enabled
## Steps to reproduce the bug
To reproduce my issue, first, you'll need to run a script that will cache the dataset
```python
import os
os.environ["HF_DATASETS_OFFLINE"] = "0"
import datasets
datasets.logging.set_verbosity_i... | closed | https://github.com/huggingface/datasets/issues/4760 | 2022-07-28T12:45:14 | 2025-05-04T16:44:59 | 2024-01-23T10:58:22 | {
"login": "SaulLu",
"id": 55560583,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,320,783,300 | 4,759 | Dataset Viewer issue for Toygar/turkish-offensive-language-detection | ### Link
https://huggingface.co/datasets/Toygar/turkish-offensive-language-detection
### Description
Status code: 400
Exception: Status400Error
Message: The dataset does not exist.
Hi, I provided train.csv, test.csv and valid.csv files. However, the viewer says the dataset does not exist.
Should I n... | closed | https://github.com/huggingface/datasets/issues/4759 | 2022-07-28T11:21:43 | 2022-07-28T13:17:56 | 2022-07-28T13:17:48 | {
"login": "tanyelai",
"id": 44132720,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,320,602,532 | 4,757 | Document better when relative paths are transformed to URLs | As discussed with @ydshieh, when passing a relative path as `data_dir` to `load_dataset` for a dataset hosted on the Hub, the relative path is transformed into the corresponding URL of the Hub dataset.
Currently, we mention this in our docs here: [Create a dataset loading script > Download data files and organize split... | closed | https://github.com/huggingface/datasets/issues/4757 | 2022-07-28T08:46:27 | 2022-08-25T18:34:24 | 2022-08-25T18:34:24 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | false | [] |
1,319,687,044 | 4,755 | Datasets.map causes incorrect overflow_to_sample_mapping when used with tokenizers and small batch size | ## Describe the bug
When using `tokenizer`, we can retrieve the field `overflow_to_sample_mapping`, since long samples will overflow into multiple token sequences.
However, when tokenizing is done via `Dataset.map`, with `n_proc > 1`, the `overflow_to_sample_mapping` field is wrong. This seems to be because ea... | open | https://github.com/huggingface/datasets/issues/4755 | 2022-07-27T14:54:11 | 2023-12-13T19:34:43 | null | {
"login": "srobertjames",
"id": 662612,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,319,681,541 | 4,754 | Remove "unknown" language tags | Following https://github.com/huggingface/datasets/pull/4753, there was still an "unknown" language tag in `wikipedia`, so the job at https://github.com/huggingface/datasets/runs/7542567336?check_suite_focus=true failed for wikipedia | closed | https://github.com/huggingface/datasets/pull/4754 | 2022-07-27T14:50:12 | 2022-07-27T15:03:00 | 2022-07-27T14:51:06 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,319,571,745 | 4,753 | Add `language_bcp47` tag | Following (internal) https://github.com/huggingface/moon-landing/pull/3509, we need to move the bcp47 tags to `language_bcp47` and keep the `language` tag for iso 639 1-2-3 codes. In particular I made sure that all the tags in `languages` are not longer than 3 characters. I moved the rest to `language_bcp47` and fixed ... | closed | https://github.com/huggingface/datasets/pull/4753 | 2022-07-27T13:31:16 | 2022-07-27T14:50:03 | 2022-07-27T14:37:56 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,319,464,409 | 4,752 | DatasetInfo issue when testing multiple configs: mixed task_templates | ## Describe the bug
When running the `datasets-cli test` it would seem that some config properties in a DatasetInfo get mangled, leading to issues, e.g., about the ClassLabel.
## Steps to reproduce the bug
In summary, what I want to do is create three configs:
- unfiltered: no classlabel, no tasks. Gets data fr... | open | https://github.com/huggingface/datasets/issues/4752 | 2022-07-27T12:04:54 | 2022-08-08T18:20:50 | null | {
"login": "BramVanroy",
"id": 2779410,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,319,440,903 | 4,751 | Added dataset information in clinic oos dataset card | This PR aims to add relevant information, such as the description, language, and citation information, to the clinic oos dataset card. | closed | https://github.com/huggingface/datasets/pull/4751 | 2022-07-27T11:44:28 | 2022-07-28T10:53:21 | 2022-07-28T10:40:37 | {
"login": "arnav-ladkat",
"id": 84362194,
"type": "User"
} | [] | true | [] |
1,319,333,645 | 4,750 | Easily create loading script for benchmark comprising multiple huggingface datasets | Hi,
I would like to create a loading script for a benchmark comprising multiple huggingface datasets.
The function _split_generators needs to return the files for the respective dataset. However, the files are not always in the same location for each dataset. I want to just make a wrapper dataset that provides a si... | closed | https://github.com/huggingface/datasets/issues/4750 | 2022-07-27T10:13:38 | 2022-07-27T13:58:07 | 2022-07-27T13:58:07 | {
"login": "JoelNiklaus",
"id": 3775944,
"type": "User"
} | [] | false | [] |
1,318,874,913 | 4,748 | Add image classification processing guide | This PR follows up on #4710 to separate the object detection and image classification guides. It expands a little more on the original guide to include a more complete example of loading and transforming a whole dataset. | closed | https://github.com/huggingface/datasets/pull/4748 | 2022-07-27T00:11:11 | 2022-07-27T17:28:21 | 2022-07-27T17:16:12 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
1,318,586,932 | 4,747 | Shard parquet in `download_and_prepare` | Following https://github.com/huggingface/datasets/pull/4724 (needs to be merged first)
It's good practice to shard parquet files to enable parallelism with spark/dask/etc.
I added the `max_shard_size` parameter to `download_and_prepare` (default to 500MB for parquet, and None for arrow).
```python
from datase... | closed | https://github.com/huggingface/datasets/pull/4747 | 2022-07-26T18:05:01 | 2022-09-15T13:43:55 | 2022-09-15T13:41:26 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,318,486,599 | 4,746 | Dataset Viewer issue for yanekyuk/wikikey | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | closed | https://github.com/huggingface/datasets/issues/4746 | 2022-07-26T16:25:16 | 2022-09-08T08:15:22 | 2022-09-08T08:15:22 | {
"login": "ai-ashok",
"id": 91247690,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,318,016,655 | 4,745 | Allow `list_datasets` to include private datasets | I am working with a large collection of private datasets; it would be convenient for me to be able to list them.
I would envision extending the convention of using the `use_auth_token` keyword argument to the `list_datasets` function, so that calling:
```
list_datasets(use_auth_token="my_token")
```
would return the li... | closed | https://github.com/huggingface/datasets/issues/4745 | 2022-07-26T10:16:08 | 2023-07-25T15:01:49 | 2023-07-25T15:01:49 | {
"login": "ola13",
"id": 1528523,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,317,822,345 | 4,744 | Remove instructions to generate dummy data from our docs | In our docs, we instruct users to generate dummy data: https://huggingface.co/docs/datasets/dataset_script#testing-data-and-checksum-metadata
However:
- dummy data makes sense only for datasets in our GitHub repo: so that we can test their loading with our CI
- for datasets on the Hub:
- they do not pass any CI t... | closed | https://github.com/huggingface/datasets/issues/4744 | 2022-07-26T07:32:58 | 2022-08-02T23:50:30 | 2022-08-02T23:50:30 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | false | [] |
1,317,362,561 | 4,743 | Update map docs | This PR updates the `map` docs for processing text to include `return_tensors="np"` to make it run faster (see #4676). | closed | https://github.com/huggingface/datasets/pull/4743 | 2022-07-25T20:59:35 | 2022-07-27T16:22:04 | 2022-07-27T16:10:04 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
1,317,260,663 | 4,742 | Dummy data nowhere to be found | ## Describe the bug
To finalize my dataset, I wanted to create dummy data as per the guide and I ran
```shell
datasets-cli dummy_data datasets/hebban-reviews --auto_generate
```
where hebban-reviews is [this repo](https://huggingface.co/datasets/BramVanroy/hebban-reviews). And even though the scripts runs an... | closed | https://github.com/huggingface/datasets/issues/4742 | 2022-07-25T19:18:42 | 2022-11-04T14:04:24 | 2022-11-04T14:04:10 | {
"login": "BramVanroy",
"id": 2779410,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,316,621,272 | 4,741 | Fix to dict conversion of `DatasetInfo`/`Features` | Fix #4681 | closed | https://github.com/huggingface/datasets/pull/4741 | 2022-07-25T10:41:27 | 2022-07-25T12:50:36 | 2022-07-25T12:37:53 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,316,478,007 | 4,740 | Fix multiprocessing in map_nested | As previously discussed:
Before, multiprocessing was not used in `map_nested` if `num_proc` was greater than or equal to `len(iterable)`.
- Multiprocessing was not used e.g. when passing `num_proc=20` but having 19 files to download
- As by default, `DownloadManager` sets `num_proc=16`, before multiprocessing was ... | closed | https://github.com/huggingface/datasets/pull/4740 | 2022-07-25T08:44:19 | 2022-07-28T10:53:23 | 2022-07-28T10:40:31 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,316,400,915 | 4,739 | Deprecate metrics | Deprecate metrics:
- deprecate public functions: `load_metric`, `list_metrics` and `inspect_metric`: docstring and warning
- test that deprecation warnings are issued
- deprecate metrics in all docs
- remove mentions to metrics in docs and README
- deprecate internal functions/classes
Maybe we should also stop testi... | closed | https://github.com/huggingface/datasets/pull/4739 | 2022-07-25T07:35:55 | 2022-07-28T11:44:27 | 2022-07-28T11:32:16 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,315,222,166 | 4,738 | Use CI unit/integration tests | This PR:
- Implements separate unit/integration tests
- A failure in the integration tests does not cancel the rest of the jobs
- We should implement more robust integration tests: work in progress in a subsequent PR
- For the moment, tests involving network requests are marked as integration: to be evolved
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,315,011,004 | 4,737 | Download error on scene_parse_150 | ```
from datasets import load_dataset
dataset = load_dataset("scene_parse_150", "scene_parsing")
# raises: FileNotFoundError: Couldn't find file at http://data.csail.mit.edu/places/ADEchallenge/ADEChallengeData2016.zip
```
| closed | https://github.com/huggingface/datasets/issues/4737 | 2022-07-22T13:28:28 | 2022-09-01T15:37:11 | 2022-09-01T15:37:11 | {
"login": "juliensimon",
"id": 3436143,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,314,931,996 | 4,736 | Dataset Viewer issue for deepklarity/huggingface-spaces-dataset | ### Link
https://huggingface.co/datasets/deepklarity/huggingface-spaces-dataset/viewer/deepklarity--huggingface-spaces-dataset/train
### Description
Hi Team,
I'm getting the following error on a uploaded dataset. I'm getting the same status for a couple of hours now. The dataset size is `<1MB` and the format is cs... | closed | https://github.com/huggingface/datasets/issues/4736 | 2022-07-22T12:14:18 | 2022-07-22T13:46:38 | 2022-07-22T13:46:38 | {
"login": "dk-crazydiv",
"id": 47515542,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,314,501,641 | 4,735 | Pin rouge_score test dependency | Temporarily pin `rouge_score` (to avoid latest version 0.7.0) until the issue is fixed.
Fix #4734 | closed | https://github.com/huggingface/datasets/pull/4735 | 2022-07-22T07:18:21 | 2022-07-22T07:58:14 | 2022-07-22T07:45:18 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,314,495,382 | 4,734 | Package rouge-score cannot be imported | ## Describe the bug
After the today release of `rouge_score-0.0.7` it seems no longer importable. Our CI fails: https://github.com/huggingface/datasets/runs/7463218591?check_suite_focus=true
```
FAILED tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_bigbench
FAILED tests/test_dataset_common.py::L... | closed | https://github.com/huggingface/datasets/issues/4734 | 2022-07-22T07:15:05 | 2022-07-22T07:45:19 | 2022-07-22T07:45:18 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,314,479,616 | 4,733 | rouge metric | ## Describe the bug
A clear and concise description of what the bug is.
Loading Rouge metric gives error after latest rouge-score==0.0.7 release.
Downgrading rougemetric==0.0.4 works fine.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
A clear and concis... | closed | https://github.com/huggingface/datasets/issues/4733 | 2022-07-22T07:06:51 | 2022-07-22T09:08:02 | 2022-07-22T09:05:35 | {
"login": "asking28",
"id": 29248466,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,314,371,566 | 4,732 | Document better that loading a dataset passing its name does not use the local script | As reported by @TrentBrick here https://github.com/huggingface/datasets/issues/4725#issuecomment-1191858596, it could be more clear that loading a dataset by passing its name does not use the (modified) local script of it.
What he did:
- he installed `datasets` from source
- he modified locally `datasets/the_pile/... | closed | https://github.com/huggingface/datasets/issues/4732 | 2022-07-22T06:07:31 | 2022-08-23T16:32:23 | 2022-08-23T16:32:23 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | false | [] |
1,313,773,348 | 4,731 | docs: ✏️ fix TranslationVariableLanguages example | null | closed | https://github.com/huggingface/datasets/pull/4731 | 2022-07-21T20:35:41 | 2022-07-22T07:01:00 | 2022-07-22T06:48:42 | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [] | true | [] |
1,313,421,263 | 4,730 | Loading imagenet-1k validation split takes much more RAM than expected | ## Describe the bug
Loading into memory the validation split of imagenet-1k takes much more RAM than expected. Assuming ImageNet-1k is 150 GB, split is 50000 validation images and 1,281,167 train images, I would expect only about 6 GB loaded in RAM.
## Steps to reproduce the bug
```python
from datasets import... | closed | https://github.com/huggingface/datasets/issues/4730 | 2022-07-21T15:14:06 | 2022-07-21T16:41:04 | 2022-07-21T16:41:04 | {
"login": "fxmarty",
"id": 9808326,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,313,374,015 | 4,729 | Refactor Hub tests | This PR refactors `test_upstream_hub` by removing unittests and using the following pytest Hub fixtures:
- `ci_hub_config`
- `set_ci_hub_access_token`: to replace setUp/tearDown
- `temporary_repo` context manager: to replace `try... finally`
- `cleanup_repo`: to delete repo accidentally created if one of the tests ... | closed | https://github.com/huggingface/datasets/pull/4729 | 2022-07-21T14:43:13 | 2022-07-22T15:09:49 | 2022-07-22T14:56:29 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,312,897,454 | 4,728 | load_dataset gives "403" error when using Financial Phrasebank | I tried both codes below to download the financial phrasebank dataset (https://huggingface.co/datasets/financial_phrasebank) with the sentences_allagree subset. However, the code gives a 403 error when executed from multiple machines locally or on the cloud.
```
from datasets import load_dataset, DownloadMode
load... | closed | https://github.com/huggingface/datasets/issues/4728 | 2022-07-21T08:43:32 | 2022-08-04T08:32:35 | 2022-08-04T08:32:35 | {
"login": "rohitvincent",
"id": 2209134,
"type": "User"
} | [] | false | [] |
1,312,645,391 | 4,727 | Dataset Viewer issue for TheNoob3131/mosquito-data | ### Link
https://huggingface.co/datasets/TheNoob3131/mosquito-data/viewer/TheNoob3131--mosquito-data/test
### Description
Dataset preview not showing with large files. Says 'split cache is empty' even though there are train and test splits.
### Owner
_No response_ | closed | https://github.com/huggingface/datasets/issues/4727 | 2022-07-21T05:24:48 | 2022-07-21T07:51:56 | 2022-07-21T07:45:01 | {
"login": "thenerd31",
"id": 53668030,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,312,082,175 | 4,726 | Fix broken link to the Hub | The Markdown link fails to render if it is in the same line as the `<span>`. This PR implements @mishig25's fix by using `<a href=" ">` instead.
 | closed | https://github.com/huggingface/datasets/pull/4726 | 2022-07-20T22:57:27 | 2022-07-21T14:33:18 | 2022-07-21T08:00:54 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [] | true | [] |
1,311,907,096 | 4,725 | the_pile datasets URL broken. | https://github.com/huggingface/datasets/pull/3627 changed the Eleuther AI Pile dataset URL from https://the-eye.eu/ to https://mystic.the-eye.eu/ but the latter is now broken and the former works again.
Note that when I git clone the repo and use `pip install -e .` and then edit the URL back the codebase doesn't se... | closed | https://github.com/huggingface/datasets/issues/4725 | 2022-07-20T20:57:30 | 2022-07-22T06:09:46 | 2022-07-21T07:38:19 | {
"login": "TrentBrick",
"id": 12433427,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,311,127,404 | 4,724 | Download and prepare as Parquet for cloud storage | Download a dataset as Parquet in a cloud storage can be useful for streaming mode and to use with spark/dask/ray.
This PR adds support for `fsspec` URIs like `s3://...`, `gcs://...` etc. and ads the `file_format` to save as parquet instead of arrow:
```python
from datasets import *
cache_dir = "s3://..."
build... | closed | https://github.com/huggingface/datasets/pull/4724 | 2022-07-20T13:39:02 | 2022-09-05T17:27:25 | 2022-09-05T17:25:27 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,310,970,604 | 4,723 | Refactor conftest fixtures | Previously, fixture modules `hub_fixtures` and `s3_fixtures`:
- were both at the root test directory
- were imported using `import *`
- as a side effect, the modules `os` and `pytest` were imported from `s3_fixtures` into `conftest`
This PR:
- puts both fixture modules in a dedicated directory `fixtures`
- re... | closed | https://github.com/huggingface/datasets/pull/4723 | 2022-07-20T12:15:22 | 2022-07-21T14:37:11 | 2022-07-21T14:24:18 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,310,785,916 | 4,722 | Docs: Fix same-page haslinks | `href="/docs/datasets/quickstart#audio"` implicitly goes to `href="/docs/datasets/{$LATEST_STABLE_VERSION}/quickstart#audio"`. Therefore, https://huggingface.co/docs/datasets/quickstart#audio #audio hashlink does not work since the new docs were not added to v2.3.2 (LATEST_STABLE_VERSION)
to preserve the version, it... | closed | https://github.com/huggingface/datasets/pull/4722 | 2022-07-20T10:04:37 | 2022-07-20T17:02:33 | 2022-07-20T16:49:36 | {
"login": "mishig25",
"id": 11827707,
"type": "User"
} | [] | true | [] |
1,310,253,552 | 4,721 | PyArrow Dataset error when calling `load_dataset` | ## Describe the bug
I am fine tuning a wav2vec2 model following the script here using my own dataset: https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py
Loading my Audio dataset from the hub which was originally generated from disk results in th... | open | https://github.com/huggingface/datasets/issues/4721 | 2022-07-20T01:16:03 | 2022-07-22T14:11:47 | null | {
"login": "piraka9011",
"id": 16828657,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,309,980,195 | 4,720 | Dataset Viewer issue for shamikbose89/lancaster_newsbooks | ### Link
https://huggingface.co/datasets/shamikbose89/lancaster_newsbooks
### Description
Status code: 400
Exception: ValueError
Message: Cannot seek streaming HTTP file
I am able to use the dataset loading script locally and it also runs when I'm using the one from the hub, but the viewer sti... | closed | https://github.com/huggingface/datasets/issues/4720 | 2022-07-19T20:00:07 | 2022-09-08T16:47:21 | 2022-09-08T16:47:21 | {
"login": "shamikbose",
"id": 50837285,
"type": "User"
} | [] | false | [] |
1,309,854,492 | 4,719 | Issue loading TheNoob3131/mosquito-data dataset | 
So my dataset is public in the Huggingface Hub, but when I try to load it using the load_dataset command, it shows that it is downloading the files, but throws a ValueError. When I went to my directory to ... | closed | https://github.com/huggingface/datasets/issues/4719 | 2022-07-19T17:47:37 | 2022-07-20T06:46:57 | 2022-07-20T06:46:02 | {
"login": "thenerd31",
"id": 53668030,
"type": "User"
} | [] | false | [] |
1,309,520,453 | 4,718 | Make Extractor accept Path as input | This PR:
- Makes `Extractor` accept instance of `Path` as input
- Removes unnecessary castings of `Path` to `str` | closed | https://github.com/huggingface/datasets/pull/4718 | 2022-07-19T13:25:06 | 2022-07-22T13:42:27 | 2022-07-22T13:29:43 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,309,512,483 | 4,717 | Dataset Viewer issue for LawalAfeez/englishreview-ds-mini | ### Link
_No response_
### Description
Unable to view the split data
### Owner
_No response_ | closed | https://github.com/huggingface/datasets/issues/4717 | 2022-07-19T13:19:39 | 2022-07-20T08:32:57 | 2022-07-20T08:32:57 | {
"login": "lawalAfeez820",
"id": 69974956,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,309,455,838 | 4,716 | Support "tags" yaml tag | Added the "tags" YAML tag, so that users can specify data domain/topics keywords for dataset search | closed | https://github.com/huggingface/datasets/pull/4716 | 2022-07-19T12:34:31 | 2022-07-20T13:44:50 | 2022-07-20T13:31:56 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,309,405,980 | 4,715 | Fix POS tags | We're now using `part-of-speech` and not `part-of-speech-tagging`, see discussion here: https://github.com/huggingface/datasets/commit/114c09aff2fa1519597b46fbcd5a8e0c0d3ae020#r78794777 | closed | https://github.com/huggingface/datasets/pull/4715 | 2022-07-19T11:52:54 | 2022-07-19T12:54:34 | 2022-07-19T12:41:16 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,309,265,682 | 4,714 | Fix named split sorting and remove unnecessary casting | This PR:
- makes `NamedSplit` sortable: so that `sorted()` can be called on them
- removes unnecessary `sorted()` on `dict.keys()`: `dict_keys` view is already like a `set`
- removes unnecessary casting of `NamedSplit` to `str` | closed | https://github.com/huggingface/datasets/pull/4714 | 2022-07-19T09:48:28 | 2022-07-22T09:39:45 | 2022-07-22T09:10:57 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,309,184,756 | 4,713 | Document installation of sox OS dependency for audio | The `sox` OS package needs being installed manually using the distribution package manager.
This PR adds this explanation to the docs. | closed | https://github.com/huggingface/datasets/pull/4713 | 2022-07-19T08:42:35 | 2022-07-21T08:16:59 | 2022-07-21T08:04:15 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,309,177,302 | 4,712 | Highlight non-commercial license in amazon_reviews_multi dataset card | Highlight that the licence granted by Amazon only covers non-commercial research use. | closed | https://github.com/huggingface/datasets/pull/4712 | 2022-07-19T08:36:20 | 2022-07-27T16:09:40 | 2022-07-27T15:57:41 | {
"login": "sbroadhurst-hf",
"id": 108879611,
"type": "User"
} | [] | true | [] |
1,309,138,570 | 4,711 | Document how to create a dataset loading script for audio/vision | Currently, in our docs for Audio/Vision/Text, we explain how to:
- Load data
- Process data
However we only explain how to *Create a dataset loading script* for text data.
I think it would be useful that we add the same for Audio/Vision as these have some specificities different from Text.
See, for example:
... | closed | https://github.com/huggingface/datasets/issues/4711 | 2022-07-19T08:03:40 | 2023-07-25T16:07:52 | 2023-07-25T16:07:52 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | false | [] |
1,308,958,525 | 4,710 | Add object detection processing tutorial | The following adds a quick guide on how to process object detection datasets with `albumentations`. | closed | https://github.com/huggingface/datasets/pull/4710 | 2022-07-19T04:23:46 | 2022-07-21T20:10:35 | 2022-07-21T19:56:42 | {
"login": "nateraw",
"id": 32437151,
"type": "User"
} | [] | true | [] |
1,308,633,093 | 4,709 | WMT21 & WMT22 | ## Adding a Dataset
- **Name:** WMT21 & WMT22
- **Description:** We are going to have three tracks: two small tasks and a large task.
The small tracks evaluate translation between fairly related languages and English (all pairs). The large track uses 101 languages.
- **Paper:** /
- **Data:** https://statmt.org/wmt... | open | https://github.com/huggingface/datasets/issues/4709 | 2022-07-18T21:05:33 | 2023-06-20T09:02:11 | null | {
"login": "Muennighoff",
"id": 62820084,
"type": "User"
} | [
{
"name": "good first issue",
"color": "7057ff"
},
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,308,279,700 | 4,708 | Fix require torchaudio and refactor test requirements | Currently there is a bug in `require_torchaudio` (indeed it is requiring `sox` instead):
```python
def require_torchaudio(test_case):
if find_spec("sox") is None:
...
```
The bug was introduced by:
- #3685
- Commit: https://github.com/huggingface/datasets/pull/3685/commits/b5a3e7122d49c4dcc9333b1d8d18a8... | closed | https://github.com/huggingface/datasets/pull/4708 | 2022-07-18T17:24:28 | 2022-07-22T06:30:56 | 2022-07-22T06:18:11 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,308,251,405 | 4,707 | Dataset Viewer issue for TheNoob3131/mosquito-data | ### Link
_No response_
### Description
Getting this error when trying to view dataset preview:
Message: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/TheNoob3131/mosquito-data/resolve/8aceebd6c4a359d216d10ef020868bd9e8c986dd/0_Africa_train.csv')
### Owner
_No response_ | closed | https://github.com/huggingface/datasets/issues/4707 | 2022-07-18T17:07:19 | 2022-07-18T19:44:46 | 2022-07-18T17:15:50 | {
"login": "thenerd31",
"id": 53668030,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,308,198,454 | 4,706 | Fix empty examples in xtreme dataset for bucc18 config | As reported in https://huggingface.co/muibk, there are empty examples in xtreme/bucc18.de
I applied your fix @mustaszewski
I also used a dict to make the dataset generation much faster | closed | https://github.com/huggingface/datasets/pull/4706 | 2022-07-18T16:22:46 | 2022-07-19T06:41:14 | 2022-07-19T06:29:17 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,308,161,794 | 4,705 | Fix crd3 | As reported in https://huggingface.co/datasets/crd3/discussions/1#62cc377073b2512b81662794, each split of the dataset was containing the same data. This issues comes from a bug in the dataset script
I fixed it and also uploaded the data to hf.co to make the dataset work in streaming mode | closed | https://github.com/huggingface/datasets/pull/4705 | 2022-07-18T15:53:44 | 2022-07-21T17:18:44 | 2022-07-21T17:06:30 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,308,147,876 | 4,704 | Skip tests only for lz4/zstd params if not installed | Currently, if `zstandard` or `lz4` are not installed, `test_compression_filesystems` and `test_streaming_dl_manager_extract_all_supported_single_file_compression_types` are skipped for all compression format parameters.
This PR fixes these tests, so that if `zstandard` or `lz4` are not installed, the tests are skipp... | closed | https://github.com/huggingface/datasets/pull/4704 | 2022-07-18T15:41:40 | 2022-07-19T13:02:31 | 2022-07-19T12:49:18 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,307,844,097 | 4,703 | Make cast in `from_pandas` more robust | Make the cast in `from_pandas` more robust (as it was done for the packaged modules in https://github.com/huggingface/datasets/pull/4364)
This should be useful in situations like [this one](https://discuss.huggingface.co/t/loading-custom-audio-dataset-and-fine-tuning-model/8836/4). | closed | https://github.com/huggingface/datasets/pull/4703 | 2022-07-18T11:55:49 | 2022-07-22T11:17:42 | 2022-07-22T11:05:24 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,307,793,811 | 4,702 | Domain specific dataset discovery on the Hugging Face hub | **Is your feature request related to a problem? Please describe.**
## The problem
The datasets hub currently has `8,239` datasets. These datasets span a wide range of different modalities and tasks (currently with a bias towards textual data).
There are various ways of identifying datasets that may be releva... | open | https://github.com/huggingface/datasets/issues/4702 | 2022-07-18T11:14:03 | 2024-02-12T09:53:43 | null | {
"login": "davanstrien",
"id": 8995957,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,307,689,625 | 4,701 | Added more information in the README about contributors of the Arabic Speech Corpus | Added more information in the README about contributors and encouraged reading the thesis for more infos | closed | https://github.com/huggingface/datasets/pull/4701 | 2022-07-18T09:48:03 | 2022-07-28T10:33:05 | 2022-07-28T10:33:05 | {
"login": "nawarhalabi",
"id": 2845798,
"type": "User"
} | [] | true | [] |
1,307,599,161 | 4,700 | Support extract lz4 compressed data files | null | closed | https://github.com/huggingface/datasets/pull/4700 | 2022-07-18T08:41:31 | 2022-07-18T14:43:59 | 2022-07-18T14:31:47 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,307,555,592 | 4,699 | Fix Authentification Error while streaming | I fixed a few errors when it occurs while streaming the private dataset on the Huggingface Hub.
```
from datasets import load_dataset
dataset = load_dataset(<repo_id>, use_auth_token=<private_token>, streaming=True)
for d in dataset['train']:
print(d)
break # this is for checking
```
This code is an e... | closed | https://github.com/huggingface/datasets/pull/4699 | 2022-07-18T08:03:41 | 2022-07-20T13:10:44 | 2022-07-20T13:10:43 | {
"login": "hkjeon13",
"id": 37480967,
"type": "User"
} | [] | true | [] |
1,307,539,585 | 4,698 | Enable streaming dataset to use the "all" split | Fixes #4637 | closed | https://github.com/huggingface/datasets/pull/4698 | 2022-07-18T07:47:39 | 2025-05-21T13:17:19 | 2025-05-21T13:17:19 | {
"login": "cakiki",
"id": 3664563,
"type": "User"
} | [] | true | [] |
1,307,332,253 | 4,697 | Trouble with streaming frgfm/imagenette vision dataset with TAR archive | ### Link
https://huggingface.co/datasets/frgfm/imagenette
### Description
Hello there :wave:
Thanks for the amazing work you've done with HF Datasets! I've just started playing with it, and managed to upload my first dataset. But for the second one, I'm having trouble with the preview since there is some archive... | closed | https://github.com/huggingface/datasets/issues/4697 | 2022-07-18T02:51:09 | 2022-08-01T15:10:57 | 2022-08-01T15:10:57 | {
"login": "frgfm",
"id": 26927750,
"type": "User"
} | [
{
"name": "streaming",
"color": "fef2c0"
}
] | false | [] |
1,307,183,099 | 4,696 | Cannot load LinCE dataset | ## Describe the bug
Cannot load LinCE dataset due to a connection error
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("lince", "ner_spaeng")
```
A notebook with this code and corresponding error can be found at https://colab.research.google.com/drive/1... | closed | https://github.com/huggingface/datasets/issues/4696 | 2022-07-17T19:01:54 | 2022-07-18T09:20:40 | 2022-07-18T07:24:22 | {
"login": "finiteautomata",
"id": 167943,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |