id (int64) | number (int64) | title (string) | body (string) | state (string, 2 classes) | html_url (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | user (dict) | labels (list) | is_pull_request (bool) | comments (list)
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,307,134,701 | 4,695 | Add MANtIS dataset | This PR adds MANtIS dataset.
Arxiv: [https://arxiv.org/abs/1912.04639](https://arxiv.org/abs/1912.04639)
Github: [https://github.com/Guzpenha/MANtIS](https://github.com/Guzpenha/MANtIS)
README and dataset tags are WIP. | closed | https://github.com/huggingface/datasets/pull/4695 | 2022-07-17T15:53:05 | 2022-09-30T14:39:30 | 2022-09-30T14:37:16 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,306,958,380 | 4,694 | Distributed data parallel training for streaming datasets | ### Feature request
Is there any documentation on using `load_dataset(streaming=True)` for (multi-node, multi-GPU) DDP training?
### Motivation
Given a bunch of data files, it is expected to split them onto different GPUs. Is there a guide or documentation?
### Your contribution
Does it require manually spli... | open | https://github.com/huggingface/datasets/issues/4694 | 2022-07-17T01:29:43 | 2023-04-26T18:21:09 | null | {
"login": "cyk1337",
"id": 13767887,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
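Issue 4694 above asks how to split streamed data files across DDP workers. As a hedged, pure-Python sketch (not the `datasets` API — the file names and world size below are invented for illustration), one common scheme assigns source files to ranks round-robin so every rank streams a disjoint subset:

```python
# Hypothetical sketch: assigning data files to DDP ranks round-robin.
# File names and the world size are made up for illustration.
def shards_for_rank(files, rank, world_size):
    """Return the subset of files a given rank should stream."""
    return [f for i, f in enumerate(files) if i % world_size == rank]

files = [f"data-{i:05d}.jsonl" for i in range(8)]
assigned = {rank: shards_for_rank(files, rank, world_size=3) for rank in range(3)}

# Every file is assigned exactly once across the ranks.
all_assigned = sorted(f for fs in assigned.values() for f in fs)
assert all_assigned == sorted(files)
```

This only balances well when the number of files is much larger than the world size; with fewer files than ranks, some ranks would receive nothing.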
1,306,788,322 | 4,693 | update `samsum` script | update `samsum` script after #4672 was merged (citation is also updated) | closed | https://github.com/huggingface/datasets/pull/4693 | 2022-07-16T11:53:05 | 2022-09-23T11:40:11 | 2022-09-23T11:37:57 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,306,609,680 | 4,692 | Unable to cast a column with `Image()` by using the `cast_column()` feature | ## Describe the bug
When I create a dataset, then add a column to the created dataset through the `dataset.add_column` feature and then try to cast a column of the dataset (this column contains image paths) with `Image()` by using the `cast_column()` feature, I ge... | closed | https://github.com/huggingface/datasets/issues/4692 | 2022-07-15T22:56:03 | 2022-07-19T13:36:24 | 2022-07-19T13:36:24 | {
"login": "skrishnan99",
"id": 28833916,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,306,389,656 | 4,691 | Dataset Viewer issue for rajistics/indian_food_images | ### Link
https://huggingface.co/datasets/rajistics/indian_food_images/viewer/rajistics--indian_food_images/train
### Description
I have a train/test split in my dataset
<img width="410" alt="Screen Shot 2022-07-15 at 11 44 42 AM" src="https://user-images.githubusercontent.com/6808012/179293215-7b419ec3-3527-46f2-8... | closed | https://github.com/huggingface/datasets/issues/4691 | 2022-07-15T19:03:15 | 2022-07-18T15:02:03 | 2022-07-18T15:02:03 | {
"login": "rajshah4",
"id": 6808012,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,306,321,975 | 4,690 | Refactor base extractors | This PR:
- Refactors base extractors as subclasses of `BaseExtractor`:
- this is an abstract class defining the interface with:
- `is_extractable`: abstract class method
- `extract`: abstract static method
- Implements abstract `MagicNumberBaseExtractor` (as subclass of `BaseExtractor`):
- this has a... | closed | https://github.com/huggingface/datasets/pull/4690 | 2022-07-15T17:47:48 | 2022-07-18T08:46:56 | 2022-07-18T08:34:49 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,306,230,203 | 4,689 | Test extractors for all compression formats | This PR:
- Adds all compression formats to `test_extractor`
- Tests each base extractor for all compression formats
Note that all compression formats are tested except "rar". | closed | https://github.com/huggingface/datasets/pull/4689 | 2022-07-15T16:29:55 | 2022-07-15T17:47:02 | 2022-07-15T17:35:24 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,306,100,488 | 4,688 | Skip test_extractor only for zstd param if zstandard not installed | Currently, if `zstandard` is not installed, `test_extractor` is skipped for all compression format parameters.
This PR fixes `test_extractor` so that if `zstandard` is not installed, `test_extractor` is skipped only for the `zstd` compression parameter, that is, it is not skipped for all the other compression parame... | closed | https://github.com/huggingface/datasets/pull/4688 | 2022-07-15T14:23:47 | 2022-07-15T15:27:53 | 2022-07-15T15:15:24 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,306,021,415 | 4,687 | Trigger CI also on push to main | Currently, the new CI (on GitHub Actions) is only triggered on pull request branches when the base branch is main.
This PR also triggers the CI when a PR is merged into the main branch. | closed | https://github.com/huggingface/datasets/pull/4687 | 2022-07-15T13:11:29 | 2022-07-15T13:47:21 | 2022-07-15T13:35:23 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,305,974,924 | 4,686 | Align logging with Transformers (again) | Fix #2832 | closed | https://github.com/huggingface/datasets/pull/4686 | 2022-07-15T12:24:29 | 2023-09-24T10:05:34 | 2023-07-11T18:29:27 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,305,861,708 | 4,685 | Fix mock fsspec | This PR:
- Removes an unused method from `DummyTestFS`
- Refactors `mock_fsspec` to make it simpler | closed | https://github.com/huggingface/datasets/pull/4685 | 2022-07-15T10:23:12 | 2022-07-15T13:05:03 | 2022-07-15T12:52:40 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,305,554,654 | 4,684 | How to assign new values to Dataset? | (screenshot omitted)
Hi, if I want to change some values of the dataset, or add new columns to it, how can I do it?
For example, I want to change all the labels of the SST2 dataset to `0`:
```python
from datasets import l... | closed | https://github.com/huggingface/datasets/issues/4684 | 2022-07-15T04:17:57 | 2023-03-20T15:50:41 | 2022-10-10T11:53:38 | {
"login": "beyondguo",
"id": 37113676,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,305,443,253 | 4,683 | Update create dataset card docs | This PR proposes removing the [online dataset card creator](https://huggingface.co/datasets/card-creator/) in favor of simply copy/pasting a template and using the [Datasets Tagger app](https://huggingface.co/spaces/huggingface/datasets-tagging) to generate the tags. The Tagger app provides more guidance by showing all... | closed | https://github.com/huggingface/datasets/pull/4683 | 2022-07-15T00:41:29 | 2022-07-18T17:26:00 | 2022-07-18T13:24:10 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
1,304,788,215 | 4,682 | weird issue/bug with columns (dataset iterable/stream mode) | I have a dataset online (CloverSearch/cc-news-mutlilingual) that has a bunch of columns, two of which are "score_title_maintext" and "score_title_description". The original files are jsonl-formatted. I was trying to iterate through it in streaming mode and grab all "score_title_description" values, but I kept getting key... | open | https://github.com/huggingface/datasets/issues/4682 | 2022-07-14T13:26:47 | 2022-07-14T13:26:47 | null | {
"login": "eunseojo",
"id": 12104720,
"type": "User"
} | [] | false | [] |
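Issue 4682 above describes key errors when streaming jsonl files whose lines do not all share the same keys. A minimal stdlib sketch (the lines below are invented, not the real cc-news files) shows the failure mode and one workaround — normalizing every row to the union of keys before use:

```python
import json

# Hypothetical jsonl lines whose key sets differ between rows.
lines = [
    '{"score_title_maintext": 0.9}',
    '{"score_title_maintext": 0.7, "score_title_description": 0.4}',
]

rows = [json.loads(line) for line in lines]
keys = sorted({k for row in rows for k in row})

# Normalize every row to the full key set, filling missing values with None,
# so downstream code can rely on a stable schema.
normalized = [{k: row.get(k) for k in keys} for row in rows]

assert normalized[0]["score_title_description"] is None
assert normalized[1]["score_title_description"] == 0.4
```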
1,304,617,484 | 4,681 | IndexError when loading ImageFolder | ## Describe the bug
Loading an image dataset with `imagefolder` throws `IndexError: list index out of range` when the given folder contains a non-image file (like a csv).
## Steps to reproduce the bug
Put a csv file in a folder with images and load it:
```python
import datasets
datasets.load_dataset("imagefold... | closed | https://github.com/huggingface/datasets/issues/4681 | 2022-07-14T10:57:55 | 2022-07-25T12:37:54 | 2022-07-25T12:37:54 | {
"login": "johko",
"id": 2843485,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,304,534,770 | 4,680 | Dataset Viewer issue for codeparrot/xlcost-text-to-code | ### Link
https://huggingface.co/datasets/codeparrot/xlcost-text-to-code
### Description
Error
```
Server Error
Status code: 400
Exception: TypeError
Message: 'NoneType' object is not iterable
```
Before I did a minor change in the dataset script (removing some comments), the viewer was working but... | closed | https://github.com/huggingface/datasets/issues/4680 | 2022-07-14T09:45:50 | 2022-07-18T16:37:00 | 2022-07-18T16:04:36 | {
"login": "loubnabnl",
"id": 44069155,
"type": "User"
} | [] | false | [] |
1,303,980,648 | 4,679 | Added method to remove excess nesting in a DatasetDict | Added the ability for a DatasetDict to remove additional nested layers within its features to avoid conflicts when collating. It is meant to accompany [this PR](https://github.com/huggingface/transformers/pull/18119) to resolve the same issue [#15505](https://github.com/huggingface/transformers/issues/15505).
@stas0... | closed | https://github.com/huggingface/datasets/pull/4679 | 2022-07-13T21:49:37 | 2022-07-21T15:55:26 | 2022-07-21T10:55:02 | {
"login": "CakeCrusher",
"id": 37946988,
"type": "User"
} | [] | true | [] |
1,303,741,432 | 4,678 | Cant pass streaming dataset to dataloader after take() | ## Describe the bug
I am trying to pass a streaming version of c4 to a dataloader, but it can't be passed after I call `dataset.take(n)`. Some functions such as `shuffle()` can be applied without breaking the dataloader, but not `take()`.
## Steps to reproduce the bug
```python
import datasets
import torch
dset = ... | open | https://github.com/huggingface/datasets/issues/4678 | 2022-07-13T17:34:18 | 2022-07-14T13:07:21 | null | {
"login": "zankner",
"id": 39166683,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
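Issue 4678 above concerns `take(n)` on a streamed dataset. Conceptually, `take` just truncates an unbounded iterator, which a stdlib-only sketch (no `datasets` or `torch` involved) can illustrate:

```python
from itertools import islice

# Sketch of what take(n) does on a streamed dataset: truncate the iterator.
def examples():
    i = 0
    while True:
        yield {"idx": i}
        i += 1

first_three = list(islice(examples(), 3))
assert [ex["idx"] for ex in first_three] == [0, 1, 2]
```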
1,302,258,440 | 4,677 | Random 400 Client Error when pushing dataset | ## Describe the bug
When pushing a dataset, the client errors randomly with `Bad Request for url:...`.
At the next call, a new parquet file is created for each shard.
The client may fail at any random shard.
## Steps to reproduce the bug
```python
dataset.push_to_hub("ORG/DATASET", private=True, branch="main")
... | closed | https://github.com/huggingface/datasets/issues/4677 | 2022-07-12T15:56:44 | 2023-02-07T13:54:10 | 2023-02-07T13:54:10 | {
"login": "msis",
"id": 577139,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,302,202,028 | 4,676 | Dataset.map gets stuck on _cast_to_python_objects | ## Describe the bug
`Dataset.map`, when fed a Huggingface Tokenizer as its map func, can sometimes spend huge amounts of time doing casts. A minimal example follows.
Not all usages suffer from this. For example, I profiled the preprocessor at https://github.com/huggingface/notebooks/blob/main/examples/question_an... | closed | https://github.com/huggingface/datasets/issues/4676 | 2022-07-12T15:09:58 | 2022-10-03T13:01:04 | 2022-10-03T13:01:03 | {
"login": "srobertjames",
"id": 662612,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
},
{
"name": "good first issue",
"color": "7057ff"
}
] | false | [] |
1,302,193,649 | 4,675 | Unable to use dataset with PyTorch dataloader | ## Describe the bug
When using `.with_format("torch")`, an arrow table is returned and I am unable to use it by passing it to a PyTorch DataLoader: please see the code below.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
ds = load_dataset(
... | open | https://github.com/huggingface/datasets/issues/4675 | 2022-07-12T15:04:04 | 2022-07-14T14:17:46 | null | {
"login": "BlueskyFR",
"id": 25421460,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,301,294,844 | 4,674 | Issue loading datasets -- pyarrow.lib has no attribute | ## Describe the bug
I am trying to load sentiment analysis datasets from Hugging Face, but for any dataset I try to use via `load_dataset`, I get the same error:
`AttributeError: module 'pyarrow.lib' has no attribute 'IpcReadOptions'`
## Steps to reproduce the bug
```python
dataset = load_dataset("glue", "cola")
```
... | closed | https://github.com/huggingface/datasets/issues/4674 | 2022-07-11T22:10:44 | 2023-02-28T18:06:55 | 2023-02-28T18:06:55 | {
"login": "margotwagner",
"id": 39107794,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,301,010,331 | 4,673 | load_datasets on csv returns everything as a string | ## Describe the bug
If you use:
`conll_dataset.to_csv("ner_conll.csv")`
It will create a CSV file with all of your data as expected. However, when you load it with:
`conll_dataset = load_dataset("csv", data_files="ner_conll.csv")`
everything is read in as a string. For example if I look at everything in 'n... | closed | https://github.com/huggingface/datasets/issues/4673 | 2022-07-11T17:30:24 | 2024-11-05T03:55:10 | 2022-07-12T13:33:08 | {
"login": "courtneysprouse",
"id": 25102613,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
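Issue 4673 above is the familiar CSV round-trip problem: CSV carries no type information, so everything reads back as strings. A hedged stdlib sketch (in-memory data and column names invented for illustration) shows the behavior and an explicit per-column cast as one fix:

```python
import csv
import io

# A tiny in-memory CSV standing in for the exported file.
data = "token,ner_tag\nJohn,1\nlives,0\n"
reader = csv.DictReader(io.StringIO(data))
rows = list(reader)

# csv gives back strings for every field...
assert rows[0]["ner_tag"] == "1"

# ...so numeric columns must be cast explicitly after loading.
casts = {"ner_tag": int}
typed = [{k: casts.get(k, str)(v) for k, v in row.items()} for row in rows]
assert typed[0]["ner_tag"] == 1
```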
1,300,911,467 | 4,672 | Support extract 7-zip compressed data files | Fix partially #3541, fix #4670. | closed | https://github.com/huggingface/datasets/pull/4672 | 2022-07-11T15:56:51 | 2022-07-15T13:14:27 | 2022-07-15T13:02:07 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,300,385,909 | 4,671 | Dataset Viewer issue for wmt16 | ### Link
https://huggingface.co/datasets/wmt16
### Description
[Reported](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/12#62cb83f14c7f35284e796f9c) by a user of AutoTrain Evaluate. AFAIK this dataset was working 1-2 weeks ago, and I'm not sure how to interpret this error.
```
Status cod... | closed | https://github.com/huggingface/datasets/issues/4671 | 2022-07-11T08:34:11 | 2022-09-13T13:27:02 | 2022-09-08T08:16:06 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,299,984,246 | 4,670 | Can't extract files from `.7z` zipfile using `download_and_extract` | ## Describe the bug
I'm adding a new dataset which is a `.7z` zip file in Google drive and contains 3 json files inside. I'm able to download the data files using `download_and_extract` but after downloading it throws this error:
```
>>> dataset = load_dataset("./datasets/mantis/")
Using custom data configuration d... | closed | https://github.com/huggingface/datasets/issues/4670 | 2022-07-10T18:16:49 | 2022-07-15T13:02:07 | 2022-07-15T13:02:07 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,299,848,003 | 4,669 | loading oscar-corpus/OSCAR-2201 raises an error | ## Describe the bug
load_dataset('oscar-2201', 'af')
raises an error:
Traceback (most recent call last):
File "/usr/lib/python3.8/code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
File "..python3.8/site-packages/datasets/load.py", line 1656, in load_dataset
... | closed | https://github.com/huggingface/datasets/issues/4669 | 2022-07-10T07:09:30 | 2022-07-11T09:27:49 | 2022-07-11T09:27:49 | {
"login": "vitalyshalumov",
"id": 33824221,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,299,735,893 | 4,668 | Dataset Viewer issue for hungnm/multilingual-amazon-review-sentiment-processed | ### Link
https://huggingface.co/hungnm/multilingual-amazon-review-sentiment
### Description
_No response_
### Owner
Yes | closed | https://github.com/huggingface/datasets/issues/4668 | 2022-07-09T18:04:13 | 2022-07-11T07:47:47 | 2022-07-11T07:47:47 | {
"login": "ghost",
"id": 10137,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,299,735,703 | 4,667 | Dataset Viewer issue for hungnm/multilingual-amazon-review-sentiment-processed | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | closed | https://github.com/huggingface/datasets/issues/4667 | 2022-07-09T18:03:15 | 2022-07-11T07:47:15 | 2022-07-11T07:47:15 | {
"login": "ghost",
"id": 10137,
"type": "User"
} | [
{
"name": "duplicate",
"color": "cfd3d7"
}
] | false | [] |
1,299,732,238 | 4,666 | Issues with concatenating datasets | ## Describe the bug
It is impossible to concatenate datasets if a feature is a sequence of dicts in one dataset and a dict of sequences in another. But based on the documentation, it should be converted automatically.
> A [datasets.Sequence](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datas... | closed | https://github.com/huggingface/datasets/issues/4666 | 2022-07-09T17:45:14 | 2022-07-12T17:16:15 | 2022-07-12T17:16:14 | {
"login": "ChenghaoMou",
"id": 32014649,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,299,652,638 | 4,665 | Unable to create dataset having Python dataset script only | ## Describe the bug
Hi there,
I'm trying to add the following dataset to Hugging Face Datasets: https://huggingface.co/datasets/Heriot-WattUniversity/dialog-babi/blob/
I'm trying to do so using the CLI commands, but it seems that this command generates the wrong `dataset_info.json` file (you can find it in the repo a... | closed | https://github.com/huggingface/datasets/issues/4665 | 2022-07-09T11:45:46 | 2022-07-11T07:10:09 | 2022-07-11T07:10:01 | {
"login": "aleSuglia",
"id": 1479733,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,299,571,212 | 4,664 | Add stanford dog dataset | This PR is for adding dataset, related to issue #4504.
We are adding the Stanford dog breed dataset. It is a multi-class image classification dataset.
Details can be found here - http://vision.stanford.edu/aditya86/ImageNetDogs/
Tests on dummy data are currently failing, which I am looking into. | closed | https://github.com/huggingface/datasets/pull/4664 | 2022-07-09T04:46:07 | 2022-07-15T13:30:32 | 2022-07-15T13:15:42 | {
"login": "khushmeeet",
"id": 8711912,
"type": "User"
} | [] | true | [] |
1,299,298,693 | 4,663 | Add text decorators | This PR adds some decoration to text about different modalities to make it more obvious separate guides exist for audio, vision, and text. The goal is to make it easier for users to discover these guides!
**IMPORTANT NOTE**: The fast-fail policy (described below) is not finally implemented, so that:
- we c... | closed | https://github.com/huggingface/datasets/pull/4659 | 2022-07-07T09:29:47 | 2022-07-12T11:30:20 | 2022-07-12T11:18:25 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,297,001,390 | 4,658 | Transfer CI tests to GitHub Actions | Let's try CI tests using GitHub Actions to see if they are more stable than on CircleCI. | closed | https://github.com/huggingface/datasets/issues/4658 | 2022-07-07T08:10:50 | 2022-07-12T11:18:25 | 2022-07-12T11:18:25 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | false | [] |
1,296,743,133 | 4,657 | Add SQuAD2.0 Dataset | ## Adding a Dataset
- **Name:** *SQuAD2.0*
- **Description:** *Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading ... | closed | https://github.com/huggingface/datasets/issues/4657 | 2022-07-07T03:19:36 | 2022-07-12T16:14:52 | 2022-07-12T16:14:52 | {
"login": "omarespejel",
"id": 4755430,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,296,740,266 | 4,656 | Add Amazon-QA Dataset | ## Adding a Dataset
- **Name:** *Amazon-QA*
- **Description:** *The dataset is .jsonl format, where each line in the file is a json string that corresponds to a question, existing answers to the question and the extracted review snippets (relevant to the question).*
- **Paper:** *https://github.com/amazonqa/amazonqa... | closed | https://github.com/huggingface/datasets/issues/4656 | 2022-07-07T03:15:11 | 2022-07-14T02:20:12 | 2022-07-14T02:20:12 | {
"login": "omarespejel",
"id": 4755430,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,296,720,896 | 4,655 | Simple Wikipedia | ## Adding a Dataset
- **Name:** *Simple Wikipedia*
- **Description:** *Two different versions of the data set now exist. Both were generated by aligning Simple English Wikipedia and English Wikipedia. A complete description of the extraction process can be found in "Simple English Wikipedia: A New Simplification Task... | closed | https://github.com/huggingface/datasets/issues/4655 | 2022-07-07T02:51:26 | 2022-07-14T02:16:33 | 2022-07-14T02:16:33 | {
"login": "omarespejel",
"id": 4755430,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,296,716,119 | 4,654 | Add Quora Question Triplets Dataset | ## Adding a Dataset
- **Name:** *Quora Question Triplets*
- **Description:** *This dataset consists of over 400,000 lines of potential question duplicate pairs. Each line contains IDs for each question in the pair, the full text for each question, and a binary value that indicates whether the line truly contains a du... | closed | https://github.com/huggingface/datasets/issues/4654 | 2022-07-07T02:43:42 | 2022-07-14T02:13:50 | 2022-07-14T02:13:50 | {
"login": "omarespejel",
"id": 4755430,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,296,702,834 | 4,653 | Add Altlex dataset | ## Adding a Dataset
- **Name:** *Altlex*
- **Description:** *Git repository for software associated with the 2016 ACL paper "Identifying Causal Relations Using Parallel Wikipedia Articles.”*
- **Paper:** *https://aclanthology.org/P16-1135.pdf*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embed... | closed | https://github.com/huggingface/datasets/issues/4653 | 2022-07-07T02:23:02 | 2022-07-14T02:12:39 | 2022-07-14T02:12:39 | {
"login": "omarespejel",
"id": 4755430,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,296,697,498 | 4,652 | Add Sentence Compression Dataset | ## Adding a Dataset
- **Name:** *Sentence Compression*
- **Description:** *Large corpus of uncompressed and compressed sentences from news articles.*
- **Paper:** *https://www.aclweb.org/anthology/D13-1155/*
- **Data:** *https://github.com/google-research-datasets/sentence-compression/tree/master/data*
- **Motivat... | closed | https://github.com/huggingface/datasets/issues/4652 | 2022-07-07T02:13:46 | 2022-07-14T02:11:48 | 2022-07-14T02:11:48 | {
"login": "omarespejel",
"id": 4755430,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,296,689,414 | 4,651 | Add Flickr 30k Dataset | ## Adding a Dataset
- **Name:** *Flickr 30k*
- **Description:** *To produce the denotation graph, we have created an image caption corpus consisting of 158,915 crowd-sourced captions describing 31,783 images. This is an extension of our previous Flickr 8k Dataset. The new images and captions focus on people involved ... | closed | https://github.com/huggingface/datasets/issues/4651 | 2022-07-07T01:59:08 | 2022-07-14T02:09:45 | 2022-07-14T02:09:45 | {
"login": "omarespejel",
"id": 4755430,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,296,680,037 | 4,650 | Add SPECTER dataset | ## Adding a Dataset
- **Name:** *SPECTER*
- **Description:** *SPECTER: Document-level Representation Learning using Citation-informed Transformers*
- **Paper:** *https://doi.org/10.18653/v1/2020.acl-main.207*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/spe... | open | https://github.com/huggingface/datasets/issues/4650 | 2022-07-07T01:41:32 | 2022-07-14T02:07:49 | null | {
"login": "omarespejel",
"id": 4755430,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,296,673,712 | 4,649 | Add PAQ dataset | ## Adding a Dataset
- **Name:** *PAQ*
- **Description:** *This repository contains code and models to support the research paper PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them*
- **Paper:** *https://arxiv.org/abs/2102.07033*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/... | closed | https://github.com/huggingface/datasets/issues/4649 | 2022-07-07T01:29:42 | 2022-07-14T02:06:27 | 2022-07-14T02:06:27 | {
"login": "omarespejel",
"id": 4755430,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,296,659,335 | 4,648 | Add WikiAnswers dataset | ## Adding a Dataset
- **Name:** *WikiAnswers*
- **Description:** *The WikiAnswers corpus contains clusters of questions tagged by WikiAnswers users as paraphrases. Each cluster optionally contains an answer provided by WikiAnswers users.*
- **Paper:** *https://dl.acm.org/doi/10.1145/2623330.2623677*
- **Data:** *ht... | closed | https://github.com/huggingface/datasets/issues/4648 | 2022-07-07T01:06:37 | 2022-07-14T02:03:40 | 2022-07-14T02:03:40 | {
"login": "omarespejel",
"id": 4755430,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,296,311,270 | 4,647 | Add Reddit dataset | ## Adding a Dataset
- **Name:** *Reddit comments (2015-2018)*
- **Description:** *Reddit is an American social news aggregation website, where users can post links, and take part in discussions on these posts. These threaded discussions provide a large corpus, which is converted into a conversational dataset using th... | open | https://github.com/huggingface/datasets/issues/4647 | 2022-07-06T19:49:18 | 2022-07-06T19:49:18 | null | {
"login": "omarespejel",
"id": 4755430,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,296,027,785 | 4,645 | Set HF_SCRIPTS_VERSION to main | After renaming "master" to "main", the CI fails with
```
AssertionError: 'https://raw.githubusercontent.com/huggingface/datasets/main/datasets/_dummy/_dummy.py' not found in "Couldn't find a dataset script at /home/circleci/datasets/_dummy/_dummy.py or any data file in the same directory. Couldn't find '_dummy' on th... | closed | https://github.com/huggingface/datasets/pull/4645 | 2022-07-06T15:43:21 | 2022-07-06T15:56:21 | 2022-07-06T15:45:05 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,296,018,052 | 4,644 | [Minor fix] Typo correction | recieve -> receive | closed | https://github.com/huggingface/datasets/pull/4644 | 2022-07-06T15:37:02 | 2022-07-06T15:56:32 | 2022-07-06T15:45:16 | {
"login": "cakiki",
"id": 3664563,
"type": "User"
} | [] | true | [] |
1,295,852,650 | 4,643 | Rename master to main | This PR renames mentions of "master" by "main" in the code base for several cases:
- set the default dataset script version to "main" if the local installation of `datasets` is a dev installation
- update URLs to this github repository to use "main"
- update the DVC benchmark
- update the github workflows
- update... | closed | https://github.com/huggingface/datasets/pull/4643 | 2022-07-06T13:34:30 | 2022-07-06T15:36:46 | 2022-07-06T15:25:08 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,295,748,083 | 4,642 | Streaming issue for ccdv/pubmed-summarization | ### Link
https://huggingface.co/datasets/ccdv/pubmed-summarization
### Description
This was reported by a [user of AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/7). It seems like streaming doesn't work due to the way the dataset loading script is defined?
```
Status c... | closed | https://github.com/huggingface/datasets/issues/4642 | 2022-07-06T12:13:07 | 2022-07-06T14:17:34 | 2022-07-06T14:17:34 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | false | [] |
1,295,633,250 | 4,641 | Dataset Viewer issue for kmfoda/booksum | ### Link
https://huggingface.co/datasets/kmfoda/booksum
### Description
A [user of AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/9) discovered this dataset cannot be streamed due to:
```
Status code: 400
Exception: ClientResponseError
Message: 401, messa... | closed | https://github.com/huggingface/datasets/issues/4641 | 2022-07-06T10:38:16 | 2022-07-06T13:25:28 | 2022-07-06T11:58:06 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,295,495,699 | 4,640 | Support all split in streaming mode | Fix #4637. | open | https://github.com/huggingface/datasets/pull/4640 | 2022-07-06T08:56:38 | 2022-07-06T15:19:55 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,295,367,322 | 4,639 | Add HaGRID -- HAnd Gesture Recognition Image Dataset | ## Adding a Dataset
- **Name:** HaGRID -- HAnd Gesture Recognition Image Dataset
- **Description:** We introduce a large image dataset HaGRID (HAnd Gesture Recognition Image Dataset) for hand gesture recognition (HGR) systems. You can use it for image classification or image detection tasks. Proposed dataset allows t... | open | https://github.com/huggingface/datasets/issues/4639 | 2022-07-06T07:41:32 | 2022-07-06T07:41:32 | null | {
"login": "osanseviero",
"id": 7246357,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,295,233,315 | 4,638 | The speechocean762 dataset | [speechocean762](https://www.openslr.org/101/) is a non-native English corpus for pronunciation scoring tasks. It is free for both commercial and non-commercial use.
I believe it will be easier to use if it were available on Hugging Face. | closed | https://github.com/huggingface/datasets/pull/4638 | 2022-07-06T06:17:30 | 2022-10-03T09:34:36 | 2022-10-03T09:34:36 | {
"login": "jimbozhang",
"id": 1777456,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,294,818,236 | 4,637 | The "all" split breaks streaming | ## Describe the bug
Not sure if this is a bug or just the way streaming works, but setting `streaming=True` did not work when setting `split="all"`
## Steps to reproduce the bug
The following works:
```python
ds = load_dataset('super_glue', 'wsc.fixed', split='all')
```
The following throws `ValueError: Bad ... | open | https://github.com/huggingface/datasets/issues/4637 | 2022-07-05T21:56:49 | 2022-07-15T13:59:30 | null | {
"login": "cakiki",
"id": 3664563,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
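Issue 4637 above reports that `split="all"` breaks in streaming mode. As a hedged workaround sketch (the generator below is a stand-in, not the `datasets` API — real code would get per-split iterables from `load_dataset(..., streaming=True)`), the "all" split can be emulated by chaining the individual splits:

```python
from itertools import chain

# Stand-ins for per-split streaming iterables.
def stream(split, n):
    for i in range(n):
        yield {"split": split, "idx": i}

# Emulate split="all" by chaining the individual splits.
all_examples = list(chain(stream("train", 2), stream("validation", 1)))
assert len(all_examples) == 3
assert all_examples[-1]["split"] == "validation"
```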
1,294,547,836 | 4,636 | Add info in docs about behavior of download_config.num_proc | **Is your feature request related to a problem? Please describe.**
I went to override `download_config.num_proc` and was confused about what was happening under the hood. It would be nice to have the behavior documented a bit better so folks know what's happening when they use it.
**Describe the solution you'd li... | closed | https://github.com/huggingface/datasets/issues/4636 | 2022-07-05T17:01:00 | 2022-07-28T10:40:32 | 2022-07-28T10:40:32 | {
"login": "nateraw",
"id": 32437151,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,294,475,931 | 4,635 | Dataset Viewer issue for vadis/sv-ident | ### Link
https://huggingface.co/datasets/vadis/sv-ident/viewer/default/validation
### Description
Error message when loading validation split in the viewer:
```
Status code: 400
Exception: Status400Error
Message: The split cache is empty.
```
### Owner
_No response_ | closed | https://github.com/huggingface/datasets/issues/4635 | 2022-07-05T15:48:13 | 2022-07-06T07:13:33 | 2022-07-06T07:12:14 | {
"login": "e-tornike",
"id": 20404466,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,294,405,251 | 4,634 | Can't load the Hausa audio dataset | common_voice_train = load_dataset("common_voice", "ha", split="train+validation") | closed | https://github.com/huggingface/datasets/issues/4634 | 2022-07-05T14:47:36 | 2022-09-13T14:07:32 | 2022-09-13T14:07:32 | {
"login": "moro23",
"id": 19976800,
"type": "User"
} | [] | false | [] |
1,294,367,783 | 4,633 | [data_files] Only match separated split names | As reported in https://github.com/huggingface/datasets/issues/4477, the current pattern matching to infer which file goes into which split is too permissive. For example, a file "contest.py" would be considered part of a test split (it contains "test"), and "seqeval.py" as well (it contains "eval").
In this PR I made ... | closed | https://github.com/huggingface/datasets/pull/4633 | 2022-07-05T14:18:11 | 2022-07-18T13:20:29 | 2022-07-18T13:07:33 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
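The stricter matching can be sketched with a regex that only accepts the split name when it is delimited by non-alphanumeric characters or string boundaries (illustrative; the actual patterns in the PR may differ):

```python
import re

def matches_split(filename, split):
    # accept "test" only when separated, so "contest.py" no longer matches
    pattern = re.compile(rf"(?:^|[^a-zA-Z0-9]){split}(?:[^a-zA-Z0-9]|$)")
    return bool(pattern.search(filename))

checks = [
    matches_split("contest.py", "test"),         # substring only -> rejected
    matches_split("seqeval.py", "eval"),         # substring only -> rejected
    matches_split("my_test.csv", "test"),        # separated -> accepted
    matches_split("test-00001.parquet", "test"), # at a boundary -> accepted
]
```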
1,294,166,880 | 4,632 | 'sort' method sorts one column only | The 'sort' method changes the order of one column only (the one defined by the argument 'column'), thus creating a mismatch between a sample fields. I would expect it to change the order of the samples as a whole, based on the 'column' order. | closed | https://github.com/huggingface/datasets/issues/4632 | 2022-07-05T11:25:26 | 2023-07-25T15:04:27 | 2023-07-25T15:04:27 | {
"login": "shachardon",
"id": 42108562,
"type": "User"
} | [] | false | [] |
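For reference, the expected behavior is that sorting by one column reorders whole rows, keeping every other column aligned with the sort key. Sketched here with plain Python columns:

```python
columns = {
    "score": [3, 1, 2],
    "label": ["c", "a", "b"],
}

# compute the permutation induced by the sort key, then apply it to every column
order = sorted(range(len(columns["score"])), key=columns["score"].__getitem__)
sorted_columns = {
    name: [values[i] for i in order] for name, values in columns.items()
}
```

After sorting, row i still pairs the same score with the same label; only the row order changes.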
1,293,545,900 | 4,631 | Update WinoBias README | I'm adding some information about Winobias that I got from the paper :smile:
I think this makes it a bit clearer! | closed | https://github.com/huggingface/datasets/pull/4631 | 2022-07-04T20:24:40 | 2022-07-07T13:23:32 | 2022-07-07T13:11:47 | {
"login": "sashavor",
"id": 14205986,
"type": "User"
} | [] | true | [] |
1,293,470,728 | 4,630 | fix(dataset_wrappers): Fixes access to fsspec.asyn in torch_iterable_dataset.py. | Fix #4612.
Apparently, the newest `fsspec` versions do not allow access to attribute-based modules, such as `fsspec.asyn`, if they are not imported.
Thus, @mariosasko suggested to add the missing part to the module import to allow for its access. | closed | https://github.com/huggingface/datasets/pull/4630 | 2022-07-04T18:26:55 | 2022-07-05T15:19:52 | 2022-07-05T15:08:21 | {
"login": "gugarosa",
"id": 4120639,
"type": "User"
} | [] | true | [] |
1,293,418,800 | 4,629 | Rename repo default branch to main | Rename repository default branch to `main` (instead of current `master`).
Once renamed, users will have to manually update their local repos:
- [ ] Upstream:
```
git branch -m master main
git fetch upstream main
git branch -u upstream/main main
git remote set-head upstream -a
```
- [ ] Origin... | closed | https://github.com/huggingface/datasets/issues/4629 | 2022-07-04T17:16:10 | 2022-07-06T15:49:57 | 2022-07-06T15:49:57 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "maintenance",
"color": "d4c5f9"
}
] | false | [] |
1,293,361,308 | 4,628 | Fix time type `_arrow_to_datasets_dtype` conversion | Fix #4620
The issue stems from the fact that `pa.array([time_data]).type` returns `DataType(time64[unit])`, which doesn't expose the `unit` attribute, instead of `Time64Type(time64[unit])`. I believe this is a bug in PyArrow. Luckily, both types have the same `str()`, so in this PR I call `pa.type_for_alias(str(... | closed | https://github.com/huggingface/datasets/pull/4628 | 2022-07-04T16:20:15 | 2022-07-07T14:08:38 | 2022-07-07T13:57:12 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,293,287,798 | 4,627 | fixed duplicate calculation of spearmanr function in metrics wrapper. | During _compute, the scipy.stats spearmanr function was called twice, redundantly, once for calculating the score and once for calculating the p-value, under the conditional branch where return_pvalue=True. I adjusted the _compute function to execute the spearmanr function once, store the results tuple in a temporary v... | closed | https://github.com/huggingface/datasets/pull/4627 | 2022-07-04T15:02:01 | 2022-07-07T12:41:09 | 2022-07-07T12:41:09 | {
"login": "benlipkin",
"id": 38060297,
"type": "User"
} | [] | true | [] |
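The pattern of the fix — evaluate once and reuse the result tuple — can be checked with a counting stub in place of `scipy.stats.spearmanr` (the returned values here are illustrative):

```python
calls = {"n": 0}

def spearmanr_stub(xs, ys):
    # stand-in for scipy.stats.spearmanr; returns (statistic, p-value)
    calls["n"] += 1
    return (0.75, 0.01)

def compute(xs, ys, return_pvalue=False):
    results = spearmanr_stub(xs, ys)  # evaluated exactly once
    if return_pvalue:
        return {"spearmanr": results[0], "spearmanr_pvalue": results[1]}
    return {"spearmanr": results[0]}

out = compute([1, 2, 3], [3, 2, 1], return_pvalue=True)
```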
1,293,256,269 | 4,626 | Add non-commercial licensing info for datasets for which we removed tags | We removed several YAML tags saying that certain datasets can't be used for commercial purposes: https://github.com/huggingface/datasets/pull/4613#discussion_r911919753
The reason for this is that we only allow tags that are part of our [supported list of licenses](https://github.com/huggingface/datasets/blob/84fc3ad73c... | open | https://github.com/huggingface/datasets/issues/4626 | 2022-07-04T14:32:43 | 2022-07-08T14:27:29 | null | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | false | [] |
1,293,163,744 | 4,625 | Unpack `dl_manager.iter_files` to allow parallization | Iterate over data files outside `dl_manager.iter_files` to allow parallelization in streaming mode.
(The issue reported [here](https://discuss.huggingface.co/t/dataset-only-have-n-shard-1-when-has-multiple-shards-in-repo/19887))
PS: Another option would be to override `FilesIterable.__getitem__` to make it indexa... | closed | https://github.com/huggingface/datasets/pull/4625 | 2022-07-04T13:16:58 | 2022-07-05T11:11:54 | 2022-07-05T11:00:48 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
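Making the file list available up front is what enables sharding it across workers. A strided split is a common scheme for this (a sketch, not the library's code):

```python
def shard_files(files, num_shards, index):
    # worker `index` takes every num_shards-th file: balanced, no coordination
    return files[index::num_shards]

files = [f"part-{i}.txt" for i in range(7)]
shards = [shard_files(files, 3, i) for i in range(3)]
```

The shards are disjoint and together cover the full list, so each worker can stream its slice independently.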
1,293,085,058 | 4,624 | Remove all paperswithcode_id: null | On the Hub there is a validation error on the `paperswithcode_id` tag when the value is `null`:
<img width="686" alt="image" src="https://user-images.githubusercontent.com/42851186/177151825-93d341c5-25bd-41ab-96c2-c0b516d51c68.png">
We've been using `null` to specify that we checked on pwc but the dataset doesn'... | closed | https://github.com/huggingface/datasets/pull/4624 | 2022-07-04T12:11:32 | 2023-09-24T10:05:19 | 2022-07-04T13:10:38 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,293,042,894 | 4,623 | Loading MNIST as Pytorch Dataset | ## Describe the bug
Conversion of the MNIST dataset to PyTorch fails with a bug
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("mnist", split="train")
dataset.set_format('torch')
print(dataset[0])
```
## Expected results
Expect to see torch tensors image and l... | open | https://github.com/huggingface/datasets/issues/4623 | 2022-07-04T11:33:10 | 2022-07-04T14:40:50 | null | {
"login": "jameschapman19",
"id": 56592797,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
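For background, `set_format` works by routing raw Python values through a per-format converter at access time. A toy sketch of that dispatch (the real `'torch'` formatter calls `torch.tensor`, mocked here as a tagged tuple):

```python
FORMATTERS = {
    "python": lambda v: v,
    "torch": lambda v: ("tensor", v),  # stand-in for torch.tensor(v)
}

class TinyDataset:
    def __init__(self, rows):
        self.rows = rows
        self.format = "python"

    def set_format(self, fmt):
        self.format = fmt

    def __getitem__(self, i):
        convert = FORMATTERS[self.format]
        return {k: convert(v) for k, v in self.rows[i].items()}

ds = TinyDataset([{"label": 5, "pixels": [0, 1, 2]}])
ds.set_format("torch")
example = ds[0]
```

The bug report above is about a column type (images) that falls through this conversion step.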
1,293,031,939 | 4,622 | Fix ImageFolder with parameters drop_metadata=True and drop_labels=False (when metadata.jsonl is present) | Will fix #4621
ImageFolder raises `KeyError: 'label'` with params `drop_metadata=True` and `drop_labels=False` (if there is at least one metadata.jsonl file in a data directory). This happens because metadata files are collected inside the `analyze()` function regardless of the `drop_metadata` value. And then the following co... | closed | https://github.com/huggingface/datasets/pull/4622 | 2022-07-04T11:23:20 | 2022-07-15T14:37:23 | 2022-07-15T14:24:24 | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [] | true | [] |
1,293,030,128 | 4,621 | ImageFolder raises an error with parameters drop_metadata=True and drop_labels=False when metadata.jsonl is present | ## Describe the bug
If you pass `drop_metadata=True` and `drop_labels=False` when a `data_dir` contains at least one `metadata.jsonl` file, you will get a KeyError. This is probably not a very useful case but we shouldn't get an error anyway. Asking users to move metadata files manually outside `data_dir` or pass fe... | closed | https://github.com/huggingface/datasets/issues/4621 | 2022-07-04T11:21:44 | 2022-07-15T14:24:24 | 2022-07-15T14:24:24 | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,292,797,878 | 4,620 | Data type is not recognized when using datetime.time | ## Describe the bug
Creating a dataset from a pandas dataframe with `datetime.time` format generates an error.
## Steps to reproduce the bug
```python
import pandas as pd
from datetime import time
from datasets import Dataset
df = pd.DataFrame({"feature_name": [time(1, 1, 1)]})
dataset = Dataset.from_pandas... | closed | https://github.com/huggingface/datasets/issues/4620 | 2022-07-04T08:13:38 | 2022-07-07T13:57:11 | 2022-07-07T13:57:11 | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
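For reference, Arrow's `time64[us]` type stores a `datetime.time` as microseconds since midnight, so the value itself converts cleanly; the bug above was only in recognizing the type string. The conversion can be checked directly with the stdlib:

```python
from datetime import time

def time_to_us(t):
    # what a time64[us] column stores: microseconds since midnight
    return ((t.hour * 60 + t.minute) * 60 + t.second) * 1_000_000 + t.microsecond

us = time_to_us(time(1, 1, 1))  # 1h 1m 1s = 3661 seconds
```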
1,292,107,275 | 4,619 | np arrays get turned into native lists | ## Describe the bug
When attaching an `np.array` field, it seems that it automatically gets turned into a list (see below). Why is this happening? Could it lose precision? Is there a way to make sure this doesn't happen?
## Steps to reproduce the bug
```python
>>> import datasets, numpy as np
>>> dataset = datas... | open | https://github.com/huggingface/datasets/issues/4619 | 2022-07-02T17:54:57 | 2022-07-03T20:27:07 | null | {
"login": "ZhaofengWu",
"id": 11954789,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
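On the precision question: a Python `float` is the same 64-bit IEEE-754 double that a `float64` Arrow column stores, so converting such values to a plain list is lossless. A quick round-trip check through the packed representation:

```python
import struct

x = 0.1 + 0.2  # an arbitrary double with a non-trivial bit pattern

packed = struct.pack("<d", x)              # the 8 bytes a float64 column holds
restored = struct.unpack("<d", packed)[0]  # back to a Python float
```

Note this only covers float64; narrower dtypes would be widened on the way out, not truncated.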
1,292,078,225 | 4,618 | contribute data loading for object detection datasets with yolo data format | **Is your feature request related to a problem? Please describe.**
At the moment, HF datasets loads [image classification datasets](https://huggingface.co/docs/datasets/image_process) out-of-the-box. There could be a data loader for loading standard object detection datasets ([original discussion here](https://hugging... | open | https://github.com/huggingface/datasets/issues/4618 | 2022-07-02T15:21:59 | 2022-07-21T14:10:44 | null | {
"login": "faizankshaikh",
"id": 8406903,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
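For concreteness, each line of a YOLO label file is `class x_center y_center width height`, with coordinates normalized to the image size. A loader would parse one line roughly like this (a sketch following the standard YOLO convention):

```python
def parse_yolo_line(line, img_w, img_h):
    # YOLO stores normalized center/size; convert to absolute top-left xywh
    cls, xc, yc, w, h = line.split()
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    x0 = (xc - w / 2) * img_w
    y0 = (yc - h / 2) * img_h
    return {"class": int(cls), "bbox": [x0, y0, w * img_w, h * img_h]}

box = parse_yolo_line("0 0.5 0.5 0.5 0.25", img_w=100, img_h=200)
```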
1,291,307,428 | 4,615 | Fix `embed_storage` on features inside lists/sequences | Add a dedicated function for embed_storage to always preserve the embedded/casted arrays (and to have more control over `embed_storage` in general).
Fix #4591
~~(Waiting for #4608 to be merged to mark this PR as ready for review - required for fixing `xgetsize` in private repos)~~ Done! | closed | https://github.com/huggingface/datasets/pull/4615 | 2022-07-01T11:52:08 | 2022-07-08T12:13:10 | 2022-07-08T12:01:36 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,291,218,020 | 4,614 | Ensure ConcatenationTable.cast uses target_schema metadata | Currently, `ConcatenationTable.cast` does not use target_schema metadata when casting subtables. This causes an issue when using cast_column and the underlying table is a ConcatenationTable.
Code example of where issue arrises:
```
from datasets import Dataset, Image
column1 = [0, 1]
image_paths = ['/images/im... | closed | https://github.com/huggingface/datasets/pull/4614 | 2022-07-01T10:22:08 | 2022-07-19T13:48:45 | 2022-07-19T13:36:24 | {
"login": "dtuit",
"id": 8114067,
"type": "User"
} | [] | true | [] |
1,291,181,193 | 4,613 | Align/fix license metadata info | fix bad "other-*" licenses and add the corresponding "license_details" when relevant | closed | https://github.com/huggingface/datasets/pull/4613 | 2022-07-01T09:50:50 | 2022-07-01T12:53:57 | 2022-07-01T12:42:47 | {
"login": "julien-c",
"id": 326577,
"type": "User"
} | [] | true | [] |
1,290,984,660 | 4,612 | Release 2.3.0 broke custom iterable datasets | ## Describe the bug
Trying to iterate examples from a custom iterable dataset fails due to a bug introduced in `torch_iterable_dataset.py` in the 2.3.0 release.
## Steps to reproduce the bug
```python
next(iter(custom_iterable_dataset))
```
## Expected results
`next(iter(custom_iterable_dataset))` should retu... | closed | https://github.com/huggingface/datasets/issues/4612 | 2022-07-01T06:46:07 | 2022-07-05T15:08:21 | 2022-07-05T15:08:21 | {
"login": "aapot",
"id": 19529125,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,290,940,874 | 4,611 | Preserve member order by MockDownloadManager.iter_archive | Currently, `MockDownloadManager.iter_archive` yields paths to archive members in an order given by `path.rglob("*")`, which migh not be the same order as in the original archive.
See issue in:
- https://github.com/huggingface/datasets/pull/4579#issuecomment-1172135027
This PR fixes the order of the members yield... | closed | https://github.com/huggingface/datasets/pull/4611 | 2022-07-01T05:48:20 | 2022-07-01T16:59:11 | 2022-07-01T16:48:28 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,290,603,827 | 4,610 | codeparrot/github-code failing to load | ## Describe the bug
codeparrot/github-code fails to load with a `TypeError: get_patterns_in_dataset_repository() missing 1 required positional argument: 'base_path'`
## Steps to reproduce the bug
```python
from datasets import load_dataset
```
## Expected results
loaded dataset object
## Actual results
`... | closed | https://github.com/huggingface/datasets/issues/4610 | 2022-06-30T20:24:48 | 2022-07-05T14:24:13 | 2022-07-05T09:19:56 | {
"login": "PyDataBlog",
"id": 29863388,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,290,392,083 | 4,609 | librispeech dataset has to download whole subset when specifying the split to use | ## Describe the bug
The librispeech dataset has to download the whole subset when specifying the split to use
## Steps to reproduce the bug
see below
# Sample code to reproduce the bug
```
!pip install datasets
from datasets import load_dataset
raw_dataset = load_dataset("librispeech_asr", "clean", split="train.100")
... | closed | https://github.com/huggingface/datasets/issues/4609 | 2022-06-30T16:38:24 | 2022-07-12T21:44:32 | 2022-07-12T21:44:32 | {
"login": "sunhaozhepy",
"id": 73462159,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
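The usual way around a full download is `streaming=True`, which yields examples lazily so only what is consumed gets fetched. The laziness itself is just generator semantics, sketched here without the network:

```python
from itertools import islice

def stream_examples():
    # stand-in for an IterableDataset: nothing is materialized up front
    for i in range(10**9):  # far more than we will ever touch
        yield {"id": i}

# only the consumed prefix is ever produced
first_three = list(islice(stream_examples(), 3))
```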
1,290,298,002 | 4,608 | Fix xisfile, xgetsize, xisdir, xlistdir in private repo | `xisfile` is working in a private repository when passing a chained URL to a file inside an archive, e.g. `zip://a.txt::https://huggingface/datasets/username/dataset_name/resolve/main/data.zip`. However it's not working when passing a simple file `https://huggingface/datasets/username/dataset_name/resolve/main/data.zip... | closed | https://github.com/huggingface/datasets/pull/4608 | 2022-06-30T15:23:21 | 2022-07-06T12:45:59 | 2022-07-06T12:34:19 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,290,171,941 | 4,607 | Align more metadata with other repo types (models,spaces) | see also associated PR on the `datasets-tagging` Space: https://huggingface.co/spaces/huggingface/datasets-tagging/discussions/2 (to merge after this one is merged) | closed | https://github.com/huggingface/datasets/pull/4607 | 2022-06-30T13:52:12 | 2022-07-01T12:00:37 | 2022-07-01T11:49:14 | {
"login": "julien-c",
"id": 326577,
"type": "User"
} | [] | true | [] |
1,290,083,534 | 4,606 | evaluation result changes after `datasets` version change | ## Describe the bug
evaluation result changes after `datasets` version change
## Steps to reproduce the bug
1. Train a model on WikiAnn
2. reload the ckpt -> test accuracy becomes same as eval accuracy
3. such behavior is gone after downgrading `datasets`
https://colab.research.google.com/drive/1kYz7-aZRGdaya... | closed | https://github.com/huggingface/datasets/issues/4606 | 2022-06-30T12:43:26 | 2023-07-25T15:05:26 | 2023-07-25T15:05:26 | {
"login": "thnkinbtfly",
"id": 70014488,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,290,058,970 | 4,605 | Dataset Viewer issue for boris/gis_filtered | ### Link
https://huggingface.co/datasets/boris/gis_filtered/viewer/boris--gis_filtered/train
### Description
When I try to access this from the website I get this error:
Status code: 400
Exception: ClientResponseError
Message: 401, message='Unauthorized', url=URL('https://huggingface.co/datase... | closed | https://github.com/huggingface/datasets/issues/4605 | 2022-06-30T12:23:34 | 2022-07-06T12:34:19 | 2022-07-06T12:34:19 | {
"login": "WaterKnight1998",
"id": 41203448,
"type": "User"
} | [
{
"name": "streaming",
"color": "fef2c0"
}
] | false | [] |
1,289,963,962 | 4,604 | Update CI Windows orb | This PR tries to fix recurrent random CI failures on Windows.
After 2 runs, it seems to have fixed the issue.
Fix #4603. | closed | https://github.com/huggingface/datasets/pull/4604 | 2022-06-30T11:00:31 | 2022-06-30T13:33:11 | 2022-06-30T13:22:26 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,289,963,331 | 4,603 | CI fails recurrently and randomly on Windows | As reported by @lhoestq,
The windows CI is currently flaky: some dependencies like `aiobotocore`, `multiprocess` and `seqeval` sometimes fail to install.
In particular it seems that building the wheels fail. Here is an example of logs:
```
Building wheel for seqeval (setup.py): started
Running command 'C:\to... | closed | https://github.com/huggingface/datasets/issues/4603 | 2022-06-30T10:59:58 | 2022-06-30T13:22:25 | 2022-06-30T13:22:25 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,289,950,379 | 4,602 | Upgrade setuptools in windows CI | The windows CI is currently flaky: some dependencies like aiobotocore, multiprocess and seqeval sometimes fail to install.
In particular it seems that building the wheels fail. Here is an example of logs
```
Building wheel for seqeval (setup.py): started
Running command 'C:\tools\miniconda3\envs\py37\python.exe... | closed | https://github.com/huggingface/datasets/pull/4602 | 2022-06-30T10:48:41 | 2023-09-24T10:05:10 | 2022-06-30T12:46:17 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,289,924,715 | 4,601 | Upgrade pip in WIN CI | The windows CI is currently flaky: some dependencies like aiobotocore, multiprocess and seqeval sometimes fail to install.
In particular it seems that building the wheels fail. Here is an example of logs
```
Building wheel for seqeval (setup.py): started
Running command 'C:\tools\miniconda3\envs\py37\python.exe... | closed | https://github.com/huggingface/datasets/pull/4601 | 2022-06-30T10:25:42 | 2023-09-24T10:04:25 | 2022-06-30T10:43:38 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,289,177,042 | 4,600 | Remove multiple config section | This PR removes docs for a future feature and redirects to #4578 instead. See this [discussion](https://huggingface.slack.com/archives/C034N0A7H09/p1656107063801969) for more details :) | closed | https://github.com/huggingface/datasets/pull/4600 | 2022-06-29T19:09:21 | 2022-07-04T17:41:20 | 2022-07-04T17:29:41 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
1,288,849,933 | 4,599 | Smooth-BLEU bug fixed | Hi,
the current implementation of smooth-BLEU contains a bug: it smoothes unigrams as well. Consequently, when both the reference and translation consist of totally different tokens, it anyway returns a non-zero value (please see the attached image).
This however contradicts the source paper suggesting the smoot... | closed | https://github.com/huggingface/datasets/pull/4599 | 2022-06-29T14:51:42 | 2022-09-23T07:42:40 | 2022-09-23T07:42:40 | {
"login": "Aktsvigun",
"id": 36672861,
"type": "User"
} | [
{
"name": "transfer-to-evaluate",
"color": "E3165C"
}
] | true | [] |
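The fix follows the smoothing from Lin and Och (2004): add one to the match and total counts for n-gram orders of 2 and above only, leaving unigram precision unsmoothed so fully disjoint sentences still score zero. A minimal sketch of the corrected precisions:

```python
def smoothed_precisions(matches, totals):
    # smooth-BLEU: smooth orders >= 2 only; unigram precision stays exact
    precisions = []
    for n, (m, t) in enumerate(zip(matches, totals), start=1):
        if n == 1:
            precisions.append(m / t if t else 0.0)
        else:
            precisions.append((m + 1) / (t + 1))
    return precisions

# reference and translation share no tokens: zero matches at every order
p = smoothed_precisions([0, 0, 0, 0], [4, 3, 2, 1])
```

With the unigram precision at 0.0, the geometric mean and hence the BLEU score is zero, as the paper intends.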
1,288,774,514 | 4,598 | Host financial_phrasebank data on the Hub |
Fix #4597. | closed | https://github.com/huggingface/datasets/pull/4598 | 2022-06-29T13:59:31 | 2022-07-01T09:41:14 | 2022-07-01T09:29:36 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,288,672,007 | 4,597 | Streaming issue for financial_phrasebank | ### Link
https://huggingface.co/datasets/financial_phrasebank/viewer/sentences_allagree/train
### Description
As reported by a community member using [AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/5#62bc217436d0e5d316a768f0), there seems to be a problem streaming this dat... | closed | https://github.com/huggingface/datasets/issues/4597 | 2022-06-29T12:45:43 | 2022-07-01T09:29:36 | 2022-07-01T09:29:36 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "hosted-on-google-drive",
"color": "8B51EF"
}
] | false | [] |
1,288,381,735 | 4,596 | Dataset Viewer issue for universal_dependencies | ### Link
https://huggingface.co/datasets/universal_dependencies
### Description
invalid json response body at https://datasets-server.huggingface.co/splits?dataset=universal_dependencies reason: Unexpected token I in JSON at position 0
### Owner
_No response_ | closed | https://github.com/huggingface/datasets/issues/4596 | 2022-06-29T08:50:29 | 2022-09-07T11:29:28 | 2022-09-07T11:29:27 | {
"login": "Jordy-VL",
"id": 16034009,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,288,275,976 | 4,595 | Dataset Viewer issue with False positive PII redaction | ### Link
https://huggingface.co/datasets/cakiki/rosetta-code
### Description
Hello, I just noticed an entry being redacted that shouldn't have been:
`RootMeanSquare@Range[10]` is being displayed as `[email protected][10]`
### Owner
_No response_ | closed | https://github.com/huggingface/datasets/issues/4595 | 2022-06-29T07:15:57 | 2022-06-29T08:29:41 | 2022-06-29T08:27:49 | {
"login": "cakiki",
"id": 3664563,
"type": "User"
} | [] | false | [] |
1,288,070,023 | 4,594 | load_from_disk suggests incorrect fix when used to load DatasetDict | Edit: Please feel free to remove this issue. The problem was not the error message but the fact that the DatasetDict.load_from_disk does not support loading nested splits, i.e. if one of the splits is itself a DatasetDict. If nesting splits is an antipattern, perhaps the load_from_disk function can throw a warning indi... | closed | https://github.com/huggingface/datasets/issues/4594 | 2022-06-29T01:40:01 | 2022-06-29T04:03:44 | 2022-06-29T04:03:44 | {
"login": "dvsth",
"id": 11157811,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
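For context, the top-level loader can tell the two layouts apart by their marker files: `save_to_disk` writes a `dataset_dict.json` for a `DatasetDict` and a `dataset_info.json` for a single `Dataset`. A simplified dispatch sketch (the error text is illustrative, not the library's exact message):

```python
import json
import os
import tempfile

def load_from_disk(path):
    # simplified dispatch on the marker files written by save_to_disk
    if os.path.isfile(os.path.join(path, "dataset_dict.json")):
        return "DatasetDict"
    if os.path.isfile(os.path.join(path, "dataset_info.json")):
        return "Dataset"
    raise FileNotFoundError(
        f"Directory {path} is neither a `Dataset` nor a `DatasetDict` directory."
    )

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "dataset_dict.json"), "w") as f:
        json.dump({"splits": ["train"]}, f)
    kind = load_from_disk(d)
```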
1,288,067,699 | 4,593 | Fix error message when using load_from_disk to load DatasetDict | Issue #4594
Issue: When `datasets.load_from_disk` is wrongly used to load a `DatasetDict`, the error message suggests using `datasets.load_from_disk`, which is the same function that generated the error.
Fix: The appropriate function which should be suggested instead is `datasets.dataset_dict.load_from_disk`.
Chan... | closed | https://github.com/huggingface/datasets/pull/4593 | 2022-06-29T01:34:27 | 2022-06-29T04:01:59 | 2022-06-29T04:01:39 | {
"login": "dvsth",
"id": 11157811,
"type": "User"
} | [] | true | [] |