| id (int64) | number (int64) | title (string) | body (string) | state (string) | html_url (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | user (dict) | labels (list) | is_pull_request (bool) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,406,736,710 | 5,107 | Multiprocessed dataset builder | This PR adds the multiprocessing part of #2650 (but not the caching of already-computed arrow files). On the other side, loading of sharded arrow files still needs to be implemented (sharded parquet files can already be loaded). | closed | https://github.com/huggingface/datasets/pull/5107 | 2022-10-12T19:59:17 | 2022-12-01T15:37:09 | 2022-11-09T17:11:43 | {
"login": "TevenLeScao",
"id": 26709476,
"type": "User"
} | [] | true | [] |
1,406,635,758 | 5,106 | Fix task template reload from dict | Since #4926, the JSON dumps are simplified, which made task template dicts empty by default.
I fixed this by always including the task name, which is needed to reload a task from a dict. | closed | https://github.com/huggingface/datasets/pull/5106 | 2022-10-12T18:33:49 | 2022-10-13T09:59:07 | 2022-10-13T09:56:51 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,406,078,357 | 5,105 | Specifying an existing folder in download_and_prepare deletes everything in it | ## Describe the bug
The builder correctly creates the `output_dir` folder if it doesn't exist, but if the folder exists everything within it is deleted. Specifying `"."` as the `output_dir` deletes everything in your current dir but also leads to **another bug** whose traceback is the following:
```
... | open | https://github.com/huggingface/datasets/issues/5105 | 2022-10-12T11:53:33 | 2022-10-20T11:53:59 | null | {
"login": "cakiki",
"id": 3664563,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,405,973,102 | 5,104 | Fix loading how to guide (#5102) | null | closed | https://github.com/huggingface/datasets/pull/5104 | 2022-10-12T10:34:42 | 2022-10-12T11:34:07 | 2022-10-12T11:31:55 | {
"login": "riccardobucco",
"id": 9295277,
"type": "User"
} | [] | true | [] |
1,405,956,311 | 5,103 | url encode hub url (#5099) | null | closed | https://github.com/huggingface/datasets/pull/5103 | 2022-10-12T10:22:12 | 2022-10-12T15:27:24 | 2022-10-12T15:24:47 | {
"login": "riccardobucco",
"id": 9295277,
"type": "User"
} | [] | true | [] |
1,404,746,554 | 5,102 | Error in create a dataset from a Python generator | ## Describe the bug
In HOW-TO-GUIDES > Load > [Python generator](https://huggingface.co/docs/datasets/v2.5.2/en/loading#python-generator), the code example defines the `my_gen` function, but when creating the dataset, an undefined `my_dict` is passed in.
```Python
>>> from datasets import Dataset
>>> def my_gen... | closed | https://github.com/huggingface/datasets/issues/5102 | 2022-10-11T14:28:58 | 2022-10-12T11:31:56 | 2022-10-12T11:31:56 | {
"login": "yangxuhui",
"id": 9004682,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
},
{
"name": "good first issue",
"color": "7057ff"
},
{
"name": "hacktoberfest",
"color": "DF8D62"
}
] | false | [] |
1,404,513,085 | 5,101 | Free the "hf" filesystem protocol for `hffs` | null | closed | https://github.com/huggingface/datasets/pull/5101 | 2022-10-11T11:57:21 | 2022-10-12T15:32:59 | 2022-10-12T15:30:38 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,404,458,586 | 5,100 | datasets[s3] sagemaker can't run a model - datasets issue with Value and ClassLabel and cast() method | null | closed | https://github.com/huggingface/datasets/issues/5100 | 2022-10-11T11:16:31 | 2022-10-11T13:48:26 | 2022-10-11T13:48:26 | {
"login": "jagochi",
"id": 115545475,
"type": "User"
} | [] | false | [] |
1,404,370,191 | 5,099 | datasets doesn't support # in data paths | ## Describe the bug
Paths of dataset files containing the `#` symbol aren't read correctly.
## Steps to reproduce the bug
The data in the folder `c#` of this [dataset](https://huggingface.co/datasets/loubnabnl/bigcode_csharp) can't be loaded, while the folder `c_sharp` with the same data is loaded properly.
```python
ds = lo... | closed | https://github.com/huggingface/datasets/issues/5099 | 2022-10-11T10:05:32 | 2022-10-13T13:14:20 | 2022-10-13T13:14:20 | {
"login": "loubnabnl",
"id": 44069155,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
},
{
"name": "good first issue",
"color": "7057ff"
},
{
"name": "hacktoberfest",
"color": "DF8D62"
}
] | false | [] |
1,404,058,518 | 5,098 | Classes label error when loading symbolic links using imagefolder | **Is your feature request related to a problem? Please describe.**
Like this: #4015
When there are **symbolic links** to pictures in the data folder, the parent folder name of the **real file** will be used as the class name instead of the parent folder of the symbolic link itself. Can you give an option to decide wh... | closed | https://github.com/huggingface/datasets/issues/5098 | 2022-10-11T06:10:58 | 2022-11-14T14:40:20 | 2022-11-14T14:40:20 | {
"login": "horizon86",
"id": 49552732,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "good first issue",
"color": "7057ff"
},
{
"name": "hacktoberfest",
"color": "DF8D62"
}
] | false | [] |
1,403,679,353 | 5,097 | Fatal error with pyarrow/libarrow.so | ## Describe the bug
When using datasets, at the very end of my jobs the program crashes (see trace below).
It doesn't seem to affect anything, as it appears to happen as the program is closing down. Just importing `datasets` is enough to cause the error.
## Steps to reproduce the bug
This is sufficient to reprodu... | closed | https://github.com/huggingface/datasets/issues/5097 | 2022-10-10T20:29:04 | 2022-10-11T06:56:01 | 2022-10-11T06:56:00 | {
"login": "catalys1",
"id": 11340846,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,403,379,816 | 5,096 | Transfer some canonical datasets under an organization namespace | As discussed during our @huggingface/datasets meeting, we are planning to move some "canonical" dataset scripts under their corresponding organization namespace (if this does not exist).
Conversely, if the dataset already exists under the organization namespace, we are deprecating the canonical one (and eventua... | closed | https://github.com/huggingface/datasets/issues/5096 | 2022-10-10T15:44:31 | 2024-06-24T06:06:28 | 2024-06-24T06:02:45 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | false | [] |
1,403,221,408 | 5,095 | Fix tutorial (#5093) | Close #5093 | closed | https://github.com/huggingface/datasets/pull/5095 | 2022-10-10T13:55:15 | 2022-10-10T17:50:52 | 2022-10-10T15:32:20 | {
"login": "riccardobucco",
"id": 9295277,
"type": "User"
} | [] | true | [] |
1,403,214,950 | 5,094 | Multiprocessing with `Dataset.map` and `PyTorch` results in deadlock | ## Describe the bug
There seems to be an issue with using multiprocessing with `datasets.Dataset.map` (i.e. setting `num_proc` to a value greater than one) combined with a function that uses `torch` under the hood. The subprocesses that `datasets.Dataset.map` spawns [a this step](https://github.com/huggingface/datase... | closed | https://github.com/huggingface/datasets/issues/5094 | 2022-10-10T13:50:56 | 2023-07-24T15:29:13 | 2023-07-24T15:29:13 | {
"login": "RR-28023",
"id": 36822895,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,402,939,660 | 5,093 | Mismatch between tutorial and doc | ## Describe the bug
In the "Process text data" tutorial, [`map` has `return_tensors` as kwarg](https://huggingface.co/docs/datasets/main/en/nlp_process#map). It does not seem to appear in the [function documentation](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.map), nor... | closed | https://github.com/huggingface/datasets/issues/5093 | 2022-10-10T10:23:53 | 2022-10-10T17:51:15 | 2022-10-10T17:51:14 | {
"login": "clefourrier",
"id": 22726840,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
},
{
"name": "good first issue",
"color": "7057ff"
},
{
"name": "hacktoberfest",
"color": "DF8D62"
}
] | false | [] |
1,402,713,517 | 5,092 | Use HTML relative paths for tiles in the docs | This PR replaces the absolute paths in the landing page tiles with relative ones so that one can test navigation both locally in and in future PRs (see [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5084/en/index) for an example PR where the links don't work).
I encountered this while working on the `op... | closed | https://github.com/huggingface/datasets/pull/5092 | 2022-10-10T07:24:27 | 2022-10-11T13:25:45 | 2022-10-11T13:23:23 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | true | [] |
1,401,112,552 | 5,091 | Allow connection objects in `from_sql` + small doc improvement | Allow connection objects in `from_sql` (emit a warning that they are cacheable) and add a tip that explains the format of the `con` parameter when provided as a URI string.
PS: ~~This PR contains a parameter link, so https://github.com/huggingface/doc-builder/pull/311 needs to be merged before it's "ready for review".~... | closed | https://github.com/huggingface/datasets/pull/5091 | 2022-10-07T12:39:44 | 2022-10-09T13:19:15 | 2022-10-09T13:16:57 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,401,102,407 | 5,090 | Review sync issues from GitHub to Hub | ## Describe the bug
We have discovered that sometimes there were sync issues between GitHub and Hub datasets, after a merge commit to main branch.
For example:
- this merge commit: https://github.com/huggingface/datasets/commit/d74a9e8e4bfff1fed03a4cab99180a841d7caf4b
- was not properly synced with the Hub: https... | closed | https://github.com/huggingface/datasets/issues/5090 | 2022-10-07T12:31:56 | 2022-10-08T07:07:36 | 2022-10-08T07:07:36 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,400,788,486 | 5,089 | Resume failed process | **Is your feature request related to a problem? Please describe.**
When a process (`map`, `filter`, etc.) crashes part-way through, you lose all progress.
**Describe the solution you'd like**
It would be good if the cache reflected the partial progress, so that after we restart the script, the process can restart ... | open | https://github.com/huggingface/datasets/issues/5089 | 2022-10-07T08:07:03 | 2022-10-07T08:07:03 | null | {
"login": "felix-schneider",
"id": 208336,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,400,530,412 | 5,088 | load_dataset("json", ...) doesn't read local .json.gz properly | ## Describe the bug
I have a local file `*.json.gz` that can be read by `pandas.read_json(lines=True)`, but cannot be read by `load_dataset("json")` (resulting in 0 lines)
## Steps to reproduce the bug
```python
fpath = '/data/junwang/.cache/general/57b6f2314cbe0bc45dda5b78f0871df2/test.json.gz'
ds_panda = Da... | open | https://github.com/huggingface/datasets/issues/5088 | 2022-10-07T02:16:58 | 2022-10-07T14:43:16 | null | {
"login": "junwang-wish",
"id": 112650299,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,400,487,967 | 5,087 | Fix filter with empty indices | Fix #5085 | closed | https://github.com/huggingface/datasets/pull/5087 | 2022-10-07T01:07:00 | 2022-10-07T18:43:03 | 2022-10-07T18:40:26 | {
"login": "Mouhanedg56",
"id": 23029765,
"type": "User"
} | [] | true | [] |
1,400,216,975 | 5,086 | HTTPError: 404 Client Error: Not Found for url | ## Describe the bug
I was following chap 5 from huggingface course: https://huggingface.co/course/chapter5/6?fw=tf
However, I'm not able to download the datasets, getting a 404 error
<img width="1160" alt="iShot2022-10-06_15 54 50" src="https://user-images.githubusercontent.com/54015474/194406327-ae62c2f3-1da5-... | closed | https://github.com/huggingface/datasets/issues/5086 | 2022-10-06T19:48:58 | 2022-10-07T15:12:01 | 2022-10-07T15:12:01 | {
"login": "keyuchen21",
"id": 54015474,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,400,113,569 | 5,085 | Filtering on an empty dataset returns a corrupted dataset. | ## Describe the bug
When filtering a dataset twice, where the first result is an empty dataset, the second dataset seems corrupted.
## Steps to reproduce the bug
```python
datasets = load_dataset("glue", "sst2")
dataset_split = datasets['validation']
ds_filter_1 = dataset_split.filter(lambda x: False) # ... | closed | https://github.com/huggingface/datasets/issues/5085 | 2022-10-06T18:18:49 | 2022-10-07T19:06:02 | 2022-10-07T18:40:26 | {
"login": "gabegma",
"id": 36087158,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
},
{
"name": "hacktoberfest",
"color": "DF8D62"
}
] | false | [] |
1,400,016,229 | 5,084 | IterableDataset formatting in numpy/torch/tf/jax | This code now returns a numpy array:
```python
from datasets import load_dataset
ds = load_dataset("imagenet-1k", split="train", streaming=True).with_format("np")
print(next(iter(ds))["image"])
```
It also works with "arrow", "pandas", "torch", "tf" and "jax"
### Implementation details:
I'm using the ex... | closed | https://github.com/huggingface/datasets/pull/5084 | 2022-10-06T16:53:38 | 2023-09-24T10:06:51 | 2022-12-20T17:19:52 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,399,842,514 | 5,083 | Support numpy/torch/tf/jax formatting for IterableDataset | Right now `IterableDataset` doesn't do any formatting.
In particular this code should return a numpy array:
```python
from datasets import load_dataset
ds = load_dataset("imagenet-1k", split="train", streaming=True).with_format("np")
print(next(iter(ds))["image"])
```
Right now it returns a PIL.Image.
S... | closed | https://github.com/huggingface/datasets/issues/5083 | 2022-10-06T15:14:58 | 2023-10-09T12:42:15 | 2023-10-09T12:42:15 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "streaming",
"color": "fef2c0"
},
{
"name": "good second issue",
"color": "BDE59C"
}
] | false | [] |
1,399,379,777 | 5,082 | adding keep in memory | Fixing #514 .
Hello @mariosasko 👋, I have implemented what you recommended to fix the keep-in-memory problem for shuffle in issue #514. | closed | https://github.com/huggingface/datasets/pull/5082 | 2022-09-30T22:14:05 | 2022-10-05T12:51:48 | 2022-10-05T12:10:48 | {
"login": "Mustapha-AJEGHRIR",
"id": 66799406,
"type": "User"
} | [] | true | [] |
1,399,340,050 | 5,081 | Bug loading `sentence-transformers/parallel-sentences` | ## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("sentence-transformers/parallel-sentences")
```
raises this:
```
/home/phmay/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py:697: FutureWarning: the '... | open | https://github.com/huggingface/datasets/issues/5081 | 2022-10-06T10:47:51 | 2022-10-11T10:00:48 | null | {
"login": "PhilipMay",
"id": 229382,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,398,849,565 | 5,080 | Use hfh for caching | ## Is your feature request related to a problem?
As previously discussed in our meeting with @Wauplin and agreed on our last datasets team sync meeting, I'm investigating how `datasets` can use `hfh` for caching.
## Describe the solution you'd like
Due to the peculiarities of the `datasets` cache, I would prop... | open | https://github.com/huggingface/datasets/issues/5080 | 2022-10-06T05:51:58 | 2022-10-06T14:26:05 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,398,609,305 | 5,079 | refactor: replace AssertionError with more meaningful exceptions (#5074) | Closes #5074
Replaces `AssertionError` in the following files with more descriptive exceptions:
- `src/datasets/arrow_reader.py`
- `src/datasets/builder.py`
- `src/datasets/utils/version.py`
The issue listed more files that needed to be fixed, but the rest of them were contained in the top-level `datasets` d... | closed | https://github.com/huggingface/datasets/pull/5079 | 2022-10-06T01:39:35 | 2022-10-07T14:35:43 | 2022-10-07T14:33:10 | {
"login": "galbwe",
"id": 20004072,
"type": "User"
} | [] | true | [] |
1,398,335,148 | 5,078 | Fix header level in Audio docs | Fixes header level so `Dataset features` is the doc title instead of `The Audio type`:
 | closed | https://github.com/huggingface/datasets/pull/5078 | 2022-10-05T20:22:44 | 2022-10-06T08:12:23 | 2022-10-06T08:09:41 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [] | true | [] |
1,398,080,859 | 5,077 | Fix passed download_config in HubDatasetModuleFactoryWithoutScript | Fix passed `download_config` in `HubDatasetModuleFactoryWithoutScript`. | closed | https://github.com/huggingface/datasets/pull/5077 | 2022-10-05T16:42:36 | 2022-10-06T05:31:22 | 2022-10-06T05:29:06 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,397,918,092 | 5,076 | fix: update exception throw from OSError to EnvironmentError in `push… | Status:
Ready for review
Description of Changes:
Fixes #5075
Changes proposed in this pull request:
- Throw EnvironmentError instead of OSError in `push_to_hub` when the Hub token is not present. | closed | https://github.com/huggingface/datasets/pull/5076 | 2022-10-05T14:46:29 | 2022-10-07T14:35:57 | 2022-10-07T14:33:27 | {
"login": "rahulXs",
"id": 29496999,
"type": "User"
} | [] | true | [] |
1,397,865,501 | 5,075 | Throw EnvironmentError when token is not present | Throw EnvironmentError instead of OSError ([link](https://github.com/huggingface/datasets/blob/6ad430ba0cdeeb601170f732d4bd977f5c04594d/src/datasets/arrow_dataset.py#L4306) to the line) in `push_to_hub` when the Hub token is not present. | closed | https://github.com/huggingface/datasets/issues/5075 | 2022-10-05T14:14:18 | 2022-10-07T14:33:28 | 2022-10-07T14:33:28 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [
{
"name": "good first issue",
"color": "7057ff"
},
{
"name": "hacktoberfest",
"color": "DF8D62"
}
] | false | [] |
1,397,850,352 | 5,074 | Replace AssertionErrors with more meaningful errors | Replace the AssertionErrors with more meaningful errors such as ValueError, TypeError, etc.
The files with AssertionErrors that need to be replaced:
```
src/datasets/arrow_reader.py
src/datasets/builder.py
src/datasets/utils/version.py
``` | closed | https://github.com/huggingface/datasets/issues/5074 | 2022-10-05T14:03:55 | 2022-10-07T14:33:11 | 2022-10-07T14:33:11 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [
{
"name": "good first issue",
"color": "7057ff"
},
{
"name": "hacktoberfest",
"color": "DF8D62"
}
] | false | [] |
1,397,832,183 | 5,073 | Restore saved format state in `load_from_disk` | Hello! @mariosasko
This pull request relates to issue #5050 and intends to add the format to datasets loaded from disk.
All I did was add a `set_format` call in `Dataset.load_from_disk`, as `DatasetDict.load_from_disk` relies on the former.
I don't know if I should add a test and where, so let me know if I should and ... | closed | https://github.com/huggingface/datasets/pull/5073 | 2022-10-05T13:51:47 | 2022-10-11T16:55:07 | 2022-10-11T16:49:23 | {
"login": "asofiaoliveira",
"id": 74454835,
"type": "User"
} | [] | true | [] |
1,397,765,531 | 5,072 | Image & Audio formatting for numpy/torch/tf/jax | Added support for image and audio formatting for numpy, torch, tf and jax.
For images, the dtype used is the one of the image (the one returned by PIL.Image), e.g. uint8
I also added support for string, binary and None types. In particular for torch and jax, strings are kept unchanged (previously it was returning... | closed | https://github.com/huggingface/datasets/pull/5072 | 2022-10-05T13:07:03 | 2022-10-10T13:24:10 | 2022-10-10T13:21:32 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,397,301,270 | 5,071 | Support DEFAULT_CONFIG_NAME when no BUILDER_CONFIGS | This PR supports defining a default config name, even if no predefined allowed config names are set.
Fix #5070.
CC: @stas00 | closed | https://github.com/huggingface/datasets/pull/5071 | 2022-10-05T06:28:39 | 2022-10-06T14:43:12 | 2022-10-06T14:40:26 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,396,765,647 | 5,070 | Support default config name when no builder configs | **Is your feature request related to a problem? Please describe.**
As discussed with @stas00, we could support defining a default config name, even if no predefined allowed config names are set. That is, support `DEFAULT_CONFIG_NAME`, even when `BUILDER_CONFIGS` is not defined.
**Additional context**
In order to ... | closed | https://github.com/huggingface/datasets/issues/5070 | 2022-10-04T19:49:35 | 2022-10-06T14:40:26 | 2022-10-06T14:40:26 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,396,361,768 | 5,067 | Fix CONTRIBUTING once dataset scripts transferred to Hub | This PR updates the `CONTRIBUTING.md` guide, once the all dataset scripts have been removed from the GitHub repo and transferred to the HF Hub:
- #4974
See diff here: https://github.com/huggingface/datasets/commit/e3291ecff9e54f09fcee3f313f051a03fdc3d94b
Additionally, this PR fixes the line separator that by som... | closed | https://github.com/huggingface/datasets/pull/5067 | 2022-10-04T14:16:05 | 2022-10-06T06:14:43 | 2022-10-06T06:12:12 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,396,086,745 | 5,066 | Support streaming gzip.open | This PR implements support for streaming out-of-the-box dataset scripts containing `gzip.open`.
This has been a recurring issue. See, e.g.:
- #5060
- #3191 | closed | https://github.com/huggingface/datasets/pull/5066 | 2022-10-04T11:20:05 | 2022-10-06T15:13:51 | 2022-10-06T15:11:29 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,396,003,362 | 5,065 | Ci py3.10 | Added a CI job for python 3.10
Some dependencies, like Apache Beam, don't work on 3.10, so I remove them from the extras in this case.
I also removed some s3 fixtures that we don't use anymore (and that don't work on 3.10 anyway) | closed | https://github.com/huggingface/datasets/pull/5065 | 2022-10-04T10:13:51 | 2022-11-29T15:28:05 | 2022-11-29T15:25:26 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,395,978,143 | 5,064 | Align signature of create/delete_repo with latest hfh | This PR aligns the signature of `create_repo`/`delete_repo` with the current one in hfh, by removing deprecated `name` and `organization`, and using `repo_id` instead.
Related to:
- #5063
CC: @lhoestq | closed | https://github.com/huggingface/datasets/pull/5064 | 2022-10-04T09:54:53 | 2022-10-07T17:02:11 | 2022-10-07T16:59:30 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,395,895,463 | 5,063 | Align signature of list_repo_files with latest hfh | This PR aligns the signature of `list_repo_files` with the current one in `hfh`, by renaming deprecated `token` to `use_auth_token`.
This is already the case for `dataset_info`.
CC: @lhoestq | closed | https://github.com/huggingface/datasets/pull/5063 | 2022-10-04T08:51:46 | 2022-10-07T16:42:57 | 2022-10-07T16:40:16 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,395,739,417 | 5,062 | Fix CI hfh token warning | In our CI, we get warnings from `hfh` about using deprecated `token`: https://github.com/huggingface/datasets/actions/runs/3174626525/jobs/5171672431
```
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_private
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub
tests/te... | closed | https://github.com/huggingface/datasets/pull/5062 | 2022-10-04T06:36:54 | 2022-10-04T08:58:15 | 2022-10-04T08:42:31 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,395,476,770 | 5,061 | `_pickle.PicklingError: logger cannot be pickled` in multiprocessing `map` | ## Describe the bug
When I `map` with multiple processes, this error occurs. The `.name` of the `logger` that fails to pickle in the final line is `datasets.fingerprint`.
```
File "~/project/dataset.py", line 204, in <dictcomp>
split: dataset.map(
File ".../site-packages/datasets/arrow_dataset.py", line 24... | closed | https://github.com/huggingface/datasets/issues/5061 | 2022-10-03T23:51:38 | 2023-07-21T14:43:35 | 2023-07-21T14:43:34 | {
"login": "ZhaofengWu",
"id": 11954789,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,395,382,940 | 5,060 | Unable to Use Custom Dataset Locally | ## Describe the bug
I have uploaded a [dataset](https://huggingface.co/datasets/zpn/pubchem_selfies) and followed the instructions from the [dataset_loader](https://huggingface.co/docs/datasets/dataset_script#download-data-files-and-organize-splits) tutorial. In that tutorial, it says
```
If the data files live in ... | closed | https://github.com/huggingface/datasets/issues/5060 | 2022-10-03T21:55:16 | 2022-10-06T14:29:18 | 2022-10-06T14:29:17 | {
"login": "zanussbaum",
"id": 33707069,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,395,050,876 | 5,059 | Fix typo | Fixes a small typo :) | closed | https://github.com/huggingface/datasets/pull/5059 | 2022-10-03T17:05:25 | 2022-10-03T17:34:40 | 2022-10-03T17:32:27 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [] | true | [] |
1,394,962,424 | 5,058 | Mark CI tests as xfail when 502 error | To make CI more robust, we could mark as xfail when the Hub raises a 502 error (besides 500 error):
- FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_skip_identical_files
- https://github.com/huggingface/datasets/actions/runs/3174626525/jobs/5171672431
```
> raise HTTPEr... | closed | https://github.com/huggingface/datasets/pull/5058 | 2022-10-03T15:53:55 | 2022-10-04T10:03:23 | 2022-10-04T10:01:23 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,394,827,216 | 5,057 | Support `converters` in `CsvBuilder` | Add the `converters` param to `CsvBuilder`, to help in situations like [this one](https://discuss.huggingface.co/t/typeerror-in-load-dataset-related-to-a-sequence-of-strings/23545).
| closed | https://github.com/huggingface/datasets/pull/5057 | 2022-10-03T14:23:21 | 2022-10-04T11:19:28 | 2022-10-04T11:17:32 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,394,713,173 | 5,056 | Fix broken URL's (GEM) | This PR fixes the broken URL's in GEM. cc. @lhoestq, @albertvillanova | closed | https://github.com/huggingface/datasets/pull/5056 | 2022-10-03T13:13:22 | 2022-10-04T13:49:00 | 2022-10-04T13:48:59 | {
"login": "manandey",
"id": 6687858,
"type": "User"
} | [] | true | [] |
1,394,503,844 | 5,055 | Fix backward compatibility for dataset_infos.json | While working on https://github.com/huggingface/datasets/pull/5018 I noticed a small bug introduced in #4926 regarding backward compatibility for dataset_infos.json
Indeed, when a dataset repo had both dataset_infos.json and README.md, the JSON file was ignored. This is unexpected: in practice it should be ignored o... | closed | https://github.com/huggingface/datasets/pull/5055 | 2022-10-03T10:30:14 | 2022-10-03T13:43:55 | 2022-10-03T13:41:32 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,394,152,728 | 5,054 | Fix license/citation information of squadshifts dataset card | This PR fixes the license/citation information of squadshifts dataset card, once the dataset owners have responded to our request for information:
- https://github.com/modestyachts/squadshifts-website/issues/1
Additionally, we have updated the mention in their website to our `datasets` library (they were referring ... | closed | https://github.com/huggingface/datasets/pull/5054 | 2022-10-03T05:19:13 | 2022-10-03T09:26:49 | 2022-10-03T09:24:30 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,393,739,882 | 5,053 | Intermittent JSON parse error when streaming the Pile | ## Describe the bug
I have an intermittent error when streaming the Pile, where I get a JSON parse error which causes my program to crash.
This is intermittent - when I rerun the program with the same random seed it does not crash in the same way. The exact point this happens also varied - it happened to me 11B tok... | open | https://github.com/huggingface/datasets/issues/5053 | 2022-10-02T11:56:46 | 2022-10-04T17:59:03 | null | {
"login": "neelnanda-io",
"id": 77788841,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,393,076,765 | 5,052 | added from_generator method to IterableDataset class. | Hello,
This resolves issue #4988.
I added a method `from_generator` to class `IterableDataset`.
I modified the `read` method of the input-stream generator to also return an `IterableDataset`.
| closed | https://github.com/huggingface/datasets/pull/5052 | 2022-09-30T22:14:05 | 2022-10-05T12:51:48 | 2022-10-05T12:10:48 | {
"login": "hamid-vakilzadeh",
"id": 56002455,
"type": "User"
} | [] | true | [] |
1,392,559,503 | 5,051 | Revert task removal in folder-based builders | Reverts the removal of `task_templates` in the folder-based builders. I also added the `AudioClassifaction` task for consistency.
This is needed to fix https://github.com/huggingface/transformers/issues/19177.
I think we should soon deprecate and remove the current task API (and investigate if it's possible to in... | closed | https://github.com/huggingface/datasets/pull/5051 | 2022-09-30T14:50:03 | 2022-10-03T12:23:35 | 2022-10-03T12:21:31 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,392,381,882 | 5,050 | Restore saved format state in `load_from_disk` | Even though we save the `format` state in `save_to_disk`, we don't restore it in `load_from_disk`. We should fix that.
Reported here: https://discuss.huggingface.co/t/save-to-disk-loses-formatting-information/23815 | closed | https://github.com/huggingface/datasets/issues/5050 | 2022-09-30T12:40:07 | 2022-10-11T16:49:24 | 2022-10-11T16:49:24 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
},
{
"name": "good first issue",
"color": "7057ff"
}
] | false | [] |
1,392,361,381 | 5,049 | Add `kwargs` to `Dataset.from_generator` | Add the `kwargs` param to `from_generator` to align it with the rest of the `from_` methods (this param allows passing custom `writer_batch_size` for instance). | closed | https://github.com/huggingface/datasets/pull/5049 | 2022-09-30T12:24:27 | 2022-10-03T11:00:11 | 2022-10-03T10:58:15 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,392,170,680 | 5,048 | Fix bug with labels of eurlex config of lex_glue dataset | Fix for a critical bug in the EURLEX dataset label list to make LexGLUE EURLEX results replicable.
In LexGLUE (Chalkidis et al., 2022), the following is mentioned w.r.t. EUR-LEX: _"It supports four different label granularities, comprising 21, 127, 567, 7390 EuroVoc concepts, respectively. We use the 100 most frequ... | closed | https://github.com/huggingface/datasets/pull/5048 | 2022-09-30T09:47:12 | 2022-09-30T16:30:25 | 2022-09-30T16:21:41 | {
"login": "iliaschalkidis",
"id": 1626984,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,392,088,398 | 5,047 | Fix cats_vs_dogs | Reported in https://github.com/huggingface/datasets/pull/3878
I updated the number of examples | closed | https://github.com/huggingface/datasets/pull/5047 | 2022-09-30T08:47:29 | 2022-09-30T10:23:22 | 2022-09-30T09:34:28 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,391,372,519 | 5,046 | Audiofolder creates empty Dataset if files same level as metadata | ## Describe the bug
When audio files are at the same level as the metadata (`metadata.csv` or `metadata.jsonl`), `load_dataset` returns a `DatasetDict` with no rows but the correct columns.
https://github.com/huggingface/datasets/blob/1ea4d091b7a4b83a85b2eeb8df65115d39af3766/docs/source/audio_dataset.mdx?plain... | closed | https://github.com/huggingface/datasets/issues/5046 | 2022-09-29T19:17:23 | 2022-10-28T13:05:07 | 2022-10-28T13:05:07 | {
"login": "msis",
"id": 577139,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
},
{
"name": "good first issue",
"color": "7057ff"
},
{
"name": "hacktoberfest",
"color": "DF8D62"
}
] | false | [] |
1,391,287,609 | 5,045 | Automatically revert to last successful commit to hub when a push_to_hub is interrupted | **Is your feature request related to a problem? Please describe.**
I pushed a modification of a large dataset (remove a column) to the hub. The push was interrupted after some files were committed to the repo. This left the dataset in a state where load_dataset() raises an error (ValueError couldn’t cast … because column names do... | closed | https://github.com/huggingface/datasets/issues/5045 | 2022-09-29T18:08:12 | 2023-10-16T13:30:49 | 2023-10-16T13:30:49 | {
"login": "jorahn",
"id": 13120204,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,391,242,908 | 5,044 | integrate `load_from_disk` into `load_dataset` | **Is your feature request related to a problem? Please describe.**
Is it possible to make `load_dataset` more universal similar to `from_pretrained` in `transformers` so that it can handle the hub, and the local path datasets of all supported types?
Currently one has to choose a different loader depending on how ... | open | https://github.com/huggingface/datasets/issues/5044 | 2022-09-29T17:37:12 | 2025-06-28T09:00:44 | null | {
"login": "stas00",
"id": 10676103,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,391,141,773 | 5,043 | Fix `flatten_indices` with empty indices mapping | Fix #5038 | closed | https://github.com/huggingface/datasets/pull/5043 | 2022-09-29T16:17:28 | 2022-09-30T15:46:39 | 2022-09-30T15:44:25 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,390,762,877 | 5,042 | Update swiss judgment prediction | I forgot to add the new citation. | closed | https://github.com/huggingface/datasets/pull/5042 | 2022-09-29T12:10:02 | 2022-09-30T07:14:00 | 2022-09-29T14:32:02 | {
"login": "JoelNiklaus",
"id": 3775944,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,390,722,230 | 5,041 | Support streaming hendrycks_test dataset. | This PR:
- supports streaming
- fixes the description section of the dataset card | closed | https://github.com/huggingface/datasets/pull/5041 | 2022-09-29T11:37:58 | 2022-09-30T07:13:38 | 2022-09-29T12:07:29 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,390,566,428 | 5,040 | Fix NonMatchingChecksumError in hendrycks_test dataset | Update metadata JSON.
Fix #5039. | closed | https://github.com/huggingface/datasets/pull/5040 | 2022-09-29T09:37:43 | 2022-09-29T10:06:22 | 2022-09-29T10:04:19 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,390,353,315 | 5,039 | Hendrycks Checksum | Hi,
The checksum for [hendrycks_test](https://huggingface.co/datasets/hendrycks_test) does not match; I guess the data has been updated on the remote.
```
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://people.eecs.berkeley.edu/~hendrycks/data.... | closed | https://github.com/huggingface/datasets/issues/5039 | 2022-09-29T06:56:20 | 2022-09-29T10:23:30 | 2022-09-29T10:04:20 | {
"login": "DanielHesslow",
"id": 9974388,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
1,389,631,122 | 5,038 | `Dataset.unique` showing wrong output after filtering | ## Describe the bug
After filtering a dataset, and if no samples remain, `Dataset.unique` will return the unique values of the unfiltered dataset.
## Steps to reproduce the bug
```python
from datasets import Dataset
dataset = Dataset.from_dict({'id': [0]})
dataset = dataset.filter(lambda _: False)
print(data... | closed | https://github.com/huggingface/datasets/issues/5038 | 2022-09-28T16:20:35 | 2022-09-30T15:44:25 | 2022-09-30T15:44:25 | {
"login": "mxschmdt",
"id": 4904985,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,389,244,722 | 5,037 | Improve CI performance speed of PackagedDatasetTest | This PR improves PackagedDatasetTest CI performance speed. For Ubuntu (latest):
- Duration (without parallelism) before: 334.78s (5.58m)
- Duration (without parallelism) afterwards: 0.48s
The approach is passing a dummy `data_files` argument to load the builder, so that it avoids the slow inferring of it over the ... | closed | https://github.com/huggingface/datasets/pull/5037 | 2022-09-28T12:08:16 | 2022-09-30T16:05:42 | 2022-09-30T16:03:24 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,389,094,075 | 5,036 | Add oversampling strategy iterable datasets interleave | Hello everyone,
Following the issue #4893 and the PR #4831, I propose here an oversampling strategy for a `IterableDataset` list.
The `all_exhausted` strategy stops building the new dataset as soon as all samples in each dataset have been added at least once.
It follows roughly the same logic behind #4831, namely... | closed | https://github.com/huggingface/datasets/pull/5036 | 2022-09-28T10:10:23 | 2022-09-30T12:30:48 | 2022-09-30T12:28:23 | {
"login": "ylacombe",
"id": 52246514,
"type": "User"
} | [] | true | [] |
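As an aside, the `all_exhausted` round-robin logic described in this PR (keep drawing from every dataset, restarting exhausted ones, and stop only once each dataset has been seen in full at least once) can be sketched in plain Python. This is a minimal illustration of the idea, not the library's actual implementation; the function and variable names below are invented for the sketch:

```python
def interleave_all_exhausted(*iterables):
    """Round-robin over several iterables, restarting (oversampling) any
    that runs out, and stop as soon as every iterable has been exhausted
    at least once."""
    iterators = [iter(it) for it in iterables]
    exhausted = [False] * len(iterators)
    while True:
        for i in range(len(iterators)):
            try:
                item = next(iterators[i])
            except StopIteration:
                exhausted[i] = True
                if all(exhausted):
                    return  # every source has been seen in full at least once
                iterators[i] = iter(iterables[i])  # oversample: restart this source
                item = next(iterators[i])
            yield item

print(list(interleave_all_exhausted([1, 2], "abcd")))
# -> [1, 'a', 2, 'b', 1, 'c', 2, 'd', 1]
```

Note that the shorter source is cycled until the longest one runs dry, at which point iteration stops immediately rather than finishing another full round.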
1,388,914,476 | 5,035 | Fix typos in load docstrings and comments | Minor fix of typos in load docstrings and comments | closed | https://github.com/huggingface/datasets/pull/5035 | 2022-09-28T08:05:07 | 2022-09-28T17:28:40 | 2022-09-28T17:26:15 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,388,855,136 | 5,034 | Update README.md of yahoo_answers_topics dataset | null | closed | https://github.com/huggingface/datasets/pull/5034 | 2022-09-28T07:17:33 | 2022-10-06T15:56:05 | 2022-10-04T13:49:25 | {
"login": "borgr",
"id": 6416600,
"type": "User"
} | [] | true | [] |
1,388,842,236 | 5,033 | Remove redundant code from some dataset module factories | This PR removes some redundant code introduced by mistake after a refactoring in:
- #4576 | closed | https://github.com/huggingface/datasets/pull/5033 | 2022-09-28T07:06:26 | 2022-09-28T16:57:51 | 2022-09-28T16:55:12 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,388,270,935 | 5,032 | new dataset type: single-label and multi-label video classification | **Is your feature request related to a problem? Please describe.**
In my research, I am dealing with multi-modal (audio+text+frame sequence) video classification. It would be great if the datasets library supported generating multi-modal batches from a video dataset.
**Describe the solution you'd like**
Assume I h... | open | https://github.com/huggingface/datasets/issues/5032 | 2022-09-27T19:40:11 | 2022-11-02T19:10:13 | null | {
"login": "fcakyon",
"id": 34196005,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,388,201,146 | 5,031 | Support hfh 0.10 implicit auth | In huggingface-hub 0.10 the `token` parameter is deprecated for dataset_info and list_repo_files in favor of use_auth_token.
Moreover if use_auth_token=None then the user's token is used implicitly.
I took those two changes into account
Close https://github.com/huggingface/datasets/issues/4990
TODO:
- [x] fi... | closed | https://github.com/huggingface/datasets/pull/5031 | 2022-09-27T18:37:49 | 2022-09-30T09:18:24 | 2022-09-30T09:15:59 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,388,061,340 | 5,030 | Fast dataset iter | Use `pa.Table.to_reader` to make iteration over examples/batches faster in `Dataset.{__iter__, map}`
TODO:
* [x] benchmarking (the only benchmark for now - iterating over (single) examples of `bookcorpus` (75 mil examples) in Colab is approx. 2.3x faster)
* [x] check if iterating over bigger chunks + slicing to fe... | closed | https://github.com/huggingface/datasets/pull/5030 | 2022-09-27T16:44:51 | 2022-09-29T15:50:44 | 2022-09-29T15:48:17 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,387,600,960 | 5,029 | Fix import in `ClassLabel` docstring example | This PR addresses a super-simple fix: adding a missing `import` to the `ClassLabel` docstring example, as it was formatted as `from datasets Features`, so it's been fixed to `from datasets import Features`. | closed | https://github.com/huggingface/datasets/pull/5029 | 2022-09-27T11:35:29 | 2022-09-27T14:03:24 | 2022-09-27T12:27:50 | {
"login": "alvarobartt",
"id": 36760800,
"type": "User"
} | [] | true | [] |
1,386,272,533 | 5,028 | passing parameters to the method passed to Dataset.from_generator() | Big thanks for providing dataset creation via a generator.
I want to ask whether there is any way to pass parameters to the Dataset.from_generator() method, as follows.
```
from datasets import Dataset
def gen(param1):
    for idx in range(len(custom_dataset)):
        yield custom_dataset[id... | closed | https://github.com/huggingface/datasets/issues/5028 | 2022-09-26T15:20:06 | 2022-10-03T13:00:00 | 2022-10-03T13:00:00 | {
"login": "Basir-mahmood",
"id": 64276129,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
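A common answer to the question in this issue, assuming nothing beyond plain Python, is to bind the parameters before handing the callable over, e.g. with `functools.partial`; `Dataset.from_generator` also accepts a `gen_kwargs` dict for exactly this purpose. The snippet below sticks to the standard library so it runs without `datasets` installed, and the library call is shown only as a hedged comment:

```python
from functools import partial

def gen(dataset, start):
    # both `dataset` and `start` are parameters we want to configure
    for idx in range(start, len(dataset)):
        yield {"text": dataset[idx], "idx": idx}

custom_dataset = ["a", "b", "c"]

# Bind the parameters up front; the result is a zero-argument callable
# of the kind a generator-based loader expects:
bound_gen = partial(gen, custom_dataset, 1)
print(list(bound_gen()))
# -> [{'text': 'b', 'idx': 1}, {'text': 'c', 'idx': 2}]

# With the datasets library, the equivalent (usage sketch, not verified here) would be:
#   ds = Dataset.from_generator(gen, gen_kwargs={"dataset": custom_dataset, "start": 1})
```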
1,386,153,072 | 5,027 | Fix typo in error message | null | closed | https://github.com/huggingface/datasets/pull/5027 | 2022-09-26T14:10:09 | 2022-09-27T12:28:03 | 2022-09-27T12:26:02 | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [] | true | [] |
1,386,071,154 | 5,026 | patch CI_HUB_TOKEN_PATH with Path instead of str | Should fix the tests for `huggingface_hub==0.10.0rc0` prerelease (see [failed CI](https://github.com/huggingface/datasets/actions/runs/3127805250/jobs/5074879144)).
Related to [this thread](https://huggingface.slack.com/archives/C02V5EA0A95/p1664195165294559) (internal link).
Note: this should be a backward compat... | closed | https://github.com/huggingface/datasets/pull/5026 | 2022-09-26T13:19:01 | 2022-09-26T14:30:55 | 2022-09-26T14:28:45 | {
"login": "Wauplin",
"id": 11801849,
"type": "User"
} | [] | true | [] |
1,386,011,239 | 5,025 | Custom Json Dataset Throwing Error when batch is False | ## Describe the bug
A clear and concise description of what the bug is.
I tried to create my custom dataset using below code
```
from datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D
from torchvision import transforms
from transformers import AutoProcessor
# we'll use the Auto API here -... | closed | https://github.com/huggingface/datasets/issues/5025 | 2022-09-26T12:38:39 | 2022-09-27T19:50:00 | 2022-09-27T19:50:00 | {
"login": "jmandivarapu1",
"id": 21245519,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,385,947,624 | 5,024 | Fix string features of xcsr dataset | This PR fixes string features of `xcsr` dataset to avoid character splitting.
Fix #5023.
CC: @yangxqiao, @yuchenlin | closed | https://github.com/huggingface/datasets/pull/5024 | 2022-09-26T11:55:36 | 2022-09-28T07:56:18 | 2022-09-28T07:54:19 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,385,881,112 | 5,023 | Text strings are split into lists of characters in xcsr dataset | ## Describe the bug
Text strings are split into lists of characters.
Example for "X-CSQA-en":
```
{'id': 'd3845adc08414fda',
'lang': 'en',
'question': {'stem': ['T',
'h',
'e',
' ',
'd',
'e',
'n',
't',
'a',
'l',
' ',
'o',
'f',
'f',
'i',
'c',
'e',
... | closed | https://github.com/huggingface/datasets/issues/5023 | 2022-09-26T11:11:50 | 2022-09-28T07:54:20 | 2022-09-28T07:54:20 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
1,385,432,859 | 5,022 | Fix languages of X-CSQA configs in xcsr dataset | Fix #5017.
CC: @yangxqiao, @yuchenlin | closed | https://github.com/huggingface/datasets/pull/5022 | 2022-09-26T05:13:39 | 2022-09-26T12:27:20 | 2022-09-26T10:57:30 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,385,351,250 | 5,021 | Split is inferred from filename and overrides metadata.jsonl | ## Describe the bug
Including the strings "test" or "train" anywhere in a filename causes `datasets` to infer the split and silently ignore all other files.
This behavior is documented for directory names but not filenames: https://huggingface.co/docs/datasets/image_dataset#imagefolder
## Steps to reproduce th... | closed | https://github.com/huggingface/datasets/issues/5021 | 2022-09-26T03:22:14 | 2022-09-29T08:07:50 | 2022-09-29T08:07:50 | {
"login": "float-trip",
"id": 102226344,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
},
{
"name": "duplicate",
"color": "cfd3d7"
}
] | false | [] |
1,384,684,078 | 5,020 | Fix URLs of sbu_captions dataset | Forbidden
You don't have permission to access /~vicente/sbucaptions/sbu-captions-all.tar.gz on this server.
Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.
Apache/2.4.6 (Red Hat Enterprise Linux) OpenSSL/1.0.2k-fips PHP/5.4.16 mod_fcgid/2.3.9 mod_ws... | closed | https://github.com/huggingface/datasets/pull/5020 | 2022-09-24T14:00:33 | 2022-09-28T07:20:20 | 2022-09-28T07:18:23 | {
"login": "donglixp",
"id": 1070872,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,384,673,718 | 5,019 | Update swiss judgment prediction | Hi,
I updated the dataset to include additional data made available recently. When I test it locally, it seems to work. However, I get the following error with the dummy data creation:
`Dummy data generation done but dummy data test failed since splits ['train', 'validation', 'test'] have 0 examples for config 'fr... | closed | https://github.com/huggingface/datasets/pull/5019 | 2022-09-24T13:28:57 | 2022-09-28T07:13:39 | 2022-09-28T05:48:50 | {
"login": "JoelNiklaus",
"id": 3775944,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,384,146,585 | 5,018 | Create all YAML dataset_info | Following https://github.com/huggingface/datasets/pull/4926
Creates all the `dataset_info` YAML fields in the dataset cards
The JSON are also updated using the simplified backward compatible format added in https://github.com/huggingface/datasets/pull/4926
Needs https://github.com/huggingface/datasets/pull/4926 ... | closed | https://github.com/huggingface/datasets/pull/5018 | 2022-09-23T18:08:15 | 2023-09-24T09:33:21 | 2022-10-03T17:08:05 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,384,022,463 | 5,017 | xcsr: X-CSQA simply uses english for all alleged non-english data | ## Describe the bug
All the alleged non-english subcollections for the X-CSQA task in the [xcsr benchmark dataset ](https://huggingface.co/datasets/xcsr) seem to be copies of the english subcollection, rather than translations. This is in contrast to the data description:
> we automatically translate the original C... | closed | https://github.com/huggingface/datasets/issues/5017 | 2022-09-23T16:11:54 | 2022-09-26T10:57:31 | 2022-09-26T10:57:31 | {
"login": "thesofakillers",
"id": 26286291,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
1,383,883,058 | 5,016 | Fix tar extraction vuln | Fix for CVE-2007-4559
Description:
Directory traversal vulnerability in the (1) extract and (2) extractall functions in the tarfile
module in Python allows user-assisted remote attackers to overwrite arbitrary files via a .. (dot dot)
sequence in filenames in a TAR archive, a related issue to CVE-2001-1267.
I ... | closed | https://github.com/huggingface/datasets/pull/5016 | 2022-09-23T14:22:21 | 2022-09-29T12:42:26 | 2022-09-29T12:40:28 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,383,485,558 | 5,015 | Transfer dataset scripts to Hub | Before merging:
- #4974
TODO:
- [x] Create label: ["dataset contribution"](https://github.com/huggingface/datasets/pulls?q=label%3A%22dataset+contribution%22)
- [x] Create project: [Datasets: Transfer datasets to Hub](https://github.com/orgs/huggingface/projects/22/)
- [x] PRs:
- [x] Add dataset: we should r... | closed | https://github.com/huggingface/datasets/issues/5015 | 2022-09-23T08:48:10 | 2022-10-05T07:15:57 | 2022-10-05T07:15:57 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | false | [] |
1,383,422,639 | 5,014 | I need to read the custom dataset in conll format | I need to read the custom dataset in conll format
| open | https://github.com/huggingface/datasets/issues/5014 | 2022-09-23T07:49:42 | 2022-11-02T11:57:15 | null | {
"login": "shell-nlp",
"id": 39985245,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,383,415,971 | 5,013 | would huggingface like publish cpp binding for datasets package ? | Hi:
I use a C++ environment with libtorch and would like to use Hugging Face, but huggingface has no C++ binding. Would you consider publishing a C++ binding for it?
Thanks | closed | https://github.com/huggingface/datasets/issues/5013 | 2022-09-23T07:42:49 | 2023-02-24T16:20:57 | 2023-02-24T16:20:57 | {
"login": "mullerhai",
"id": 6143404,
"type": "User"
} | [
{
"name": "wontfix",
"color": "ffffff"
}
] | false | [] |
1,382,851,096 | 5,012 | Force JSON format regardless of file naming on S3 | I have a file on S3 created by Data Version Control, it looks like `s3://dvc/ac/badff5b134382a0f25248f1b45d7b2` but contains a json file. If I run
```python
dataset = load_dataset(
"json",
data_files='s3://dvc/ac/badff5b134382a0f25248f1b45d7b2'
)
```
It gives me
```
InvalidSchema: No connection adap... | closed | https://github.com/huggingface/datasets/issues/5012 | 2022-09-22T18:28:15 | 2023-08-16T09:58:36 | 2023-08-16T09:58:36 | {
"login": "junwang-wish",
"id": 112650299,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,382,609,587 | 5,011 | Audio: `encode_example` fails with IndexError | ## Describe the bug
Loading the dataset [earnings-22](https://huggingface.co/datasets/sanchit-gandhi/earnings22_split) from the Hub yields an Index Error. I created this dataset locally and then pushed to hub at the specified URL. Thus, I expect the dataset should work out-of-the-box! Indeed, the dataset viewer functi... | closed | https://github.com/huggingface/datasets/issues/5011 | 2022-09-22T15:07:27 | 2022-09-23T09:05:18 | 2022-09-23T09:05:18 | {
"login": "sanchit-gandhi",
"id": 93869735,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,382,308,799 | 5,010 | Add deprecation warning to multilingual_librispeech dataset card | Besides the current deprecation warning in the script of `multilingual_librispeech`, this PR adds a deprecation warning to its dataset card as well.
The format of the deprecation warning is aligned with the one in the library documentation when docstrings contain the `<Deprecated/>` tag.
Related to:
- #4060 | closed | https://github.com/huggingface/datasets/pull/5010 | 2022-09-22T11:41:59 | 2022-09-23T12:04:37 | 2022-09-23T12:02:45 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
1,381,194,067 | 5,009 | Error loading StonyBrookNLP/tellmewhy dataset from hub even though local copy loads correctly | ## Describe the bug
I have added a new dataset with the identifier `StonyBrookNLP/tellmewhy` to the hub. When I load the individual files from my local copy using `dataset = datasets.load_dataset("json", data_files="data/train.jsonl")`, it loads the dataset correctly. However, when I try to load it from the hub, I ge... | closed | https://github.com/huggingface/datasets/issues/5009 | 2022-09-21T16:23:06 | 2022-09-29T13:07:29 | 2022-09-29T13:07:29 | {
"login": "ykl7",
"id": 4996184,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,381,090,903 | 5,008 | Re-apply input columns change | Fixes the `filter` + `input_columns` combination, which is used in the `transformers` examples for instance.
Revert #5006 (which in turn reverts #4971)
Fix https://github.com/huggingface/datasets/issues/4858 | closed | https://github.com/huggingface/datasets/pull/5008 | 2022-09-21T15:09:01 | 2022-09-22T13:57:36 | 2022-09-22T13:55:23 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,381,007,607 | 5,007 | Add some note about running the transformers ci before a release | null | closed | https://github.com/huggingface/datasets/pull/5007 | 2022-09-21T14:14:25 | 2022-09-22T10:16:14 | 2022-09-22T10:14:06 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,380,968,395 | 5,006 | Revert input_columns change | Revert https://github.com/huggingface/datasets/pull/4971
Fix https://github.com/huggingface/datasets/issues/5005 | closed | https://github.com/huggingface/datasets/pull/5006 | 2022-09-21T13:49:20 | 2022-09-21T14:14:33 | 2022-09-21T14:11:57 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |