| id | number | title | body | state | html_url | created_at | updated_at | closed_at | user | labels | is_pull_request | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,348,653,895 | 6,965 | Improve skip take shuffling and distributed | set the right behavior of skip/take depending on whether it's called after or before shuffle/split_by_node | closed | https://github.com/huggingface/datasets/pull/6965 | 2024-06-12T12:30:27 | 2024-06-24T15:22:21 | 2024-06-24T15:16:16 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,344,973,229 | 6,964 | Fix resuming arrow format | following https://github.com/huggingface/datasets/pull/6658 | closed | https://github.com/huggingface/datasets/pull/6964 | 2024-06-10T22:40:33 | 2024-06-14T15:04:49 | 2024-06-14T14:58:37 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,344,269,477 | 6,963 | [Streaming] retry on requests errors | reported in https://discuss.huggingface.co/t/speeding-up-streaming-of-large-datasets-fineweb/90714/6 when training using a streaming dataloader
cc @Wauplin it looks like the retries from `hfh` are not always enough. In this PR I let `datasets` do additional retries (that users can configure in `datasets.config`) ... | closed | https://github.com/huggingface/datasets/pull/6963 | 2024-06-10T15:51:56 | 2024-06-28T09:53:11 | 2024-06-28T09:46:52 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,343,394,378 | 6,962 | fix(ci): remove unnecessary permissions | ### What does this PR do?
Remove unnecessary permissions granted to the actions workflow.
Sorry for the mishap. | closed | https://github.com/huggingface/datasets/pull/6962 | 2024-06-10T09:28:02 | 2024-06-11T08:31:52 | 2024-06-11T08:25:47 | {
"login": "McPatate",
"id": 9112841,
"type": "User"
} | [] | true | [] |
2,342,022,418 | 6,961 | Manual downloads should count as downloads | ### Feature request
I would like to request that manual downloads of data files from Hugging Face dataset repositories count as downloads of a dataset. According to the documentation for the Hugging Face Hub, that is currently not the case: https://huggingface.co/docs/hub/en/datasets-download-stats
### Motivation
Th... | open | https://github.com/huggingface/datasets/issues/6961 | 2024-06-09T04:52:06 | 2024-06-13T16:05:00 | null | {
"login": "umarbutler",
"id": 8473183,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,340,791,685 | 6,960 | feat(ci): add trufflehog secrets detection | ### What does this PR do?
Adding a GH action to scan for leaked secrets on each commit.
| closed | https://github.com/huggingface/datasets/pull/6960 | 2024-06-07T16:18:23 | 2024-06-08T14:58:27 | 2024-06-08T14:52:18 | {
"login": "McPatate",
"id": 9112841,
"type": "User"
} | [] | true | [] |
2,340,229,908 | 6,959 | Better error handling in `dataset_module_factory` | cc @cakiki who reported it on [slack](https://huggingface.slack.com/archives/C039P47V1L5/p1717754405578539) (private link)
This PR updates how errors are handled in `dataset_module_factory` when the `dataset_info` cannot be accessed:
1. Use multiple `except ... as e` instead of using `isinstance(e, ...)`
2. Alway... | closed | https://github.com/huggingface/datasets/pull/6959 | 2024-06-07T11:24:15 | 2024-06-10T07:33:53 | 2024-06-10T07:27:43 | {
"login": "Wauplin",
"id": 11801849,
"type": "User"
} | [] | true | [] |
2,337,476,383 | 6,958 | My Private Dataset doesn't exist on the Hub or cannot be accessed | ### Describe the bug
```
File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 1852, in dataset_module_factory
raise DatasetNotFoundError(msg + f" at revision '{revision}'" if revision else msg)
datasets.exceptions.DatasetNotFoundError: Dataset 'xxx' doesn't exist on t... | closed | https://github.com/huggingface/datasets/issues/6958 | 2024-06-06T06:52:19 | 2024-07-01T11:27:46 | 2024-07-01T11:27:46 | {
"login": "wangguan1995",
"id": 39621324,
"type": "User"
} | [] | false | [] |
2,335,559,400 | 6,957 | Fix typos in docs | Fix typos in docs introduced by:
- #6956
Typos:
- `comparisions` => `comparisons`
- two consecutive sentences both ending in colon
- split one sentence into two
Sorry, I did not have time to review that PR.
CC: @lhoestq | closed | https://github.com/huggingface/datasets/pull/6957 | 2024-06-05T10:46:47 | 2024-06-05T13:01:07 | 2024-06-05T12:43:26 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,333,940,021 | 6,956 | update docs on N-dim arrays | null | closed | https://github.com/huggingface/datasets/pull/6956 | 2024-06-04T16:32:19 | 2024-06-04T16:46:34 | 2024-06-04T16:40:27 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,333,802,815 | 6,955 | Fix small typo | null | closed | https://github.com/huggingface/datasets/pull/6955 | 2024-06-04T15:19:02 | 2024-06-05T10:18:56 | 2024-06-04T15:20:55 | {
"login": "marcenacp",
"id": 17081356,
"type": "User"
} | [] | true | [] |
2,333,530,558 | 6,954 | Remove default `trust_remote_code=True` | TODO:
- [x] fix tests | closed | https://github.com/huggingface/datasets/pull/6954 | 2024-06-04T13:22:56 | 2024-06-17T16:32:24 | 2024-06-07T12:20:29 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,333,366,120 | 6,953 | Remove canonical datasets from docs | Remove canonical datasets from docs, now that we no longer have canonical datasets. | closed | https://github.com/huggingface/datasets/issues/6953 | 2024-06-04T12:09:03 | 2024-07-01T11:31:25 | 2024-07-01T11:31:25 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | false | [] |
2,333,320,411 | 6,952 | Move info_utils errors to exceptions module | Move `info_utils` errors to `exceptions` module.
Additionally rename some of them, deprecate the former ones, and make the deprecation backward compatible (by making the new errors inherit from the former ones). | closed | https://github.com/huggingface/datasets/pull/6952 | 2024-06-04T11:48:32 | 2024-06-10T14:09:59 | 2024-06-10T14:03:55 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,333,231,042 | 6,951 | load_dataset() should load all subsets, if no specific subset is specified | ### Feature request
Currently load_dataset() is forcing users to specify a subset. Example
`from datasets import load_dataset
dataset = load_dataset("m-a-p/COIG-CQIA")`
```---------------------------------------------------------------------------
ValueError Traceback (most recen... | closed | https://github.com/huggingface/datasets/issues/6951 | 2024-06-04T11:02:33 | 2024-11-26T08:32:18 | 2024-07-01T11:33:10 | {
"login": "windmaple",
"id": 5577741,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,333,005,974 | 6,950 | `Dataset.with_format` behaves inconsistently with documentation | ### Describe the bug
The actual behavior of the interface `Dataset.with_format` is inconsistent with the documentation.
https://huggingface.co/docs/datasets/use_with_pytorch#n-dimensional-arrays
https://huggingface.co/docs/datasets/v2.19.0/en/use_with_tensorflow#n-dimensional-arrays
> If your dataset consists of ... | closed | https://github.com/huggingface/datasets/issues/6950 | 2024-06-04T09:18:32 | 2024-06-25T08:05:49 | 2024-06-25T08:05:49 | {
"login": "iansheng",
"id": 42494185,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | false | [] |
2,332,336,573 | 6,949 | load_dataset error | ### Describe the bug
Why does the program get stuck when I use the load_dataset method, and why does it remain stuck after loading for several hours? In fact, my json file is only 21m, and I can load it in one go using open('', 'r').
### Steps to reproduce the bug
1. pip install datasets==2.19.2
2. from datasets import Data... | closed | https://github.com/huggingface/datasets/issues/6949 | 2024-06-04T01:24:45 | 2024-07-01T11:33:46 | 2024-07-01T11:33:46 | {
"login": "frederichen01",
"id": 27952522,
"type": "User"
} | [] | false | [] |
2,331,758,300 | 6,948 | to_tf_dataset: Visible devices cannot be modified after being initialized | ### Describe the bug
When trying to use to_tf_dataset with a custom data_loader collate_fn and parallelism, I am met with the following error as many times as the number of workers specified in ``num_workers``.
File "/opt/miniconda/envs/env/lib/python3.11/site-packages/multiprocess/process.py", line 314, in _b... | open | https://github.com/huggingface/datasets/issues/6948 | 2024-06-03T18:10:57 | 2024-06-03T18:10:57 | null | {
"login": "logasja",
"id": 7151661,
"type": "User"
} | [] | false | [] |
2,331,114,055 | 6,947 | FileNotFoundError:error when loading C4 dataset | ### Describe the bug
can't load c4 datasets
When I replace the datasets package with version 2.12.2, I get: raise datasets.utils.info_utils.ExpectedMoreSplits: {'train'}
How can I fix this?
### Steps to reproduce the bug
1.from datasets import load_dataset
2.dataset = load_dataset('allenai/c4', data_files={'validat... | closed | https://github.com/huggingface/datasets/issues/6947 | 2024-06-03T13:06:33 | 2024-06-25T06:21:28 | 2024-06-25T06:21:28 | {
"login": "W-215",
"id": 62374585,
"type": "User"
} | [] | false | [] |
2,330,276,848 | 6,946 | Re-enable import sorting disabled by flake8:noqa directive when using ruff linter | Re-enable import sorting that was wrongly disabled by `flake8: noqa` directive after switching to `ruff` linter in datasets-2.10.0 PR:
- #5519
Note that after the linter switch, we wrongly replaced `flake8: noqa` with `ruff: noqa` in datasets-2.17.0 PR:
- #6619
That replacement was wrong because we kept the `is... | closed | https://github.com/huggingface/datasets/pull/6946 | 2024-06-03T06:24:47 | 2024-06-04T10:00:08 | 2024-06-04T09:54:23 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,330,224,869 | 6,945 | Update yanked version of minimum requests requirement | Update yanked version of minimum requests requirement.
Version 2.32.1 was yanked: https://pypi.org/project/requests/2.32.1/ | closed | https://github.com/huggingface/datasets/pull/6945 | 2024-06-03T05:45:50 | 2024-06-18T07:36:15 | 2024-06-03T06:09:43 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,330,207,120 | 6,944 | Set dev version | null | closed | https://github.com/huggingface/datasets/pull/6944 | 2024-06-03T05:29:59 | 2024-06-03T05:37:51 | 2024-06-03T05:31:47 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,330,176,890 | 6,943 | Release 2.19.2 | null | closed | https://github.com/huggingface/datasets/pull/6943 | 2024-06-03T05:01:50 | 2024-06-03T05:17:41 | 2024-06-03T05:17:40 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,329,562,382 | 6,942 | Import sorting is disabled by flake8 noqa directive after switching to ruff linter | When we switched to `ruff` linter in PR:
- #5519
import sorting was disabled in all files containing the `# flake8: noqa` directive
- https://github.com/astral-sh/ruff/issues/11679
We should re-enable import sorting on those files. | closed | https://github.com/huggingface/datasets/issues/6942 | 2024-06-02T09:43:34 | 2024-06-04T09:54:24 | 2024-06-04T09:54:24 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "maintenance",
"color": "d4c5f9"
}
] | false | [] |
2,328,930,165 | 6,941 | Supporting FFCV: Fast Forward Computer Vision | ### Feature request
Supporting FFCV, https://github.com/libffcv/ffcv
### Motivation
According to the benchmark, FFCV seems to be the fastest image loading method.
### Your contribution
no | open | https://github.com/huggingface/datasets/issues/6941 | 2024-06-01T05:34:52 | 2024-06-01T05:34:52 | null | {
"login": "Luciennnnnnn",
"id": 20135317,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,328,637,831 | 6,940 | Enable Sharding to Equal Sized Shards | ### Feature request
Add an option when sharding a dataset to have all shards the same size. It would be good to provide both options: by duplication and by truncation.
### Motivation
Currently the behavior of sharding is "If n % i == l, then the first l shards will have length (n // i) + 1, and the remaining sha... | open | https://github.com/huggingface/datasets/issues/6940 | 2024-05-31T21:55:50 | 2024-06-01T07:34:12 | null | {
"login": "yuvalkirstain",
"id": 57996478,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,328,059,386 | 6,939 | ExpectedMoreSplits error when using data_dir | As reported by @regisss, an `ExpectedMoreSplits` error is raised when passing `data_dir`:
```python
from datasets import load_dataset
dataset = load_dataset(
"lvwerra/stack-exchange-paired",
split="train",
cache_dir=None,
data_dir="data/rl",
)
```
```
Traceback (most recent call last):
F... | closed | https://github.com/huggingface/datasets/issues/6939 | 2024-05-31T15:08:42 | 2024-05-31T17:10:39 | 2024-05-31T17:10:39 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
2,327,568,281 | 6,938 | Fix expected splits when passing data_files or dir | reported on slack:
The following code snippet gives an error with v2.19 but not with v2.18:
from datasets import load_dataset
```
dataset = load_dataset(
"lvwerra/stack-exchange-paired",
split="train",
cache_dir=None,
data_dir="data/rl",
)
```
and the error is:
```
Traceback (most recent ... | closed | https://github.com/huggingface/datasets/pull/6938 | 2024-05-31T11:04:22 | 2024-05-31T15:28:03 | 2024-05-31T15:28:02 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,327,212,611 | 6,937 | JSON loader implicitly coerces floats to integers | The JSON loader implicitly coerces floats to integers.
The column values `[0.0, 1.0, 2.0]` are coerced to `[0, 1, 2]`.
See CI error in dataset-viewer: https://github.com/huggingface/dataset-viewer/actions/runs/9290164936/job/25576926446
```
=================================== FAILURES ===========================... | open | https://github.com/huggingface/datasets/issues/6937 | 2024-05-31T08:09:12 | 2025-06-24T05:49:20 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
2,326,119,853 | 6,936 | save_to_disk() freezes when saving on s3 bucket with multiprocessing | ### Describe the bug
I'm trying to save a `Dataset` using the `save_to_disk()` function with:
- `num_proc > 1`
- `dataset_path` being a s3 bucket path e.g. "s3://{bucket_name}/{dataset_folder}/"
The hf progress bar shows up but the saving does not seem to start.
When using one processor only (`num_proc=1`), e... | open | https://github.com/huggingface/datasets/issues/6936 | 2024-05-30T16:48:39 | 2025-02-06T22:12:52 | null | {
"login": "ycattan",
"id": 54974949,
"type": "User"
} | [] | false | [] |
2,325,612,022 | 6,935 | Support for pathlib.Path in datasets 2.19.0 | ### Describe the bug
After the recent update of `datasets`, Dataset.save_to_disk does not accept a pathlib.Path anymore. It was supported in 2.18.0 and previous versions. Is this intentional? Was it supported before only because of a Python duck-typing miracle?
### Steps to reproduce the bug
```
from datasets impor... | open | https://github.com/huggingface/datasets/issues/6935 | 2024-05-30T12:53:36 | 2025-01-14T11:50:22 | null | {
"login": "lamyiowce",
"id": 12202811,
"type": "User"
} | [] | false | [] |
2,325,341,717 | 6,934 | Revert ci user | null | closed | https://github.com/huggingface/datasets/pull/6934 | 2024-05-30T10:45:26 | 2024-05-31T10:25:08 | 2024-05-30T10:45:37 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,325,300,800 | 6,933 | update ci user | token is ok to be public since it's only for the hub-ci | closed | https://github.com/huggingface/datasets/pull/6933 | 2024-05-30T10:23:02 | 2024-05-30T10:30:54 | 2024-05-30T10:23:12 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,324,729,267 | 6,932 | Update dataset_dict.py | shape returns (number of rows, number of columns) | closed | https://github.com/huggingface/datasets/pull/6932 | 2024-05-30T05:22:35 | 2024-06-04T12:56:20 | 2024-06-04T12:50:13 | {
"login": "Arunprakash-A",
"id": 20263729,
"type": "User"
} | [] | true | [] |
2,323,457,525 | 6,931 | [WebDataset] Support compressed files | null | closed | https://github.com/huggingface/datasets/pull/6931 | 2024-05-29T14:19:06 | 2024-05-29T16:33:18 | 2024-05-29T16:24:21 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,323,225,922 | 6,930 | ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})} | ### Describe the bug
When I run the code en = load_dataset("allenai/c4", "en", streaming=True), I encounter an error: raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}") ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'valid... | open | https://github.com/huggingface/datasets/issues/6930 | 2024-05-29T12:40:05 | 2024-07-23T06:25:24 | null | {
"login": "Polarisamoon",
"id": 41767521,
"type": "User"
} | [] | false | [] |
2,322,980,077 | 6,929 | Avoid downloading the whole dataset when only README.me has been touched on hub. | ### Feature request
`datasets.load_dataset()` triggers a new download of the **whole dataset** when the README.md file has been touched on huggingface hub, even if data files / parquet files are the exact same.
I think the current behaviour of the load_dataset function is triggered whenever a change of the hash o... | open | https://github.com/huggingface/datasets/issues/6929 | 2024-05-29T10:36:06 | 2024-05-29T20:51:56 | null | {
"login": "zinc75",
"id": 73740254,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,322,267,727 | 6,928 | Update process.mdx: Code Listings Fixes | null | closed | https://github.com/huggingface/datasets/pull/6928 | 2024-05-29T03:17:07 | 2024-06-04T13:08:19 | 2024-06-04T12:55:00 | {
"login": "FadyMorris",
"id": 16918280,
"type": "User"
} | [] | true | [] |
2,322,260,725 | 6,927 | Update process.mdx: Minor Code Listings Updates and Fixes | null | closed | https://github.com/huggingface/datasets/pull/6927 | 2024-05-29T03:09:01 | 2024-05-29T03:12:46 | 2024-05-29T03:12:46 | {
"login": "FadyMorris",
"id": 16918280,
"type": "User"
} | [] | true | [] |
2,322,164,287 | 6,926 | Update process.mdx: Fix code listing in Shard section | null | closed | https://github.com/huggingface/datasets/pull/6926 | 2024-05-29T01:25:55 | 2024-05-29T03:11:20 | 2024-05-29T03:11:08 | {
"login": "FadyMorris",
"id": 16918280,
"type": "User"
} | [] | true | [] |
2,321,084,967 | 6,925 | Fix NonMatchingSplitsSizesError/ExpectedMoreSplits when passing data_dir/data_files in no-code Hub datasets | Fix `NonMatchingSplitsSizesError` or `ExpectedMoreSplits` error for no-code Hub datasets if the user passes:
- `data_dir`
- `data_files`
The proposed solution is to avoid using exported dataset info (from Parquet exports) in these cases.
Additionally, also if the user passes `revision` other than "main" (so that ... | closed | https://github.com/huggingface/datasets/pull/6925 | 2024-05-28T13:33:38 | 2024-11-07T20:41:58 | 2024-05-31T17:10:37 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,320,531,015 | 6,924 | Caching map result of DatasetDict. | Hi!
I'm currently using the map function to tokenize a somewhat large dataset, so I need to use the cache to save ~25 mins.
Changing num_proc induces recomputation of the map; I'm not sure why, or whether this is expected behavior.
here it says, that cached files are loaded sequentially:
https://github.com/... | open | https://github.com/huggingface/datasets/issues/6924 | 2024-05-28T09:07:41 | 2024-05-28T09:07:41 | null | {
"login": "MostHumble",
"id": 56939432,
"type": "User"
} | [] | false | [] |
2,319,292,872 | 6,923 | Export Parquet Tablet Audio-Set is null bytes in Arrow | ### Describe the bug
When exporting the processed audio inside the table with the dataset.to_parquet function, the resulting pyarrow object is {bytes: null, path: "Some/Path"}
At the same time, the same dataset uploaded to the hub has bit arrays
` at the end. The push to hub is failing with:
```
ValueError: Invalid metadata in README.md.
- Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python[/tuple](... | open | https://github.com/huggingface/datasets/issues/6919 | 2024-05-24T14:59:45 | 2024-05-24T14:59:45 | null | {
"login": "juanqui",
"id": 67964,
"type": "User"
} | [] | false | [] |
2,315,322,738 | 6,918 | NonMatchingSplitsSizesError when using data_dir | ### Describe the bug
Loading a dataset with a data_dir argument generates a NonMatchingSplitsSizesError if there are multiple directories in the dataset.
This appears to happen because the expected split is calculated based on the data in all the directories whereas the recorded split is calculated based on t... | closed | https://github.com/huggingface/datasets/issues/6918 | 2024-05-24T12:43:39 | 2024-05-31T17:10:38 | 2024-05-31T17:10:38 | {
"login": "srehaag",
"id": 86664538,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
2,314,683,663 | 6,917 | WinError 32 The process cannot access the file during load_dataset | ### Describe the bug
When I try to load the opus_book from hugging face (following the [guide on the website](https://huggingface.co/docs/transformers/main/en/tasks/translation))
```python
from datasets import load_dataset, Dataset
dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=["id", "tran... | open | https://github.com/huggingface/datasets/issues/6917 | 2024-05-24T07:54:51 | 2024-05-24T07:54:51 | null | {
"login": "elwe-2808",
"id": 56682168,
"type": "User"
} | [] | false | [] |
2,311,675,564 | 6,916 | ```push_to_hub()``` - Prevent Automatic Generation of Splits | ### Describe the bug
I currently have a dataset which has not been split. When pushing the dataset to my Hugging Face dataset repository, it is split into a testing and training set. How can I prevent the split from happening?
### Steps to reproduce the bug
1. Have an unsplit dataset
```python
Dataset({ featur... | closed | https://github.com/huggingface/datasets/issues/6916 | 2024-05-22T23:52:15 | 2024-05-23T00:07:53 | 2024-05-23T00:07:53 | {
"login": "jetlime",
"id": 29337128,
"type": "User"
} | [] | false | [] |
2,310,564,961 | 6,915 | Validate config name and data_files in packaged modules | Validate the config attributes `name` and `data_files` in packaged modules by making the derived classes call their parent `__post_init__` method.
Note that their parent `BuilderConfig` validates its attributes `name` and `data_files` in its `__post_init__` method: https://github.com/huggingface/datasets/blob/60d21e... | closed | https://github.com/huggingface/datasets/pull/6915 | 2024-05-22T13:36:33 | 2024-06-06T09:32:10 | 2024-06-06T09:24:35 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,310,107,326 | 6,914 | Preserve JSON column order and support list of strings field | Preserve column order when loading from a JSON file with a list of dict (or with a field containing a list of dicts).
Additionally, support JSON file with a list of strings field.
Fix #6913. | closed | https://github.com/huggingface/datasets/pull/6914 | 2024-05-22T09:58:54 | 2024-05-29T13:18:47 | 2024-05-29T13:12:23 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,309,605,889 | 6,913 | Column order is nondeterministic when loading from JSON | As reported by @meg-huggingface, the order of the JSON object keys is not preserved while loading a dataset from a JSON file with a list of objects.
For example, when loading a JSON files with a list of objects, each with the following ordered keys:
- [ID, Language, Topic],
the resulting dataset may have column... | closed | https://github.com/huggingface/datasets/issues/6913 | 2024-05-22T05:30:14 | 2024-05-29T13:12:24 | 2024-05-29T13:12:24 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
2,309,365,961 | 6,912 | Add MedImg for streaming | ### Feature request
Host the MedImg dataset (similar to Imagenet but for biomedical images).
### Motivation
There is a clear need for biomedical image foundation models and large scale biomedical datasets that are easily streamable. This would be an excellent tool for the biomedical community.
### Your con... | open | https://github.com/huggingface/datasets/issues/6912 | 2024-05-22T00:55:30 | 2024-09-05T16:53:54 | null | {
"login": "lhallee",
"id": 72926928,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
2,308,152,711 | 6,911 | Remove dead code for non-dict data_files from packaged modules | Remove dead code for non-dict data_files from packaged modules.
Since the merge of this PR:
- #2986
the builders' variable self.config.data_files is always a dict, which makes the condition on (str, list, tuple) dead code. | closed | https://github.com/huggingface/datasets/pull/6911 | 2024-05-21T12:10:24 | 2024-05-23T08:05:58 | 2024-05-23T07:59:57 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,307,570,084 | 6,910 | Fix wrong type hints in data_files | Fix wrong type hints in data_files introduced in:
- #6493 | closed | https://github.com/huggingface/datasets/pull/6910 | 2024-05-21T07:41:09 | 2024-05-23T06:04:05 | 2024-05-23T05:58:05 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,307,508,120 | 6,909 | Update requests >=2.32.1 to fix vulnerability | Update requests >=2.32.1 to fix vulnerability. | closed | https://github.com/huggingface/datasets/pull/6909 | 2024-05-21T07:11:20 | 2024-05-21T07:45:58 | 2024-05-21T07:38:25 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,304,958,116 | 6,908 | Fail to load "stas/c4-en-10k" dataset since 2.16 version | ### Describe the bug
When updating the datasets library to version 2.16+ (I tested it on 2.16, 2.19.0 and 2.19.1), using the following code to load the stas/c4-en-10k dataset
```python
from datasets import load_dataset, Dataset
dataset = load_dataset('stas/c4-en-10k')
```
and then it raises a UnicodeDecodeError like
... | closed | https://github.com/huggingface/datasets/issues/6908 | 2024-05-20T02:43:59 | 2024-05-24T10:58:09 | 2024-05-24T10:58:09 | {
"login": "guch8017",
"id": 38173059,
"type": "User"
} | [] | false | [] |
2,303,855,833 | 6,907 | Support the deserialization of json lines files comprised of lists | ### Feature request
I manage a somewhat large and popular Hugging Face dataset known as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus). I recently updated my corpus to be stored in a json lines file where each line is an array and each element represents a v... | open | https://github.com/huggingface/datasets/issues/6907 | 2024-05-18T05:07:23 | 2024-05-18T08:53:28 | null | {
"login": "umarbutler",
"id": 8473183,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,303,679,119 | 6,906 | irc_disentangle - Issue with splitting data | ### Describe the bug
I am trying to access your database through python using "datasets.load_dataset("irc_disentangle")" and I am getting this error message:
ValueError: Instruction "train" corresponds to no data!
### Steps to reproduce the bug
import datasets
ds = datasets.load_dataset('irc_disentangle')
ds
#... | closed | https://github.com/huggingface/datasets/issues/6906 | 2024-05-17T23:19:37 | 2024-07-16T00:21:56 | 2024-07-08T06:18:08 | {
"login": "eor51355",
"id": 114260604,
"type": "User"
} | [] | false | [] |
2,303,098,587 | 6,905 | Extraction protocol for arrow files is not defined | ### Describe the bug
Passing files with the `.arrow` extension into the data_files argument is very slow, at least when `streaming=True`.
### Steps to reproduce the bug
Basically it goes through the `_get_extraction_protocol` method located [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_ut... | closed | https://github.com/huggingface/datasets/issues/6905 | 2024-05-17T16:01:41 | 2025-02-06T19:50:22 | 2025-02-06T19:50:20 | {
"login": "radulescupetru",
"id": 26553095,
"type": "User"
} | [] | false | [] |
2,302,912,179 | 6,904 | Fix decoding multi part extension | e.g. a field named `url.txt` should be a treated as text
I also included a small fix to support .npz correctly | closed | https://github.com/huggingface/datasets/pull/6904 | 2024-05-17T14:32:57 | 2024-05-17T14:52:56 | 2024-05-17T14:46:54 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,300,436,053 | 6,903 | Add the option of saving in parquet instead of arrow | ### Feature request
In dataset.save_to_disk('/path/to/save/dataset'),
add the option to save in parquet format
dataset.save_to_disk('/path/to/save/dataset', format="parquet"),
because Arrow is not used for production big data (only Parquet)
### Motivation
because arrow is not used for Production Big... | open | https://github.com/huggingface/datasets/issues/6903 | 2024-05-16T13:35:51 | 2025-05-19T12:14:14 | null | {
"login": "arita37",
"id": 18707623,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,300,256,241 | 6,902 | Make CLI convert_to_parquet not raise error if no rights to create script branch | Make CLI convert_to_parquet not raise error if no rights to create "script" branch.
Note that before this PR, the error was not critical because it was raised at the end of the script, once all the rest of the steps had already been performed.
Fix #6901.
Bug introduced in datasets-2.19.0 by:
- #6809 | closed | https://github.com/huggingface/datasets/pull/6902 | 2024-05-16T12:21:27 | 2024-06-03T04:43:17 | 2024-05-16T12:51:05 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,300,167,465 | 6,901 | HTTPError 403 raised by CLI convert_to_parquet when creating script branch on 3rd party repos | CLI convert_to_parquet cannot create "script" branch on 3rd party repos.
It can only create it on repos where the user executing the script has write access.
Otherwise, a 403 Forbidden HTTPError is raised:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/ut... | closed | https://github.com/huggingface/datasets/issues/6901 | 2024-05-16T11:40:22 | 2024-05-16T12:51:06 | 2024-05-16T12:51:06 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
2,298,489,733 | 6,900 | [WebDataset] KeyError with user-defined `Features` when a field is missing in an example | reported at https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions/discussions/1
```
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 109, in _generate_examples
example[field_name] = {"path": example["_... | closed | https://github.com/huggingface/datasets/issues/6900 | 2024-05-15T17:48:34 | 2024-06-28T09:30:13 | 2024-06-28T09:30:13 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | false | [] |
2,298,059,597 | 6,899 | List of dictionary features get standardized | ### Describe the bug
Hi, I'm trying to create a HF dataset from a list using Dataset.from_list.
Each sample in the list is a dict with the same keys (which will be my features). The values for each feature are a list of dictionaries, and each such dictionary has a different set of keys. However, the datasets librar... | open | https://github.com/huggingface/datasets/issues/6899 | 2024-05-15T14:11:35 | 2025-04-01T20:48:03 | null | {
"login": "sohamparikh",
"id": 11831521,
"type": "User"
} | [] | false | [] |
2,294,432,108 | 6,898 | Fix YAML error in README files appearing on GitHub | Fix YAML error in README files appearing on GitHub.
See error message:

Fix #6897. | closed | https://github.com/huggingface/datasets/pull/6898 | 2024-05-14T05:21:57 | 2024-05-16T14:36:57 | 2024-05-16T14:28:16 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,293,428,243 | 6,897 | datasets template guide :: issue in documentation YAML | ### Describe the bug
There is a YAML error at the top of the page, and I don't think it's supposed to be there
### Steps to reproduce the bug
1. Browse to [this tutorial document](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md)
2. Observe a big red error at the top
3. The rest of the ... | closed | https://github.com/huggingface/datasets/issues/6897 | 2024-05-13T17:33:59 | 2024-05-16T14:28:17 | 2024-05-16T14:28:17 | {
"login": "bghira",
"id": 59658056,
"type": "User"
} | [] | false | [] |
2,293,176,061 | 6,896 | Regression bug: `NonMatchingSplitsSizesError` for (possibly) overwritten dataset | ### Describe the bug
While trying to load the dataset `https://huggingface.co/datasets/pysentimiento/spanish-tweets-small`, I get this error:
```python
---------------------------------------------------------------------------
NonMatchingSplitsSizesError Traceback (most recent call last)
[<ipyth... | open | https://github.com/huggingface/datasets/issues/6896 | 2024-05-13T15:41:57 | 2025-03-25T01:21:06 | null | {
"login": "finiteautomata",
"id": 167943,
"type": "User"
} | [] | false | [] |
2,292,993,156 | 6,895 | Document that to_json defaults to JSON Lines | Document that `Dataset.to_json` defaults to JSON Lines, by adding explanation in the corresponding docstring.
Fix #6894. | closed | https://github.com/huggingface/datasets/pull/6895 | 2024-05-13T14:22:34 | 2024-05-16T14:37:25 | 2024-05-16T14:31:26 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,292,840,226 | 6,894 | Better document defaults of to_json | Better document defaults of `to_json`: the default format is [JSON-Lines](https://jsonlines.org/).
Related to:
- #6891 | closed | https://github.com/huggingface/datasets/issues/6894 | 2024-05-13T13:30:54 | 2024-05-16T14:31:27 | 2024-05-16T14:31:27 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | false | [] |
2,292,677,439 | 6,893 | Close gzipped files properly | close https://github.com/huggingface/datasets/issues/6877 | closed | https://github.com/huggingface/datasets/pull/6893 | 2024-05-13T12:24:39 | 2024-05-13T13:53:17 | 2024-05-13T13:01:54 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,291,201,347 | 6,892 | Add support for categorical/dictionary types | Arrow has a very useful dictionary/categorical type (https://arrow.apache.org/docs/python/generated/pyarrow.dictionary.html). This data type has significant speed, memory and disk benefits over pa.string() when there are only a few unique text strings in a column.
Unfortunately, huggingface datasets currently does n... | closed | https://github.com/huggingface/datasets/pull/6892 | 2024-05-12T07:15:08 | 2024-06-07T15:01:39 | 2024-06-07T12:20:42 | {
"login": "EthanSteinberg",
"id": 342233,
"type": "User"
} | [] | true | [] |
2,291,118,869 | 6,891 | Unable to load JSON saved using `to_json` | ### Describe the bug
Datasets stored in the JSON format cannot be loaded using `json.load()`
### Steps to reproduce the bug
```
import json
from datasets import load_dataset
dataset = load_dataset("squad")
train_dataset, test_dataset = dataset["train"], dataset["validation"]
test_dataset.to_json("full_dataset... | closed | https://github.com/huggingface/datasets/issues/6891 | 2024-05-12T01:02:51 | 2024-05-16T14:32:55 | 2024-05-12T07:02:02 | {
"login": "DarshanDeshpande",
"id": 39432636,
"type": "User"
} | [] | false | [] |
2,288,699,041 | 6,890 | add `with_transform` and/or `set_transform` to IterableDataset | ### Feature request
when working with a really large dataset it would save us a lot of time (and compute resources) to use either with_transform or the set_transform from the Dataset class instead of waiting for the entire dataset to map
### Motivation
don't want to wait for a really long dataset to map, this would ... | open | https://github.com/huggingface/datasets/issues/6890 | 2024-05-10T01:00:12 | 2024-05-10T01:00:46 | null | {
"login": "not-lain",
"id": 70411813,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,287,720,539 | 6,889 | fix bug #6877 | fix bug #6877 due to maybe f becomes invaild after yield process
the results are below:
Resolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:01<00:00, 420.41it/s]
Resolving data files: 100%|████████... | closed | https://github.com/huggingface/datasets/pull/6889 | 2024-05-09T13:38:40 | 2024-05-13T13:35:32 | 2024-05-13T13:35:32 | {
"login": "arthasking123",
"id": 16257131,
"type": "User"
} | [] | true | [] |
2,287,169,676 | 6,888 | Support WebDataset containing file basenames with dots | Support WebDataset containing file basenames with dots.
Fix #6880. | closed | https://github.com/huggingface/datasets/pull/6888 | 2024-05-09T08:25:30 | 2024-05-10T13:54:06 | 2024-05-10T13:54:06 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,286,786,396 | 6,887 | FAISS load to None | ### Describe the bug
I've used FAISS with Datasets and saved the FAISS index.
Then loading the saved FAISS index raises no error, but `ds` is None
```python
ds.load_faiss_index('embeddings', 'my_index.faiss')
```
### Steps to reproduce the bug
# 1.
```python
ds_with_embeddings = ds.map(lambda example: {'embeddings': model(transf... | open | https://github.com/huggingface/datasets/issues/6887 | 2024-05-09T02:43:50 | 2024-05-16T20:44:23 | null | {
"login": "brainer3220",
"id": 40418544,
"type": "User"
} | [] | false | [] |
2,286,328,984 | 6,886 | load_dataset with data_dir and cache_dir set fail with not supported | ### Describe the bug
with python 3.11 I execute:
```py
from transformers import Wav2Vec2Processor, Data2VecAudioModel
import torch
from torch import nn
from datasets import load_dataset, concatenate_datasets
# load demo audio and set processor
dataset_clean = load_dataset("librispeech_asr", "clean", split="... | open | https://github.com/huggingface/datasets/issues/6886 | 2024-05-08T19:52:35 | 2024-05-08T19:58:11 | null | {
"login": "fah",
"id": 322496,
"type": "User"
} | [] | false | [] |
2,285,115,400 | 6,885 | Support jax 0.4.27 in CI tests | Support jax 0.4.27 in CI tests by using jax Array `devices` method instead of `device` (which no longer exists).
Fix #6884. | closed | https://github.com/huggingface/datasets/pull/6885 | 2024-05-08T09:19:37 | 2024-05-08T09:43:19 | 2024-05-08T09:35:16 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,284,839,687 | 6,884 | CI is broken after jax-0.4.27 release: AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device' | After jax-0.4.27 release (https://github.com/google/jax/releases/tag/jax-v0.4.27), our CI is broken with the error:
```Python traceback
AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device'. Did you mean: 'devices'?
```
See: https://github.com/huggingface/datasets/actions/runs/8997488... | closed | https://github.com/huggingface/datasets/issues/6884 | 2024-05-08T07:01:47 | 2024-05-08T09:35:17 | 2024-05-08T09:35:17 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
2,284,808,399 | 6,883 | Require Pillow >= 9.4.0 to avoid AttributeError when loading image dataset | Require Pillow >= 9.4.0 to avoid AttributeError when loading image dataset.
The `PIL.Image.ExifTags` that we use in our code was implemented in Pillow-9.4.0: https://github.com/python-pillow/Pillow/commit/24a5405a9f7ea22f28f9c98b3e407292ea5ee1d3
The bug #6881 was introduced in datasets-2.19.0 by this PR:
- #6739... | closed | https://github.com/huggingface/datasets/pull/6883 | 2024-05-08T06:43:29 | 2024-08-28T13:13:57 | 2024-05-16T14:34:02 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,284,803,158 | 6,882 | Connection Error When Using By-pass Proxies | ### Describe the bug
I'm currently using Clash for Windows as my proxy tunnel, after exporting HTTP_PROXY and HTTPS_PROXY to the port that clash provides🤔, it runs into a connection error saying "Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (ConnectionError(M... | open | https://github.com/huggingface/datasets/issues/6882 | 2024-05-08T06:40:14 | 2024-05-17T06:38:30 | null | {
"login": "MRNOBODY-ZST",
"id": 78351684,
"type": "User"
} | [] | false | [] |
2,284,794,009 | 6,881 | AttributeError: module 'PIL.Image' has no attribute 'ExifTags' | When trying to load an image dataset in an old Python environment (with Pillow-8.4.0), an error is raised:
```Python traceback
AttributeError: module 'PIL.Image' has no attribute 'ExifTags'
```
The error traceback:
```Python traceback
~/huggingface/datasets/src/datasets/iterable_dataset.py in __iter__(self)
1... | closed | https://github.com/huggingface/datasets/issues/6881 | 2024-05-08T06:33:57 | 2024-07-18T06:49:30 | 2024-05-16T14:34:03 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
2,283,278,337 | 6,880 | Webdataset: KeyError: 'png' on some datasets when streaming | reported at https://huggingface.co/datasets/tbone5563/tar_images/discussions/1
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("tbone5563/tar_images")
Downloading data: 100%
1.41G/1.41G [00:48<00:00, 17.2MB/s]
Downloading data: 100%
619M/619M [00:11<00:00, 57.4MB/s]
Generating train sp... | open | https://github.com/huggingface/datasets/issues/6880 | 2024-05-07T13:09:02 | 2024-05-14T20:34:05 | null | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | false | [] |
2,282,968,259 | 6,879 | Batched mapping does not raise an error if values for an existing column are empty | ### Describe the bug
Using `Dataset.map(fn, batched=True)` allows resizing the dataset by returning a dict of lists, all of which must be the same size. If they are not the same size, an error like `pyarrow.lib.ArrowInvalid: Column 1 named x expected length 1 but got length 0` is raised.
This is not the case if the... | open | https://github.com/huggingface/datasets/issues/6879 | 2024-05-07T11:02:40 | 2024-05-07T11:02:40 | null | {
"login": "felix-schneider",
"id": 208336,
"type": "User"
} | [] | false | [] |
2,282,879,491 | 6,878 | Create function to convert to parquet | Analogously to `delete_from_hub`, this PR:
- creates the Python function `convert_to_parquet`
- makes the corresponding CLI command use that function.
This way, the functionality can be used both from a terminal and from a Python console.
This PR also implements a test for convert_to_parquet function. | closed | https://github.com/huggingface/datasets/pull/6878 | 2024-05-07T10:27:07 | 2024-05-16T14:46:44 | 2024-05-16T14:38:23 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,282,068,337 | 6,877 | OSError: [Errno 24] Too many open files | ### Describe the bug
I am trying to load the 'default' subset of the following dataset which contains lots of files (828 per split): [https://huggingface.co/datasets/mteb/biblenlp-corpus-mmteb](https://huggingface.co/datasets/mteb/biblenlp-corpus-mmteb)
When trying to load it using the `load_dataset` function I get... | closed | https://github.com/huggingface/datasets/issues/6877 | 2024-05-07T01:15:09 | 2024-06-02T14:22:23 | 2024-05-13T13:01:55 | {
"login": "loicmagne",
"id": 53355258,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
2,281,450,743 | 6,876 | Unpin hfh | Needed to use those in dataset-viewer:
- dev version of hfh https://github.com/huggingface/dataset-viewer/pull/2781: don't spam the hub with /paths-info requests
- dev version of datasets at https://github.com/huggingface/datasets/pull/6875: don't write too big logs in the viewer
close https://github.com/hugging... | closed | https://github.com/huggingface/datasets/pull/6876 | 2024-05-06T18:10:49 | 2024-05-27T10:20:42 | 2024-05-27T10:14:40 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,281,428,826 | 6,875 | Shorten long logs | Some datasets may have unexpectedly long features/types (e.g. if the files are not formatted correctly).
In that case we should still be able to log something readable. | closed | https://github.com/huggingface/datasets/pull/6875 | 2024-05-06T17:57:07 | 2024-05-07T12:31:46 | 2024-05-07T12:25:45 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,280,717,233 | 6,874 | Use pandas ujson in JSON loader to improve performance | Use pandas ujson in JSON loader to improve performance.
Note that `datasets` has `pandas` as required dependency. And `pandas` includes `ujson` in `pd.io.json.ujson_loads`.
Fix #6867.
CC: @natolambert | closed | https://github.com/huggingface/datasets/pull/6874 | 2024-05-06T12:01:27 | 2024-05-17T16:28:29 | 2024-05-17T16:22:27 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,280,463,182 | 6,873 | Set dev version | null | closed | https://github.com/huggingface/datasets/pull/6873 | 2024-05-06T09:43:18 | 2024-05-06T10:03:19 | 2024-05-06T09:57:12 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,280,438,432 | 6,872 | Release 2.19.1 | null | closed | https://github.com/huggingface/datasets/pull/6872 | 2024-05-06T09:29:15 | 2024-05-06T09:35:33 | 2024-05-06T09:35:32 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,280,102,869 | 6,871 | Fix download for dict of dicts of URLs | Fix download for a dict of dicts of URLs when batched (default), introduced by:
- #6794
This PR also implements regression tests.
Fix #6869, fix #6850. | closed | https://github.com/huggingface/datasets/pull/6871 | 2024-05-06T06:06:52 | 2024-05-06T09:32:03 | 2024-05-06T09:25:52 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,280,084,008 | 6,870 | Update tqdm >= 4.66.3 to fix vulnerability | Update tqdm >= 4.66.3 to fix a vulnerability. | closed | https://github.com/huggingface/datasets/pull/6870 | 2024-05-06T05:49:36 | 2024-05-06T06:08:06 | 2024-05-06T06:02:00 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,280,048,297 | 6,869 | Download is broken for dict of dicts: FileNotFoundError | It seems there is a bug when downloading a dict of dicts of URLs introduced by:
- #6794
## Steps to reproduce the bug:
```python
from datasets import DownloadManager
dl_manager = DownloadManager()
paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-0000... | closed | https://github.com/huggingface/datasets/issues/6869 | 2024-05-06T05:13:36 | 2024-05-06T09:25:53 | 2024-05-06T09:25:53 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
2,279,385,159 | 6,868 | datasets.BuilderConfig does not work. | ### Describe the bug
I defined a custom BuilderConfig and GeneratorBasedBuilder.
Here is the code for BuilderConfig
```
class UIEConfig(datasets.BuilderConfig):
def __init__(
self,
*args,
data_dir=None,
instruction_file=None,
instruction_strategy=None,... | closed | https://github.com/huggingface/datasets/issues/6868 | 2024-05-05T08:08:55 | 2024-05-05T12:15:02 | 2024-05-05T12:15:01 | {
"login": "jdm4pku",
"id": 148830652,
"type": "User"
} | [] | false | [] |
2,279,059,787 | 6,867 | Improve performance of JSON loader | As reported by @natolambert, loading regular JSON files with `datasets` shows poor performance.
The cause is that we use the `json` Python standard library instead of other faster libraries. See my old comment: https://github.com/huggingface/datasets/pull/2638#pullrequestreview-706983714
> There are benchmarks that... | closed | https://github.com/huggingface/datasets/issues/6867 | 2024-05-04T15:04:16 | 2024-05-17T16:22:28 | 2024-05-17T16:22:28 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,278,736,221 | 6,866 | DataFilesNotFoundError for datasets in the open-llm-leaderboard | ### Describe the bug
When trying to get config names or load any dataset within the open-llm-leaderboard ecosystem (`open-llm-leaderboard/details_`) I receive the DataFilesNotFoundError. For the last month or so I've been loading datasets from the leaderboard almost everyday; yesterday was the first time I started see... | closed | https://github.com/huggingface/datasets/issues/6866 | 2024-05-04T04:59:00 | 2024-05-14T08:09:56 | 2024-05-14T08:09:56 | {
"login": "jerome-white",
"id": 6140840,
"type": "User"
} | [] | false | [] |
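Each record above follows the column schema listed in the header — `number`, `title`, `body`, `state`, `is_pull_request`, and so on. As a minimal sketch of working with that schema (the sample rows below are abbreviated stand-ins, not the full records), the dump can be split into issues and pull requests in plain Python:

```python
# Abbreviated rows mirroring the columns of the dump above;
# real records also carry id, title, body, html_url, user, labels, etc.
rows = [
    {"number": 6902, "state": "closed", "is_pull_request": True},
    {"number": 6901, "state": "closed", "is_pull_request": False},
    {"number": 6899, "state": "open",  "is_pull_request": False},
]

# The is_pull_request flag separates issues from PRs,
# and state distinguishes open from closed records.
issues = [r for r in rows if not r["is_pull_request"]]
prs = [r for r in rows if r["is_pull_request"]]
open_issues = [r["number"] for r in issues if r["state"] == "open"]

print(len(prs), len(issues), open_issues)  # 1 2 [6899]
```

The same predicates work unchanged as `filter` callables if the dump is loaded through a dataframe or the `datasets` library instead of plain lists.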