id (int64) | number (int64) | title (string) | body (string, nullable) | state (string, 2 classes) | html_url (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s], nullable) | user (dict) | labels (list) | is_pull_request (bool) | comments (list)
|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,965,347,685 | 7,494 | Broken links in pdf loading documentation | ### Describe the bug
Hi, just a couple of small issues I ran into while reading the docs for [loading pdf data](https://huggingface.co/docs/datasets/main/en/document_load):
1. The link for the [`Create a pdf dataset`](https://huggingface.co/docs/datasets/main/en/document_load#pdffolder) points to https://huggingface.... | closed | https://github.com/huggingface/datasets/issues/7494 | 2025-04-02T06:45:22 | 2025-04-15T13:36:25 | 2025-04-15T13:36:04 | {
"login": "VyoJ",
"id": 75789232,
"type": "User"
} | [] | false | [] |
2,964,025,179 | 7,493 | push_to_hub does not upload videos | ### Describe the bug
Hello,
I would like to upload a video dataset (some .mp4 files and some segments within them), i.e. rows correspond to subsequences from videos. Videos might be referenced by several rows.
I created a dataset locally and it references the videos and the video readers can read them correctly. I u... | open | https://github.com/huggingface/datasets/issues/7493 | 2025-04-01T17:00:20 | 2025-04-15T12:34:23 | null | {
"login": "DominikVincent",
"id": 9339403,
"type": "User"
} | [] | false | [] |
2,959,088,568 | 7,492 | Closes #7457 | This PR updates the documentation to include the HF_DATASETS_CACHE environment variable, which allows users to customize the cache location for datasets—similar to HF_HUB_CACHE for models. | closed | https://github.com/huggingface/datasets/pull/7492 | 2025-03-30T20:41:20 | 2025-04-13T22:05:07 | 2025-04-13T22:05:07 | {
"login": "Harry-Yang0518",
"id": 129883215,
"type": "User"
} | [] | true | [] |
2,959,085,647 | 7,491 | docs: update cache.mdx to include HF_DATASETS_CACHE documentation | null | closed | https://github.com/huggingface/datasets/pull/7491 | 2025-03-30T20:35:03 | 2025-03-30T20:36:40 | 2025-03-30T20:36:40 | {
"login": "Harry-Yang0518",
"id": 129883215,
"type": "User"
} | [] | true | [] |
2,958,826,222 | 7,490 | (refactor) remove redundant logic in _check_valid_index_key | This PR contributes a minor refactor of a small function in `src/datasets/formatting/formatting.py`. No change in logic.
In the original code, there are separate if-conditionals for `isinstance(key, range)` and `isinstance(key, Iterable)`, with essentially the same logic.
This PR combines these two using a sin... | open | https://github.com/huggingface/datasets/pull/7490 | 2025-03-30T11:45:42 | 2025-03-30T11:50:22 | null | {
"login": "suzyahyah",
"id": 2980993,
"type": "User"
} | [] | true | [] |
2,958,204,763 | 7,489 | fix: loading of datasets from Disk(#7373) | Fixes dataset loading from disk by ensuring that memory maps and streams are properly closed.
For more details, see https://github.com/huggingface/datasets/issues/7373. | open | https://github.com/huggingface/datasets/pull/7489 | 2025-03-29T16:22:58 | 2025-04-24T16:36:36 | null | {
"login": "sam-hey",
"id": 40773225,
"type": "User"
} | [] | true | [] |
2,956,559,358 | 7,488 | Support underscore int read instruction | close https://github.com/huggingface/datasets/issues/7481 | closed | https://github.com/huggingface/datasets/pull/7488 | 2025-03-28T16:01:15 | 2025-03-28T16:20:44 | 2025-03-28T16:20:43 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,956,533,448 | 7,487 | Write pdf in map | Fix this error when mapping a PDF dataset
```
pyarrow.lib.ArrowInvalid: Could not convert <pdfplumber.pdf.PDF object at 0x13498ee40> with type PDF: did not recognize Python value type when inferring an Arrow data type
```
and also let map() outputs be lists of images or pdfs | closed | https://github.com/huggingface/datasets/pull/7487 | 2025-03-28T15:49:25 | 2025-03-28T17:09:53 | 2025-03-28T17:09:51 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,954,042,179 | 7,486 | `shared_datadir` fixture is missing | ### Describe the bug
Running the tests for the latest release fails due to missing `shared_datadir` fixture.
### Steps to reproduce the bug
Running `pytest` while building a package for Arch Linux leads to these errors:
```
==================================== ERRORS ====================================
_________ E... | closed | https://github.com/huggingface/datasets/issues/7486 | 2025-03-27T18:17:12 | 2025-03-27T19:49:11 | 2025-03-27T19:49:10 | {
"login": "lahwaacz",
"id": 1289205,
"type": "User"
} | [] | false | [] |
2,953,696,519 | 7,485 | set dev version | null | closed | https://github.com/huggingface/datasets/pull/7485 | 2025-03-27T16:39:34 | 2025-03-27T16:41:59 | 2025-03-27T16:39:42 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,953,677,168 | 7,484 | release: 3.5.0 | null | closed | https://github.com/huggingface/datasets/pull/7484 | 2025-03-27T16:33:27 | 2025-03-27T16:35:44 | 2025-03-27T16:34:22 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,951,856,468 | 7,483 | Support skip_trying_type | This PR addresses Issue #7472
cc: @lhoestq | closed | https://github.com/huggingface/datasets/pull/7483 | 2025-03-27T07:07:20 | 2025-04-29T04:14:57 | 2025-04-09T09:53:10 | {
"login": "yoshitomo-matsubara",
"id": 11156001,
"type": "User"
} | [] | true | [] |
2,950,890,368 | 7,482 | Implement capability to restore non-nullability in Features | This PR attempts to keep track of non_nullable pyarrow fields when converting a `pa.Schema` to `Features`. At the same time, when outputting the `arrow_schema`, the original non-nullable fields are restored. This allows for more consistent behavior and avoids breaking behavior as illustrated in #7479.
I am by no mea... | closed | https://github.com/huggingface/datasets/pull/7482 | 2025-03-26T22:16:09 | 2025-05-15T15:00:59 | 2025-05-15T15:00:59 | {
"login": "BramVanroy",
"id": 2779410,
"type": "User"
} | [] | true | [] |
2,950,692,971 | 7,481 | deal with python `10_000` legal number in slice syntax | ### Feature request
```
In [6]: ds = datasets.load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:1000]")
In [7]: ds = datasets.load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:1_000]")
[dozens of frames skipped]
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py:444, in _s... | closed | https://github.com/huggingface/datasets/issues/7481 | 2025-03-26T20:10:54 | 2025-03-28T16:20:44 | 2025-03-28T16:20:44 | {
"login": "sfc-gh-sbekman",
"id": 196988264,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
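The underscore support requested in #7481 maps naturally onto Python itself: `int()` already accepts PEP 515 separators, so a slice parser only needs to let underscores through its regex. A hedged standalone sketch (the regex and function name are ours, not the actual `arrow_reader.py` code):

```python
import re

# Hypothetical sketch: accept digit groups with underscores in the
# split-slice syntax, e.g. "train_sft[:1_000]".
_SLICE_RE = re.compile(r"^(?P<split>\w+)\[(?P<from>-?[\d_]*):(?P<to>-?[\d_]*)\]$")

def parse_split_slice(spec: str):
    m = _SLICE_RE.match(spec)
    if m is None:
        raise ValueError(f"Unsupported split spec: {spec!r}")
    # int() accepts PEP 515 separators directly: int("1_000") == 1000
    frm = int(m["from"]) if m["from"] else None
    to = int(m["to"]) if m["to"] else None
    return m["split"], frm, to
```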
2,950,315,214 | 7,480 | HF_DATASETS_CACHE ignored? | ### Describe the bug
I'm struggling to get things to respect HF_DATASETS_CACHE.
Rationale: I'm on a system that uses NFS for homedir, so downloading to NFS is expensive, slow, and wastes valuable quota compared to local disk. Instead, it seems to rely mostly on HF_HUB_CACHE.
Current version: 3.2.1dev. In the process... | open | https://github.com/huggingface/datasets/issues/7480 | 2025-03-26T17:19:34 | 2025-04-28T10:16:16 | null | {
"login": "stephenroller",
"id": 31896,
"type": "User"
} | [] | false | [] |
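A hedged workaround sketch for #7480 (the paths are placeholders): both cache variables must be set before `datasets` is imported, because the library reads them once at import time.

```python
import os

# Assumed workaround: set cache locations before `datasets` is imported,
# since the library resolves them once at import time.
os.environ["HF_DATASETS_CACHE"] = "/local/scratch/hf_datasets"  # processed datasets
os.environ["HF_HUB_CACHE"] = "/local/scratch/hf_hub"            # raw Hub downloads

# import datasets  # only import after the environment is configured
```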
2,950,235,396 | 7,479 | Features.from_arrow_schema is destructive | ### Describe the bug
I came across this, perhaps niche, bug where `Features` does not/cannot account for pyarrow's `nullable=False` option in Fields. Interestingly, I found that in regular "flat" fields this does not necessarily lead to conflicts, but when a non-nullable field is in a struct, an incompatibility arises... | open | https://github.com/huggingface/datasets/issues/7479 | 2025-03-26T16:46:43 | 2025-03-26T16:46:58 | null | {
"login": "BramVanroy",
"id": 2779410,
"type": "User"
} | [] | false | [] |
2,948,993,461 | 7,478 | update fsspec 2025.3.0 | It appears there have been two releases of fsspec since this dependency was last updated, it would be great if Datasets could be updated so that it didn't hold back the usage of newer fsspec versions in consuming projects.
PR based on https://github.com/huggingface/datasets/pull/7352 | closed | https://github.com/huggingface/datasets/pull/7478 | 2025-03-26T09:53:05 | 2025-03-28T19:15:54 | 2025-03-28T15:51:55 | {
"login": "peteski22",
"id": 487783,
"type": "User"
} | [] | true | [] |
2,947,169,460 | 7,477 | What is the canonical way to compress a Dataset? | Given that Arrow is the preferred backend for a Dataset, what is a user supposed to do if they want concurrent reads, concurrent writes AND on-disk compression for a larger dataset?
Parquet would be the obvious answer except that there is no native support for writing sharded, parquet datasets concurrently [[1](https:... | open | https://github.com/huggingface/datasets/issues/7477 | 2025-03-25T16:47:51 | 2025-04-03T09:13:11 | null | {
"login": "eric-czech",
"id": 6130352,
"type": "User"
} | [] | false | [] |
2,946,997,924 | 7,476 | Prioritize json | `datasets` should load the JSON data in https://huggingface.co/datasets/facebook/natural_reasoning, not the PDF | closed | https://github.com/huggingface/datasets/pull/7476 | 2025-03-25T15:44:31 | 2025-03-25T15:47:00 | 2025-03-25T15:45:00 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,946,640,570 | 7,475 | IterableDataset's state_dict shard_example_idx is always equal to the number of samples in a shard | ### Describe the bug
I've noticed a strange behaviour with Iterable state_dict: the value of shard_example_idx is always equal to the number of samples in a shard.
### Steps to reproduce the bug
I am reusing the example from the doc
```python
from datasets import Dataset
ds = Dataset.from_dict({"a": range(6)}).to_... | closed | https://github.com/huggingface/datasets/issues/7475 | 2025-03-25T13:58:07 | 2025-05-06T14:22:19 | 2025-05-06T14:05:07 | {
"login": "bruno-hays",
"id": 48770768,
"type": "User"
} | [] | false | [] |
2,945,066,258 | 7,474 | Remove conditions for Python < 3.9 | This PR removes conditions for Python < 3.9. | closed | https://github.com/huggingface/datasets/pull/7474 | 2025-03-25T03:08:04 | 2025-04-16T00:11:06 | 2025-04-15T16:07:55 | {
"login": "cyyever",
"id": 17618148,
"type": "User"
} | [] | true | [] |
2,939,034,643 | 7,473 | Webdataset data format problem | ### Describe the bug
Please see https://huggingface.co/datasets/ejschwartz/idioms/discussions/1
Error code: FileFormatMismatchBetweenSplitsError
All three splits, train, test, and validation, use webdataset. But only the train split has more than one file. How can I force the other two splits to also be interpreted ... | closed | https://github.com/huggingface/datasets/issues/7473 | 2025-03-21T17:23:52 | 2025-03-21T19:19:58 | 2025-03-21T19:19:58 | {
"login": "edmcman",
"id": 1017189,
"type": "User"
} | [] | false | [] |
2,937,607,272 | 7,472 | Label casting during `map` process is canceled after the `map` process | ### Describe the bug
When preprocessing a multi-label dataset, I introduced a step to convert int labels to float labels as [BCEWithLogitsLoss](https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html) expects float labels and forward function of models in transformers package internally use `BCEWithL... | closed | https://github.com/huggingface/datasets/issues/7472 | 2025-03-21T07:56:22 | 2025-04-10T05:11:15 | 2025-04-10T05:11:14 | {
"login": "yoshitomo-matsubara",
"id": 11156001,
"type": "User"
} | [] | false | [] |
2,937,530,069 | 7,471 | Adding argument to `_get_data_files_patterns` | ### Feature request
How about adding an argument for the case where the user already knows the pattern?
https://github.com/huggingface/datasets/blob/a256b85cbc67aa3f0e75d32d6586afc507cf535b/src/datasets/data_files.py#L252
### Motivation
When using load_dataset, people might have 10M images as local files.
However, due to sear... | closed | https://github.com/huggingface/datasets/issues/7471 | 2025-03-21T07:17:53 | 2025-03-27T12:30:52 | 2025-03-26T07:26:27 | {
"login": "SangbumChoi",
"id": 34004152,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,937,236,323 | 7,470 | Is it possible to shard a single-sharded IterableDataset? | I thought https://github.com/huggingface/datasets/pull/7252 might be applicable but looking at it maybe not.
Say we have a process, eg. a database query, that can return data in slightly different order each time. So, the initial query needs to be run by a single thread (not to mention running multiple times incurs mo... | closed | https://github.com/huggingface/datasets/issues/7470 | 2025-03-21T04:33:37 | 2025-05-09T22:51:46 | 2025-03-26T06:49:28 | {
"login": "jonathanasdf",
"id": 511073,
"type": "User"
} | [] | false | [] |
2,936,606,080 | 7,469 | Custom split name with the web interface | ### Describe the bug
According to the doc here: https://huggingface.co/docs/hub/datasets-file-names-and-splits#custom-split-name
it should infer the split name from the subdirectory of `data` or the beginning of the file names in `data`.
When doing this manually through web upload it does not work: it uses "train" as a unique spl... | closed | https://github.com/huggingface/datasets/issues/7469 | 2025-03-20T20:45:59 | 2025-03-21T07:20:37 | 2025-03-21T07:20:37 | {
"login": "vince62s",
"id": 15141326,
"type": "User"
} | [] | false | [] |
2,934,094,103 | 7,468 | function `load_dataset` can't solve folder path with regex characters like "[]" | ### Describe the bug
When using the `load_dataset` function with a folder path containing regex special characters (such as "[]"), the issue occurs due to how the path is handled in the `resolve_pattern` function. This function passes the unprocessed path directly to `AbstractFileSystem.glob`, which supports regular e... | open | https://github.com/huggingface/datasets/issues/7468 | 2025-03-20T05:21:59 | 2025-03-25T10:18:12 | null | {
"login": "Hpeox",
"id": 89294013,
"type": "User"
} | [] | false | [] |
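A hedged workaround sketch for #7468 (the helper is ours): escape the literal part of the path with `glob.escape` before composing the pattern, so bracket characters are matched literally.

```python
import glob
import os

def safe_pattern(base_dir: str, pattern: str) -> str:
    # glob.escape() wraps metacharacters like "[" in "[...]" so the base
    # directory is treated literally; only `pattern` keeps glob semantics.
    return os.path.join(glob.escape(base_dir), pattern)
```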
2,930,067,107 | 7,467 | load_dataset with streaming hangs on parquet datasets | ### Describe the bug
When I try to load a dataset with parquet files (e.g. "bigcode/the-stack"), the dataset loads, but the Python interpreter can't exit and hangs.
### Steps to reproduce the bug
```python3
import datasets
print('Start')
dataset = datasets.load_dataset("bigcode/the-stack", data_dir="data/yaml", streaming... | open | https://github.com/huggingface/datasets/issues/7467 | 2025-03-18T23:33:54 | 2025-03-25T10:28:04 | null | {
"login": "The0nix",
"id": 10550252,
"type": "User"
} | [] | false | [] |
2,928,661,327 | 7,466 | Fix local pdf loading | fix this error when accessing a local pdf
```
File ~/.pyenv/versions/3.12.2/envs/hf-datasets/lib/python3.12/site-packages/pdfminer/psparser.py:220, in PSBaseParser.seek(self, pos)
218 """Seeks the parser to the given position."""
219 log.debug("seek: %r", pos)
--> 220 self.fp.seek(pos)
221 # reset t... | closed | https://github.com/huggingface/datasets/pull/7466 | 2025-03-18T14:09:06 | 2025-03-18T14:11:52 | 2025-03-18T14:09:21 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,926,478,838 | 7,464 | Minor fix for metadata files in extension counter | null | closed | https://github.com/huggingface/datasets/pull/7464 | 2025-03-17T21:57:11 | 2025-03-18T15:21:43 | 2025-03-18T15:21:41 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,925,924,452 | 7,463 | Adds EXR format to store depth images in float32 | This PR adds the EXR feature to store depth images (or can be normals, etc) in float32.
It relies on [openexr_numpy](https://github.com/martinResearch/openexr_numpy/tree/main) to manipulate EXR images.
| open | https://github.com/huggingface/datasets/pull/7463 | 2025-03-17T17:42:40 | 2025-04-02T12:33:39 | null | {
"login": "ducha-aiki",
"id": 4803565,
"type": "User"
} | [] | true | [] |
2,925,612,945 | 7,462 | set dev version | null | closed | https://github.com/huggingface/datasets/pull/7462 | 2025-03-17T16:00:53 | 2025-03-17T16:03:31 | 2025-03-17T16:01:08 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,925,608,123 | 7,461 | List of images behave differently on IterableDataset and Dataset | ### Describe the bug
This code:
```python
def train_iterable_gen():
images = np.array(load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg").resize((128, 128)))
yield {
"images": np.expand_dims(images, axis=0),
"messages": [
... | closed | https://github.com/huggingface/datasets/issues/7461 | 2025-03-17T15:59:23 | 2025-03-18T08:57:17 | 2025-03-18T08:57:16 | {
"login": "FredrikNoren",
"id": 1288009,
"type": "User"
} | [] | false | [] |
2,925,605,865 | 7,460 | release: 3.4.1 | null | closed | https://github.com/huggingface/datasets/pull/7460 | 2025-03-17T15:58:31 | 2025-03-17T16:01:14 | 2025-03-17T15:59:19 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,925,491,766 | 7,459 | Fix data_files filtering | close https://github.com/huggingface/datasets/issues/7458 | closed | https://github.com/huggingface/datasets/pull/7459 | 2025-03-17T15:20:21 | 2025-03-17T15:25:56 | 2025-03-17T15:25:54 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,925,403,528 | 7,458 | Loading the `laion/filtered-wit` dataset in streaming mode fails on v3.4.0 | ### Describe the bug
Loading https://huggingface.co/datasets/laion/filtered-wit in streaming mode fails after update to `datasets==3.4.0`. The dataset loads fine on v3.3.2.
### Steps to reproduce the bug
Steps to reproduce:
```
pip install datasets==3.4.0
python -c "from datasets import load_dataset; load_dataset('l... | closed | https://github.com/huggingface/datasets/issues/7458 | 2025-03-17T14:54:02 | 2025-03-17T16:02:04 | 2025-03-17T15:25:55 | {
"login": "nikita-savelyevv",
"id": 23343961,
"type": "User"
} | [] | false | [] |
2,924,886,467 | 7,457 | Document the HF_DATASETS_CACHE env variable | ### Feature request
Hello,
I have a use case where my team is sharing models and dataset in shared directory to avoid duplication.
I noticed that the [cache documentation for datasets](https://huggingface.co/docs/datasets/main/en/cache) only mention the `HF_HOME` environment variable but never the `HF_DATASETS_CACHE`... | closed | https://github.com/huggingface/datasets/issues/7457 | 2025-03-17T12:24:50 | 2025-05-06T15:54:39 | 2025-05-06T15:54:39 | {
"login": "LSerranoPEReN",
"id": 92166725,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,922,676,278 | 7,456 | .add_faiss_index and .add_elasticsearch_index returns ImportError at Google Colab | ### Describe the bug
At Google Colab
```!pip install faiss-cpu``` works
```import faiss``` no error
but
```embeddings_dataset.add_faiss_index(column='embeddings')```
returns
```
[/usr/local/lib/python3.11/dist-packages/datasets/search.py](https://localhost:8080/#) in init(self, device, string_factory, metric_type, cus... | open | https://github.com/huggingface/datasets/issues/7456 | 2025-03-16T00:51:49 | 2025-03-17T15:57:19 | null | {
"login": "MapleBloom",
"id": 109490785,
"type": "User"
} | [] | false | [] |
2,921,933,250 | 7,455 | Problems with local dataset after upgrade from 3.3.2 to 3.4.0 | ### Describe the bug
After yesterday's upgrade from datasets 3.3.2 to 3.4.0, I was no longer able to open a locally saved dataset that was created with an older datasets version.
The traceback is
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/arrow/... | open | https://github.com/huggingface/datasets/issues/7455 | 2025-03-15T09:22:50 | 2025-03-17T16:20:43 | null | {
"login": "andjoer",
"id": 60151338,
"type": "User"
} | [] | false | [] |
2,920,760,793 | 7,454 | set dev version | null | closed | https://github.com/huggingface/datasets/pull/7454 | 2025-03-14T16:48:19 | 2025-03-14T16:50:31 | 2025-03-14T16:48:28 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,920,719,503 | 7,453 | release: 3.4.0 | null | closed | https://github.com/huggingface/datasets/pull/7453 | 2025-03-14T16:30:45 | 2025-03-14T16:38:10 | 2025-03-14T16:38:08 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,920,354,783 | 7,452 | minor docs changes | before the release | closed | https://github.com/huggingface/datasets/pull/7452 | 2025-03-14T14:14:04 | 2025-03-14T14:16:38 | 2025-03-14T14:14:20 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,919,835,663 | 7,451 | Fix resuming after `ds.set_epoch(new_epoch)` | close https://github.com/huggingface/datasets/issues/7447 | closed | https://github.com/huggingface/datasets/pull/7451 | 2025-03-14T10:31:25 | 2025-03-14T10:50:11 | 2025-03-14T10:50:09 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,916,681,414 | 7,450 | Add IterableDataset.decode with multithreading | Useful for dataset streaming for multimodal datasets, and especially for lerobot.
It speeds up streaming up to 20 times.
When decoding is enabled (default), media types are decoded:
* audio -> dict of "array" and "sampling_rate" and "path"
* image -> PIL.Image
* video -> torchvision.io.VideoReader
You can e... | closed | https://github.com/huggingface/datasets/pull/7450 | 2025-03-13T10:41:35 | 2025-03-14T10:35:37 | 2025-03-14T10:35:35 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,916,235,092 | 7,449 | Cannot load data with different schemas from different parquet files | ### Describe the bug
Cannot load samples with optional fields from different files. The schema cannot be correctly derived.
### Steps to reproduce the bug
When I place two samples with an optional field `some_extra_field` within a single parquet file, it can be loaded via `load_dataset`.
```python
import pandas as ... | closed | https://github.com/huggingface/datasets/issues/7449 | 2025-03-13T08:14:49 | 2025-03-17T07:27:48 | 2025-03-17T07:27:46 | {
"login": "li-plus",
"id": 39846316,
"type": "User"
} | [] | false | [] |
2,916,025,762 | 7,448 | `datasets.disable_caching` doesn't work | When I use `Dataset.from_generator(my_gen)` to load my dataset, it simply skips my changes to the generator function.
I tried `datasets.disable_caching`, but it doesn't work! | open | https://github.com/huggingface/datasets/issues/7448 | 2025-03-13T06:40:12 | 2025-03-22T04:37:07 | null | {
"login": "UCC-team",
"id": 35629974,
"type": "User"
} | [] | false | [] |
2,915,233,248 | 7,447 | Epochs shortened after resuming mid-epoch with Iterable dataset+StatefulDataloader(persistent_workers=True) | ### Describe the bug
When `torchdata.stateful_dataloader.StatefulDataloader(persistent_workers=True)` the epochs after resuming only iterate through the examples that were left in the epoch when the training was interrupted. For example, in the script below training is interrupted on step 124 (epoch 1) when 3 batches ... | closed | https://github.com/huggingface/datasets/issues/7447 | 2025-03-12T21:41:05 | 2025-07-09T23:04:57 | 2025-03-14T10:50:10 | {
"login": "dhruvdcoder",
"id": 4356534,
"type": "User"
} | [] | false | [] |
2,913,050,552 | 7,446 | pyarrow.lib.ArrowTypeError: Expected dict key of type str or bytes, got 'int' | ### Describe the bug
A dict whose keys are all str, but I get the following error:
```python
test_data=[{'input_ids':[1,2,3],'labels':[[Counter({2:1})]]}]
dataset = datasets.Dataset.from_list(test_data)
```
```bash
pyarrow.lib.ArrowTypeError: Expected dict key of type str or bytes, got 'int'
```
### Steps to reproduce the... | closed | https://github.com/huggingface/datasets/issues/7446 | 2025-03-12T07:48:37 | 2025-07-04T05:14:45 | 2025-07-04T05:14:45 | {
"login": "rangehow",
"id": 88258534,
"type": "User"
} | [] | false | [] |
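A hedged workaround sketch for #7446 (the helper is ours, not a `datasets` API): Arrow types dict keys as map keys and requires them to be `str`, so convert non-string keys recursively before building the dataset.

```python
from collections import Counter

def stringify_keys(obj):
    # Counter is a dict subclass, so this also flattens Counter({2: 1})
    # into {"2": 1}, which Arrow can type as a map with string keys.
    if isinstance(obj, dict):
        return {str(k): stringify_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [stringify_keys(v) for v in obj]
    return obj

test_data = [{"input_ids": [1, 2, 3], "labels": [[Counter({2: 1})]]}]
clean = [stringify_keys(row) for row in test_data]
# clean can now be passed to datasets.Dataset.from_list(...)
```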
2,911,507,923 | 7,445 | Fix small bugs with async map | helpful for the next PR to enable parallel image/audio/video decoding and make multimodal datasets go brr (e.g. for lerobot)
- fix with_indices
- fix resuming with save_state_dict() / load_state_dict() - omg that wasn't easy
- remove unnecessary decoding in map() to enable parallelism in FormattedExampleIterable l... | closed | https://github.com/huggingface/datasets/pull/7445 | 2025-03-11T18:30:57 | 2025-03-13T10:38:03 | 2025-03-13T10:37:58 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,911,202,445 | 7,444 | Excessive warnings when resuming an IterableDataset+buffered shuffle+DDP. | ### Describe the bug
I have a large dataset that I sharded into 1024 shards and saved to disk during pre-processing. During training, I load the dataset using load_from_disk(), convert it into an iterable dataset, shuffle it, and split the shards across DDP nodes using the recommended method.
However, when ... | open | https://github.com/huggingface/datasets/issues/7444 | 2025-03-11T16:34:39 | 2025-05-13T09:41:03 | null | {
"login": "dhruvdcoder",
"id": 4356534,
"type": "User"
} | [] | false | [] |
2,908,585,656 | 7,443 | index error when num_shards > len(dataset) | In `ds.push_to_hub()` and `ds.save_to_disk()`, `num_shards` must be smaller than or equal to the number of rows in the dataset, but currently this is not checked anywhere inside these functions. Attempting to invoke these functions with `num_shards > len(dataset)` should raise an informative `ValueError`.
I frequently... | open | https://github.com/huggingface/datasets/issues/7443 | 2025-03-10T22:40:59 | 2025-03-10T23:43:08 | null | {
"login": "eminorhan",
"id": 17934496,
"type": "User"
} | [] | false | [] |
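A hedged sketch of the guard #7443 asks for (the function name and message wording are assumptions):

```python
def check_num_shards(num_rows: int, num_shards: int) -> None:
    # Every shard must receive at least one row; fail fast with an
    # informative ValueError instead of an opaque index error later.
    if num_shards > num_rows:
        raise ValueError(
            f"num_shards ({num_shards}) must be <= the number of rows "
            f"in the dataset ({num_rows})"
        )
```

Called at the top of `push_to_hub()` and `save_to_disk()`, a check like this would surface the mistake before any upload or write begins.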
2,905,543,017 | 7,442 | Flexible Loader | ### Feature request
Can we have a utility function that will use `load_from_disk` when given a local path and `load_dataset` when given an HF dataset name?
It can be something as simple as this one:
```
def load_hf_dataset(path_or_name):
if os.path.exists(path_or_name):
return load_from_disk(path_or_name)
... | open | https://github.com/huggingface/datasets/issues/7442 | 2025-03-09T16:55:03 | 2025-03-27T23:58:17 | null | {
"login": "dipta007",
"id": 13894030,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,904,702,329 | 7,441 | `drop_last_batch` does not drop the last batch using IterableDataset + interleave_datasets + multi_worker | ### Describe the bug
See the script below
`drop_last_batch=True` is defined using map() for each dataset.
The last batch for each dataset is expected to be dropped, id 21-25.
The code behaves as expected when num_workers=0 or 1.
When using num_workers>1, 'a-11', 'b-11', 'a-12', 'b-12' are gone and instead 21 and 22 a... | open | https://github.com/huggingface/datasets/issues/7441 | 2025-03-08T10:28:44 | 2025-03-09T21:27:33 | null | {
"login": "memray",
"id": 4197249,
"type": "User"
} | [] | false | [] |
2,903,740,662 | 7,440 | IterableDataset raises FileNotFoundError instead of retrying | ### Describe the bug
In https://github.com/huggingface/datasets/issues/6843 it was noted that the streaming feature of `datasets` is highly susceptible to outages and doesn't back off for long (or even *at all*).
I was training a model while streaming SlimPajama and training crashed with a `FileNotFoundError`. I can ... | open | https://github.com/huggingface/datasets/issues/7440 | 2025-03-07T19:14:18 | 2025-07-22T08:15:44 | null | {
"login": "bauwenst",
"id": 145220868,
"type": "User"
} | [] | false | [] |
2,900,143,289 | 7,439 | Fix multi gpu process example | to is not an inplace function.
But i am not sure about this code anyway, i think this is modifying the global variable `model` everytime the function is called? Which is on every batch? So it is juggling the same model on every gpu right? Isnt that very inefficient? | closed | https://github.com/huggingface/datasets/pull/7439 | 2025-03-06T11:29:19 | 2025-03-06T17:07:28 | 2025-03-06T17:06:38 | {
"login": "SwayStar123",
"id": 46050679,
"type": "User"
} | [] | true | [] |
2,899,209,484 | 7,438 | Allow dataset row indexing with np.int types (#7423) | @lhoestq
Proposed fix for #7423. Added a couple simple tests as requested. I had some test failures related to Java and pyspark even when installing with dev but these don't seem to be related to the changes here and fail for me even on clean main.
The `TypeError` raised when using the wrong type is: "Wrong key type... | closed | https://github.com/huggingface/datasets/pull/7438 | 2025-03-06T03:10:43 | 2025-07-23T17:56:22 | 2025-07-23T16:44:42 | {
"login": "DavidRConnell",
"id": 35470740,
"type": "User"
} | [] | true | [] |
2,899,104,679 | 7,437 | Use pyupgrade --py39-plus for remaining files | This work follows #7428. And "requires-python" is set in pyproject.toml | open | https://github.com/huggingface/datasets/pull/7437 | 2025-03-06T02:12:25 | 2025-07-18T04:04:08 | null | {
"login": "cyyever",
"id": 17618148,
"type": "User"
} | [] | true | [] |
2,898,385,725 | 7,436 | chore: fix typos | null | closed | https://github.com/huggingface/datasets/pull/7436 | 2025-03-05T20:17:54 | 2025-04-28T14:00:09 | 2025-04-28T13:51:26 | {
"login": "afuetterer",
"id": 35225576,
"type": "User"
} | [] | true | [] |
2,895,536,956 | 7,435 | Refactor `string_to_dict` to return `None` if there is no match instead of raising `ValueError` | Making this change, as encouraged here:
* https://github.com/huggingface/datasets/pull/7434#discussion_r1979933054
instead of having the pattern of using `try`-`except` to handle when there is no match, we can instead check if the return value is `None`; we can also assert that the return value should not be `Non... | closed | https://github.com/huggingface/datasets/pull/7435 | 2025-03-04T22:01:20 | 2025-03-12T16:52:00 | 2025-03-12T16:52:00 | {
"login": "ringohoffman",
"id": 27844407,
"type": "User"
} | [] | true | [] |
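A standalone sketch of the pattern #7435 adopts (the regex building here is a simplification, not the library's actual implementation): return `None` on no-match and let callers branch on it.

```python
import re
from typing import Optional

def string_to_dict(string: str, pattern: str) -> Optional[dict]:
    # Turn "{name}" placeholders into named groups; return None when
    # there is no match, instead of raising ValueError.
    regex = re.sub(r"{(\w+)}", r"(?P<\1>[^/]+)", pattern)
    match = re.match(regex, string)
    return match.groupdict() if match else None

# Caller pattern: an explicit None check replaces try/except ValueError.
parts = string_to_dict("data/train/shard-00000.parquet", "data/{split}/{filename}")
```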
2,893,075,908 | 7,434 | Refactor `Dataset.map` to reuse cache files mapped with different `num_proc` | Fixes #7433
This refactor unifies `num_proc is None or num_proc == 1` and `num_proc > 1`; instead of handling them completely separately where one uses a list of kwargs and shards and the other just uses a single set of kwargs and `self`, by wrapping the `num_proc == 1` case in a list and making the difference just ... | closed | https://github.com/huggingface/datasets/pull/7434 | 2025-03-04T06:12:37 | 2025-05-14T10:45:10 | 2025-05-12T15:14:08 | {
"login": "ringohoffman",
"id": 27844407,
"type": "User"
} | [] | true | [] |
2,890,240,400 | 7,433 | `Dataset.map` ignores existing caches and remaps when ran with different `num_proc` | ### Describe the bug
If you `map` a dataset and save it to a specific `cache_file_name` with a specific `num_proc`, and then call map again with that same existing `cache_file_name` but a different `num_proc`, the dataset will be re-mapped.
### Steps to reproduce the bug
1. Download a dataset
```python
import datase... | closed | https://github.com/huggingface/datasets/issues/7433 | 2025-03-03T05:51:26 | 2025-05-12T15:14:09 | 2025-05-12T15:14:09 | {
"login": "ringohoffman",
"id": 27844407,
"type": "User"
} | [] | false | [] |
2,887,717,289 | 7,432 | Fix type annotation | null | closed | https://github.com/huggingface/datasets/pull/7432 | 2025-02-28T17:28:20 | 2025-03-04T15:53:03 | 2025-03-04T15:53:03 | {
"login": "NeilGirdhar",
"id": 730137,
"type": "User"
} | [] | true | [] |
2,887,244,074 | 7,431 | Issues with large Datasets | ### Describe the bug
If the COCO annotation file is too large, the dataset will not be able to load it. I am not entirely sure where the issue is, but I am guessing the code tries to load it all as one line into a dataframe. This was for object detection.
My current work around is the following code but would ... | open | https://github.com/huggingface/datasets/issues/7431 | 2025-02-28T14:05:22 | 2025-03-04T15:02:26 | null | {
"login": "nikitabelooussovbtis",
"id": 106806889,
"type": "User"
} | [] | false | [] |
2,886,922,573 | 7,430 | Error in code "Time to slice and dice" from course "NLP Course" | ### Describe the bug
When we execute the code
```
frequencies = (
train_df["condition"]
.value_counts()
.to_frame()
.reset_index()
.rename(columns={"index": "condition", "condition": "frequency"})
)
frequencies.head()
```
the answer should be like this:
condition | frequency
birth control | 27655
dep... | closed | https://github.com/huggingface/datasets/issues/7430 | 2025-02-28T11:36:10 | 2025-03-05T11:32:47 | 2025-03-03T17:52:15 | {
"login": "Yurkmez",
"id": 122965300,
"type": "User"
} | [] | false | [] |
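The column-name mismatch reported above comes from a pandas API change. A minimal illustrative sketch (not the course's official fix; it assumes pandas ≥ 2.0, where `value_counts().to_frame().reset_index()` already yields columns named `condition` and `count`, so renaming `"index"` is a no-op — rename the count column instead):

```python
import pandas as pd

# Hypothetical stand-in data for the drug-review "condition" column
train_df = pd.DataFrame({"condition": ["birth control"] * 3 + ["depression"] * 2})

frequencies = (
    train_df["condition"]
    .value_counts()
    .to_frame()
    .reset_index()
    # pandas >= 2.0 names the count column "count", not "condition"
    .rename(columns={"count": "frequency"})
)
print(frequencies.columns.tolist())  # ['condition', 'frequency']
```

On pandas 1.x the original course snippet works as written, which is why the behavior looks version-dependent.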
2,886,806,513 | 7,429 | Improved type annotation | I've refined several type annotations throughout the codebase to align with current best practices and enhance overall clarity. Given the complexity of the code, there may still be areas that need further attention. I welcome any feedback or suggestions to make these improvements even better.
- Fixes #7202 | open | https://github.com/huggingface/datasets/pull/7429 | 2025-02-28T10:39:10 | 2025-05-15T12:27:17 | null | {
"login": "saiden89",
"id": 45285915,
"type": "User"
} | [] | true | [] |
2,886,111,651 | 7,428 | Use pyupgrade --py39-plus | null | closed | https://github.com/huggingface/datasets/pull/7428 | 2025-02-28T03:39:44 | 2025-03-22T00:51:20 | 2025-03-05T15:04:16 | {
"login": "cyyever",
"id": 17618148,
"type": "User"
} | [] | true | [] |
2,886,032,571 | 7,427 | Error splitting the input into NAL units. | ### Describe the bug
I am trying to finetune qwen2.5-vl on 16 * 80G GPUS, and I use `LLaMA-Factory` and set `preprocessing_num_workers=16`. However, I hit the following error and the program seems to have crashed. The error appears to come from the `datasets` library.
The error logging is like following:
```text
Convertin... | open | https://github.com/huggingface/datasets/issues/7427 | 2025-02-28T02:30:15 | 2025-03-04T01:40:28 | null | {
"login": "MengHao666",
"id": 47114466,
"type": "User"
} | [] | false | [] |
2,883,754,507 | 7,426 | fix: None default with bool type on load creates typing error | Hello!
Pyright flags any use of `load_dataset` as an error, because the default for `trust_remote_code` is `None`, but the function is typed as `bool`, not `Optional[bool]`. I changed the type and docstrings to reflect this, but no other code was touched.
| closed | https://github.com/huggingface/datasets/pull/7426 | 2025-02-27T08:11:36 | 2025-03-04T15:53:40 | 2025-03-04T15:53:40 | {
"login": "stephantul",
"id": 8882233,
"type": "User"
} | [] | true | [] |
2,883,684,686 | 7,425 | load_dataset("livecodebench/code_generation_lite", version_tag="release_v2") TypeError: 'NoneType' object is not callable | ### Describe the bug
from datasets import load_dataset
lcb_codegen = load_dataset("livecodebench/code_generation_lite", version_tag="release_v2")
or
configs = get_dataset_config_names("livecodebench/code_generation_lite", trust_remote_code=True)
both error:
Traceback (most recent call last):
File "", line 1, in
File... | open | https://github.com/huggingface/datasets/issues/7425 | 2025-02-27T07:36:02 | 2025-03-27T05:05:33 | null | {
"login": "dshwei",
"id": 42167236,
"type": "User"
} | [] | false | [] |
2,882,663,621 | 7,424 | Faster folder based builder + parquet support + allow repeated media + use torchvideo | This will be useful for LeRobotDataset (robotics datasets for [lerobot](https://github.com/huggingface/lerobot) based on videos)
Impacted builders:
- ImageFolder
- AudioFolder
- VideoFolder
Improvements:
- faster to stream (got a 5x speed up on an image dataset)
- improved RAM usage
- support for metadata.p... | closed | https://github.com/huggingface/datasets/pull/7424 | 2025-02-26T19:55:18 | 2025-03-05T18:51:00 | 2025-03-05T17:41:23 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,879,271,409 | 7,423 | Row indexing a dataset with numpy integers | ### Feature request
Allow indexing datasets with a scalar numpy integer type.
### Motivation
Indexing a dataset with a scalar numpy.int* object raises a TypeError. This is due to the test in `datasets/formatting/formatting.py:key_to_query_type`
``` python
def key_to_query_type(key: Union[int, slice, range, str, Ite... | open | https://github.com/huggingface/datasets/issues/7423 | 2025-02-25T18:44:45 | 2025-03-03T17:55:24 | null | {
"login": "DavidRConnell",
"id": 35470740,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
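The dispatch described in the report above can be illustrated with a simplified, hypothetical version of `key_to_query_type` (not the library's actual code): only the built-in `int` is accepted as a row key, so a NumPy scalar integer falls through to the `TypeError` branch, and casting with `int()` is the workaround.

```python
import numpy as np

def key_to_query_type(key):
    """Simplified sketch of the dispatch logic from formatting.py."""
    if isinstance(key, int):
        return "row"
    if isinstance(key, str):
        return "column"
    raise TypeError(f"Wrong key type: '{key}' of type '{type(key)}'")

i = np.int64(3)
assert not isinstance(i, int)        # why ds[np.int64(3)] raises on Python 3
print(key_to_query_type(int(i)))     # casting to the built-in int works
```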
2,878,369,052 | 7,421 | DVC integration broken | ### Describe the bug
The DVC integration seems to be broken.
Followed this guide: https://dvc.org/doc/user-guide/integrations/huggingface
### Steps to reproduce the bug
#### Script to reproduce
~~~python
from datasets import load_dataset
dataset = load_dataset(
"csv",
data_files="dvc://workshop/satellite-d... | open | https://github.com/huggingface/datasets/issues/7421 | 2025-02-25T13:14:31 | 2025-03-03T17:42:02 | null | {
"login": "maxstrobel",
"id": 34747372,
"type": "User"
} | [] | false | [] |
2,876,281,928 | 7,420 | better correspondence between cached and saved datasets created using from_generator | ### Feature request
At the moment `.from_generator` can only create a dataset that lives in the cache. The cached dataset cannot be loaded with `load_from_disk` because the cache folder is missing `state.json`. So the only way to convert this cached dataset to a regular one is to use `save_to_disk`, which needs to create a...
"login": "vttrifonov",
"id": 12157034,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,875,635,320 | 7,419 | Import order crashes script execution | ### Describe the bug
Hello,
I'm trying to convert an HF dataset into a TFRecord so I'm importing `tensorflow` and `datasets` to do so.
Depending on the order in which I import those libraries, my code hangs forever and is unkillable (CTRL+C doesn't work; I need to kill my shell entirely).
Thank you for your help
🙏
... | open | https://github.com/huggingface/datasets/issues/7419 | 2025-02-24T17:03:43 | 2025-02-24T17:03:43 | null | {
"login": "DamienMatias",
"id": 23298479,
"type": "User"
} | [] | false | [] |
2,868,701,471 | 7,418 | pyarrow.lib.arrowinvalid: cannot mix list and non-list, non-null values with map function | ### Describe the bug
I encounter a `pyarrow.lib.ArrowInvalid` error with the `map` function on some examples when loading the dataset.
### Steps to reproduce the bug
```
from datasets import load_dataset
from PIL import Image, PngImagePlugin
dataset = load_dataset("leonardPKU/GEOQA_R1V_Train_8K")
system_prompt="You are a helpful... | open | https://github.com/huggingface/datasets/issues/7418 | 2025-02-21T10:58:06 | 2025-07-11T13:06:10 | null | {
"login": "alexxchen",
"id": 15705569,
"type": "User"
} | [] | false | [] |
2,866,868,922 | 7,417 | set dev version | null | closed | https://github.com/huggingface/datasets/pull/7417 | 2025-02-20T17:45:29 | 2025-02-20T17:47:50 | 2025-02-20T17:45:36 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,866,862,143 | 7,416 | Release: 3.3.2 | null | closed | https://github.com/huggingface/datasets/pull/7416 | 2025-02-20T17:42:11 | 2025-02-20T17:44:35 | 2025-02-20T17:43:28 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,865,774,546 | 7,415 | Shard Dataset at specific indices | I have a dataset of sequences, where each example in the sequence is a separate row in the dataset (similar to LeRobotDataset). When running `Dataset.save_to_disk` how can I provide indices where it's possible to shard the dataset such that no episode spans more than 1 shard. Consequently, when I run `Dataset.load_from... | open | https://github.com/huggingface/datasets/issues/7415 | 2025-02-20T10:43:10 | 2025-02-24T11:06:45 | null | {
"login": "nikonikolov",
"id": 11044035,
"type": "User"
} | [] | false | [] |
2,863,798,756 | 7,414 | Gracefully cancel async tasks | null | closed | https://github.com/huggingface/datasets/pull/7414 | 2025-02-19T16:10:58 | 2025-02-20T14:12:26 | 2025-02-20T14:12:23 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,860,947,582 | 7,413 | Documentation on multiple media files of the same type with WebDataset | The [current documentation](https://huggingface.co/docs/datasets/en/video_dataset) on a creating a video dataset includes only examples with one media file and one json. It would be useful to have examples where multiple files of the same type are included. For example, in a sign language dataset, you may have a base v... | open | https://github.com/huggingface/datasets/issues/7413 | 2025-02-18T16:13:20 | 2025-02-20T14:17:54 | null | {
"login": "DCNemesis",
"id": 3616964,
"type": "User"
} | [] | false | [] |
2,859,433,710 | 7,412 | Index Error Invalid Key is out of bounds for size 0 for code-search-net/code_search_net dataset | ### Describe the bug
I am trying to do model pruning on sentence-transformers/all-mini-L6-v2 for the code-search-net/code_search_net dataset using the INCTrainer class.
However, I am getting the error below:
```
raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
IndexError: Invalid key: 1840208 is out of b... | open | https://github.com/huggingface/datasets/issues/7412 | 2025-02-18T05:58:33 | 2025-02-18T06:42:07 | null | {
"login": "harshakhmk",
"id": 56113657,
"type": "User"
} | [] | false | [] |
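The "size 0" in the error above means the dataset being indexed is empty by the time a row is requested — often because every column was dropped before indexing. A minimal, hypothetical simplification of the bounds check that raises it (not the library's actual function):

```python
def check_valid_index_key(key: int, size: int) -> None:
    """Sketch of the guard that produces the error in the report."""
    if key < 0 or key >= size:
        raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")

check_valid_index_key(5, 10)  # in range: no error
try:
    check_valid_index_key(1840208, 0)  # any key is invalid for an empty dataset
except IndexError as e:
    print(e)
```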
2,858,993,390 | 7,411 | Attempt to fix multiprocessing hang by closing and joining the pool before termination | https://github.com/huggingface/datasets/issues/6393 has plagued me on and off for a very long time. I have had various workarounds (one time combining two filter calls into one filter call removed the issue, another time making rank 0 go first resolved a cache race condition, one time i think upgrading the version of s... | closed | https://github.com/huggingface/datasets/pull/7411 | 2025-02-17T23:58:03 | 2025-02-19T21:11:24 | 2025-02-19T13:40:32 | {
"login": "dakinggg",
"id": 43149077,
"type": "User"
} | [] | true | [] |
2,858,085,707 | 7,410 | Set dev version | null | closed | https://github.com/huggingface/datasets/pull/7410 | 2025-02-17T14:54:39 | 2025-02-17T14:56:58 | 2025-02-17T14:54:56 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,858,079,508 | 7,409 | Release: 3.3.1 | null | closed | https://github.com/huggingface/datasets/pull/7409 | 2025-02-17T14:52:12 | 2025-02-17T14:54:32 | 2025-02-17T14:53:13 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,858,012,313 | 7,408 | Fix filter speed regression | close https://github.com/huggingface/datasets/issues/7404 | closed | https://github.com/huggingface/datasets/pull/7408 | 2025-02-17T14:25:32 | 2025-02-17T14:28:48 | 2025-02-17T14:28:46 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,856,517,442 | 7,407 | Update use_with_pandas.mdx: to_pandas() correction in last section | last section ``to_pandas()" | closed | https://github.com/huggingface/datasets/pull/7407 | 2025-02-17T01:53:31 | 2025-02-20T17:28:04 | 2025-02-20T17:28:04 | {
"login": "ibarrien",
"id": 7552335,
"type": "User"
} | [] | true | [] |
2,856,441,206 | 7,406 | Adding Core Maintainer List to CONTRIBUTING.md | ### Feature request
I propose adding a core maintainer list to the `CONTRIBUTING.md` file.
### Motivation
The Transformers and Liger-Kernel projects maintain lists of core maintainers for each module.
However, the Datasets project doesn't have such a list.
### Your contribution
I have nothing to add here. | closed | https://github.com/huggingface/datasets/issues/7406 | 2025-02-17T00:32:40 | 2025-03-24T10:57:54 | 2025-03-24T10:57:54 | {
"login": "jp1924",
"id": 93233241,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,856,372,814 | 7,405 | Lazy loading of environment variables | ### Describe the bug
Loading a `.env` file after an `import datasets` call does not correctly use the environment variables.
This is due the fact that environment variables are read at import time:
https://github.com/huggingface/datasets/blob/de062f0552a810c52077543c1169c38c1f0c53fc/src/datasets/config.py#L155C1-L15... | open | https://github.com/huggingface/datasets/issues/7405 | 2025-02-16T22:31:41 | 2025-02-17T15:17:18 | null | {
"login": "nikvaessen",
"id": 7225987,
"type": "User"
} | [] | false | [] |
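The pitfall described above can be shown with the standard library alone — a hedged sketch where `MY_APP_SETTING` is a hypothetical variable standing in for `HF_DATASETS_CACHE` and friends: a value read at import time is frozen before any later `load_dotenv()` call can change it.

```python
import os

# Simulates a module that snapshots the environment when it is imported,
# the way datasets/config.py reads its variables at import time.
CONFIG_VALUE = os.environ.get("MY_APP_SETTING", "default")

# Setting the variable afterwards (like calling load_dotenv() after
# `import datasets`) does not change the already-captured value.
os.environ["MY_APP_SETTING"] = "from_dotenv"

print(CONFIG_VALUE)                  # still "default"
print(os.environ["MY_APP_SETTING"])  # "from_dotenv"
```

The usual workaround is to load the `.env` file before the first `import datasets`.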
2,856,366,207 | 7,404 | Performance regression in `dataset.filter` | ### Describe the bug
We're filtering dataset of ~1M (small-ish) records. At some point in the code we do `dataset.filter`, before (including 3.2.0) it was taking couple of seconds, and now it takes 4 hours.
We use 16 threads/workers, and stack trace at them look as follows:
```
Traceback (most recent call last):
Fi... | closed | https://github.com/huggingface/datasets/issues/7404 | 2025-02-16T22:19:14 | 2025-02-17T17:46:06 | 2025-02-17T14:28:48 | {
"login": "ttim",
"id": 82200,
"type": "User"
} | [] | false | [] |
2,855,880,858 | 7,402 | Fix a typo in arrow_dataset.py | "in the feature" should be "in the future" | closed | https://github.com/huggingface/datasets/pull/7402 | 2025-02-16T04:52:02 | 2025-02-20T17:29:28 | 2025-02-20T17:29:28 | {
"login": "jingedawang",
"id": 7996256,
"type": "User"
} | [] | true | [] |
2,853,260,869 | 7,401 | set dev version | null | closed | https://github.com/huggingface/datasets/pull/7401 | 2025-02-14T10:17:03 | 2025-02-14T10:19:20 | 2025-02-14T10:17:13 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,853,098,442 | 7,399 | Synchronize parameters for various datasets | ### Describe the bug
[IterableDatasetDict](https://huggingface.co/docs/datasets/v3.2.0/en/package_reference/main_classes#datasets.IterableDatasetDict.map) map function is missing the `desc` parameter. You can see the equivalent map function for [Dataset here](https://huggingface.co/docs/datasets/v3.2.0/en/package_refe... | open | https://github.com/huggingface/datasets/issues/7399 | 2025-02-14T09:15:11 | 2025-02-19T11:50:29 | null | {
"login": "grofte",
"id": 7976840,
"type": "User"
} | [] | false | [] |
2,853,097,869 | 7,398 | Release: 3.3.0 | null | closed | https://github.com/huggingface/datasets/pull/7398 | 2025-02-14T09:15:03 | 2025-02-14T09:57:39 | 2025-02-14T09:57:37 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,852,829,763 | 7,397 | Kannada dataset(Conversations, Wikipedia etc) | null | closed | https://github.com/huggingface/datasets/pull/7397 | 2025-02-14T06:53:03 | 2025-02-20T17:28:54 | 2025-02-20T17:28:53 | {
"login": "Likhith2612",
"id": 146451281,
"type": "User"
} | [] | true | [] |
2,853,201,277 | 7,400 | 504 Gateway Timeout when uploading large dataset to Hugging Face Hub | ### Description
I encountered consistent 504 Gateway Timeout errors while attempting to upload a large dataset (approximately 500GB) to the Hugging Face Hub. The upload fails during the process with a Gateway Timeout error.
I will continue trying to upload. While it might succeed in future attempts, I wanted to report... | open | https://github.com/huggingface/datasets/issues/7400 | 2025-02-14T02:18:35 | 2025-02-14T23:48:36 | null | {
"login": "hotchpotch",
"id": 3500,
"type": "User"
} | [] | false | [] |
2,851,716,755 | 7,396 | Update README.md | null | closed | https://github.com/huggingface/datasets/pull/7396 | 2025-02-13T17:44:36 | 2025-02-13T17:46:57 | 2025-02-13T17:44:51 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,851,575,160 | 7,395 | Update docs | - update min python version
- replace canonical dataset names with new names
- avoid examples with trust_remote_code | closed | https://github.com/huggingface/datasets/pull/7395 | 2025-02-13T16:43:15 | 2025-02-13T17:20:32 | 2025-02-13T17:20:30 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,847,172,115 | 7,394 | Using load_dataset with data_files and split arguments yields an error | ### Describe the bug
It seems the list of valid splits recorded by the package is incorrectly overwritten when the `data_files` argument is used.
If I run
```python
from datasets import load_dataset
load_dataset("allenai/super", split="all_examples", data_files="tasks/expert.jsonl")
```
then I get the error
```
Va... | open | https://github.com/huggingface/datasets/issues/7394 | 2025-02-12T04:50:11 | 2025-02-12T04:50:11 | null | {
"login": "devon-research",
"id": 61103399,
"type": "User"
} | [] | false | [] |
2,846,446,674 | 7,393 | Optimized sequence encoding for scalars | The change in https://github.com/huggingface/datasets/pull/3197 introduced redundant list-comprehensions when `obj` is a long sequence of scalars. This becomes a noticeable overhead when loading data from an `IterableDataset` in the function `_apply_feature_types_on_example` and can be eliminated by adding a check for ... | closed | https://github.com/huggingface/datasets/pull/7393 | 2025-02-11T20:30:44 | 2025-02-13T17:11:33 | 2025-02-13T17:11:32 | {
"login": "lukasgd",
"id": 38319063,
"type": "User"
} | [] | true | [] |
2,846,095,043 | 7,392 | push_to_hub payload too large error when using large ClassLabel feature | ### Describe the bug
When using `datasets.DatasetDict.push_to_hub` an `HfHubHTTPError: 413 Client Error: Payload Too Large for url` is raised if the dataset contains a large `ClassLabel` feature. Even if the total size of the dataset is small.
### Steps to reproduce the bug
``` python
import random
import sys
impor... | open | https://github.com/huggingface/datasets/issues/7392 | 2025-02-11T17:51:34 | 2025-02-11T18:01:31 | null | {
"login": "DavidRConnell",
"id": 35470740,
"type": "User"
} | [] | false | [] |