id (int64) | number (int64) | title (string) | body (string) | state (string) | html_url (string) | created_at (timestamp) | updated_at (timestamp) | closed_at (timestamp) | user (dict) | labels (list) | is_pull_request (bool) | comments (list)
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,961,869,203 | 6,350 | Different objects are returned from calls that should be returning the same kind of object. | ### Describe the bug
1. dataset = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", cache_dir=training_args.cache_dir, split='train[:1%]')
2. dataset = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", cache_dir=training_args.cache_dir)
The only difference I would expect these cal... | open | https://github.com/huggingface/datasets/issues/6350 | 2023-10-25T17:08:39 | 2023-10-26T21:03:06 | null | {
"login": "phalexo",
"id": 4603365,
"type": "User"
} | [] | false | [] |
1,961,435,673 | 6,349 | Can't load ds = load_dataset("imdb") | ### Describe the bug
I did `from datasets import load_dataset, load_metric` and then `ds = load_dataset("imdb")` and it gave me the error:
ExpectedMoreDownloadedFiles: {'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'}
I tried doing `ds = load_dataset("imdb",download_mode="force_redownload")` as we... | closed | https://github.com/huggingface/datasets/issues/6349 | 2023-10-25T13:29:51 | 2024-03-20T15:09:53 | 2023-10-31T19:59:35 | {
"login": "vivianc2",
"id": 86415736,
"type": "User"
} | [] | false | [] |
1,961,268,504 | 6,348 | Parquet stream-conversion fails to embed images/audio files from gated repos | It seems to be an issue with `datasets` not passing the token to `embed_table_storage` when generating a dataset.
See https://github.com/huggingface/datasets-server/issues/2010 | open | https://github.com/huggingface/datasets/issues/6348 | 2023-10-25T12:12:44 | 2025-04-17T12:21:43 | null | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,959,004,835 | 6,347 | Incorrect example code in 'Create a dataset' docs | ### Describe the bug
On [this](https://huggingface.co/docs/datasets/create_dataset) page, the example code for loading in images and audio is incorrect.
Currently, examples are:
``` python
from datasets import ImageFolder
dataset = load_dataset("imagefolder", data_dir="/path/to/pokemon")
```
and
``` python... | closed | https://github.com/huggingface/datasets/issues/6347 | 2023-10-24T11:01:21 | 2023-10-25T13:05:21 | 2023-10-25T13:05:21 | {
"login": "rwood-97",
"id": 72076688,
"type": "User"
} | [] | false | [] |
1,958,777,076 | 6,346 | Fix UnboundLocalError if preprocessing returns an empty list | If this tokenization function is used with IterableDatasets and no sample is as big as the context length, `input_batch` will be an empty list.
```
def tokenize(batch, tokenizer, context_length):
outputs = tokenizer(
batch["text"],
truncation=True,
max_length=context_length,
r... | closed | https://github.com/huggingface/datasets/pull/6346 | 2023-10-24T08:38:43 | 2023-10-25T17:39:17 | 2023-10-25T16:36:38 | {
"login": "cwallenwein",
"id": 40916592,
"type": "User"
} | [] | true | [] |
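The empty-batch hazard described in #6346 can be sketched in pure Python (a hypothetical whitespace tokenizer stands in for the real transformers one; only the filtering logic mirrors the issue):

```python
# Hypothetical sketch of the failure mode in #6346: a batched map function
# that keeps only samples filling the full context window can return an
# empty column, which downstream code must handle without assuming length > 0.
def tokenize(batch, context_length):
    # stand-in for a real tokenizer: whitespace split
    tokenized = [text.split() for text in batch["text"]]
    input_batch = [ids for ids in tokenized if len(ids) == context_length]
    return {"input_ids": input_batch}

# No sample is as long as the context length, so the result is empty.
out = tokenize({"text": ["short text", "also short"]}, context_length=128)
assert out["input_ids"] == []
```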
1,957,707,870 | 6,345 | support squad structure datasets using a YAML parameter | ### Feature request
Since the SQuAD structure is widely used, I think it could be beneficial to support it using a YAML parameter.
Could you implement automatic loading of SQuAD-like data using the SQuAD JSON format, so it can be read from JSON files and viewed in the correct SQuAD structure?
The dataset structure should... | open | https://github.com/huggingface/datasets/issues/6345 | 2023-10-23T17:55:37 | 2023-10-23T17:55:37 | null | {
"login": "MajdTannous1",
"id": 138524319,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,957,412,169 | 6,344 | set dev version | null | closed | https://github.com/huggingface/datasets/pull/6344 | 2023-10-23T15:13:28 | 2023-10-23T15:24:31 | 2023-10-23T15:13:38 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,957,370,711 | 6,343 | Remove unused argument in `_get_data_files_patterns` | null | closed | https://github.com/huggingface/datasets/pull/6343 | 2023-10-23T14:54:18 | 2023-11-16T09:09:42 | 2023-11-16T09:03:39 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,957,344,445 | 6,342 | Release: 2.14.6 | null | closed | https://github.com/huggingface/datasets/pull/6342 | 2023-10-23T14:43:26 | 2023-10-23T15:21:54 | 2023-10-23T15:07:25 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,956,917,893 | 6,340 | Release 2.14.5 | (wrong release number - I was continuing the 2.14 branch but 2.14.5 was released from `main`) | closed | https://github.com/huggingface/datasets/pull/6340 | 2023-10-23T11:10:22 | 2023-10-23T14:20:46 | 2023-10-23T11:12:40 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,956,912,627 | 6,339 | minor release step improvement | null | closed | https://github.com/huggingface/datasets/pull/6339 | 2023-10-23T11:07:04 | 2023-11-07T10:38:54 | 2023-11-07T10:32:41 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,956,886,072 | 6,338 | pin fsspec before it switches to glob.glob | null | closed | https://github.com/huggingface/datasets/pull/6338 | 2023-10-23T10:50:54 | 2024-01-11T06:32:56 | 2023-10-23T10:51:52 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,956,875,259 | 6,337 | Pin supported upper version of fsspec | Pin the upper version of `fsspec` to avoid disruptions introduced by breaking changes (and the need for urgent patch releases with hotfixes) on each of their releases. See:
- #6331
- #6210
- #5731
- #5617
- #5447
I propose that we explicitly test, introduce fixes and support each new `fsspec` version release.
... | closed | https://github.com/huggingface/datasets/pull/6337 | 2023-10-23T10:44:16 | 2023-10-23T12:13:20 | 2023-10-23T12:04:36 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,956,827,232 | 6,336 | unpin-fsspec | Close #6333. | closed | https://github.com/huggingface/datasets/pull/6336 | 2023-10-23T10:16:46 | 2024-02-07T12:41:35 | 2023-10-23T10:17:48 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,956,740,818 | 6,335 | Support fsspec 2023.10.0 | Fix #6333. | closed | https://github.com/huggingface/datasets/pull/6335 | 2023-10-23T09:29:17 | 2024-01-11T06:33:35 | 2023-11-14T14:17:40 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,956,719,774 | 6,334 | datasets.filesystems: fix is_remote_filesystems | Close #6330, close #6333.
`fsspec.implementations.LocalFilesystem.protocol`
was changed from `str` "file" to `tuple[str,...]` ("file", "local") in `fsspec>=2023.10.0`
This commit supports both styles. | closed | https://github.com/huggingface/datasets/pull/6334 | 2023-10-23T09:17:54 | 2024-02-07T12:41:15 | 2023-10-23T10:14:10 | {
"login": "ap--",
"id": 1463443,
"type": "User"
} | [] | true | [] |
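The compatibility fix in #6334 can be sketched as follows (the helper name below is hypothetical; the real patch lives in `datasets.filesystems`):

```python
# Sketch of the protocol check from #6334 (helper name hypothetical).
# fsspec < 2023.10.0 exposes LocalFileSystem.protocol as the str "file";
# fsspec >= 2023.10.0 uses the tuple ("file", "local"). Support both.
def is_local_protocol(protocol):
    if isinstance(protocol, str):
        return protocol == "file"
    return "file" in protocol

assert is_local_protocol("file")             # old style
assert is_local_protocol(("file", "local"))  # new style
assert not is_local_protocol("s3")
```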
1,956,714,423 | 6,333 | Support fsspec 2023.10.0 | Once the root issue is fixed, remove the temporary pin of fsspec < 2023.10.0 introduced by:
- #6331
Related to issue:
- #6330
As @ZachNagengast suggested, the issue might be related to:
- https://github.com/fsspec/filesystem_spec/pull/1381 | closed | https://github.com/huggingface/datasets/issues/6333 | 2023-10-23T09:14:53 | 2024-02-07T12:39:58 | 2024-02-07T12:39:58 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | false | [] |
1,956,697,328 | 6,332 | Replace deprecated license_file in setup.cfg | Replace deprecated license_file in `setup.cfg`.
See: https://github.com/huggingface/datasets/actions/runs/6610930650/job/17953825724?pr=6331
```
/tmp/pip-build-env-a51hls20/overlay/lib/python3.8/site-packages/setuptools/config/setupcfg.py:293: _DeprecatedConfig: Deprecated config in `setup.cfg`
!!
... | closed | https://github.com/huggingface/datasets/pull/6332 | 2023-10-23T09:05:26 | 2023-11-07T08:23:10 | 2023-11-07T08:09:06 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,956,671,256 | 6,331 | Temporarily pin fsspec < 2023.10.0 | Temporarily pin fsspec < 2023.10.0 until a permanent solution is found.
Hot fix #6330.
See: https://github.com/huggingface/datasets/actions/runs/6610904287/job/17953774987
```
...
ERROR tests/test_iterable_dataset.py::test_iterable_dataset_from_file - NotImplementedError: Loading a dataset cached in a LocalFileS... | closed | https://github.com/huggingface/datasets/pull/6331 | 2023-10-23T08:51:50 | 2023-10-23T09:26:42 | 2023-10-23T09:17:55 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,956,053,294 | 6,330 | Latest fsspec==2023.10.0 issue with streaming datasets | ### Describe the bug
Loading a streaming dataset with this version of fsspec fails with the following error:
`NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet.`
I suspect the issue is with this PR
https://github.com/fsspec/filesystem_spec/pull/1381
### Steps ... | closed | https://github.com/huggingface/datasets/issues/6330 | 2023-10-22T20:57:10 | 2025-06-09T22:00:16 | 2023-10-23T09:17:56 | {
"login": "ZachNagengast",
"id": 1981179,
"type": "User"
} | [] | false | [] |
1,955,858,020 | 6,329 | Text-to-speech networks first convert the given text into an intermediate representation | Text-to-speech networks first convert the given text into an intermediate representation
| closed | https://github.com/huggingface/datasets/issues/6329 | 2023-10-22T11:07:46 | 2023-10-23T09:22:58 | 2023-10-23T09:22:58 | {
"login": "shabnam706",
"id": 147399213,
"type": "User"
} | [] | false | [] |
1,955,857,904 | 6,328 | Text-to-speech networks first convert the given text into an intermediate representation | null | closed | https://github.com/huggingface/datasets/issues/6328 | 2023-10-22T11:07:21 | 2023-10-23T09:22:38 | 2023-10-23T09:22:38 | {
"login": "shabnam706",
"id": 147399213,
"type": "User"
} | [] | false | [] |
1,955,470,755 | 6,327 | FileNotFoundError when trying to load the downloaded dataset with `load_dataset(..., streaming=True)` | ### Describe the bug
Hi, I'm trying to load the dataset `togethercomputer/RedPajama-Data-1T-Sample` with `load_dataset` in streaming mode, i.e., `streaming=True`, but `FileNotFoundError` occurs.
### Steps to reproduce the bug
I've downloaded the dataset and save it to the cache dir in advance. My hope is loadi... | closed | https://github.com/huggingface/datasets/issues/6327 | 2023-10-21T12:27:03 | 2023-10-23T18:50:07 | 2023-10-23T18:50:07 | {
"login": "yzhangcs",
"id": 18402347,
"type": "User"
} | [] | false | [] |
1,955,420,536 | 6,326 | Create battery_analysis.py | null | closed | https://github.com/huggingface/datasets/pull/6326 | 2023-10-21T10:07:48 | 2023-10-23T14:56:20 | 2023-10-23T14:56:20 | {
"login": "vinitkm",
"id": 130216732,
"type": "User"
} | [] | true | [] |
1,955,420,178 | 6,325 | Create battery_analysis.py | null | closed | https://github.com/huggingface/datasets/pull/6325 | 2023-10-21T10:06:37 | 2023-10-23T14:55:58 | 2023-10-23T14:55:58 | {
"login": "vinitkm",
"id": 130216732,
"type": "User"
} | [] | true | [] |
1,955,126,687 | 6,324 | Conversion to Arrow fails due to wrong type heuristic | ### Describe the bug
I have a list of dictionaries with valid/JSON-serializable values.
One key is the denominator for a paragraph. In 99.9% of cases it's a number, but there are some occurrences of '1a', '2b' and so on.
If trying to convert this list to a dataset with `Dataset.from_list()`, I always get
`ArrowI... | closed | https://github.com/huggingface/datasets/issues/6324 | 2023-10-20T23:20:58 | 2023-10-23T20:52:57 | 2023-10-23T20:52:57 | {
"login": "jphme",
"id": 2862336,
"type": "User"
} | [] | false | [] |
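The type-heuristic problem in #6324 can be illustrated without pyarrow (hypothetical data; normalizing the mixed column to strings is one common workaround, not necessarily the library's fix):

```python
# Hypothetical rows mirroring #6324: a column that is numeric in almost
# every row, with rare string values like "1a" or "2b".
rows = [{"denominator": 1}, {"denominator": 2}, {"denominator": "2b"}]

# A type heuristic that samples only the leading rows infers int and then
# fails on "2b"; normalizing the column to str before conversion avoids
# the mismatch (a sketch of one common workaround).
normalized = [{**row, "denominator": str(row["denominator"])} for row in rows]
assert [row["denominator"] for row in normalized] == ["1", "2", "2b"]
```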
1,954,245,980 | 6,323 | Loading dataset from large GCS bucket very slow since 2.14 | ### Describe the bug
Since updating to >2.14 we have very slow access to our parquet files on GCS when loading a dataset (>30 min vs 3s). Our GCS bucket has many objects and resolving globs is very slow. I tracked the problem down to this change:
https://github.com/huggingface/datasets/blame/bade7af74437347a76083... | open | https://github.com/huggingface/datasets/issues/6323 | 2023-10-20T12:59:55 | 2024-09-03T18:42:33 | null | {
"login": "jbcdnr",
"id": 6209990,
"type": "User"
} | [] | false | [] |
1,952,947,461 | 6,322 | Fix regex `get_data_files` formatting for base paths | With this pr https://github.com/huggingface/datasets/pull/6309, it is formatting the entire base path into regex, which results in the undesired formatting error `doesn't match the pattern` because of the line in `glob_pattern_to_regex`: `.replace("//", "/")`:
- Input: `hf://datasets/...`
- Output: `hf:/datasets/...`... | closed | https://github.com/huggingface/datasets/pull/6322 | 2023-10-19T19:45:10 | 2023-10-23T14:40:45 | 2023-10-23T14:31:21 | {
"login": "ZachNagengast",
"id": 1981179,
"type": "User"
} | [] | true | [] |
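The `//` collapsing described in #6322 is easy to reproduce; one way to avoid clobbering the protocol is to split it off first (a sketch, not the PR's actual implementation):

```python
# Sketch of the bug in #6322: collapsing "//" anywhere in the path also
# collapses the "//" inside the protocol prefix.
pattern = "hf://datasets/user/repo/data//train.parquet"
assert pattern.replace("//", "/") == "hf:/datasets/user/repo/data/train.parquet"  # protocol mangled

# One way to avoid it (hypothetical): strip the protocol before normalizing.
protocol, sep, rest = pattern.partition("://")
fixed = protocol + sep + rest.replace("//", "/")
assert fixed == "hf://datasets/user/repo/data/train.parquet"
```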
1,952,643,483 | 6,321 | Fix typos | null | closed | https://github.com/huggingface/datasets/pull/6321 | 2023-10-19T16:24:35 | 2023-10-19T17:18:00 | 2023-10-19T17:07:35 | {
"login": "python273",
"id": 3097956,
"type": "User"
} | [] | true | [] |
1,952,618,316 | 6,320 | Dataset slice splits can't load training and validation at the same time | ### Describe the bug
According to the [documentation](https://huggingface.co/docs/datasets/v2.14.5/loading#slice-splits) is should be possible to run the following command:
`train_test_ds = datasets.load_dataset("bookcorpus", split="train+test")`
to load the train and test sets from the dataset.
However ex... | closed | https://github.com/huggingface/datasets/issues/6320 | 2023-10-19T16:09:22 | 2023-11-30T16:21:15 | 2023-11-30T16:21:15 | {
"login": "timlac",
"id": 32488097,
"type": "User"
} | [] | false | [] |
1,952,101,717 | 6,319 | Datasets.map is severely broken | ### Describe the bug
Regardless of how many cores I use (I have 16 or 32 threads), map slows down to a crawl at around 80% done, lingers extremely slowly until maybe 97%, and NEVER finishes the job. It just hangs.
After watching this for 27 hours I control-C out of it. Until the end one process appears to be doing s... | open | https://github.com/huggingface/datasets/issues/6319 | 2023-10-19T12:19:33 | 2024-08-08T17:05:08 | null | {
"login": "phalexo",
"id": 4603365,
"type": "User"
} | [] | false | [] |
1,952,100,706 | 6,318 | Deterministic set hash | Sort the items in a set according to their `datasets.fingerprint.Hasher.hash` hash to get a deterministic hash of sets.
This is useful to get deterministic hashes of tokenizers that use a trie based on python sets.
reported in https://github.com/huggingface/datasets/issues/3847 | closed | https://github.com/huggingface/datasets/pull/6318 | 2023-10-19T12:19:13 | 2023-10-19T16:27:20 | 2023-10-19T16:16:31 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
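The deterministic-set-hash idea can be sketched with a stable hash function (names below are hypothetical; the real implementation uses `datasets.fingerprint.Hasher.hash`):

```python
import hashlib

def stable_hash(value):
    # Deterministic across processes, unlike Python's builtin hash().
    return hashlib.sha256(repr(value).encode()).hexdigest()

def hash_set(s):
    # Set iteration order is not stable across processes, so sort the
    # element hashes before combining them into one digest.
    return stable_hash(sorted(stable_hash(item) for item in s))

assert hash_set({"a", "b", "c"}) == hash_set({"c", "a", "b"})
assert hash_set({"a"}) != hash_set({"b"})
```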
1,951,965,668 | 6,317 | sentiment140 dataset unavailable | ### Describe the bug
loading the dataset using load_dataset("sentiment140") returns the following error
ConnectionError: Couldn't reach http://cs.stanford.edu/people/alecmgo/trainingandtestdata.zip (error 403)
### Steps to reproduce the bug
Run the following code (version should not matter).
```
from data... | closed | https://github.com/huggingface/datasets/issues/6317 | 2023-10-19T11:25:21 | 2023-10-19T13:04:56 | 2023-10-19T13:04:56 | {
"login": "AndreasKarasenko",
"id": 52670382,
"type": "User"
} | [] | false | [] |
1,951,819,869 | 6,316 | Fix loading Hub datasets with CSV metadata file | Currently, the reading of the metadata file infers the file extension (.jsonl or .csv) from the passed filename. However, downloaded files from the Hub don't have a file extension. For example:
- the original file: `hf://datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5916a4-16977085077831/metadata.jsonl`
- correspon... | closed | https://github.com/huggingface/datasets/pull/6316 | 2023-10-19T10:21:34 | 2023-10-20T06:23:21 | 2023-10-20T06:14:09 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,951,800,819 | 6,315 | Hub datasets with CSV metadata raise ArrowInvalid: JSON parse error: Invalid value. in row 0 | When trying to load a Hub dataset that contains a CSV metadata file, it raises an `ArrowInvalid` error:
```
E pyarrow.lib.ArrowInvalid: JSON parse error: Invalid value. in row 0
pyarrow/error.pxi:100: ArrowInvalid
```
See: https://huggingface.co/datasets/lukarape/public_small_papers/discussions/1 | closed | https://github.com/huggingface/datasets/issues/6315 | 2023-10-19T10:11:29 | 2023-10-20T06:14:10 | 2023-10-20T06:14:10 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,951,684,763 | 6,314 | Support creating new branch in push_to_hub | This adds support for creating a new branch when pushing a dataset to the hub. Tested both methods locally and branches are created. | closed | https://github.com/huggingface/datasets/pull/6314 | 2023-10-19T09:12:39 | 2023-10-19T09:20:06 | 2023-10-19T09:19:48 | {
"login": "jmif",
"id": 1000442,
"type": "User"
} | [] | true | [] |
1,951,527,712 | 6,313 | Fix commit message formatting in multi-commit uploads | Currently, the commit message keeps on adding:
- `Upload dataset (part 00000-of-00002)`
- `Upload dataset (part 00000-of-00002) (part 00001-of-00002)`
Introduced in https://github.com/huggingface/datasets/pull/6269
This PR fixes this issue to have
- `Upload dataset (part 00000-of-00002)`
- `Upload dataset... | closed | https://github.com/huggingface/datasets/pull/6313 | 2023-10-19T07:53:56 | 2023-10-20T14:06:13 | 2023-10-20T13:57:39 | {
"login": "qgallouedec",
"id": 45557362,
"type": "User"
} | [] | true | [] |
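The intended behavior in #6313 can be sketched as building the message from scratch for each part instead of appending to the previous one (function name hypothetical):

```python
# Sketch of the fix in #6313: derive each commit message from the base
# string, rather than appending "(part ...)" to the previous message.
def commit_message(base, part, total):
    return f"{base} (part {part:05d}-of-{total:05d})"

assert commit_message("Upload dataset", 0, 2) == "Upload dataset (part 00000-of-00002)"
assert commit_message("Upload dataset", 1, 2) == "Upload dataset (part 00001-of-00002)"
```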
1,950,128,416 | 6,312 | docs: resolving namespace conflict, refactored variable | In docs of about_arrow.md, in the below example code

The variable name 'time' was being used in a way that could potentially lead to a namespace conflict with Python's built-in 'time' module. It is not a good conven... | closed | https://github.com/huggingface/datasets/pull/6312 | 2023-10-18T16:10:59 | 2023-10-19T16:31:59 | 2023-10-19T16:23:07 | {
"login": "smty2018",
"id": 74114936,
"type": "User"
} | [] | true | [] |
1,949,304,993 | 6,311 | cast_column to Sequence with length=4 occur exception raise in datasets/table.py:2146 | ### Describe the bug
I load a dataset from a local CSV file which has 187383612 examples, then use `map` to generate new columns for testing.
here is my code :
```
import os
from datasets import load_dataset
from datasets.features import Sequence, Value
def add_new_path(example):
example["ais_bbox"] =... | closed | https://github.com/huggingface/datasets/issues/6311 | 2023-10-18T09:38:05 | 2024-02-06T19:24:20 | 2024-02-06T19:24:20 | {
"login": "neiblegy",
"id": 16574677,
"type": "User"
} | [] | false | [] |
1,947,457,988 | 6,310 | Add return_file_name in load_dataset | Proposal to fix #5806.
Added an optional parameter `return_file_name` in the dataset builder config. When set to `True`, the function will include the file name corresponding to the sample in the returned output.
There is a difference between arrow-based and folder-based datasets to return the file name:
- fo... | closed | https://github.com/huggingface/datasets/pull/6310 | 2023-10-17T13:36:57 | 2024-08-09T11:51:55 | 2024-07-31T13:56:50 | {
"login": "juliendenize",
"id": 40604584,
"type": "User"
} | [] | true | [] |
1,946,916,969 | 6,309 | Fix get_data_patterns for directories with the word data twice | Before the fix, `get_data_patterns` inferred wrongly the split name for paths with the word "data" twice:
- For the URL path: `hf://datasets/piuba-bigdata/articles_and_comments@f328d536425ae8fcac5d098c8408f437bffdd357/data/train-00001-of-00009.parquet` (note the org name `piuba-bigdata/` ending with `data/`)
- The in... | closed | https://github.com/huggingface/datasets/pull/6309 | 2023-10-17T09:00:39 | 2023-10-18T14:01:52 | 2023-10-18T13:50:35 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,946,810,625 | 6,308 | module 'resource' has no attribute 'error' | ### Describe the bug
just run import:
`from datasets import load_dataset`
and then:
```
File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\__init__.py", line 22, in <module>
from .arrow_dataset import Dataset
File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\arrow... | closed | https://github.com/huggingface/datasets/issues/6308 | 2023-10-17T08:08:54 | 2023-10-25T17:09:22 | 2023-10-25T17:09:22 | {
"login": "NeoWang9999",
"id": 48009681,
"type": "User"
} | [] | false | [] |
1,946,414,808 | 6,307 | Fix typo in code example in docs | null | closed | https://github.com/huggingface/datasets/pull/6307 | 2023-10-17T02:28:50 | 2023-10-17T12:59:26 | 2023-10-17T06:36:19 | {
"login": "bryant1410",
"id": 3905501,
"type": "User"
} | [] | true | [] |
1,946,363,452 | 6,306 | pyinstaller : OSError: could not get source code | ### Describe the bug
I ran a package with pyinstaller and got the following error:
### Steps to reproduce the bug
```
...
File "datasets\__init__.py", line 52, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_an... | closed | https://github.com/huggingface/datasets/issues/6306 | 2023-10-17T01:41:51 | 2023-11-02T07:24:51 | 2023-10-18T14:03:42 | {
"login": "dusk877647949",
"id": 57702070,
"type": "User"
} | [] | false | [] |
1,946,010,912 | 6,305 | Cannot load dataset with `2.14.5`: `FileNotFound` error | ### Describe the bug
I'm trying to load [piuba-bigdata/articles_and_comments] and I'm stumbling with this error on `2.14.5`. However, this works on `2.10.0`.
### Steps to reproduce the bug
[Colab link](https://colab.research.google.com/drive/1SAftFMQnFE708ikRnJJHIXZV7R5IBOCE#scrollTo=r2R2ipCCDmsg)
```python
D... | closed | https://github.com/huggingface/datasets/issues/6305 | 2023-10-16T20:11:27 | 2023-10-18T13:50:36 | 2023-10-18T13:50:36 | {
"login": "finiteautomata",
"id": 167943,
"type": "User"
} | [] | false | [] |
1,945,913,521 | 6,304 | Update README.md | Fixed typos in README and added punctuation marks.
Tensorflow --> TensorFlow
| closed | https://github.com/huggingface/datasets/pull/6304 | 2023-10-16T19:10:39 | 2023-10-17T15:13:37 | 2023-10-17T15:04:52 | {
"login": "smty2018",
"id": 74114936,
"type": "User"
} | [] | true | [] |
1,943,466,532 | 6,303 | Parquet uploads off-by-one naming scheme | ### Describe the bug
I noticed this numbering scheme not matching up in a different project and wanted to raise it as an issue for discussion, what is the actual proper way to have these stored?
<img width="425" alt="image" src="https://github.com/huggingface/datasets/assets/1981179/3ffa2144-7c9a-446f-b521-a5e9db71... | open | https://github.com/huggingface/datasets/issues/6303 | 2023-10-14T18:31:03 | 2023-10-16T16:33:21 | null | {
"login": "ZachNagengast",
"id": 1981179,
"type": "User"
} | [] | false | [] |
1,942,096,078 | 6,302 | ArrowWriter/ParquetWriter `write` method does not increase `_num_bytes` and hence datasets not sharding at `max_shard_size` | ### Describe the bug
An example from [1], does not work when limiting shards with `max_shard_size`.
Try the following example with low `max_shard_size`, such as:
```python
builder.download_and_prepare(output_dir, storage_options=storage_options, file_format="parquet", max_shard_size="10MB")
```
The reason f... | closed | https://github.com/huggingface/datasets/issues/6302 | 2023-10-13T14:43:36 | 2023-10-17T06:52:12 | 2023-10-17T06:52:11 | {
"login": "Rassibassi",
"id": 2855550,
"type": "User"
} | [] | false | [] |
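The sharding contract described in #6302 can be sketched with a toy writer (an entirely hypothetical class; the real one is `ArrowWriter`/`ParquetWriter` in `datasets`):

```python
class TinyWriter:
    # Minimal sketch of the sharding contract from #6302: write() must grow
    # the byte counter, otherwise a max_shard_size check never triggers a
    # new shard, which is the reported bug.
    def __init__(self, max_shard_size):
        self.max_shard_size = max_shard_size
        self._num_bytes = 0
        self.shards = 1

    def write(self, payload: bytes):
        self._num_bytes += len(payload)  # the accounting step at issue
        if self._num_bytes >= self.max_shard_size:
            self.shards += 1
            self._num_bytes = 0

w = TinyWriter(max_shard_size=10)
for _ in range(3):
    w.write(b"xxxxxx")  # 6 bytes each; the second write crosses 10 bytes
assert w.shards == 2
```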
1,940,183,999 | 6,301 | Unpin `tensorflow` maximum version | Removes the temporary pin introduced in #6264 | closed | https://github.com/huggingface/datasets/pull/6301 | 2023-10-12T14:58:07 | 2023-10-12T15:58:20 | 2023-10-12T15:49:54 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,940,153,432 | 6,300 | Unpin `jax` maximum version | fix #6299
fix #6202 | closed | https://github.com/huggingface/datasets/pull/6300 | 2023-10-12T14:42:40 | 2023-10-12T16:37:55 | 2023-10-12T16:28:57 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,939,649,238 | 6,299 | Support for newer versions of JAX | ### Feature request
Hi,
I like your idea of adapting the datasets library to be usable with JAX. Thank you for that.
However, in your [setup.py](https://github.com/huggingface/datasets/blob/main/setup.py), you enforce old versions of JAX <= 0.3... It is very cumbersome!
What is the rationale for such a lim... | closed | https://github.com/huggingface/datasets/issues/6299 | 2023-10-12T10:03:46 | 2023-10-12T16:28:59 | 2023-10-12T16:28:59 | {
"login": "ddrous",
"id": 25456859,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,938,797,389 | 6,298 | Doc readme improvements | Changes in the doc READMe:
* adds two new sections (to be aligned with `transformers` and `hfh`): "Previewing the documentation" and "Writing documentation examples"
* replaces the mentions of `transformers` with `datasets`
* fixes some dead links | closed | https://github.com/huggingface/datasets/pull/6298 | 2023-10-11T21:51:12 | 2023-10-12T12:47:15 | 2023-10-12T12:38:19 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,938,752,707 | 6,297 | Fix ArrayXD cast | Fix #6291 | closed | https://github.com/huggingface/datasets/pull/6297 | 2023-10-11T21:14:59 | 2023-10-13T13:54:00 | 2023-10-13T13:45:30 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,938,453,845 | 6,296 | Move `exceptions.py` to `utils/exceptions.py` | I didn't notice the path while reviewing the PR yesterday :( | closed | https://github.com/huggingface/datasets/pull/6296 | 2023-10-11T18:28:00 | 2024-09-03T16:00:04 | 2024-09-03T16:00:03 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,937,362,102 | 6,295 | Fix parquet columns argument in streaming mode | It was failing when there's a DatasetInfo with non-None info.features from the YAML (therefore containing columns that should be ignored)
Fix https://github.com/huggingface/datasets/issues/6293 | closed | https://github.com/huggingface/datasets/pull/6295 | 2023-10-11T10:01:01 | 2023-10-11T16:30:24 | 2023-10-11T16:21:36 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,937,359,605 | 6,294 | IndexError: Invalid key is out of bounds for size 0 despite having a populated dataset | ### Describe the bug
I am encountering an `IndexError` when trying to access data from a DataLoader which wraps around a dataset I've loaded using the `datasets` library. The error suggests that the dataset size is `0`, but when I check the length and print the dataset, it's clear that it has `1166` entries.
### Step... | closed | https://github.com/huggingface/datasets/issues/6294 | 2023-10-11T09:59:38 | 2023-10-17T11:24:06 | 2023-10-17T11:24:06 | {
"login": "ZYM66",
"id": 61892155,
"type": "User"
} | [] | false | [] |
1,937,238,047 | 6,293 | Choose columns to stream parquet data in streaming mode | Currently passing columns= to load_dataset in streaming mode fails
```
Tried to load parquet data with columns '['link']' with mismatching features '{'caption': Value(dtype='string', id=None), 'image': {'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='null', id=None)}, 'link': Value(dtype='string', id=... | closed | https://github.com/huggingface/datasets/issues/6293 | 2023-10-11T08:59:36 | 2023-10-11T16:21:38 | 2023-10-11T16:21:38 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,937,050,470 | 6,292 | how to load the image of dtype float32 or float64 | _FEATURES = datasets.Features(
{
"image": datasets.Image(),
"text": datasets.Value("string"),
},
)
The datasets builder seems to only support uint8 data. How can I load float-dtype data? | open | https://github.com/huggingface/datasets/issues/6292 | 2023-10-11T07:27:16 | 2023-10-11T13:19:11 | null | {
"login": "wanglaofei",
"id": 26437644,
"type": "User"
} | [] | false | [] |
1,936,129,871 | 6,291 | Casting type from Array2D int to Array2D float crashes | ### Describe the bug
I am working on a school project, and the initial type for feature annotations is `Array2D(shape=(None, 4))`. I am trying to cast this type to `float64`, and pyarrow gives me this error:
```
Traceback (most recent call last):
File "/home/alan/dev/ClassezDesImagesAvecDesAlgorithmesDeDeeplearnin... | closed | https://github.com/huggingface/datasets/issues/6291 | 2023-10-10T20:10:10 | 2023-10-13T13:45:31 | 2023-10-13T13:45:31 | {
"login": "AlanBlanchet",
"id": 22567306,
"type": "User"
} | [] | false | [] |
1,935,629,679 | 6,290 | Incremental dataset (e.g. `.push_to_hub(..., append=True)`) | ### Feature request
Have the possibility to do `ds.push_to_hub(..., append=True)`.
### Motivation
Requested in this [comment](https://huggingface.co/datasets/laion/dalle-3-dataset/discussions/3#65252597c4edc168202a5eaa) and
this [comment](https://huggingface.co/datasets/laion/dalle-3-dataset/discussions/4#6524f675... | open | https://github.com/huggingface/datasets/issues/6290 | 2023-10-10T15:18:03 | 2025-03-12T13:41:26 | null | {
"login": "Wauplin",
"id": 11801849,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,935,628,506 | 6,289 | testing doc-builder | testing https://github.com/huggingface/doc-builder/pull/426 | closed | https://github.com/huggingface/datasets/pull/6289 | 2023-10-10T15:17:29 | 2023-10-13T08:57:14 | 2023-10-13T08:56:48 | {
"login": "mishig25",
"id": 11827707,
"type": "User"
} | [] | true | [] |
1,935,005,457 | 6,288 | Dataset.from_pandas with a DataFrame of PIL.Images | Currently type inference doesn't know what to do with a Pandas Series of PIL.Image objects, though it would be nice to get a Dataset with the Image type this way | open | https://github.com/huggingface/datasets/issues/6288 | 2023-10-10T10:29:16 | 2024-11-29T16:35:30 | null | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,932,758,192 | 6,287 | map() not recognizing "text" | ### Describe the bug
The [map() documentation](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) reads:
`ds = ds.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True)`
I have been trying to reproduce it in my code as:
`tokenizedData... | closed | https://github.com/huggingface/datasets/issues/6287 | 2023-10-09T10:27:30 | 2023-10-11T20:28:45 | 2023-10-11T20:28:45 | {
"login": "EngineerKhan",
"id": 5688359,
"type": "User"
} | [] | false | [] |
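One common cause of this symptom (an assumption on our part — the issue body is truncated) is that the dataset's column is not actually named "text"; a pure-Python sketch with hypothetical data:

```python
# Hypothetical sketch: Dataset.map passes each batch as a dict keyed by the
# dataset's real column names, so x["text"] raises KeyError when the column
# has a different name (e.g. "review").
batch = {"review": ["great movie", "terrible movie"]}

def preprocess(x):
    return {"tokens": [t.split() for t in x["text"]]}

try:
    preprocess(batch)
    raised = False
except KeyError:
    raised = True
assert raised

# Referring to the actual column name works:
tokens = [t.split() for t in batch["review"]]
assert tokens[0] == ["great", "movie"]
```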
1,932,640,128 | 6,286 | Create DefunctDatasetError | Create `DefunctDatasetError` as a specific error to be raised when a dataset is defunct and no longer accessible.
See Hub discussion: https://huggingface.co/datasets/the_pile_books3/discussions/7#6523c13a94f3a1a2092d251b | closed | https://github.com/huggingface/datasets/pull/6286 | 2023-10-09T09:23:23 | 2023-10-10T07:13:22 | 2023-10-10T07:03:04 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,932,306,325 | 6,285 | TypeError: expected str, bytes or os.PathLike object, not dict | ### Describe the bug
my dataset is in the form: train/ → images, labels
and tried the code:
```
from datasets import load_dataset
data_files = {
"train": "/content/datasets/PotholeDetectionYOLOv8-1/train/",
"validation": "/content/datasets/PotholeDetectionYOLOv8-1/valid/",
"test": "/content/dat... | open | https://github.com/huggingface/datasets/issues/6285 | 2023-10-09T04:56:26 | 2023-10-10T13:17:33 | null | {
"login": "andysingal",
"id": 20493493,
"type": "User"
} | [] | false | [] |
1,929,551,712 | 6,284 | Add Belebele multiple-choice machine reading comprehension (MRC) dataset | ### Feature request
Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. This dataset enables the evaluation of mono- and multi-lingual models in high-, medium-, and low-resource languages. Each question has four multiple-choice answers and is linked to a short pass... | closed | https://github.com/huggingface/datasets/issues/6284 | 2023-10-06T06:58:03 | 2023-10-06T13:26:51 | 2023-10-06T13:26:51 | {
"login": "rajveer43",
"id": 64583161,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,928,552,257 | 6,283 | Fix array cast/embed with null values | Fixes issues with casting/embedding PyArrow list arrays with null values. It also bumps the required PyArrow version to 12.0.0 (over 9 months old) to simplify the implementation.
Fix #6280, fix #6311, fix #6360
(Also fixes https://github.com/huggingface/datasets/issues/5430 to make Beam compatible with PyArrow>=... | closed | https://github.com/huggingface/datasets/pull/6283 | 2023-10-05T15:24:05 | 2024-07-04T07:24:20 | 2024-02-06T19:24:19 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,928,473,630 | 6,282 | Drop data_files duplicates | I just added drop_duplicates=True to `.from_patterns`. I used a dict to deduplicate and preserve the order
close https://github.com/huggingface/datasets/issues/6259
close https://github.com/huggingface/datasets/issues/6272
| closed | https://github.com/huggingface/datasets/pull/6282 | 2023-10-05T14:43:08 | 2024-09-02T14:08:35 | 2024-09-02T14:08:35 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
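The dict-based, order-preserving deduplication the PR description above mentions can be sketched in plain Python — a minimal illustration of the pattern, not the actual `datasets` implementation (the function name here is made up):

```python
def drop_duplicates(data_files):
    """Deduplicate a list of data file paths while preserving their order.

    dict keys are insertion-ordered in Python 3.7+, so building a dict
    from the list and taking its keys removes repeats without reordering.
    """
    return list(dict.fromkeys(data_files))


files = [
    "train/train.parquet",
    "train/train.parquet",  # duplicate picked up by two overlapping patterns
    "val/val.parquet",
]
print(drop_duplicates(files))  # ['train/train.parquet', 'val/val.parquet']
```

This is the same trick as `set(files)`, except that a `set` would not preserve the original order of the patterns.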
1,928,456,959 | 6,281 | Improve documentation of dataset.from_generator | Improve documentation to clarify sharding behavior (#6270) | closed | https://github.com/huggingface/datasets/pull/6281 | 2023-10-05T14:34:49 | 2023-10-05T19:09:07 | 2023-10-05T18:57:41 | {
"login": "hartmans",
"id": 53510,
"type": "User"
} | [] | true | [] |
1,928,215,278 | 6,280 | Couldn't cast array of type fixed_size_list to Sequence(Value(float64)) | ### Describe the bug
I have a dataset with an embedding column, when I try to map that dataset I get the following exception:
```
Traceback (most recent call last):
File "/Users/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3189, in map
for rank, done, content... | closed | https://github.com/huggingface/datasets/issues/6280 | 2023-10-05T12:48:31 | 2024-02-06T19:24:20 | 2024-02-06T19:24:20 | {
"login": "jmif",
"id": 1000442,
"type": "User"
} | [] | false | [] |
1,928,028,226 | 6,279 | Batched IterableDataset | ### Feature request
Hi,
could you add an implementation of a batched `IterableDataset`? It already supports an option to do batch iteration via `.iter(batch_size=...)`, but this cannot be used in combination with a torch `DataLoader` since it just returns an iterator.
### Motivation
The current implementation load... | open | https://github.com/huggingface/datasets/issues/6279 | 2023-10-05T11:12:49 | 2024-11-07T10:01:22 | null | {
"login": "lneukom",
"id": 7010688,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
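The batch iteration the request above describes can be illustrated with a free-standing plain-Python generator — a sketch of the pattern only, not the library's `.iter(batch_size=...)` implementation:

```python
from itertools import islice


def batched(iterable, batch_size):
    """Yield successive lists of up to `batch_size` items from any iterable.

    A generic stand-in for batch iteration over a streamed dataset: the
    final batch may be shorter when the stream length is not a multiple
    of `batch_size`.
    """
    it = iter(iterable)
    while batch := list(islice(it, batch_size)):
        yield batch


stream = range(7)  # stand-in for a streamed dataset of 7 examples
print(list(batched(stream, 3)))  # [[0, 1, 2], [3, 4, 5], [6]]
```

Wrapping such a generator lets any consumer (e.g. a custom collate step) pull fixed-size chunks without materializing the whole stream.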
1,927,957,877 | 6,278 | No data files duplicates | I added a new DataFilesSet class to disallow duplicate data files.
I also deprecated DataFilesList.
EDIT: actually I might just add drop_duplicates=True to `.from_patterns`
close https://github.com/huggingface/datasets/issues/6259
close https://github.com/huggingface/datasets/issues/6272
TODO:
- [ ] tests
... | closed | https://github.com/huggingface/datasets/pull/6278 | 2023-10-05T10:31:58 | 2024-01-11T06:32:49 | 2023-10-05T14:43:17 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,927,044,546 | 6,277 | FileNotFoundError: Couldn't find a module script at /content/paws-x/paws-x.py. Module 'paws-x' doesn't exist on the Hugging Face Hub either. | ### Describe the bug
I'm encountering a "FileNotFoundError" while attempting to use the "paws-x" dataset to retrain the DistilRoBERTa-base model. The error message is as follows:
FileNotFoundError: Couldn't find a module script at /content/paws-x/paws-x.py. Module 'paws-x' doesn't exist on the Hugging Face Hub eit... | closed | https://github.com/huggingface/datasets/issues/6277 | 2023-10-04T22:01:25 | 2023-10-08T17:05:46 | 2023-10-08T17:05:46 | {
"login": "diegogonzalezc",
"id": 66733346,
"type": "User"
} | [] | false | [] |
1,925,961,878 | 6,276 | I'm trying to fine tune the openai/whisper model from huggingface using jupyter notebook and i keep getting this error | ### Describe the bug
I'm trying to fine-tune the openai/whisper model from Hugging Face using a Jupyter notebook and I keep getting this error. I'm following the steps in this blog post:
https://huggingface.co/blog/fine-tune-whisper
I tried Google Colab and it works, but because I'm on the free version the training ... | open | https://github.com/huggingface/datasets/issues/6276 | 2023-10-04T11:03:41 | 2023-11-27T10:39:16 | null | {
"login": "valaofficial",
"id": 50768065,
"type": "User"
} | [] | false | [] |
1,921,354,680 | 6,275 | Would like to Contribute a dataset | I have a dataset of 2500 images that can be used for color-blind machine-learning algorithms. Since there was no dataset available online, I made this dataset myself and would now like to contribute it to the community | closed | https://github.com/huggingface/datasets/issues/6275 | 2023-10-02T07:00:21 | 2023-10-10T16:27:54 | 2023-10-10T16:27:54 | {
"login": "vikas70607",
"id": 97907750,
"type": "User"
} | [] | false | [] |
1,921,036,328 | 6,274 | FileNotFoundError for dataset with multiple builder config | ### Describe the bug
When there is only one config and only the dataset name is entered when using datasets.load_dataset(), it works fine. But if I create a second builder_config for my dataset and enter the config name when using datasets.load_dataset(), the following error will happen.
FileNotFoundError: [Errno 2... | closed | https://github.com/huggingface/datasets/issues/6274 | 2023-10-01T23:45:56 | 2024-08-14T04:42:02 | 2023-10-02T20:09:38 | {
"login": "LouisChen15",
"id": 97120485,
"type": "User"
} | [] | false | [] |
1,920,922,260 | 6,273 | Broken Link to PubMed Abstracts dataset . | ### Describe the bug
The link provided for the dataset is broken,
data_files =
[https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst](url)
The
### Steps to reproduce the bug
Steps to reproduce:
1) Head over to [https://huggingface.co/learn/nlp-course/chapt... | open | https://github.com/huggingface/datasets/issues/6273 | 2023-10-01T19:08:48 | 2024-04-28T02:30:42 | null | {
"login": "sameemqureshi",
"id": 100606327,
"type": "User"
} | [] | false | [] |
1,920,831,487 | 6,272 | Duplicate `data_files` when named `<split>/<split>.parquet` | e.g. with `u23429/stock_1_minute_ticker`
```ipython
In [1]: from datasets import *
In [2]: b = load_dataset_builder("u23429/stock_1_minute_ticker")
Downloading readme: 100%|██████████████████████████| 627/627 [00:00<00:00, 246kB/s]
In [3]: b.config.data_files
Out[3]:
{NamedSplit('train'): ['hf://datasets/... | closed | https://github.com/huggingface/datasets/issues/6272 | 2023-10-01T15:43:56 | 2024-03-15T15:22:05 | 2024-03-15T15:22:05 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,920,420,295 | 6,271 | Overwriting Split overwrites data but not metadata, corrupting dataset | ### Describe the bug
I want to be able to overwrite/update/delete splits in my dataset. Currently the only way to do so is to manually go into the dataset and delete the split. If I try to overwrite programmatically, I end up in an error state and (somewhat) corrupt the dataset. Read below.
**Current Behavior**
Whe... | closed | https://github.com/huggingface/datasets/issues/6271 | 2023-09-30T22:37:31 | 2023-10-16T13:30:50 | 2023-10-16T13:30:50 | {
"login": "govindrai",
"id": 13859249,
"type": "User"
} | [] | false | [] |
1,920,329,373 | 6,270 | Dataset.from_generator raises with sharded gen_args | ### Describe the bug
According to the docs of Datasets.from_generator:
```
gen_kwargs(`dict`, *optional*):
Keyword arguments to be passed to the `generator` callable.
You can define a sharded dataset by passing the list of shards in `gen_kwargs`.
```
So I'd expect that if gen_kwar... | closed | https://github.com/huggingface/datasets/issues/6270 | 2023-09-30T16:50:06 | 2023-10-11T20:29:12 | 2023-10-11T20:29:11 | {
"login": "hartmans",
"id": 53510,
"type": "User"
} | [] | false | [] |
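The `gen_kwargs` sharding pattern quoted from the docstring above can be illustrated with a plain generator; the shard contents and field names here are made up for the example, and this sketches only the shape of the callable, not how the library actually distributes shards across processes (which is what the issue disputes):

```python
def gen(shards):
    """Example generator of the shape `Dataset.from_generator` expects.

    When `gen_kwargs={"shards": [...]}` holds a list, that list can be
    split across workers, each of which then calls `gen` with its subset.
    Here each "shard" is just a (name, size) pair standing in for a file.
    """
    for name, size in shards:
        for i in range(size):
            yield {"shard": name, "idx": i}


shards = [("part-0", 2), ("part-1", 1)]
examples = list(gen(shards))
print(len(examples))  # 3
```

With `datasets` installed, this would be wired up as `Dataset.from_generator(gen, gen_kwargs={"shards": shards})`.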
1,919,572,790 | 6,269 | Reduce the number of commits in `push_to_hub` | Reduces the number of commits in `push_to_hub` by using the `preupload` API from https://github.com/huggingface/huggingface_hub/pull/1699. Each commit contains a maximum of 50 uploaded files.
A shard's fingerprint no longer needs to be added as a suffix to support resuming an upload, meaning the shards' naming schem... | closed | https://github.com/huggingface/datasets/pull/6269 | 2023-09-29T16:22:31 | 2023-10-16T16:03:18 | 2023-10-16T13:30:46 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,919,010,645 | 6,268 | Add repo_id to DatasetInfo | ```python
from datasets import load_dataset
ds = load_dataset("lhoestq/demo1", split="train")
ds = ds.map(lambda x: {}, num_proc=2).filter(lambda x: True).remove_columns(["id"])
print(ds.repo_id)
# lhoestq/demo1
```
- repo_id is None when the dataset doesn't come from the Hub, e.g. from Dataset.from_dict
- ... | open | https://github.com/huggingface/datasets/pull/6268 | 2023-09-29T10:24:55 | 2023-10-01T15:29:45 | null | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,916,443,262 | 6,267 | Multi label class encoding | ### Feature request
I have a multi-label dataset and I'd like to be able to class-encode the column and store the mapping directly in the features, just as I can with a single-label column. `class_encode_column` currently does not support multi-label columns.
Here's an example of what I'd like to encode:
```
data = {
... | open | https://github.com/huggingface/datasets/issues/6267 | 2023-09-27T22:48:08 | 2023-10-26T18:46:08 | null | {
"login": "jmif",
"id": 1000442,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
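In the meantime, the manual encoding such a feature would automate can be sketched in plain Python — the helper names here are illustrative, not part of the `datasets` API:

```python
def build_label_mapping(rows):
    """Collect the sorted set of labels appearing in a multi-label column
    and assign each one a stable integer id."""
    names = sorted({label for row in rows for label in row})
    return {name: idx for idx, name in enumerate(names)}


def encode_multi_label(rows, mapping):
    """Map each row's list of string labels to a list of integer ids."""
    return [[mapping[label] for label in row] for row in rows]


rows = [["news", "sports"], ["sports"], ["weather", "news"]]
mapping = build_label_mapping(rows)
print(mapping)                            # {'news': 0, 'sports': 1, 'weather': 2}
print(encode_multi_label(rows, mapping))  # [[0, 1], [1], [2, 0]]
```

Conceptually this is the mapping one would hope to store in the features, e.g. as something like a `Sequence` of `ClassLabel` values, though how the library would expose it is exactly what the request leaves open.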
1,916,334,394 | 6,266 | Use LibYAML with PyYAML if available | PyYAML, the YAML framework used in this library, allows the use of LibYAML to accelerate the methods `load` and `dump`. To use it, a user would need to first install a PyYAML version that uses LibYAML (not available in PyPI; needs to be manually installed). Then, to actually use them, PyYAML suggests importing the LibY... | open | https://github.com/huggingface/datasets/pull/6266 | 2023-09-27T21:13:36 | 2023-09-28T14:29:24 | null | {
"login": "bryant1410",
"id": 3905501,
"type": "User"
} | [] | true | [] |
1,915,651,566 | 6,265 | Remove `apache_beam` import in `BeamBasedBuilder._save_info` | ... to avoid an `ImportError` raised in `BeamBasedBuilder._save_info` when `apache_beam` is not installed (e.g., when downloading the processed version of a dataset from the HF GCS)
Fix https://github.com/huggingface/datasets/issues/6260 | closed | https://github.com/huggingface/datasets/pull/6265 | 2023-09-27T13:56:34 | 2023-09-28T18:34:02 | 2023-09-28T18:23:35 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,914,958,781 | 6,264 | Temporarily pin tensorflow < 2.14.0 | Temporarily pin tensorflow < 2.14.0 until permanent solution is found.
Hot fix #6263. | closed | https://github.com/huggingface/datasets/pull/6264 | 2023-09-27T08:16:06 | 2023-09-27T08:45:24 | 2023-09-27T08:36:39 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,914,951,043 | 6,263 | CI is broken: ImportError: cannot import name 'context' from 'tensorflow.python' | Python 3.10 CI is broken for `test_py310`.
See: https://github.com/huggingface/datasets/actions/runs/6322990957/job/17169678812?pr=6262
```
FAILED tests/test_py_utils.py::TempSeedTest::test_tensorflow - ImportError: cannot import name 'context' from 'tensorflow.python' (/opt/hostedtoolcache/Python/3.10.13/x64/li... | closed | https://github.com/huggingface/datasets/issues/6263 | 2023-09-27T08:12:05 | 2023-09-27T08:36:40 | 2023-09-27T08:36:40 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,914,895,459 | 6,262 | Fix CI 404 errors | Currently our CI usually raises 404 errors when trying to delete temporary repositories. See, e.g.: https://github.com/huggingface/datasets/actions/runs/6314980985/job/17146507884
```
FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files_with_max_shard_size - huggingface_hub.u... | closed | https://github.com/huggingface/datasets/pull/6262 | 2023-09-27T07:40:18 | 2023-09-28T15:39:16 | 2023-09-28T15:30:40 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,913,813,178 | 6,261 | Can't load a dataset | ### Describe the bug
Can't seem to load the JourneyDB dataset.
It throws the following error:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Cell In[15], line 2
1 # If the dataset is gated/priv... | closed | https://github.com/huggingface/datasets/issues/6261 | 2023-09-26T15:46:25 | 2023-10-05T10:23:23 | 2023-10-05T10:23:22 | {
"login": "joaopedrosdmm",
"id": 37955817,
"type": "User"
} | [] | false | [] |
1,912,593,466 | 6,260 | REUSE_DATASET_IF_EXISTS don't work | ### Describe the bug
I use the following code to download the natural_question dataset. Even though I have completely downloaded it, the next time I run this code a new download procedure will start and overwrite the original /data/lxy/NQ
config=datasets.DownloadConfig(resume_download=True,max_retries=100,cache_dir=r'/da... | closed | https://github.com/huggingface/datasets/issues/6260 | 2023-09-26T03:02:16 | 2023-09-28T18:23:36 | 2023-09-28T18:23:36 | {
"login": "rangehow",
"id": 88258534,
"type": "User"
} | [] | false | [] |
1,911,965,758 | 6,259 | Duplicated Rows When Loading Parquet Files from Root Directory with Subdirectories | ### Describe the bug
When parquet files are saved in "train" and "val" subdirectories under a root directory, and datasets are then loaded using `load_dataset("parquet", data_dir="root_directory")`, the resulting dataset has duplicated rows for both the training and validation sets.
### Steps to reproduce the bug... | closed | https://github.com/huggingface/datasets/issues/6259 | 2023-09-25T17:20:54 | 2024-03-15T15:22:04 | 2024-03-15T15:22:04 | {
"login": "MF-FOOM",
"id": 141304309,
"type": "User"
} | [] | false | [] |
1,911,445,373 | 6,258 | [DOCS] Fix typo: Elasticsearch | Not ElasticSearch :) | closed | https://github.com/huggingface/datasets/pull/6258 | 2023-09-25T12:50:59 | 2023-09-26T14:55:35 | 2023-09-26T13:36:40 | {
"login": "leemthompo",
"id": 32779855,
"type": "User"
} | [] | true | [] |
1,910,741,044 | 6,257 | HfHubHTTPError - exceeded our hourly quotas for action: commit | ### Describe the bug
I try to upload a very large dataset of images, and get the following error:
```
File /fsx-multigen/yuvalkirstain/miniconda/envs/pickapic/lib/python3.10/site-packages/huggingface_hub/hf_api.py:2712, in HfApi.create_commit(self, repo_id, operations, commit_message, commit_description, token, repo... | closed | https://github.com/huggingface/datasets/issues/6257 | 2023-09-25T06:11:43 | 2023-10-16T13:30:49 | 2023-10-16T13:30:48 | {
"login": "yuvalkirstain",
"id": 57996478,
"type": "User"
} | [] | false | [] |
1,910,275,199 | 6,256 | load_dataset() function's cache_dir does not seems to work | ### Describe the bug
datasets version: 2.14.5
when trying to run the following command
trec = load_dataset('trec', split='train[:1000]', cache_dir='/path/to/my/dir')
I keep getting an error saying the command does not have permission to the default cache directory on my MacBook Pro machine.
It seems the cache_... | closed | https://github.com/huggingface/datasets/issues/6256 | 2023-09-24T15:34:06 | 2025-05-14T10:08:53 | 2024-10-08T15:45:18 | {
"login": "andyzhu",
"id": 171831,
"type": "User"
} | [] | false | [] |
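For context, the precedence a `cache_dir` argument is usually expected to follow can be sketched as: explicit argument first, then the `HF_DATASETS_CACHE` environment variable, then a per-user default. This is an illustration of the lookup order only, not the library's exact resolution code:

```python
import os


def resolve_cache_dir(cache_dir=None):
    """Sketch of cache-directory resolution: an explicit `cache_dir`
    argument wins, then the HF_DATASETS_CACHE environment variable,
    then a default under the user's home directory."""
    if cache_dir is not None:
        return cache_dir
    env = os.environ.get("HF_DATASETS_CACHE")
    if env:
        return env
    return os.path.join(os.path.expanduser("~"), ".cache", "huggingface", "datasets")
```

If the explicit `cache_dir` seems to be ignored, checking whether `HF_DATASETS_CACHE` (or a stale default cache) is being picked up instead is a reasonable first debugging step.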
1,909,842,977 | 6,255 | Parallelize builder configs creation | For datasets with lots of configs defined in YAML
E.g. `load_dataset("uonlp/CulturaX", "fr", revision="refs/pr/6")` from >1min to 15sec | closed | https://github.com/huggingface/datasets/pull/6255 | 2023-09-23T11:56:20 | 2024-01-11T06:32:34 | 2023-09-26T15:44:19 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,909,672,104 | 6,254 | Dataset.from_generator() cost much more time in vscode debugging mode then running mode | ### Describe the bug
Hey there,
I’m using Dataset.from_generator() to convert a torch_dataset to the Huggingface Dataset.
However, when I debug my code in VS Code, I find that it runs really slowly on Dataset.from_generator(), which may take even 20 times longer than running the script in the terminal.
### Steps to reproduce the bu... | closed | https://github.com/huggingface/datasets/issues/6254 | 2023-09-23T02:07:26 | 2023-10-03T14:42:53 | 2023-10-03T14:42:53 | {
"login": "dontnet-wuenze",
"id": 56437469,
"type": "User"
} | [] | false | [] |
1,906,618,910 | 6,253 | Check builder cls default config name in inspect | Fix https://github.com/huggingface/datasets-server/issues/1812
this was causing this issue:
```ipython
In [1]: from datasets import *
In [2]: inspect.get_dataset_config_names("aakanksha/udpos")
Out[2]: ['default']
In [3]: load_dataset_builder("aakanksha/udpos").config.name
Out[3]: 'en'
``` | closed | https://github.com/huggingface/datasets/pull/6253 | 2023-09-21T10:15:32 | 2023-09-21T14:16:44 | 2023-09-21T14:08:00 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,906,375,378 | 6,252 | exif_transpose not done to Image (PIL problem) | ### Feature request
I noticed that some of my images loaded using PIL have some EXIF metadata that can rotate them when loading.
Since the dataset.features.Image uses PIL for loading, the loaded image may be rotated (width and height will be inverted); thus for tasks such as object detection and LayoutLM this ca... | closed | https://github.com/huggingface/datasets/issues/6252 | 2023-09-21T08:11:46 | 2024-03-19T15:29:43 | 2024-03-19T15:29:43 | {
"login": "rhajou",
"id": 108274349,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,904,418,426 | 6,251 | Support streaming datasets with pyarrow.parquet.read_table | Support streaming datasets with `pyarrow.parquet.read_table`.
See: https://huggingface.co/datasets/uonlp/CulturaX/discussions/2
CC: @AndreaFrancis | closed | https://github.com/huggingface/datasets/pull/6251 | 2023-09-20T08:07:02 | 2023-09-27T06:37:03 | 2023-09-27T06:26:24 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,901,390,945 | 6,247 | Update create_dataset.mdx | Modified, as AudioFolder and ImageFolder are not in the Datasets library.
Changed ``` from datasets import AudioFolder ``` and ```from datasets import ImageFolder``` to ```from datasets import load_dataset```
```
cannot import name 'AudioFolder' from 'datasets' (/home/eswardivi/miniconda3/envs/Hugformers/lib/python3.10/site... | closed | https://github.com/huggingface/datasets/pull/6247 | 2023-09-18T17:06:29 | 2023-09-19T18:51:49 | 2023-09-19T18:40:10 | {
"login": "EswarDivi",
"id": 76403422,
"type": "User"
} | [] | true | [] |