| id (int64) | number (int64) | title (string) | body (string, nullable) | state (string) | html_url (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s], nullable) | user (dict) | labels (list) | is_pull_request (bool) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,660,455,202 | 5,725 | How to limit the number of examples in dataset, for testing? | ### Describe the bug
I am using this command:
`data = load_dataset("json", data_files=data_path)`
However, I want to add a parameter to limit the number of loaded examples to 10 for development purposes, but I can't find such a parameter.
### Steps to reproduce the bug
In the description.
### Expected beh... | closed | https://github.com/huggingface/datasets/issues/5725 | 2023-04-10T08:41:43 | 2023-04-21T06:16:24 | 2023-04-21T06:16:24 | {
"login": "ndvbd",
"id": 845175,
"type": "User"
} | [] | false | [] |
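A minimal sketch of the workaround commonly suggested for this request: there is no dedicated parameter, but split slicing or `select` achieves the same effect. The file path below is a placeholder.

```python
from datasets import load_dataset

# Option 1: slice the split at load time
small = load_dataset("json", data_files="data.json", split="train[:10]")

# Option 2: load the full split, then keep the first 10 examples
data = load_dataset("json", data_files="data.json", split="train")
small = data.select(range(10))
```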
1,659,938,135 | 5,724 | Error after shuffling streaming IterableDatasets with downloaded dataset | ### Describe the bug
I downloaded the C4 dataset, and used streaming IterableDatasets to read it. Everything went normal until I used `dataset = dataset.shuffle(seed=42, buffer_size=10_000)` to shuffle the dataset. Shuffled dataset will throw the following error when it is used by `next(iter(dataset))`:
```
File "/d... | closed | https://github.com/huggingface/datasets/issues/5724 | 2023-04-09T16:58:44 | 2023-04-20T20:37:30 | 2023-04-20T20:37:30 | {
"login": "szxiangjn",
"id": 41177966,
"type": "User"
} | [] | false | [] |
1,659,837,510 | 5,722 | Distributed Training Error on Customized Dataset | Hi guys, recently I tried to use `datasets` to train a dual encoder.
I built my own dataset following the nice [tutorial](https://huggingface.co/docs/datasets/v2.11.0/en/dataset_script).
Here is my code:
```python
class RetrivalDataset(datasets.GeneratorBasedBuilder):
"""CrossEncoder dataset."""
B... | closed | https://github.com/huggingface/datasets/issues/5722 | 2023-04-09T11:04:59 | 2023-07-24T14:50:46 | 2023-07-24T14:50:46 | {
"login": "wlhgtc",
"id": 16603773,
"type": "User"
} | [] | false | [] |
1,659,680,682 | 5,721 | Calling datasets.load_dataset("text" ...) results in a wrong split. | ### Describe the bug
When creating a text dataset, the training split should have the bulk of the examples by default. Currently, testing does.
### Steps to reproduce the bug
I have a folder with 18K text files in it. Each text file essentially consists in a document or article scraped from online. Calling the follo... | open | https://github.com/huggingface/datasets/issues/5721 | 2023-04-08T23:55:12 | 2023-04-08T23:55:12 | null | {
"login": "cyrilzakka",
"id": 1841186,
"type": "User"
} | [] | false | [] |
1,659,610,705 | 5,720 | Streaming IterableDatasets do not work with torch DataLoaders | ### Describe the bug
When using streaming datasets set up with train/val split using `.skip()` and `.take()`, the following error occurs when iterating over a torch dataloader:
```
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 363, in __iter__
self.... | open | https://github.com/huggingface/datasets/issues/5720 | 2023-04-08T18:45:48 | 2025-03-19T14:06:47 | null | {
"login": "jlehrer1",
"id": 29244648,
"type": "User"
} | [] | false | [] |
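For reference, a sketch of the streaming train/val setup this report describes; the dataset name and split sizes are placeholders.

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

stream = load_dataset("c4", "en", split="train", streaming=True)
val_ds = stream.take(1_000)    # first 1,000 examples for validation
train_ds = stream.skip(1_000)  # everything after them for training

# Iterating a torch DataLoader over the split streams is what triggers the error
train_loader = DataLoader(train_ds.with_format("torch"), batch_size=8)
```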
1,659,203,222 | 5,719 | Array2D feature creates a list of list instead of a numpy array | ### Describe the bug
I'm not sure if this is expected behavior or not. When I create a 2D array using `Array2D`, the data has list type instead of numpy array. I think it should not be the expected behavior especially when I feed a numpy array as input to the data creation function. Why is it converting my array int... | closed | https://github.com/huggingface/datasets/issues/5719 | 2023-04-07T21:04:08 | 2023-04-20T15:34:41 | 2023-04-20T15:34:41 | {
"login": "offchan42",
"id": 15215732,
"type": "User"
} | [] | false | [] |
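A small sketch reproducing the observation and the usual remedy: the default Python formatter returns nested lists, while setting the format to `numpy` returns arrays.

```python
import numpy as np
from datasets import Dataset, Features, Array2D

features = Features({"mat": Array2D(shape=(2, 2), dtype="float32")})
ds = Dataset.from_dict({"mat": [np.eye(2, dtype=np.float32)]}, features=features)

print(type(ds[0]["mat"]))                       # <class 'list'> by default
print(type(ds.with_format("numpy")[0]["mat"]))  # numpy.ndarray
```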
1,658,958,406 | 5,718 | Reorder default data splits to have validation before test | This PR reorders data splits, so that by default validation appears before test.
The default order becomes: [train, validation, test] instead of [train, test, validation]. | closed | https://github.com/huggingface/datasets/pull/5718 | 2023-04-07T16:01:26 | 2023-04-27T14:43:13 | 2023-04-27T14:35:52 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,658,729,866 | 5,717 | Error when saving to disk a dataset of images | ### Describe the bug
Hello!
I have an issue when I try to save on disk my dataset of images. The error I get is:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1442, in save_... | open | https://github.com/huggingface/datasets/issues/5717 | 2023-04-07T11:59:17 | 2025-07-13T08:27:47 | null | {
"login": "jplu",
"id": 959590,
"type": "User"
} | [] | false | [] |
1,658,613,092 | 5,716 | Handle empty audio | Some audio paths exist but are empty, and an error is reported when reading them. How can I use the filter function to skip empty audio paths?
When an audio file is empty, resampling breaks:
`array, sampling_rate = sf.read(f) array = librosa.resample(array, orig_sr=sampling_rate, target_... | closed | https://github.com/huggingface/datasets/issues/5716 | 2023-04-07T09:51:40 | 2023-09-27T17:47:08 | 2023-09-27T17:47:08 | {
"login": "ben-8878",
"id": 38179632,
"type": "User"
} | [] | false | [] |
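One way to answer the question in this issue, sketched under the assumption that the dataset stores plain file paths in a hypothetical `audio_path` column: filter out missing or zero-byte files before decoding.

```python
import os

def has_audio(example):
    path = example["audio_path"]  # hypothetical column holding the file path
    return os.path.isfile(path) and os.path.getsize(path) > 0

dataset = dataset.filter(has_audio)  # assumes `dataset` was already loaded
```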
1,657,479,788 | 5,715 | Return Numpy Array (fixed length) Mode, in __get_item__, Instead of List | ### Feature request
There are old, well-known issues that are easy to overlook in multiprocessing with the PyTorch DataLoader:
RAM or shared-memory usage in PyTorch grows too high when we set num_workers > 1 and the return type of the dataset or dataloader is "List" or "Dict".
https://github.com/pytorch/pytorch... | closed | https://github.com/huggingface/datasets/issues/5715 | 2023-04-06T13:57:48 | 2023-04-20T17:16:26 | 2023-04-20T17:16:26 | {
"login": "jungbaepark",
"id": 34066771,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,657,388,033 | 5,714 | Fix xnumpy_load for .npz files | PR:
- #5626
implemented support for streaming `.npy` files by using `numpy.load`.
However, it introduced a bug when used with `.npz` files, within a context manager:
```
ValueError: seek of closed file
```
or in streaming mode:
```
ValueError: I/O operation on closed file.
```
This PR fixes the bug an... | closed | https://github.com/huggingface/datasets/pull/5714 | 2023-04-06T13:01:45 | 2023-04-07T09:23:54 | 2023-04-07T09:16:57 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,657,141,251 | 5,713 | ArrowNotImplementedError when loading dataset from the hub | ### Describe the bug
Hello,
I have created a dataset by using the image loader. Once the dataset is created I try to download it and I get the error:
```
Traceback (most recent call last):
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1860, in _prepare_split_... | closed | https://github.com/huggingface/datasets/issues/5713 | 2023-04-06T10:27:22 | 2023-04-06T13:06:22 | 2023-04-06T13:06:21 | {
"login": "jplu",
"id": 959590,
"type": "User"
} | [] | false | [] |
1,655,972,106 | 5,712 | load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load() | ### Describe the bug
Hi,
I have some `dataset_load()` code of a custom offline dataset that works with datasets v2.10.1.
```python
ds = datasets.load_dataset(path=dataset_dir,
name=configuration,
data_dir=dataset_dir,
... | closed | https://github.com/huggingface/datasets/issues/5712 | 2023-04-05T16:47:10 | 2023-04-06T08:32:37 | 2023-04-05T17:17:44 | {
"login": "rcasero",
"id": 1219084,
"type": "User"
} | [] | false | [] |
1,655,971,647 | 5,711 | load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load() | ### Describe the bug
Hi,
I have some `dataset_load()` code of a custom offline dataset that works with datasets v2.10.1.
```python
ds = datasets.load_dataset(path=dataset_dir,
name=configuration,
data_dir=dataset_dir,
... | closed | https://github.com/huggingface/datasets/issues/5711 | 2023-04-05T16:46:49 | 2023-04-07T09:16:59 | 2023-04-07T09:16:59 | {
"login": "rcasero",
"id": 1219084,
"type": "User"
} | [] | false | [] |
1,655,703,534 | 5,710 | OSError: Memory mapping file failed: Cannot allocate memory | ### Describe the bug
Hello, I have a series of datasets each of 5 GB, 600 datasets in total. So together this makes 3TB.
When I try to load all 600 datasets into memory, I get the above error message.
Is this normal because I'm hitting the max size of memory mapping of the OS?
Thank you
```te... | closed | https://github.com/huggingface/datasets/issues/5710 | 2023-04-05T14:11:26 | 2023-04-20T17:16:40 | 2023-04-20T17:16:40 | {
"login": "Saibo-creator",
"id": 53392976,
"type": "User"
} | [] | false | [] |
1,655,423,503 | 5,709 | Manually made dataset info not taken into account | ### Describe the bug
Hello,
I'm manually building an image dataset with the `from_dict` approach. I also build the features with the `cast_features` method. Once the dataset is created I push it to the hub, and a default `dataset_infos.json` file seems to have been automatically added to the repo at the same time. Hen...
"login": "jplu",
"id": 959590,
"type": "User"
} | [] | false | [] |
1,655,023,642 | 5,708 | Dataset sizes are in MiB instead of MB in dataset cards | As @severo reported in an internal discussion (https://github.com/huggingface/moon-landing/issues/5929):
Now we show the dataset size:
- from the dataset card (in the side column)
- from the datasets-server (in the viewer)
But, even if the size is the same, we see a mismatch because the viewer shows MB, while t... | closed | https://github.com/huggingface/datasets/issues/5708 | 2023-04-05T06:36:03 | 2023-12-21T10:20:28 | 2023-12-21T10:20:27 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
},
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,653,545,835 | 5,706 | Support categorical data types for Parquet | ### Feature request
Huggingface datasets does not seem to support categorical / dictionary data types for Parquet as of now. There seems to be a `TODO` in the code for this feature but no implementation yet. Below you can find sample code to reproduce the error that is currently thrown when attempting to read a Parq... | closed | https://github.com/huggingface/datasets/issues/5706 | 2023-04-04T09:45:35 | 2024-06-07T12:20:43 | 2024-06-07T12:20:43 | {
"login": "kklemon",
"id": 1430243,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,653,500,383 | 5,705 | Getting next item from IterableDataset took forever. | ### Describe the bug
I have a large dataset, about 500GB. The format of the dataset is parquet.
I then load the dataset and try to get the first item
```python
def get_one_item():
dataset = load_dataset("path/to/datafiles", split="train", cache_dir=".", streaming=True)
dataset = dataset.filter(lambda... | closed | https://github.com/huggingface/datasets/issues/5705 | 2023-04-04T09:16:17 | 2023-04-05T23:35:41 | 2023-04-05T23:35:41 | {
"login": "HongtaoYang",
"id": 16588434,
"type": "User"
} | [] | false | [] |
1,653,471,356 | 5,704 | 5537 speedup load | I reimplemented fsspec.spec.glob() in `hffilesystem.py` as `_glob`, used it in `_resolve_single_pattern_in_dataset_repository` only, and saw a 20% speedup in times to load the config, on average.
That's not much when usually this step takes only 2-3 seconds for most datasets, but in this particular case, `bigcode... | open | https://github.com/huggingface/datasets/pull/5704 | 2023-04-04T08:58:14 | 2023-04-07T16:10:55 | null | {
"login": "semajyllek",
"id": 35013374,
"type": "User"
} | [] | true | [] |
1,653,158,955 | 5,703 | [WIP][Test, Please ignore] Investigate performance impact of using multiprocessing only | null | closed | https://github.com/huggingface/datasets/pull/5703 | 2023-04-04T04:37:49 | 2023-04-20T03:17:37 | 2023-04-20T03:17:32 | {
"login": "hvaara",
"id": 1535968,
"type": "User"
} | [] | true | [] |
1,653,104,720 | 5,702 | Is it possible or how to define a `datasets.Sequence` that could potentially be either a dict, a str, or None? | ### Feature request
Hello! Apologies if my question sounds naive:
I was wondering if it’s possible, or how one would go about defining a 'datasets.Sequence' element in datasets.Features that could potentially be either a dict, a str, or None?
Specifically, I’d like to define a feature for a list that contains 18... | closed | https://github.com/huggingface/datasets/issues/5702 | 2023-04-04T03:20:43 | 2023-04-05T14:15:18 | 2023-04-05T14:15:17 | {
"login": "gitforziio",
"id": 10508116,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,652,931,399 | 5,701 | Add Dataset.from_spark | Adds static method Dataset.from_spark to create datasets from Spark DataFrames.
This approach relieves users of the need to materialize their dataframe---a common use case is that the user loads their dataset into a dataframe, uses Spark to apply some transformation to some of the columns, and then wants to train ...
"login": "maddiedawson",
"id": 106995444,
"type": "User"
} | [] | true | [] |
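A sketch of the API this PR introduces (it shipped in a later release, so treat the exact call as illustrative):

```python
from pyspark.sql import SparkSession
from datasets import Dataset

spark = SparkSession.builder.master("local[*]").getOrCreate()
df = spark.range(100)        # a toy DataFrame with a single `id` column
ds = Dataset.from_spark(df)  # build a Dataset directly from the DataFrame
```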
1,652,527,530 | 5,700 | fix: fix wrong modification of the 'cache_file_name' -related paramet… | …ers values in 'train_test_split' + fix bad interaction between 'keep_in_memory' and 'cache_file_name' -related parameters (#5699) | open | https://github.com/huggingface/datasets/pull/5700 | 2023-04-03T18:05:26 | 2023-04-06T17:17:27 | null | {
"login": "FrancoisNoyez",
"id": 47528215,
"type": "User"
} | [] | true | [] |
1,652,437,419 | 5,699 | Issue when wanting to split in memory a cached dataset | ### Describe the bug
**In the 'train_test_split' method of the Dataset class** (defined in datasets/arrow_dataset.py), **if 'self.cache_files' is not empty**, then, **regarding the input parameters 'train_indices_cache_file_name' and 'test_indices_cache_file_name', if they are None**, we modify them to make them not No...
"login": "FrancoisNoyez",
"id": 47528215,
"type": "User"
} | [] | false | [] |
1,652,183,611 | 5,698 | Add Qdrant as another search index | ### Feature request
I'd suggest adding Qdrant (https://qdrant.tech) as another search index available, so users can directly build an index from a dataset. Currently, FAISS and ElasticSearch are only supported: https://huggingface.co/docs/datasets/faiss_es
### Motivation
ElasticSearch is a keyword-based search syst... | open | https://github.com/huggingface/datasets/issues/5698 | 2023-04-03T14:25:19 | 2023-04-11T10:28:40 | null | {
"login": "kacperlukawski",
"id": 2649301,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,651,812,614 | 5,697 | Raise an error on missing distributed seed | close https://github.com/huggingface/datasets/issues/5696 | closed | https://github.com/huggingface/datasets/pull/5697 | 2023-04-03T10:44:58 | 2023-04-04T15:05:24 | 2023-04-04T14:58:16 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,651,707,008 | 5,696 | Shuffle a sharded iterable dataset without seed can lead to duplicate data | As reported in https://github.com/huggingface/datasets/issues/5360
If `seed=None` in `.shuffle()`, shuffled datasets don't use the same shuffling seed across nodes.
Because of that, the lists of shards is not shuffled the same way across nodes, and therefore some shards may be assigned to multiple nodes instead o... | closed | https://github.com/huggingface/datasets/issues/5696 | 2023-04-03T09:40:03 | 2023-04-04T14:58:18 | 2023-04-04T14:58:18 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
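The fix makes a missing seed an error in distributed settings; a sketch of the safe pattern, with placeholder rank and world size:

```python
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

ds = load_dataset("c4", "en", split="train", streaming=True)
# explicit seed: every node shuffles the list of shards identically
ds = ds.shuffle(seed=42, buffer_size=10_000)
ds = split_dataset_by_node(ds, rank=0, world_size=8)
```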
1,650,974,156 | 5,695 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError | ### Describe the bug
Calling `datasets.load_dataset` to load the (publicly available) dataset `theodor1289/wit` fails with `pyarrow.lib.ArrowNotImplementedError`.
### Steps to reproduce the bug
Steps to reproduce this behavior:
1. `!pip install datasets`
2. `!huggingface-cli login`
3. This step will throw the e... | closed | https://github.com/huggingface/datasets/issues/5695 | 2023-04-02T14:42:44 | 2024-05-15T12:04:47 | 2023-04-10T08:04:04 | {
"login": "amariucaitheodor",
"id": 32778667,
"type": "User"
} | [] | false | [] |
1,650,467,793 | 5,694 | Dataset configuration | Following discussions from https://github.com/huggingface/datasets/pull/5331
We could have something like `config.json` to define the configuration of a dataset.
```json
{
"data_dir": "data"
"data_files": {
"train": "train-[0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*"
}
}
```
... | open | https://github.com/huggingface/datasets/issues/5694 | 2023-04-01T13:08:05 | 2023-04-04T14:54:37 | null | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "generic discussion",
"color": "c5def5"
}
] | false | [] |
1,649,934,749 | 5,693 | [docs] Split pattern search order | This PR addresses #5681 about the order of split patterns 🤗 Datasets searches for when generating dataset splits. | closed | https://github.com/huggingface/datasets/pull/5693 | 2023-03-31T19:51:38 | 2023-04-03T18:43:30 | 2023-04-03T18:29:58 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [] | true | [] |
1,649,818,644 | 5,692 | pyarrow.lib.ArrowInvalid: Unable to merge: Field <field> has incompatible types | ### Describe the bug
When loading the dataset [wikianc-en](https://huggingface.co/datasets/cyanic-selkie/wikianc-en) which I created using [this](https://github.com/cyanic-selkie/wikianc) code, I get the following error:
```
Traceback (most recent call last):
File "/home/sven/code/rector/answer-detection/trai... | open | https://github.com/huggingface/datasets/issues/5692 | 2023-03-31T18:19:40 | 2024-01-14T07:24:21 | null | {
"login": "cyanic-selkie",
"id": 32219669,
"type": "User"
} | [] | false | [] |
1,649,737,526 | 5,691 | [docs] Compress data files | This PR addresses the comments in #5687 about compressing text file extensions before uploading to the Hub. Also clarified what "too large" means based on the GitLFS [docs](https://docs.github.com/en/repositories/working-with-files/managing-large-files/about-git-large-file-storage). | closed | https://github.com/huggingface/datasets/pull/5691 | 2023-03-31T17:17:26 | 2023-04-19T13:37:32 | 2023-04-19T07:25:58 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [] | true | [] |
1,648,956,349 | 5,689 | Support streaming Beam datasets from HF GCS preprocessed data | This PR implements streaming Apache Beam datasets that are already preprocessed by us and stored in the HF Google Cloud Storage:
- natural_questions
- wiki40b
- wikipedia
This is done by streaming from the prepared Arrow files in HF Google Cloud Storage.
This will fix their corresponding dataset viewers. Relat... | closed | https://github.com/huggingface/datasets/pull/5689 | 2023-03-31T08:44:24 | 2023-04-12T05:57:55 | 2023-04-12T05:50:31 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,649,289,883 | 5,690 | raise AttributeError(f"No {package_name} attribute {name}") AttributeError: No huggingface_hub attribute hf_api | ### Describe the bug
rta.sh
Traceback (most recent call last):
File "run.py", line 7, in <module>
import datasets
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/__init__.py", line 37, in <module>
from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, Dat... | closed | https://github.com/huggingface/datasets/issues/5690 | 2023-03-31T08:22:22 | 2023-07-21T14:21:57 | 2023-07-21T14:21:57 | {
"login": "wccccp",
"id": 55964850,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,648,463,504 | 5,688 | Wikipedia download_and_prepare for GCS | ### Describe the bug
I am unable to download the wikipedia dataset onto GCS.
When I run the provided script, the memory first gets eaten up, then it crashes.
I tried running this on a VM with 128GB RAM and all I got was two empty files: _data_builder.lock_, _data.incomplete/beam-temp-wikipedia-train-1ab2039a...
"login": "adrianfagerland",
"id": 25522531,
"type": "User"
} | [] | false | [] |
1,647,009,018 | 5,687 | Document to compress data files before uploading | In our docs to [Share a dataset to the Hub](https://huggingface.co/docs/datasets/upload_dataset), we tell users to upload directly their data files, like CSV, JSON, JSON-Lines, text,... However, these extensions are not tracked by Git LFS by default, as they are not in the `.giattributes` file. Therefore, if they are t... | closed | https://github.com/huggingface/datasets/issues/5687 | 2023-03-30T06:41:07 | 2023-04-19T07:25:59 | 2023-04-19T07:25:59 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | false | [] |
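A sketch of the recommended compression step; `datasets` decompresses gzip on the fly, so the compressed file loads directly:

```python
import gzip
import shutil

from datasets import load_dataset

# compress the raw JSON-Lines file before uploading it to the Hub
with open("train.jsonl", "rb") as src, gzip.open("train.jsonl.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

ds = load_dataset("json", data_files="train.jsonl.gz")  # decompressed transparently
```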
1,646,308,228 | 5,686 | set dev version | null | closed | https://github.com/huggingface/datasets/pull/5686 | 2023-03-29T18:24:13 | 2023-03-29T18:33:49 | 2023-03-29T18:24:22 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,646,048,667 | 5,685 | Broken Image render on the hub website | ### Describe the bug
Hi :wave:
Not sure if this is the right place to ask, but I am trying to load a huge amount of datasets on the hub (:partying_face: ) but I am facing a little issue with the `image` type
 issue I think we should add a note about the order of patterns that is used to find splits, see [my comment](https://github.com/huggingface/datasets/issues/5650#issuecomment-1488412527). Also we should reference this page in pages about packaged load... | closed | https://github.com/huggingface/datasets/issues/5681 | 2023-03-29T11:44:49 | 2023-04-03T18:31:11 | 2023-04-03T18:31:11 | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | false | [] |
1,645,430,103 | 5,680 | Fix a description error for interleave_datasets. | There is a description mistake in the docstring of `interleave_datasets` with the "all_exhausted" stopping_strategy.
``` python
d1 = Dataset.from_dict({"a": [0, 1, 2]})
d2 = Dataset.from_dict({"a": [10, 11, 12, 13]})
d3 = Dataset.from_dict({"a": [20, 21, 22, 23, 24]})
dataset = interleave_datasets([d1, d2, d3], stopping... | closed | https://github.com/huggingface/datasets/pull/5680 | 2023-03-29T09:50:23 | 2023-03-30T13:14:19 | 2023-03-30T13:07:18 | {
"login": "QizhiPei",
"id": 55624066,
"type": "User"
} | [] | true | [] |
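The truncated snippet above, completed for reference; the comment reflects the behavior the corrected docstring describes.

```python
from datasets import Dataset, interleave_datasets

d1 = Dataset.from_dict({"a": [0, 1, 2]})
d2 = Dataset.from_dict({"a": [10, 11, 12, 13]})
d3 = Dataset.from_dict({"a": [20, 21, 22, 23, 24]})

ds = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted")
# With "all_exhausted", shorter datasets wrap around until the longest one (d3)
# is fully consumed, so every example of d3 appears exactly once.
```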
1,645,184,622 | 5,679 | Allow load_dataset to take a working dir for intermediate data | ### Feature request
As a user, I can set a working dir for intermediate data creation. The processed files will be moved to the cache dir, like
```
load_dataset(…, working_dir=”/temp/dir”, cache_dir=”/cloud_dir”).
```
### Motivation
This will help the use case for using datasets with cloud storage as cache. It wi... | open | https://github.com/huggingface/datasets/issues/5679 | 2023-03-29T07:21:09 | 2023-04-12T22:30:25 | null | {
"login": "lu-wang-dl",
"id": 38018689,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,645,018,359 | 5,678 | Add support to create a Dataset from spark dataframe | ### Feature request
Add a new API `Dataset.from_spark` to create a Dataset from Spark DataFrame.
### Motivation
Spark is a distributed computing framework that can handle large datasets. By supporting loading Spark DataFrames directly into Hugging Face Datasets, we make it possible to take advantage of Spark for processing t...
"login": "lu-wang-dl",
"id": 38018689,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,644,828,606 | 5,677 | Dataset.map() crashes when any column contains more than 1000 empty dictionaries | ### Describe the bug
`Dataset.map()` crashes any time any column contains more than `writer_batch_size` (default 1000) empty dictionaries, regardless of whether the column is being operated on. The error does not occur if the dictionaries are non-empty.
### Steps to reproduce the bug
Example:
```
import datasets... | closed | https://github.com/huggingface/datasets/issues/5677 | 2023-03-29T00:01:31 | 2023-07-07T14:01:14 | 2023-07-07T14:01:14 | {
"login": "mtoles",
"id": 7139344,
"type": "User"
} | [] | false | [] |
1,641,763,478 | 5,675 | Filter datasets by language code | Hi! I use the language search field on https://huggingface.co/datasets
However, some of the datasets tagged by ISO language code are not accessible by this search form.
For example, [myv_ru_2022](https://huggingface.co/datasets/slone/myv_ru_2022) has the `myv` language tag but it is not included in the Languages search fo...
"login": "named-entity",
"id": 5658496,
"type": "User"
} | [] | false | [] |
1,641,084,105 | 5,674 | Stored XSS | x | closed | https://github.com/huggingface/datasets/issues/5674 | 2023-03-26T20:55:58 | 2024-04-30T22:56:41 | 2023-03-27T21:01:55 | {
"login": "Fadavvi",
"id": 21213484,
"type": "User"
} | [] | false | [] |
1,641,066,352 | 5,673 | Pass down storage options | Remove implementation-specific kwargs from `file_utils.fsspec_get` and `file_utils.fsspec_head`, instead allowing them to be passed down via `storage_options`. This fixes an issue where s3fs did not recognize a timeout arg as well as fixes an issue mentioned in https://github.com/huggingface/datasets/issues/5281 by all... | closed | https://github.com/huggingface/datasets/pull/5673 | 2023-03-26T20:09:37 | 2023-03-28T15:03:38 | 2023-03-28T14:54:17 | {
"login": "dwyatte",
"id": 2512762,
"type": "User"
} | [] | true | [] |
1,641,005,322 | 5,672 | Pushing dataset to hub crash | ### Describe the bug
Uploading a dataset with `push_to_hub()` fails without error description.
### Steps to reproduce the bug
Hey there,
I've built an image dataset of 100k image + text pairs as described here https://huggingface.co/docs/datasets/image_dataset#imagefolder
Now I'm trying to push it to the hub b... | closed | https://github.com/huggingface/datasets/issues/5672 | 2023-03-26T17:42:13 | 2023-03-30T08:11:05 | 2023-03-30T08:11:05 | {
"login": "tzvc",
"id": 14275989,
"type": "User"
} | [] | false | [] |
1,640,840,012 | 5,671 | How to use `load_dataset('glue', 'cola')` | ### Describe the bug
I'm new to the HuggingFace datasets library, but I cannot use `load_dataset('glue', 'cola')`.
- I was stuck on the following problem:
```python
from datasets import load_dataset
cola_dataset = load_dataset('glue', 'cola')
------------------------------------------------------------------------... | closed | https://github.com/huggingface/datasets/issues/5671 | 2023-03-26T09:40:34 | 2023-03-28T07:43:44 | 2023-03-28T07:43:43 | {
"login": "makinzm",
"id": 40193664,
"type": "User"
} | [] | false | [] |
1,640,607,045 | 5,670 | Unable to load multi class classification datasets | ### Describe the bug
I've been playing around with the huggingface library, mostly with `datasets`, and wanted to download the multi-class classification datasets to fine-tune BERT on this task. ([link](https://huggingface.co/docs/transformers/training#train-with-pytorch-trainer)).
While loading the dataset, I'm getting... | closed | https://github.com/huggingface/datasets/issues/5670 | 2023-03-25T18:06:15 | 2023-03-27T22:54:56 | 2023-03-27T22:54:56 | {
"login": "ysahil97",
"id": 19690506,
"type": "User"
} | [] | false | [] |
1,638,070,046 | 5,669 | Almost identical datasets, huge performance difference | ### Describe the bug
I am struggling to understand the (huge) performance difference between two datasets that are almost identical.
### Steps to reproduce the bug
# Fast (normal) dataset speed:
```python
import cv2
from datasets import load_dataset
from torch.utils.data import DataLoader
dataset = load_dataset(... | open | https://github.com/huggingface/datasets/issues/5669 | 2023-03-23T18:20:20 | 2023-04-09T18:56:23 | null | {
"login": "eli-osherovich",
"id": 2437102,
"type": "User"
} | [] | false | [] |
1,638,018,598 | 5,668 | Support for downloading only provided split | We can pass split to `_split_generators()`.
But I'm not sure if it's possible to solve cache issues, mostly with `dataset_info.json` | open | https://github.com/huggingface/datasets/pull/5668 | 2023-03-23T17:53:39 | 2023-03-24T06:43:14 | null | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [] | true | [] |
1,637,789,361 | 5,667 | Jax requires jaxlib | close https://github.com/huggingface/datasets/issues/5666 | closed | https://github.com/huggingface/datasets/pull/5667 | 2023-03-23T15:41:09 | 2023-03-23T16:23:11 | 2023-03-23T16:14:52 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,637,675,062 | 5,666 | Support tensorflow 2.12.0 in CI | Once we find out the root cause of:
- #5663
we should revert the temporary pin on tensorflow introduced by:
- #5664 | closed | https://github.com/huggingface/datasets/issues/5666 | 2023-03-23T14:37:51 | 2023-03-23T16:14:54 | 2023-03-23T16:14:54 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,637,193,648 | 5,665 | Feature request: IterableDataset.push_to_hub | ### Feature request
It'd be great to have a lazy push to hub, similar to the lazy loading we have with `IterableDataset`.
Suppose you'd like to filter [LAION](https://huggingface.co/datasets/laion/laion400m) based on certain conditions, but as LAION doesn't fit into your disk, you'd like to leverage streaming:
`... | closed | https://github.com/huggingface/datasets/issues/5665 | 2023-03-23T09:53:04 | 2025-06-06T16:13:22 | 2025-06-06T16:12:36 | {
"login": "NielsRogge",
"id": 48327001,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,637,192,684 | 5,664 | Fix CI by temporarily pinning tensorflow < 2.12.0 | As a hotfix for our CI, temporarily pin `tensorflow` upper version:
- In Python 3.10, tensorflow-2.12.0 also installs `jax`
Fix #5663
Until root cause is fixed. | closed | https://github.com/huggingface/datasets/pull/5664 | 2023-03-23T09:52:26 | 2023-03-23T10:17:11 | 2023-03-23T10:09:54 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,637,173,248 | 5,663 | CI is broken: ModuleNotFoundError: jax requires jaxlib to be installed | CI test_py310 is broken: see https://github.com/huggingface/datasets/actions/runs/4498945505/jobs/7916194236?pr=5662
```
FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_jax_in_memory - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installati... | closed | https://github.com/huggingface/datasets/issues/5663 | 2023-03-23T09:39:43 | 2023-03-23T10:09:55 | 2023-03-23T10:09:55 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,637,140,813 | 5,662 | Fix unnecessary dict comprehension | After ruff-0.0.258 release, the C416 rule was updated with unnecessary dict comprehensions. See:
- https://github.com/charliermarsh/ruff/releases/tag/v0.0.258
- https://github.com/charliermarsh/ruff/pull/3605
This PR fixes one unnecessary dict comprehension in our code: no need to unpack and re-pack the tuple valu... | closed | https://github.com/huggingface/datasets/pull/5662 | 2023-03-23T09:18:58 | 2023-03-23T09:46:59 | 2023-03-23T09:37:49 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,637,129,445 | 5,661 | CI is broken: Unnecessary `dict` comprehension | CI check_code_quality is broken:
```
src/datasets/arrow_dataset.py:3267:35: C416 [*] Unnecessary `dict` comprehension (rewrite using `dict()`)
Found 1 error.
``` | closed | https://github.com/huggingface/datasets/issues/5661 | 2023-03-23T09:13:01 | 2023-03-23T09:37:51 | 2023-03-23T09:37:51 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,635,543,646 | 5,660 | integration with imbalanced-learn | ### Feature request
Wouldn't it be great if the various class balancing operations from imbalanced-learn were available as part of datasets?
### Motivation
I'm trying to use imbalanced-learn to balance a dataset, but it's not clear how to get the two to interoperate - what would be great would be some examples. I'v... | closed | https://github.com/huggingface/datasets/issues/5660 | 2023-03-22T11:05:17 | 2023-07-06T18:10:15 | 2023-07-06T18:10:15 | {
"login": "tansaku",
"id": 30216,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "wontfix",
"color": "ffffff"
}
] | false | [] |
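There is no built-in integration, but a round-trip through pandas is one way to make the two libraries interoperate. A minimal sketch, assuming a binary `label` column; `RandomOverSampler` only duplicates rows, so it works on string features too.

```python
from datasets import Dataset
from imblearn.over_sampling import RandomOverSampler

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"], "label": [0, 0, 0, 1]})
df = ds.to_pandas()

X, y = RandomOverSampler(random_state=0).fit_resample(df[["text"]], df["label"])
X["label"] = y.to_numpy()
balanced = Dataset.from_pandas(X.reset_index(drop=True))
```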
1,635,447,540 | 5,659 | [Audio] Soundfile/libsndfile requirements too stringent for decoding mp3 files | ### Describe the bug
I'm encountering several issues trying to load mp3 audio files using `datasets` on a TPU v4.
The PR https://github.com/huggingface/datasets/pull/5573 updated the audio loading logic to rely solely on the `soundfile`/`libsndfile` libraries for loading audio samples, regardless of their file t... | closed | https://github.com/huggingface/datasets/issues/5659 | 2023-03-22T10:07:33 | 2024-07-12T01:35:01 | 2023-04-07T08:51:28 | {
"login": "sanchit-gandhi",
"id": 93869735,
"type": "User"
} | [] | false | [] |
1,634,867,204 | 5,658 | docs: Update num_shards docs to mention num_proc on Dataset and DatasetDict | Closes #5653
@mariosasko | closed | https://github.com/huggingface/datasets/pull/5658 | 2023-03-22T00:12:18 | 2023-03-24T16:43:34 | 2023-03-24T16:36:21 | {
"login": "connor-henderson",
"id": 78612354,
"type": "User"
} | [] | true | [] |
1,634,156,563 | 5,656 | Fix `fsspec.open` when using an HTTP proxy | Most HTTP(S) downloads from this library support proxy automatically by reading the `HTTP_PROXY` environment variable (et al.) because `requests` is widely used. However, in some parts of the code, `fsspec` is used, which in turn uses `aiohttp` for HTTP(S) requests (as opposed to `requests`), which in turn doesn't supp... | closed | https://github.com/huggingface/datasets/pull/5656 | 2023-03-21T15:23:29 | 2023-03-23T14:14:50 | 2023-03-23T13:15:46 | {
"login": "bryant1410",
"id": 3905501,
"type": "User"
} | [] | true | [] |
1,634,030,017 | 5,655 | Improve features decoding in to_iterable_dataset | Following discussion at https://github.com/huggingface/datasets/pull/5589
Right now `to_iterable_dataset` on images/audio hurts iterable dataset performance a lot (e.g. x4 slower because it encodes+decodes images/audios unnecessarily).
I fixed it by providing a generator that yields undecoded examples | closed | https://github.com/huggingface/datasets/pull/5655 | 2023-03-21T14:18:09 | 2023-03-23T13:19:27 | 2023-03-23T13:12:25 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,633,523,705 | 5,654 | Offset overflow when executing Dataset.map | ### Describe the bug
Hi, I'm trying to use the `.map` method to cache multiple random crops from each image to speed up data processing during training, as the image size is too big.
The map function executes all iterations, and then returns the following error:
```bash
Traceback (most recent call last): ... | open | https://github.com/huggingface/datasets/issues/5654 | 2023-03-21T09:33:27 | 2023-03-21T10:32:07 | null | {
"login": "jan-pair",
"id": 118280608,
"type": "User"
} | [] | false | [] |
1,633,254,159 | 5,653 | Doc: save_to_disk, `num_proc` will affect `num_shards`, but it's not documented | ### Describe the bug
[`num_proc`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.DatasetDict.save_to_disk.num_proc) will affect `num_shards`, but it's not documented
### Steps to reproduce the bug
Nothing to reproduce
### Expected behavior
[document of `num_shards`](https://... | closed | https://github.com/huggingface/datasets/issues/5653 | 2023-03-21T05:25:35 | 2023-03-24T16:36:23 | 2023-03-24T16:36:23 | {
"login": "RmZeta2718",
"id": 42400165,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
},
{
"name": "good first issue",
"color": "7057ff"
}
] | false | [] |
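For clarity, a sketch of the interaction being documented: `num_shards` defaults to `num_proc` when not set explicitly.

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(1_000))})
ds.save_to_disk("out_a", num_proc=4)    # 4 shards: num_shards defaults to num_proc
ds.save_to_disk("out_b", num_shards=2)  # explicit num_shards takes precedence
```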
1,632,546,073 | 5,652 | Copy features | Some users (even internally at HF) are doing
```python
dset_features = dset.features
dset_features.pop(col_to_remove)
dset = dset.map(..., features=dset_features)
```
Right now this causes issues because it modifies the features dict in place before the map.
In this PR I modified `dset.features` to return a ... | closed | https://github.com/huggingface/datasets/pull/5652 | 2023-03-20T17:17:23 | 2023-03-23T13:19:19 | 2023-03-23T13:12:08 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,631,967,509 | 5,651 | expanduser in save_to_disk | ### Describe the bug
save_to_disk() does not expand `~`
1. `dataset = load_datasets("any dataset")`
2. `dataset.save_to_disk("~/data")`
3. a folder named "~" is created in the current folder
4. FileNotFoundError is raised, because the expanded path does not exist (`/home/<user>/data`)
related issue https://github.... | closed | https://github.com/huggingface/datasets/issues/5651 | 2023-03-20T12:02:18 | 2023-10-27T14:04:37 | 2023-10-27T14:04:37 | {
"login": "RmZeta2718",
"id": 42400165,
"type": "User"
} | [
{
"name": "good first issue",
"color": "7057ff"
}
] | false | [] |
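Until the fix, the usual workaround is expanding the path manually; a minimal sketch:

```python
import os

from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})
ds.save_to_disk(os.path.expanduser("~/data"))  # resolves to /home/<user>/data
```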
1,630,336,919 | 5,650 | load_dataset can't work correct with my image data | I have about 20000 images in my folder which divided into 4 folders with class names.
When i use load_dataset("my_folder_name", split="train") this function create dataset in which there are only 4 images, the remaining 19000 images were not added there. What is the problem and did not understand. Tried converting imag... | closed | https://github.com/huggingface/datasets/issues/5650 | 2023-03-18T13:59:13 | 2023-07-24T14:13:02 | 2023-07-24T14:13:01 | {
"login": "WiNE-iNEFF",
"id": 41611046,
"type": "User"
} | [] | false | [] |
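For reference, the layout and call the `imagefolder` loader expects; the directory name is a placeholder.

```python
from datasets import load_dataset

# expected layout:
#   my_folder_name/
#       class_a/xxx.png
#       class_b/yyy.png
#       ...
ds = load_dataset("imagefolder", data_dir="my_folder_name", split="train")
```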
1,630,173,460 | 5,649 | The index column created with .to_sql() is dependent on the batch_size when writing | ### Describe the bug
It seems like the "index" column is designed to be unique? The values are only unique per batch. The SQL index is not a unique index.
This can be a problem, for instance, when building a faiss index on a dataset and then trying to match up ids with a sql export.
### Steps to reproduce the ... | closed | https://github.com/huggingface/datasets/issues/5649 | 2023-03-18T05:25:17 | 2023-06-17T07:01:57 | 2023-06-17T07:01:57 | {
"login": "lsb",
"id": 45281,
"type": "User"
} | [] | false | [] |
1,629,253,719 | 5,648 | flatten_indices doesn't work with pandas format | ### Describe the bug
Hi,
I noticed that `flatten_indices` throws an error when the batch format is `pandas`. This is probably due to the fact that flatten_indices uses map internally which doesn't accept dataframes as the transformation function output
### Steps to reproduce the bug
tabular_data = pd.DataFrame(np.r... | open | https://github.com/huggingface/datasets/issues/5648 | 2023-03-17T12:44:25 | 2023-03-21T13:12:03 | null | {
"login": "alialamiidrissi",
"id": 14365168,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,628,225,544 | 5,647 | Make all print statements optional | ### Feature request
Make all print statements optional to speed up development
### Motivation
I'm loading multiple tiny datasets, and all the print statements make loading slower
### Your contribution
I can help contribute | closed | https://github.com/huggingface/datasets/issues/5647 | 2023-03-16T20:30:07 | 2023-07-21T14:20:25 | 2023-07-21T14:20:24 | {
"login": "gagan3012",
"id": 49101362,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,627,838,762 | 5,646 | Allow self as key in `Features` | Fix #5641 | closed | https://github.com/huggingface/datasets/pull/5646 | 2023-03-16T16:17:03 | 2023-03-16T17:21:58 | 2023-03-16T17:14:50 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,627,108,278 | 5,645 | Datasets map and select(range()) is giving dill error | ### Describe the bug
I'm using the Hugging Face Datasets library to load the dataset in Google Colab
When I do,
> data = train_dataset.select(range(10))
or
> train_datasets = train_dataset.map(
> process_data_to_model_inputs,
> batched=True,
> batch_size=batch_size,
> remove_columns... | closed | https://github.com/huggingface/datasets/issues/5645 | 2023-03-16T10:01:28 | 2023-03-17T04:24:51 | 2023-03-17T04:24:51 | {
"login": "Tanya-11",
"id": 90728105,
"type": "User"
} | [] | false | [] |
1,626,204,046 | 5,644 | Allow direct cast from binary to Audio/Image | To address https://github.com/huggingface/datasets/discussions/5593.
| closed | https://github.com/huggingface/datasets/pull/5644 | 2023-03-15T20:02:54 | 2023-03-16T14:20:44 | 2023-03-16T14:12:55 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,626,160,220 | 5,643 | Support PyArrow arrays as column values in `from_dict` | For consistency with `pa.Table.from_pydict`, which supports both Python lists and PyArrow arrays as column values.
"Fixes" https://discuss.huggingface.co/t/pyarrow-lib-floatarray-did-not-recognize-python-value-type-when-inferring-an-arrow-data-type/33417 | closed | https://github.com/huggingface/datasets/pull/5643 | 2023-03-15T19:32:40 | 2023-03-16T17:23:06 | 2023-03-16T17:15:40 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,626,043,177 | 5,642 | Bump hfh to 0.11.0 | to fix errors like
```
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/...
```
(e.g. from this [failing CI](https://github.com/huggingface/datasets/actions/runs/4428956210/jobs/7769160997))
0.11.0 is the current mini... | closed | https://github.com/huggingface/datasets/pull/5642 | 2023-03-15T18:26:07 | 2023-03-20T12:34:09 | 2023-03-20T12:26:58 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,625,942,730 | 5,641 | Features cannot be named "self" | ### Describe the bug
Hi,
I noticed that we cannot create a Hugging Face dataset from a Pandas DataFrame with a column named `self`.
The error seems to be coming from arguments validation in the `Features.from_dict` function.
### Steps to reproduce the bug
```python
import datasets
dummy_pandas = pd.DataFrame([0... | closed | https://github.com/huggingface/datasets/issues/5641 | 2023-03-15T17:16:40 | 2023-03-16T17:14:51 | 2023-03-16T17:14:51 | {
"login": "alialamiidrissi",
"id": 14365168,
"type": "User"
} | [] | false | [] |
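A minimal reproduction of the bug fixed in #5646, following the report's `from_pandas` path:

```python
import pandas as pd

from datasets import Dataset

dummy = pd.DataFrame([0, 1, 2], columns=["self"])
ds = Dataset.from_pandas(dummy)  # raised a validation error before the fix
```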
1,625,896,057 | 5,640 | Less zip false positives | `zipfile.is_zipfile` return false positives for some Parquet files. It causes errors when loading certain parquet datasets, where some files are considered ZIP files by `zipfile.is_zipfile`
This is a known issue: https://github.com/python/cpython/issues/72680
At first I wanted to rely only on magic numbers, but t... | closed | https://github.com/huggingface/datasets/pull/5640 | 2023-03-15T16:48:59 | 2023-03-16T13:47:37 | 2023-03-16T13:40:12 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,625,737,098 | 5,639 | Parquet file wrongly recognized as zip prevents loading a dataset | ### Describe the bug
When trying to `load_dataset_builder` for `HuggingFaceGECLM/StackExchange_Mar2023`, extraction fails, because parquet file [devops-00000-of-00001-22fe902fd8702892.parquet](https://huggingface.co/datasets/HuggingFaceGECLM/StackExchange_Mar2023/resolve/1f8c9a2ab6f7d0f9ae904b8b922e4384592ae1a5/data... | closed | https://github.com/huggingface/datasets/issues/5639 | 2023-03-15T15:20:45 | 2023-03-16T13:40:14 | 2023-03-16T13:40:14 | {
"login": "clefourrier",
"id": 22726840,
"type": "User"
} | [] | false | [] |
1,625,564,471 | 5,638 | xPath to implement all operations for Path | ### Feature request
Current xPath implementation is a great extension of Path in order to work with remote objects. However some methods such as `mkdir` are not implemented correctly. It should instead rely on `fsspec` methods, instead of defaulting do `Path` methods which only work locally.
### Motivation
I'm using... | closed | https://github.com/huggingface/datasets/issues/5638 | 2023-03-15T13:47:11 | 2023-03-17T13:21:12 | 2023-03-17T13:21:12 | {
"login": "thomasw21",
"id": 24695242,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,625,295,691 | 5,637 | IterableDataset with_format does not support 'device' keyword for jax | ### Describe the bug
As seen here: https://huggingface.co/docs/datasets/use_with_jax dataset.with_format() supports the keyword 'device', to put data on a specific device when loaded as jax. However, when called on an IterableDataset, I got the error `TypeError: with_format() got an unexpected keyword argument 'devi... | open | https://github.com/huggingface/datasets/issues/5637 | 2023-03-15T11:04:12 | 2025-01-07T06:59:33 | null | {
"login": "Lime-Cakes",
"id": 91322985,
"type": "User"
} | [] | false | [] |
1,623,721,577 | 5,636 | Fix CI: ignore C901 ("some_func" is to complex) in `ruff` | idk if I should have added this ignore to `ruff` too, but I added :) | closed | https://github.com/huggingface/datasets/pull/5636 | 2023-03-14T15:29:11 | 2023-03-14T16:37:06 | 2023-03-14T16:29:52 | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [] | true | [] |
1,623,682,558 | 5,635 | Pass custom metadata filename to Image/Audio folders | This is a quick fix.
Now it requires passing data via the `data_files` parameter, including the required metadata file there, and passing its filename as the `metadata_filename` parameter.
For example, with the structure like:
```
data
images_dir/
im1.jpg
im2.jpg
...
metadata_dir/
meta_file... | open | https://github.com/huggingface/datasets/pull/5635 | 2023-03-14T15:08:16 | 2023-03-22T17:50:31 | null | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [] | true | [] |
1,622,424,174 | 5,634 | Not all progress bars are showing up when they should for downloading dataset | ### Describe the bug
During downloading the rotten tomatoes dataset, not all progress bars are displayed properly. This might be related to [this ticket](https://github.com/huggingface/datasets/issues/5117) as it raised the same concern but its not clear if the fix solves this issue too.
ipywidgets
<img width=... | closed | https://github.com/huggingface/datasets/issues/5634 | 2023-03-13T23:04:18 | 2023-10-11T16:30:16 | 2023-10-11T16:30:16 | {
"login": "garlandz-db",
"id": 110427462,
"type": "User"
} | [] | false | [] |
1,621,469,970 | 5,633 | Cannot import datasets | ### Describe the bug
Hi,
I cannot even import the library :( I installed it by running:
```
$ conda install datasets
```
Then I realized I should maybe use the huggingface channel, because I encountered the error below, so I ran:
```
$ conda remove datasets
$ conda install -c huggingface datasets
```
Pl... | closed | https://github.com/huggingface/datasets/issues/5633 | 2023-03-13T13:14:44 | 2023-03-13T17:54:19 | 2023-03-13T17:54:19 | {
"login": "ruplet",
"id": 11250555,
"type": "User"
} | [] | false | [] |
1,621,177,391 | 5,632 | Dataset cannot convert too large dictionary | ### Describe the bug
Hello everyone!
I tried to build a new dataset with the command "dict_valid = datasets.Dataset.from_dict({'input_values': values_array})".
However, I have a very large dataset (~400 GB) and it seems that datasets cannot handle this.
Indeed, I can create the dataset until a certain size of m... | open | https://github.com/huggingface/datasets/issues/5632 | 2023-03-13T10:14:40 | 2023-03-16T15:28:57 | null | {
"login": "MaraLac",
"id": 108518627,
"type": "User"
} | [] | false | [] |
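When the data is too large to build in memory first, the generator-based constructor writes examples to Arrow progressively instead; a minimal sketch with toy values:

```python
from datasets import Dataset

def gen():
    # yield examples one by one instead of building a ~400 GB dict up front
    for i in range(10):
        yield {"input_values": [float(i)] * 4}

dict_valid = Dataset.from_generator(gen)
```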
1,620,442,854 | 5,631 | Custom split names | ### Feature request
Hi,
I participated in multiple NLP tasks where there are more than just train, test, and validation splits; there could be multiple validation or test sets. But it seems only those three splits are currently supported. It would be nice to have support for more splits on the hub. (curren...
"login": "ErfanMoosaviMonazzah",
"id": 79091831,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,620,327,510 | 5,630 | adds early exit if url is `PathLike` | Closes #4864
Should fix errors thrown when attempting to load `json` dataset using `pathlib.Path` in `data_files` argument. | open | https://github.com/huggingface/datasets/pull/5630 | 2023-03-12T11:23:28 | 2023-03-15T11:58:38 | null | {
"login": "vvvm23",
"id": 44398246,
"type": "User"
} | [] | true | [] |
1,619,921,247 | 5,629 | load_dataset gives "403" error when using Financial phrasebank | When I try to load this dataset, I receive the following error:
ConnectionError: Couldn't reach https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip (error 403)
Has this been seen before? Thanks. The website loads ... | open | https://github.com/huggingface/datasets/issues/5629 | 2023-03-11T07:46:39 | 2023-03-13T18:27:26 | null | {
"login": "Jimchoo91",
"id": 67709789,
"type": "User"
} | [] | false | [] |
1,619,641,810 | 5,628 | add kwargs to index search | This PR proposes to add kwargs to index search methods.
This is particularly useful for setting the timeout of a query on elasticsearch.
A typical use case would be:
```python
dset.add_elasticsearch_index("filename", es_client=es_client)
scores, examples = dset.get_nearest_examples("filename", "my_name-train_2... | closed | https://github.com/huggingface/datasets/pull/5628 | 2023-03-10T21:24:58 | 2023-03-15T14:48:47 | 2023-03-15T14:46:04 | {
"login": "SaulLu",
"id": 55560583,
"type": "User"
} | [] | true | [] |
1,619,336,609 | 5,627 | Unable to load AutoTrain-generated dataset from the hub | ### Describe the bug
DatasetGenerationError: An error occurred while generating the dataset -> ValueError: Couldn't cast ... because column names don't match
```
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: struct<filename: string>
child 0, filename: string
... | open | https://github.com/huggingface/datasets/issues/5627 | 2023-03-10T17:25:58 | 2023-03-11T15:44:42 | null | {
"login": "ijmiller2",
"id": 8560151,
"type": "User"
} | [] | false | [] |
1,619,252,984 | 5,626 | Support streaming datasets with numpy.load | Support streaming datasets with `numpy.load`.
See: https://huggingface.co/datasets/qgallouedec/gia_dataset/discussions/1 | closed | https://github.com/huggingface/datasets/pull/5626 | 2023-03-10T16:33:39 | 2023-03-21T06:36:05 | 2023-03-21T06:28:54 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,618,971,855 | 5,625 | Allow "jsonl" data type signifier | ### Feature request
`load_dataset` currently does not accept `jsonl` as type but only `json`.
### Motivation
I was working with one of the `run_translation` scripts and used my own datasets (`.jsonl`) as train_dataset. But the default code did not work because
```
FileNotFoundError: Couldn't find a dataset scri... | open | https://github.com/huggingface/datasets/issues/5625 | 2023-03-10T13:21:48 | 2023-03-11T10:35:39 | null | {
"login": "BramVanroy",
"id": 2779410,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,617,400,192 | 5,624 | glue datasets returning -1 for test split | ### Describe the bug
Downloading any dataset from GLUE gives -1 as the class label for the test split. Train and validation have regular 0/1 class labels. This is also present in the dataset card online.
### Steps to reproduce the bug
```
dataset = load_dataset("glue", "sst2")
for d in dataset:
# prints out -1
... | closed | https://github.com/huggingface/datasets/issues/5624 | 2023-03-09T14:47:18 | 2023-03-09T16:49:29 | 2023-03-09T16:49:29 | {
"login": "lithafnium",
"id": 8939967,
"type": "User"
} | [] | false | [] |
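As the quick resolution suggests, -1 marks unlabeled test examples (GLUE withholds test labels for the benchmark) rather than a bug; a sketch of keeping only labeled rows:

```python
from datasets import load_dataset

dataset = load_dataset("glue", "sst2")
# -1 means "label withheld" on the GLUE test split; drop such rows if needed
labeled = dataset["test"].filter(lambda ex: ex["label"] != -1)
```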
1,616,712,665 | 5,623 | Remove set_access_token usage + fail tests if FutureWarning | `set_access_token` is deprecated and will be removed in `huggingface_hub>=0.14`.
This PR removes it from the tests (it was not used in `datasets` source code itself). FYI, it was not needed since `set_access_token` was just setting git credentials and `datasets` doesn't seem to use git anywhere.
In the future, us... | closed | https://github.com/huggingface/datasets/pull/5623 | 2023-03-09T08:46:01 | 2023-03-09T15:39:00 | 2023-03-09T15:31:59 | {
"login": "Wauplin",
"id": 11801849,
"type": "User"
} | [] | true | [] |
1,615,190,942 | 5,622 | Update README template to better template | null | closed | https://github.com/huggingface/datasets/pull/5622 | 2023-03-08T12:30:23 | 2023-03-11T05:07:38 | 2023-03-11T05:07:38 | {
"login": "emiltj",
"id": 54767532,
"type": "User"
} | [] | true | [] |