| id (int64) | number (int64) | title (string) | body (string) | state (string) | html_url (string) | created_at (timestamp) | updated_at (timestamp) | closed_at (timestamp) | user (dict) | labels (list) | is_pull_request (bool) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,746,249,161 | 5,932 | [doc build] Use secrets | Companion pr to https://github.com/huggingface/doc-builder/pull/379 | closed | https://github.com/huggingface/datasets/pull/5932 | 2023-06-07T16:09:39 | 2023-06-09T10:16:58 | 2023-06-09T09:53:16 | {
"login": "mishig25",
"id": 11827707,
"type": "User"
} | [] | true | [] |
1,745,408,784 | 5,931 | `datasets.map` not reusing cached copy by default | ### Describe the bug
When I load the dataset from a local directory, its cached copy is picked up after the first time. However, for the `map` operation, the operation is applied again and the cached copy is not picked up. Is there any way to pick up the cached copy instead of processing it again? The only solution I could think of was... | closed | https://github.com/huggingface/datasets/issues/5931 | 2023-06-07T09:03:33 | 2023-06-21T16:15:40 | 2023-06-21T16:15:40 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [] | false | [] |
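As a conceptual sketch (not the actual `datasets` internals), the cache lookup for `map` is keyed by a fingerprint derived from the previous dataset state plus the serialized transform function; any difference in the serialized function between runs yields a different fingerprint and therefore a cache miss:

```python
# Conceptual sketch of fingerprint-based caching: the cache key hashes the
# previous fingerprint together with the serialized transform. If the
# transform serializes identically across runs, the cache is reused;
# otherwise the operation is recomputed. (Names here are illustrative.)
import hashlib
import pickle

def fingerprint(previous_fingerprint: str, transform_repr: bytes) -> str:
    """Derive a new fingerprint from the old one plus the transform bytes."""
    h = hashlib.sha256()
    h.update(previous_fingerprint.encode("utf-8"))
    h.update(transform_repr)
    return h.hexdigest()[:16]

base = "abc123"
# Two runs serializing an identical transform: same fingerprint, cache hit.
same_a = fingerprint(base, pickle.dumps(("add_prefix", {"prefix": "x"})))
same_b = fingerprint(base, pickle.dumps(("add_prefix", {"prefix": "x"})))
# A transform with a different captured parameter: different fingerprint.
other = fingerprint(base, pickle.dumps(("add_prefix", {"prefix": "y"})))

print(same_a == same_b)  # True
print(same_a == other)   # False
```

This is why a `map` call with a function that serializes differently each run (for example, a closure capturing a changing value) appears to "ignore" its cache.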
1,745,184,395 | 5,930 | loading private custom dataset script - authentication error | ### Describe the bug
Training a model with my custom dataset stored on HuggingFace and loaded with the loading script requires authentication, but I am not sure how to provide it.
I am logged in in the terminal and in the browser. I receive this error:
/python3.8/site-packages/datasets/utils/file_utils.py", line 566, in get_from... | closed | https://github.com/huggingface/datasets/issues/5930 | 2023-06-07T06:58:23 | 2023-06-15T14:49:21 | 2023-06-15T14:49:20 | {
"login": "flckv",
"id": 103381497,
"type": "User"
} | [] | false | [] |
1,744,478,456 | 5,929 | Importing PyTorch reduces multiprocessing performance for map | ### Describe the bug
I noticed that the performance of my dataset preprocessing with `map(...,num_proc=32)` decreases when PyTorch is imported.
### Steps to reproduce the bug
I created two example scripts to reproduce this behavior:
```
import datasets
datasets.disable_caching()
from datasets import Da... | closed | https://github.com/huggingface/datasets/issues/5929 | 2023-06-06T19:42:25 | 2023-06-16T13:09:12 | 2023-06-16T13:09:12 | {
"login": "Maxscha",
"id": 12814709,
"type": "User"
} | [] | false | [] |
1,744,098,371 | 5,928 | Fix link to quickstart docs in README.md | null | closed | https://github.com/huggingface/datasets/pull/5928 | 2023-06-06T15:23:01 | 2023-06-06T15:52:34 | 2023-06-06T15:43:53 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,744,009,032 | 5,927 | `IndexError` when indexing `Sequence` of `Array2D` with `None` values | ### Describe the bug
Having `None` values in a `Sequence` of `ArrayND` fails.
### Steps to reproduce the bug
```python
from datasets import Array2D, Dataset, Features, Sequence
data = [
[
[[0]],
None,
None,
]
]
feature = Sequence(Array2D((1, 1), dtype="int64"))
dataset =... | closed | https://github.com/huggingface/datasets/issues/5927 | 2023-06-06T14:36:22 | 2023-06-13T12:39:39 | 2023-06-09T13:23:50 | {
"login": "qgallouedec",
"id": 45557362,
"type": "User"
} | [] | false | [] |
1,743,922,028 | 5,926 | Uncaught exception when generating the splits from a dataset that miss data | ### Describe the bug
Dataset https://huggingface.co/datasets/blog_authorship_corpus has an issue with its hosting platform, since https://drive.google.com/u/0/uc?id=1cGy4RNDV87ZHEXbiozABr9gsSrZpPaPz&export=download returns 404 error.
But when trying to generate the split names, we get an exception which is now corr... | open | https://github.com/huggingface/datasets/issues/5926 | 2023-06-06T13:51:01 | 2023-06-07T07:53:16 | null | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [] | false | [] |
1,741,941,436 | 5,925 | Breaking API change in datasets.list_datasets caused by change in HfApi.list_datasets | ### Describe the bug
Hi all,
after an update of the `datasets` library, we observed crashes in our code. We relied on `datasets.list_datasets` returning a `list`. Now, after the API of `HfApi.list_datasets` was changed so that it returns an `Iterable` instead of a `list`, `datasets.list_datasets` now sometimes re... | closed | https://github.com/huggingface/datasets/issues/5925 | 2023-06-05T14:46:04 | 2023-06-19T17:22:43 | 2023-06-19T17:22:43 | {
"login": "mtkinit",
"id": 78868366,
"type": "User"
} | [] | false | [] |
1,738,889,236 | 5,924 | Add parallel module using joblib for Spark | Discussion in https://github.com/huggingface/datasets/issues/5798 | closed | https://github.com/huggingface/datasets/pull/5924 | 2023-06-02T22:25:25 | 2023-06-14T10:25:10 | 2023-06-14T10:15:46 | {
"login": "es94129",
"id": 12763339,
"type": "User"
} | [] | true | [] |
1,737,436,227 | 5,923 | Cannot import datasets - ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility | ### Describe the bug
When trying to import datasets, I get a pyarrow ValueError:
Traceback (most recent call last):
File "/Users/edward/test/test.py", line 1, in <module>
import datasets
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/__init__.py", line 43, in <module>... | closed | https://github.com/huggingface/datasets/issues/5923 | 2023-06-02T04:16:32 | 2024-06-27T10:07:49 | 2024-02-25T16:38:03 | {
"login": "ehuangc",
"id": 71412682,
"type": "User"
} | [] | false | [] |
1,736,898,953 | 5,922 | Length of table does not accurately reflect the split | ### Describe the bug
I load a Huggingface Dataset and do `train_test_split`. I'm expecting the underlying table for the dataset to also be split, but it's not.
### Steps to reproduce the bug

### Expected behavior... | closed | https://github.com/huggingface/datasets/issues/5922 | 2023-06-01T18:56:26 | 2023-06-02T16:13:31 | 2023-06-02T16:13:31 | {
"login": "amogkam",
"id": 8068268,
"type": "User"
} | [
{
"name": "wontfix",
"color": "ffffff"
}
] | false | [] |
1,736,563,023 | 5,921 | Fix streaming parquet with image feature in schema | It was not reading the feature type from the parquet arrow schema | closed | https://github.com/huggingface/datasets/pull/5921 | 2023-06-01T15:23:10 | 2023-06-02T10:02:54 | 2023-06-02T09:53:11 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,736,196,991 | 5,920 | Optimize IterableDataset.from_file using ArrowExamplesIterable | following https://github.com/huggingface/datasets/pull/5893 | closed | https://github.com/huggingface/datasets/pull/5920 | 2023-06-01T12:14:36 | 2023-06-01T12:42:10 | 2023-06-01T12:35:14 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,735,519,227 | 5,919 | add support for storage_options for load_dataset API | to solve the issue in #5880
1. add s3 support in the link check step; previously we only checked `http` and `https`,
2. change the parameter of `use_auth_token` to `download_config` to support both `storage_options` and `use_auth_token` parameter when trying to handle(list, open, read, etc,.) the remote files.
3... | closed | https://github.com/huggingface/datasets/pull/5919 | 2023-06-01T05:52:32 | 2023-07-18T06:14:32 | 2023-07-17T17:02:00 | {
"login": "janineguo",
"id": 59083384,
"type": "User"
} | [] | true | [] |
1,735,313,549 | 5,918 | File not found for audio dataset | ### Describe the bug
After loading an audio dataset, and looking at a sample entry, the `path` element, which is supposed to be the path to the audio file, doesn't actually exist.
### Steps to reproduce the bug
Run bug.py:
```py
import os.path
from datasets import load_dataset
def run() -> None:
cv1... | open | https://github.com/huggingface/datasets/issues/5918 | 2023-06-01T02:15:29 | 2023-06-11T06:02:25 | null | {
"login": "RobertBaruch",
"id": 1783950,
"type": "User"
} | [] | false | [] |
1,733,661,588 | 5,917 | Refactor extensions | Related to:
- #5850 | closed | https://github.com/huggingface/datasets/pull/5917 | 2023-05-31T08:33:02 | 2023-05-31T13:34:35 | 2023-05-31T13:25:57 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,732,456,392 | 5,916 | Unpin responses | Fix #5906 | closed | https://github.com/huggingface/datasets/pull/5916 | 2023-05-30T14:59:48 | 2023-05-30T18:03:10 | 2023-05-30T17:53:29 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,732,389,984 | 5,915 | Raise error in `DatasetBuilder.as_dataset` when `file_format` is not `"arrow"` | Raise an error in `DatasetBuilder.as_dataset` when `file_format != "arrow"` (and fix the docstring)
Fix #5874 | closed | https://github.com/huggingface/datasets/pull/5915 | 2023-05-30T14:27:55 | 2023-05-31T13:31:21 | 2023-05-31T13:23:54 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,731,483,996 | 5,914 | array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size in Datasets | ### Describe the bug
When using the `filter` or `map` function to preprocess a dataset, a ValueError is encountered with the error message "array is too big; arr.size * arr.dtype.itemsize is larger than the maximum possible size."
Detailed error message:
Traceback (most recent call last):
File "data_processing... | open | https://github.com/huggingface/datasets/issues/5914 | 2023-05-30T04:25:00 | 2024-10-27T04:09:18 | null | {
"login": "ravenouse",
"id": 85110830,
"type": "User"
} | [] | false | [] |
1,731,427,484 | 5,913 | I tried to load a custom dataset using the following statement: dataset = load_dataset('json', data_files=data_files). The dataset contains 50 million text-image pairs, but an error occurred. | ### Describe the bug
File "/home/kas/.conda/envs/diffusers/lib/python3.7/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
Downloading and preparing dataset json/default to /home/kas/diffusers/examples/dreambooth/cache_data/datasets/json/default-acf423d8c6ef99d0/0.0.0/e347ab1c932092252e717ff3f94... | closed | https://github.com/huggingface/datasets/issues/5913 | 2023-05-30T02:55:26 | 2023-07-24T12:00:38 | 2023-07-24T12:00:38 | {
"login": "cjt222",
"id": 17508662,
"type": "User"
} | [] | false | [] |
1,730,299,852 | 5,912 | Missing elements in `map` a batched dataset | ### Describe the bug
As outlined [here](https://discuss.huggingface.co/t/length-error-using-map-with-datasets/40969/3?u=sachin), the following collate function drops 5 out of a possible 6 elements in the batch (it is 6 because, out of the eight, two are bad links in laion). A reproducible [kaggle kernel ](https://www.kag... | closed | https://github.com/huggingface/datasets/issues/5912 | 2023-05-29T08:09:19 | 2023-07-26T15:48:15 | 2023-07-26T15:48:15 | {
"login": "sachinruk",
"id": 1410927,
"type": "User"
} | [] | false | [] |
1,728,909,790 | 5,910 | Cannot use both set_format and set_transform | ### Describe the bug
I need to process some data using the set_transform method but I also need the data to be formatted for pytorch before processing it.
I don't see anything in the documentation that says the two methods cannot be used at the same time.
### Steps to reproduce the bug
```
from... | closed | https://github.com/huggingface/datasets/issues/5910 | 2023-05-27T19:22:23 | 2023-07-09T21:40:54 | 2023-06-16T14:41:24 | {
"login": "ybouane",
"id": 14046002,
"type": "User"
} | [] | false | [] |
1,728,900,068 | 5,909 | Use more efficient and idiomatic way to construct list. | Using `*` is ~2X faster according to [benchmark](https://colab.research.google.com/gist/ttsugriy/c964a2604edf70c41911b10335729b6a/for-vs-mult.ipynb) with just 4 patterns. This doesn't matter much since this tiny difference is not going to be noticeable, but why not? | closed | https://github.com/huggingface/datasets/pull/5909 | 2023-05-27T18:54:47 | 2023-05-31T15:37:11 | 2023-05-31T13:28:29 | {
"login": "ttsugriy",
"id": 172294,
"type": "User"
} | [] | true | [] |
1,728,653,935 | 5,908 | Unbearably slow sorting on big mapped datasets | ### Describe the bug
For me, with ~40k lines, sorting took 3.5 seconds on a flattened dataset (including the flatten operation) and 22.7 seconds on a mapped dataset (right after sharding), which is about a 5x slowdown. Moreover, it seems like it slows down exponentially with bigger datasets (wasn't able to sort 700k lin... | open | https://github.com/huggingface/datasets/issues/5908 | 2023-05-27T11:08:32 | 2023-06-13T17:45:10 | null | {
"login": "maximxlss",
"id": 29152154,
"type": "User"
} | [] | false | [] |
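A plausible pure-Python illustration (not the actual Arrow internals) of the indirection behind the slowdown reported above: after operations like shard/select, each row access goes through an indices mapping, so sorting touches rows in a scattered order, while flattening materializes the selected rows first:

```python
# Sketch of the indices-mapping indirection: a "sharded" dataset view keeps
# `indices` into the original table, so row i is really `rows[indices[i]]`.
# Sorting through that mapping pays an extra dereference per comparison;
# "flattening" copies the selected rows out first, then sorts directly.
rows = [{"id": i, "score": (i * 37) % 101} for i in range(1000)]
indices = list(range(0, 1000, 2))  # a "sharded" view: every other row

# Sorting through the mapping: each key lookup dereferences indices first.
sorted_via_mapping = sorted(indices, key=lambda i: rows[i]["score"])
mapped_result = [rows[i]["score"] for i in sorted_via_mapping]

# "Flatten" first: materialize the selected rows, then sort them directly.
flattened = [rows[i] for i in indices]
flat_result = [r["score"] for r in sorted(flattened, key=lambda r: r["score"])]

print(mapped_result == flat_result)  # True: same result, different access pattern
```

Both paths produce the same sorted output; the difference in a real Arrow-backed dataset is memory locality, since the flattened copy is contiguous on disk.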
1,728,648,560 | 5,907 | Add `flatten_indices` to `DatasetDict` | Add `flatten_indices` to `DatasetDict` for convenience | closed | https://github.com/huggingface/datasets/pull/5907 | 2023-05-27T10:55:44 | 2023-06-01T11:46:35 | 2023-06-01T11:39:36 | {
"login": "maximxlss",
"id": 29152154,
"type": "User"
} | [] | true | [] |
1,728,171,113 | 5,906 | Could you unpin responses version? | ### Describe the bug
Could you unpin [this](https://github.com/huggingface/datasets/blob/main/setup.py#L139) or move it to test requirements? This is a testing library, and we use it for our tests as well. We do not want to use a very outdated version.
### Steps to reproduce the bug
could not install this librar... | closed | https://github.com/huggingface/datasets/issues/5906 | 2023-05-26T20:02:14 | 2023-05-30T17:53:31 | 2023-05-30T17:53:31 | {
"login": "kenimou",
"id": 47789026,
"type": "User"
} | [] | false | [] |
1,727,541,392 | 5,905 | Offer an alternative to Iterable Dataset that allows lazy loading and processing while skipping batches efficiently | ### Feature request
I would like a way to resume training from a checkpoint without waiting for a very long time when using an iterable dataset.
### Motivation
I am training models on the speech-recognition task. I have very large datasets that I can't comfortably store on a disk and also quite computationally... | open | https://github.com/huggingface/datasets/issues/5905 | 2023-05-26T12:33:02 | 2023-06-15T13:34:18 | null | {
"login": "bruno-hays",
"id": 48770768,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
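The resume-from-checkpoint idea in the request above can be sketched minimally. This is the naive skip approach the issue wants to improve on for huge streams (it still reads and discards every skipped item); truly efficient resumption needs seekable shards, but the interface can look the same:

```python
# Minimal sketch of resuming a streamed dataset by skipping already-seen
# examples. `stream` is a stand-in for a lazily generated dataset; the names
# here are illustrative, not the `datasets` API.
from itertools import islice

def stream():
    """Stand-in for a lazily generated / streamed dataset."""
    for i in range(10):
        yield {"step": i}

def resume_from(iterable, n_seen):
    """Skip the first n_seen examples and continue iterating from there."""
    return islice(iterable, n_seen, None)

remaining = [ex["step"] for ex in resume_from(stream(), 7)]
print(remaining)  # [7, 8, 9]
```

With this naive approach, resuming at step 7 still consumes items 0 through 6 under the hood, which is exactly the waiting time the feature request wants to avoid.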
1,727,415,626 | 5,904 | Validate name parameter in make_file_instructions | Validate `name` parameter in `make_file_instructions`.
This way users get more informative error messages, instead of:
```stacktrace
.../huggingface/datasets/src/datasets/arrow_reader.py in make_file_instructions(name, split_infos, instruction, filetype_suffix, prefix_path)
110 name2len = {info.name: info... | closed | https://github.com/huggingface/datasets/pull/5904 | 2023-05-26T11:12:46 | 2023-05-31T07:43:32 | 2023-05-31T07:34:57 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,727,372,549 | 5,903 | Relax `ci.yml` trigger for `pull_request` based on modified paths | ## What's in this PR?
As of a previous PR at #5902, I've seen that the CI was automatically triggered on any file, in that case when modifying a Jupyter Notebook (.ipynb), which IMO could be skipped, as a modification to a Jupyter Notebook has no effect/impact on the `ci.yml` outcome. So this PR controls the paths... | open | https://github.com/huggingface/datasets/pull/5903 | 2023-05-26T10:46:52 | 2023-09-07T15:52:36 | null | {
"login": "alvarobartt",
"id": 36760800,
"type": "User"
} | [] | true | [] |
1,727,342,194 | 5,902 | Fix `Overview.ipynb` & detach Jupyter Notebooks from `datasets` repository | ## What's in this PR?
This PR solves #5887: there was a mismatch between the tokenizer and the model used, since the tokenizer was `bert-base-cased` while the model was `distilbert-base-cased`, for both the PyTorch and TensorFlow alternatives. Since DistilBERT doesn't use/need the `token_type_ids`, the `**batch` ... | closed | https://github.com/huggingface/datasets/pull/5902 | 2023-05-26T10:25:01 | 2023-07-25T13:50:06 | 2023-07-25T13:38:33 | {
"login": "alvarobartt",
"id": 36760800,
"type": "User"
} | [] | true | [] |
1,727,179,016 | 5,901 | Make prepare_split more robust if errors in metadata dataset_info splits | This PR uses `split_generator.split_info` as default value for `split_info` if any exception is raised while trying to get `split_generator.name` from `self.info.splits` (this may happen if there is any error in the metadata dataset_info splits).
Please note that `split_info` is only used by the logger.
Fix #5895... | closed | https://github.com/huggingface/datasets/pull/5901 | 2023-05-26T08:48:22 | 2023-06-02T06:06:38 | 2023-06-01T13:39:40 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,727,129,617 | 5,900 | Fix minor typo in docs loading.mdx | Minor fix. | closed | https://github.com/huggingface/datasets/pull/5900 | 2023-05-26T08:10:54 | 2023-05-26T09:34:15 | 2023-05-26T09:25:12 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,726,279,011 | 5,899 | canonicalize data dir in config ID hash | fixes #5871
The second commit is optional but improves readability. | closed | https://github.com/huggingface/datasets/pull/5899 | 2023-05-25T18:17:10 | 2023-06-02T16:02:15 | 2023-06-02T15:52:04 | {
"login": "kylrth",
"id": 5044802,
"type": "User"
} | [] | true | [] |
1,726,190,481 | 5,898 | Loading The flores data set for specific language | ### Describe the bug
I am trying to load the Flores dataset.
The given code is:
```
from datasets import load_dataset
dataset = load_dataset("facebook/flores")
```
This gives a config-name error:
"ValueError: Config name is missing"
Now if I add some config, it gives me some error
... | closed | https://github.com/huggingface/datasets/issues/5898 | 2023-05-25T17:08:55 | 2023-05-25T17:21:38 | 2023-05-25T17:21:37 | {
"login": "106AbdulBasit",
"id": 36159918,
"type": "User"
} | [] | false | [] |
1,726,135,494 | 5,897 | Fix `FixedSizeListArray` casting | Fix cast on sliced `FixedSizeListArray`s.
Fix #5866 | closed | https://github.com/huggingface/datasets/pull/5897 | 2023-05-25T16:26:33 | 2023-05-26T12:22:04 | 2023-05-26T11:57:16 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,726,022,500 | 5,896 | HuggingFace does not cache downloaded files aggressively/early enough | ### Describe the bug
I wrote the following script:
```
import datasets
dataset = datasets.load.load_dataset("wikipedia", "20220301.en", split="train[:10000]")
```
I ran it and spent 90 minutes downloading a 20GB file. Then I saw:
```
Downloading: 100%|████████████████████████████████████████████████████... | closed | https://github.com/huggingface/datasets/issues/5896 | 2023-05-25T15:14:36 | 2024-03-15T15:36:07 | 2024-03-15T15:36:07 | {
"login": "jack-jjm",
"id": 2124157,
"type": "User"
} | [] | false | [] |
1,725,467,252 | 5,895 | The dir name and split strings are confused when loading ArmelR/stack-exchange-instruction dataset | ### Describe the bug
When I load the ArmelR/stack-exchange-instruction dataset, I encounter a bug that may be caused by confusion between the dir name string and the split string of the dataset.
When I use the script "datasets.load_dataset('ArmelR/stack-exchange-instruction', data_dir="data/finetune", split="train", ... | closed | https://github.com/huggingface/datasets/issues/5895 | 2023-05-25T09:39:06 | 2023-05-29T02:32:12 | 2023-05-29T02:32:12 | {
"login": "DongHande",
"id": 45357817,
"type": "User"
} | [] | false | [] |
1,724,774,910 | 5,894 | Force overwrite existing filesystem protocol | Fix #5876 | closed | https://github.com/huggingface/datasets/pull/5894 | 2023-05-24T21:41:53 | 2023-05-25T06:52:08 | 2023-05-25T06:42:33 | {
"login": "baskrahmer",
"id": 24520725,
"type": "User"
} | [] | true | [] |
1,722,519,056 | 5,893 | Load cached dataset as iterable | To be used to train models it allows to load an IterableDataset from the cached Arrow file.
See https://github.com/huggingface/datasets/issues/5481 | closed | https://github.com/huggingface/datasets/pull/5893 | 2023-05-23T17:40:35 | 2023-06-01T11:58:24 | 2023-06-01T11:51:29 | {
"login": "mariusz-jachimowicz-83",
"id": 10278877,
"type": "User"
} | [] | true | [] |
1,722,503,824 | 5,892 | User access requests with manual review do not notify the dataset owner | ### Describe the bug
When user access requests are enabled and new requests are set to Manual Review, the dataset owner should be notified of the pending requests. However, currently nothing happens, and so a dataset request can go unanswered for quite some time until the owner happens to check that part... | closed | https://github.com/huggingface/datasets/issues/5892 | 2023-05-23T17:27:46 | 2023-07-21T13:55:37 | 2023-07-21T13:55:36 | {
"login": "leondz",
"id": 121934,
"type": "User"
} | [] | false | [] |
1,722,384,135 | 5,891 | Make split slicing consistent with list slicing | Fix #1774, fix #5875
| closed | https://github.com/huggingface/datasets/pull/5891 | 2023-05-23T16:04:33 | 2024-01-31T16:00:26 | 2024-01-31T15:54:17 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,722,373,618 | 5,889 | Token Alignment for input and output data over train and test batch/dataset. | `data`
> DatasetDict({
train: Dataset({
features: ['input', 'output'],
num_rows: 4500
})
test: Dataset({
features: ['input', 'output'],
num_rows: 500
})
})
**# input (in-correct sentence)**
`data['train'][0]['input']`
**>>** 'We are meet sunday 10am12pmET i... | open | https://github.com/huggingface/datasets/issues/5889 | 2023-05-23T15:58:55 | 2023-05-23T15:58:55 | null | {
"login": "akesh1235",
"id": 125154243,
"type": "User"
} | [] | false | [] |
1,722,166,382 | 5,887 | HuggingsFace dataset example give error | ### Describe the bug


### Steps to reproduce the bug
Use link as reference document written https://c... | closed | https://github.com/huggingface/datasets/issues/5887 | 2023-05-23T14:09:05 | 2023-07-25T14:01:01 | 2023-07-25T14:01:00 | {
"login": "donhuvy",
"id": 1328316,
"type": "User"
} | [] | false | [] |
1,721,070,225 | 5,886 | Use work-stealing algorithm when parallel computing | ### Feature request
When I used the Dataset.map API to process data concurrently, I found that
it gets slower and slower as it gets closer to completion. Then I read the source code of arrow_dataset.py and found that it shards the dataset and uses a multiprocessing pool to execute each shard. It may cause the slowest task ... | open | https://github.com/huggingface/datasets/issues/5886 | 2023-05-23T03:08:44 | 2023-05-24T15:30:09 | null | {
"login": "1014661165",
"id": 46060451,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,720,954,440 | 5,885 | Modify `is_remote_filesystem` to return True for FUSE-mounted paths | null | closed | https://github.com/huggingface/datasets/pull/5885 | 2023-05-23T01:04:54 | 2024-01-08T18:31:00 | 2024-01-08T18:31:00 | {
"login": "maddiedawson",
"id": 106995444,
"type": "User"
} | [] | true | [] |
1,722,290,363 | 5,888 | A way to upload and visualize .mp4 files (millions of them) as part of a dataset | **Is your feature request related to a problem? Please describe.**
I recently chose to use the Hugging Face Hub as the home for a large multimodal dataset I've been building: https://huggingface.co/datasets/Antreas/TALI
It combines images, text, audio and video. Now, I could very easily upload a dataset made via datase... | open | https://github.com/huggingface/datasets/issues/5888 | 2023-05-22T18:05:26 | 2023-06-23T03:37:16 | null | {
"login": "AntreasAntoniou",
"id": 10792502,
"type": "User"
} | [] | false | [] |
1,719,548,172 | 5,884 | `Dataset.to_tf_dataset` fails when strings cannot be encoded as `np.bytes_` | ### Describe the bug
When loading any dataset that contains a column with strings that are not ASCII-compatible, looping over those records raises the following exception e.g. for `é` character `UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 0: ordinal not in range(128)`.
### Steps to rep... | closed | https://github.com/huggingface/datasets/issues/5884 | 2023-05-22T12:03:06 | 2023-06-09T16:04:56 | 2023-06-09T16:04:55 | {
"login": "alvarobartt",
"id": 36760800,
"type": "User"
} | [] | false | [] |
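The failure mode described in the issue above can be reproduced without TensorFlow: coercing text to bytes with the default ASCII codec fails on non-ASCII characters, while an explicit UTF-8 encode/decode round-trip is lossless:

```python
# Reproducing the string-encoding problem in plain Python: ASCII cannot
# represent 'é', so encoding raises UnicodeEncodeError, whereas UTF-8
# round-trips the text without loss.
text = "café"

try:
    text.encode("ascii")
    ascii_ok = True
except UnicodeEncodeError:
    ascii_ok = False

utf8_roundtrip = text.encode("utf-8").decode("utf-8")

print(ascii_ok)                # False: 'é' is outside the ASCII range
print(utf8_roundtrip == text)  # True: UTF-8 round-trip is lossless
```

This is why the fix in the companion PR (#5883) is about choosing the encoding explicitly rather than relying on a default bytes conversion.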
1,719,527,597 | 5,883 | Fix string-encoding, make `batch_size` optional, and minor improvements in `Dataset.to_tf_dataset` | ## What's in this PR?
This PR addresses some minor fixes and general improvements in the `to_tf_dataset` method of `datasets.Dataset`, to convert a 🤗HuggingFace Dataset as a TensorFlow Dataset.
The main bug solved in this PR comes with the string-encoding, since for safety purposes the internal conversion of `nu... | closed | https://github.com/huggingface/datasets/pull/5883 | 2023-05-22T11:51:07 | 2023-06-08T11:09:03 | 2023-06-06T16:49:15 | {
"login": "alvarobartt",
"id": 36760800,
"type": "User"
} | [] | true | [] |
1,719,402,643 | 5,881 | Split dataset by node: index error when sharding iterable dataset | ### Describe the bug
Context: we're splitting an iterable dataset by node and then passing it to a torch data loader with multiple workers
When we iterate over it for 5 steps, we don't get an error
When we instead iterate over it for 8 steps, we get an `IndexError` when fetching the data if we have too many wo... | open | https://github.com/huggingface/datasets/issues/5881 | 2023-05-22T10:36:13 | 2025-01-31T16:36:30 | null | {
"login": "sanchit-gandhi",
"id": 93869735,
"type": "User"
} | [] | false | [] |
1,719,090,101 | 5,880 | load_dataset from s3 file system through streaming can't not iterate data | ### Describe the bug
I have a JSON file in my S3 file system (MinIO). I can use load_dataset to get the file link, but I can't iterate over it.
<img width="816" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/cc0778d3-36f3-45b5-ac68-4e7c664c2ed0">
<img width="1144" alt="image" src="https://github.c... | open | https://github.com/huggingface/datasets/issues/5880 | 2023-05-22T07:40:27 | 2023-05-26T12:52:08 | null | {
"login": "janineguo",
"id": 59083384,
"type": "User"
} | [] | false | [] |
1,718,203,843 | 5,878 | Prefetching for IterableDataset | ### Feature request
Add support for prefetching the next n batches through `IterableDataset` to reduce the batch-loading bottleneck in the training loop.
### Motivation
The primary motivation behind this is to use hardware accelerators alongside a streaming dataset. This is required when you are in a low ram or low disk... | open | https://github.com/huggingface/datasets/issues/5878 | 2023-05-20T15:25:40 | 2025-01-24T17:13:55 | null | {
"login": "vyeevani",
"id": 30946190,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,717,983,961 | 5,877 | Request for text deduplication feature | ### Feature request
It would be great if there were support for high-performance, highly scalable text deduplication algorithms as part of the datasets library.
### Motivation
Motivated by this blog post https://huggingface.co/blog/dedup and this library https://github.com/google-research/deduplicate-text-datase... | open | https://github.com/huggingface/datasets/issues/5877 | 2023-05-20T01:56:00 | 2024-01-25T14:40:09 | null | {
"login": "SupreethRao99",
"id": 55043035,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
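The simplest form of the requested feature can be sketched as exact deduplication by hashing normalized text. Large-scale pipelines (like those in the linked blog post) add suffix-array or MinHash stages for near-duplicates, but the exact pass usually looks like this:

```python
# Exact text deduplication sketch: normalize (lowercase, collapse
# whitespace), hash, and keep only the first occurrence of each hash.
import hashlib

def dedup_exact(texts):
    """Keep the first occurrence of each normalized text."""
    seen = set()
    kept = []
    for t in texts:
        normalized = " ".join(t.lower().split())
        key = hashlib.sha1(normalized.encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(t)
    return kept

corpus = ["Hello world", "hello   WORLD", "goodbye", "Hello world"]
print(dedup_exact(corpus))  # ['Hello world', 'goodbye']
```

Hashing keeps memory proportional to the number of unique texts rather than their total length, which is what makes even this naive pass workable on large corpora.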
1,717,978,985 | 5,876 | Incompatibility with DataLab | ### Describe the bug
Hello,
I am currently working on a project where both [DataLab](https://github.com/ExpressAI/DataLab) and [datasets](https://github.com/huggingface/datasets) are subdependencies.
I noticed that I cannot import both libraries, as they both register FileSystems in `fsspec`, expecting the FileSyste... | closed | https://github.com/huggingface/datasets/issues/5876 | 2023-05-20T01:39:11 | 2023-05-25T06:42:34 | 2023-05-25T06:42:34 | {
"login": "helpmefindaname",
"id": 26192135,
"type": "User"
} | [
{
"name": "good first issue",
"color": "7057ff"
}
] | false | [] |
1,716,770,394 | 5,875 | Why split slicing doesn't behave like list slicing ? | ### Describe the bug
If I want to get the first 10 samples of my dataset, I can do:
```
ds = datasets.load_dataset('mnist', split='train[:10]')
```
But if I exceed the number of samples in the dataset, an exception is raised :
```
ds = datasets.load_dataset('mnist', split='train[:999999999]')
```
> V... | closed | https://github.com/huggingface/datasets/issues/5875 | 2023-05-19T07:21:10 | 2024-01-31T15:54:18 | 2024-01-31T15:54:18 | {
"login": "astariul",
"id": 43774355,
"type": "User"
} | [
{
"name": "duplicate",
"color": "cfd3d7"
}
] | false | [] |
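For context on the issue above, this is the list behavior the reporter expects split slicing to match: Python clamps out-of-range slice bounds instead of raising an exception:

```python
# Python list slicing clamps out-of-range bounds: slicing past the end
# simply returns the whole list, with no exception raised.
samples = list(range(10))

print(samples[:5])                     # [0, 1, 2, 3, 4]
print(samples[:999999999] == samples)  # True: clamped, not an error
```

The linked PR #5891 makes `split='train[:999999999]'` behave the same way instead of raising a ValueError.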
1,715,708,930 | 5,874 | Using as_dataset on a "parquet" builder | ### Describe the bug
I used a custom builder to ``download_and_prepare`` a dataset. The first (very minor) issue is that the doc seems to suggest ``download_and_prepare`` will return the dataset, while it does not ([builder.py](https://github.com/huggingface/datasets/blob/main/src/datasets/builder.py#L718-L738)).
```... | closed | https://github.com/huggingface/datasets/issues/5874 | 2023-05-18T14:09:03 | 2023-05-31T13:23:55 | 2023-05-31T13:23:55 | {
"login": "rems75",
"id": 9039058,
"type": "User"
} | [] | false | [] |
1,713,269,724 | 5,873 | Allow setting the environment variable for the lock file path | ### Feature request
Add an environment variable to replace the default lock file path.
### Motivation
Usually, the dataset path is read-only while the lock file needs to be modified each time. It would be convenient if the lock file path could be set individually.
### Your contribution
```/src/datasets/utils/fi... | open | https://github.com/huggingface/datasets/issues/5873 | 2023-05-17T07:10:02 | 2023-05-17T07:11:05 | null | {
"login": "xin3he",
"id": 83260933,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,713,174,662 | 5,872 | Fix infer module for uppercase extensions | Fix the `infer_module_for_data_files` and `infer_module_for_data_files_in_archives` functions when passed a data file name with uppercase extension, e.g. `filename.TXT`.
Before, `None` module was returned. | closed | https://github.com/huggingface/datasets/pull/5872 | 2023-05-17T05:56:45 | 2023-05-17T14:26:59 | 2023-05-17T14:19:18 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,712,573,073 | 5,871 | data configuration hash suffix depends on uncanonicalized data_dir | ### Describe the bug
I am working with the `recipe_nlg` dataset, which requires manual download. Once it's downloaded, I've noticed that the hash in the custom data configuration is different if I add a trailing `/` to my `data_dir`. It took me a while to notice that the hashes were different, and to understand that... | closed | https://github.com/huggingface/datasets/issues/5871 | 2023-05-16T18:56:04 | 2023-06-02T15:52:05 | 2023-06-02T15:52:05 | {
"login": "kylrth",
"id": 5044802,
"type": "User"
} | [
{
"name": "good first issue",
"color": "7057ff"
}
] | false | [] |
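The fix direction for the issue above can be sketched as canonicalizing `data_dir` before it enters the config ID hash, so a trailing slash no longer changes the resulting hash. This is an illustrative sketch, not the actual `datasets` hashing code:

```python
# Canonicalize the data_dir before hashing so "path/to/data" and
# "path/to/data/" produce the same config ID. (Illustrative names only.)
import hashlib
import os

def config_id(data_dir: str) -> str:
    canonical = os.path.normpath(data_dir)  # drops trailing slashes and "." parts
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:8]

print(config_id("path/to/data") == config_id("path/to/data/"))  # True
print(config_id("path/to/data") == config_id("path/to/other"))  # False
```

Without the `normpath` step, the two spellings of the same directory hash differently, which is exactly the cache-duplication the reporter hit.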
1,712,156,282 | 5,870 | Behaviour difference between datasets.map and IterableDatasets.map | ### Describe the bug
All the examples in the docs throughout huggingface datasets correspond to `Dataset` objects, not `IterableDataset` objects. At one point in time they might have been in sync, but the code for datasets version >=2.9.0 is very different from the docs.
I basically need to ... | open | https://github.com/huggingface/datasets/issues/5870 | 2023-05-16T14:32:57 | 2023-05-16T14:36:05 | null | {
"login": "llStringll",
"id": 30209072,
"type": "User"
} | [] | false | [] |
1,711,990,003 | 5,869 | Image Encoding Issue when submitting a Parquet Dataset | ### Describe the bug
Hello,
I'd like to report an issue related to pushing a dataset represented as a Parquet file to a dataset repository using Dask. Here are the details:
We attempted to load an example dataset in Parquet format from the Hugging Face (HF) filesystem using Dask with the following code snippet... | closed | https://github.com/huggingface/datasets/issues/5869 | 2023-05-16T09:42:58 | 2023-06-16T12:48:38 | 2023-06-16T09:30:48 | {
"login": "PhilippeMoussalli",
"id": 47530815,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,711,173,098 | 5,868 | Is it possible to change a cached file and 're-cache' it instead of re-generating? | ### Feature request
Hi,
I have a huge cached file using `map`(over 500GB), and I want to change an attribution of each element, is there possible to do it using some method instead of re-generating, because `map` takes over 24 hours
### Motivation
For large datasets, I think it is very important because we always f... | closed | https://github.com/huggingface/datasets/issues/5868 | 2023-05-16T03:45:42 | 2023-05-17T11:21:36 | 2023-05-17T11:21:36 | {
"login": "zyh3826",
"id": 31238754,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,710,656,067 | 5,867 | Add logic for hashing modules/functions optimized with `torch.compile` | Fix https://github.com/huggingface/datasets/issues/5839
PS: The `Pickler.save` method is becoming a bit messy, so I plan to refactor the pickler a bit at some point. | closed | https://github.com/huggingface/datasets/pull/5867 | 2023-05-15T19:03:35 | 2024-01-11T06:30:50 | 2023-11-27T20:03:31 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,710,496,993 | 5,866 | Issue with Sequence features | ### Describe the bug
Sequences features sometimes causes errors when the specified length is not -1
### Steps to reproduce the bug
```python
import numpy as np
from datasets import Features, ClassLabel, Sequence, Value, Dataset
feats = Features(**{'target': ClassLabel(names=[0, 1]),'x': Sequence(feature=Va... | closed | https://github.com/huggingface/datasets/issues/5866 | 2023-05-15T17:13:29 | 2023-05-26T11:57:17 | 2023-05-26T11:57:17 | {
"login": "alialamiidrissi",
"id": 14365168,
"type": "User"
} | [] | false | [] |
1,710,455,738 | 5,865 | Deprecate task api | The task API is not well adopted in the ecosystem, so this PR deprecates it. The `train_eval_index` is a newer, more flexible solution that should be used instead (I think?).
These are the projects that still use the task API :
* the image classification example in Transformers: [here](https://github.com/huggingfac... | closed | https://github.com/huggingface/datasets/pull/5865 | 2023-05-15T16:48:24 | 2023-07-10T12:33:59 | 2023-07-10T12:24:01 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,710,450,047 | 5,864 | Slow iteration over Torch tensors | ### Describe the bug
I have a problem related to this [issue](https://github.com/huggingface/datasets/issues/5841): I get a way slower iteration when using a Torch dataloader if I use vanilla Numpy tensors or if I first apply a ToTensor transform to the input. In particular, it takes 5 seconds to iterate over the vani... | open | https://github.com/huggingface/datasets/issues/5864 | 2023-05-15T16:43:58 | 2024-10-08T10:21:48 | null | {
"login": "crisostomi",
"id": 51738205,
"type": "User"
} | [] | false | [] |
1,710,335,905 | 5,863 | Use a new low-memory approach for tf dataset index shuffling | This PR tries out a new approach to generating the index tensor in `to_tf_dataset`, which should reduce memory usage for very large datasets. I'll need to do some testing before merging it!
Fixes #5855 | closed | https://github.com/huggingface/datasets/pull/5863 | 2023-05-15T15:28:34 | 2023-06-08T16:40:18 | 2023-06-08T16:32:51 | {
"login": "Rocketknight1",
"id": 12866554,
"type": "User"
} | [] | true | [] |
1,710,140,646 | 5,862 | IndexError: list index out of range with data hosted on Zenodo | The dataset viewer sometimes raises an `IndexError`:
```
IndexError: list index out of range
```
See:
- huggingface/datasets-server#1151
- https://huggingface.co/datasets/reddit/discussions/5
- huggingface/datasets-server#1118
- https://huggingface.co/datasets/krr-oxford/OntoLAMA/discussions/1
- https://hu... | open | https://github.com/huggingface/datasets/issues/5862 | 2023-05-15T13:47:19 | 2023-09-25T12:09:51 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,709,807,340 | 5,861 | Better error message when combining dataset dicts instead of datasets | close https://github.com/huggingface/datasets/issues/5851 | closed | https://github.com/huggingface/datasets/pull/5861 | 2023-05-15T10:36:24 | 2023-05-23T10:40:13 | 2023-05-23T10:32:58 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,709,727,460 | 5,860 | Minor tqdm optim | Don't create a tqdm progress bar when `disable_tqdm` is passed to `map_nested`.
On my side it sped up some iterable datasets by ~30% when `map_nested` is used extensively to recursively tensorize python dicts. | closed | https://github.com/huggingface/datasets/pull/5860 | 2023-05-15T09:49:37 | 2023-05-17T18:46:46 | 2023-05-17T18:39:35 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,709,554,829 | 5,859 | Raise TypeError when indexing a dataset with bool | Fix #5858. | closed | https://github.com/huggingface/datasets/pull/5859 | 2023-05-15T08:08:42 | 2023-05-25T16:31:24 | 2023-05-25T16:23:17 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,709,332,632 | 5,858 | Throw an error when dataset improperly indexed | ### Describe the bug
Pandas-style subset indexing on dataset does not throw an error, when maybe it should. Instead returns the first instance of the dataset regardless of index condition.
### Steps to reproduce the bug
Steps to reproduce the behavior:
1. `squad = datasets.load_dataset("squad_v2", split="validati... | closed | https://github.com/huggingface/datasets/issues/5858 | 2023-05-15T05:15:53 | 2023-05-25T16:23:19 | 2023-05-25T16:23:19 | {
"login": "sarahwie",
"id": 8027676,
"type": "User"
} | [] | false | [] |
1,709,326,622 | 5,857 | Adding chemistry dataset/models in huggingface | ### Feature request
Huggingface is really amazing platform for open science.
In addition to computer vision, video and NLP, would it be of interest to add chemistry/materials science dataset/models in Huggingface? Or, if its already done, can you provide some pointers.
We have been working on a comprehensive ben... | closed | https://github.com/huggingface/datasets/issues/5857 | 2023-05-15T05:09:49 | 2023-07-21T13:45:40 | 2023-07-21T13:45:40 | {
"login": "knc6",
"id": 16902896,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,709,218,242 | 5,856 | Error loading natural_questions | ### Describe the bug
When try to load natural_questions through datasets == 2.12.0 with python == 3.8.9:
```python
import datasets
datasets.load_dataset('natural_questions',beam_runner='DirectRunner')
```
It failed with following info:
`pyarrow.lib.ArrowNotImplementedError: Nested data conversions not impl... | closed | https://github.com/huggingface/datasets/issues/5856 | 2023-05-15T02:46:04 | 2023-06-05T09:11:19 | 2023-06-05T09:11:18 | {
"login": "Crownor",
"id": 19185508,
"type": "User"
} | [] | false | [] |
1,708,784,943 | 5,855 | `to_tf_dataset` consumes too much memory | ### Describe the bug
Hi, I'm using `to_tf_dataset` to convert a _large_ dataset to `tf.data.Dataset`. I observed that the data loading *before* training took a lot of time and memory, even with `batch_size=1`.
After some digging, i believe the reason lies in the shuffle behavior. The [source code](https://github.... | closed | https://github.com/huggingface/datasets/issues/5855 | 2023-05-14T01:22:29 | 2023-06-08T16:32:52 | 2023-06-08T16:32:52 | {
"login": "massquantity",
"id": 28751760,
"type": "User"
} | [] | false | [] |
1,708,779,300 | 5,854 | Can not load audiofolder dataset on kaggle | ### Describe the bug
It's crash log:
FileNotFoundError: Couldn't find a dataset script at /kaggle/working/audiofolder/audiofolder.py or any data file in the same directory. Couldn't find 'audiofolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingfac... | closed | https://github.com/huggingface/datasets/issues/5854 | 2023-05-14T00:50:47 | 2023-08-16T13:35:36 | 2023-07-21T13:53:45 | {
"login": "ILG2021",
"id": 93691919,
"type": "User"
} | [] | false | [] |
1,708,092,786 | 5,853 | [docs] Redirects, migrated from nginx | null | closed | https://github.com/huggingface/datasets/pull/5853 | 2023-05-12T19:19:27 | 2023-05-15T10:37:19 | 2023-05-15T10:30:14 | {
"login": "julien-c",
"id": 326577,
"type": "User"
} | [] | true | [] |
1,707,927,165 | 5,852 | Iterable torch formatting | Used the TorchFormatter to get torch tensors in iterable dataset with format set to "torch".
It uses the data from Arrow if possible, otherwise applies recursive_tensorize.
When set back to format_type=None, cast_to_python_objects is used.
requires https://github.com/huggingface/datasets/pull/5821
close htt... | closed | https://github.com/huggingface/datasets/pull/5852 | 2023-05-12T16:48:49 | 2023-06-13T16:04:05 | 2023-06-13T15:57:05 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,707,678,911 | 5,850 | Make packaged builders skip non-supported file formats | This PR makes packaged builders skip non-supported file formats:
- Csv builder skips non-CSV files
- Analogously for the other builders
Fix #5849. | open | https://github.com/huggingface/datasets/pull/5850 | 2023-05-12T13:52:34 | 2023-06-07T12:26:38 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,707,551,511 | 5,849 | CSV datasets should only read the CSV data files in the repo | When a no-script dataset has many CSV files and a JPG file, the library infers to use the Csv builder, but tries to read as CSV all files in the repo, also the JPG file.
I think the Csv builder should filter out non-CSV files when reading.
An analogue solution should be implemented for other packaged builders.
... | closed | https://github.com/huggingface/datasets/issues/5849 | 2023-05-12T12:29:53 | 2023-06-22T14:16:27 | 2023-06-22T14:16:27 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,707,506,734 | 5,848 | Add `accelerate` as metric's test dependency to fix CI error | The `frugalscore` metric uses Transformers' Trainer, which requires `accelerate` (as of recently).
Fixes the following [CI error](https://github.com/huggingface/datasets/actions/runs/4950900048/jobs/8855148703?pr=5845). | closed | https://github.com/huggingface/datasets/pull/5848 | 2023-05-12T12:01:01 | 2023-05-12T13:48:47 | 2023-05-12T13:39:06 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,706,616,634 | 5,847 | Streaming IterableDataset not working with translation pipeline | ### Describe the bug
I'm trying to use a streaming dataset for translation inference to avoid downloading the training data.
I'm using a pipeline and a dataset, and following the guidance in the tutorial.
Instead I get an exception that IterableDataset has no len().
### Steps to reproduce the bug
CODE:
```
... | open | https://github.com/huggingface/datasets/issues/5847 | 2023-05-11T21:52:38 | 2023-05-16T15:59:55 | null | {
"login": "jlquinn",
"id": 826841,
"type": "User"
} | [] | false | [] |
1,707,907,048 | 5,851 | Error message not clear in interleaving datasets | ### System Info
standard env
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm tr... | closed | https://github.com/huggingface/datasets/issues/5851 | 2023-05-11T20:52:13 | 2023-05-23T10:32:59 | 2023-05-23T10:32:59 | {
"login": "surya-narayanan",
"id": 17240858,
"type": "User"
} | [] | false | [] |
1,706,289,290 | 5,846 | load_dataset('bigcode/the-stack-dedup', streaming=True) very slow! | ### Describe the bug
Running
```
import datasets
ds = datasets.load_dataset('bigcode/the-stack-dedup', streaming=True)
```
takes about 2.5 minutes!
I would expect this to be near instantaneous. With other datasets, the runtime is one or two seconds.
### Environment info
- `datasets` version: 2.1... | closed | https://github.com/huggingface/datasets/issues/5846 | 2023-05-11T17:58:57 | 2024-04-08T12:53:17 | 2024-04-05T12:28:58 | {
"login": "tbenthompson",
"id": 4241811,
"type": "User"
} | [] | false | [] |
1,706,253,251 | 5,845 | Add `date_format` param to the CSV reader | Adds the `date_format` param introduced in Pandas 2.0 to the CSV reader and improves its type hints. | closed | https://github.com/huggingface/datasets/pull/5845 | 2023-05-11T17:29:57 | 2023-05-15T07:39:13 | 2023-05-12T15:14:48 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,705,907,812 | 5,844 | TypeError: Couldn't cast array of type struct<answer: struct<unanswerable: bool, answerType: string, free_form_answer: string, evidence: list<item: string>, evidenceAnnotate: list<item: string>, highlighted_evidence: list<item: string>>> to ... | ### Describe the bug
TypeError: Couldn't cast array of type struct<answer: struct<unanswerable: bool, answerType: string, free_form_answer: string, evidence: list<item: string>, evidenceAnnotate: list<item: string>, highlighted_evidence: list<item: string>>> to {'answer': {'unanswerable': Value(dtype='bool', id=None),... | open | https://github.com/huggingface/datasets/issues/5844 | 2023-05-11T14:15:01 | 2023-05-11T14:15:01 | null | {
"login": "chen-coding",
"id": 54010030,
"type": "User"
} | [] | false | [] |
1,705,286,639 | 5,841 | Abusurdly slow on iteration | ### Describe the bug
I am attempting to iterate through an image dataset, but I am encountering a significant slowdown in the iteration speed. In order to investigate this issue, I conducted the following experiment:
```python
a=torch.randn(100,224)
a=torch.stack([a] * 10000)
a.shape
# %%
ds=Dataset.from_d... | closed | https://github.com/huggingface/datasets/issues/5841 | 2023-05-11T08:04:09 | 2023-05-15T15:38:13 | 2023-05-15T15:38:13 | {
"login": "fecet",
"id": 41792945,
"type": "User"
} | [] | false | [] |
1,705,212,085 | 5,840 | load model error. | ### Describe the bug
I had trained one model use deepspeed, when I load the final load I get the follow error:
OSError: Can't load tokenizer for '/XXX/DeepSpeedExamples/applications/DeepSpeed-Chat/output/step3-models/1.3b/actor'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't ... | closed | https://github.com/huggingface/datasets/issues/5840 | 2023-05-11T07:12:38 | 2023-05-12T13:44:07 | 2023-05-12T13:44:06 | {
"login": "LanShanPi",
"id": 58167546,
"type": "User"
} | [] | false | [] |
1,705,510,602 | 5,842 | Remove columns in interable dataset | ### Feature request
Right now, remove_columns() produces a NotImplementedError for iterable style datasets
### Motivation
It would be great to have the same functionality irrespective of whether one is using an iterable or a map-style dataset
### Your contribution
hope and courage. | closed | https://github.com/huggingface/datasets/issues/5842 | 2023-05-11T03:48:46 | 2023-06-21T16:36:42 | 2023-06-21T16:36:41 | {
"login": "surya-narayanan",
"id": 17240858,
"type": "User"
} | [] | false | [] |
1,705,514,551 | 5,843 | Can't add iterable datasets to a Dataset Dict. | ### System Info
standard env
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Get th... | closed | https://github.com/huggingface/datasets/issues/5843 | 2023-05-11T02:09:29 | 2023-05-25T04:51:59 | 2023-05-25T04:51:59 | {
"login": "surya-narayanan",
"id": 17240858,
"type": "User"
} | [] | false | [] |
1,704,554,718 | 5,839 | Make models/functions optimized with `torch.compile` hashable | As reported in https://github.com/huggingface/datasets/issues/5819, hashing functions/transforms that reference a model, or a function, optimized with `torch.compile` currently fails due to them not being picklable (the concrete error can be found in the linked issue).
The solutions to consider:
1. hashing/pickling... | closed | https://github.com/huggingface/datasets/issues/5839 | 2023-05-10T20:02:08 | 2023-11-28T16:29:33 | 2023-11-28T16:29:33 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,703,210,848 | 5,838 | Streaming support for `load_from_disk` | ### Feature request
Support for streaming datasets stored in object stores in `load_from_disk`.
### Motivation
The `load_from_disk` function supports fetching datasets stored in object stores such as `s3`. In many cases, the datasets that are stored in object stores are very large and being able to stream the data ... | closed | https://github.com/huggingface/datasets/issues/5838 | 2023-05-10T06:25:22 | 2024-10-28T14:19:44 | 2023-05-12T09:37:45 | {
"login": "Nilabhra",
"id": 5437792,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,703,019,816 | 5,837 | Use DeepSpeed load myself " .csv " dataset. | ### Describe the bug
When I use DeepSpeed train a model with my own " XXX.csv" dataset I got the follow question:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py", line 1767, in load_dataset
builder_instan... | open | https://github.com/huggingface/datasets/issues/5837 | 2023-05-10T02:39:28 | 2023-05-15T03:51:36 | null | {
"login": "LanShanPi",
"id": 58167546,
"type": "User"
} | [] | false | [] |
1,702,773,316 | 5,836 | [docs] Custom decoding transforms | Adds custom decoding transform solution to the docs to fix #5782. | closed | https://github.com/huggingface/datasets/pull/5836 | 2023-05-09T21:21:41 | 2023-05-15T07:36:12 | 2023-05-10T20:23:03 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [] | true | [] |
1,702,522,620 | 5,835 | Always set nullable fields in the writer | This fixes loading of e.g. parquet data with non-nullable fields.
Indeed `datasets.Features` doesn't support non-nullable fields, which can lead to data not concatenable due to arrow schema mismatch. | closed | https://github.com/huggingface/datasets/pull/5835 | 2023-05-09T18:16:59 | 2023-05-23T16:10:29 | 2023-05-19T13:04:30 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,702,448,892 | 5,834 | Is uint8 supported? | ### Describe the bug
I expect the dataset to store the data in the `uint8` data type, but it's returning `int64` instead.
While I've found that `datasets` doesn't yet support float16 (https://github.com/huggingface/datasets/issues/4981), I'm wondering if this is the case for other data types as well.
Is there a way ... | closed | https://github.com/huggingface/datasets/issues/5834 | 2023-05-09T17:31:13 | 2023-05-13T05:04:21 | 2023-05-13T05:04:21 | {
"login": "ryokan0123",
"id": 17979572,
"type": "User"
} | [] | false | [] |
1,702,280,682 | 5,833 | Unable to push dataset - `create_pr` problem | ### Describe the bug
I can't upload to the hub the dataset I manually created locally (Image dataset). I have a problem when using the method `.push_to_hub` which asks for a `create_pr` attribute which is not compatible.
### Steps to reproduce the bug
here what I have:
```python
dataset.push_to_hub("agomberto/Fr... | closed | https://github.com/huggingface/datasets/issues/5833 | 2023-05-09T15:32:55 | 2023-10-24T18:22:29 | 2023-10-24T18:22:29 | {
"login": "agombert",
"id": 17645711,
"type": "User"
} | [] | false | [] |
1,702,135,336 | 5,832 | 404 Client Error: Not Found for url: https://huggingface.co/api/models/bert-large-cased | ### Describe the bug
Running [Bert-Large-Cased](https://huggingface.co/bert-large-cased) model causes `HTTPError`, with the following traceback-
```
HTTPError Traceback (most recent call last)
<ipython-input-6-5c580443a1ad> in <module>
----> 1 tokenizer = BertTokenizer.from_pretra... | closed | https://github.com/huggingface/datasets/issues/5832 | 2023-05-09T14:14:59 | 2023-05-09T14:25:59 | 2023-05-09T14:25:59 | {
"login": "varungupta31",
"id": 51288316,
"type": "User"
} | [] | false | [] |
1,701,813,835 | 5,831 | [Bug]504 Server Error when loading dataset which was already cached | ### Describe the bug
I have already cached the dataset using:
```
dataset = load_dataset("databricks/databricks-dolly-15k",
cache_dir="/mnt/data/llm/datasets/databricks-dolly-15k")
```
After that, I tried to load it again using the same machine, I got this error:
```
Traceback (most rece... | open | https://github.com/huggingface/datasets/issues/5831 | 2023-05-09T10:31:07 | 2023-05-10T01:48:20 | null | {
"login": "SingL3",
"id": 20473466,
"type": "User"
} | [] | false | [] |
1,701,451,399 | 5,830 | Debug windows #2 | null | closed | https://github.com/huggingface/datasets/pull/5830 | 2023-05-09T06:40:34 | 2023-05-09T06:40:47 | 2023-05-09T06:40:47 | {
"login": "HyukjinKwon",
"id": 6477701,
"type": "User"
} | [] | true | [] |
1,699,958,189 | 5,829 | (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64')) | ### Describe the bug
M2 MBP can't run
```python
from datasets import load_dataset
jazzy = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision='v1.2-jazzy')
```
### Steps to reproduce the bug
1. Use M2 MBP
2. Python 3.10.10 from pyenv
3. Run
```
from datasets import load_dataset
jazzy = load_... | closed | https://github.com/huggingface/datasets/issues/5829 | 2023-05-08T10:07:14 | 2023-06-30T11:39:14 | 2023-05-09T00:46:42 | {
"login": "elcolie",
"id": 18206728,
"type": "User"
} | [] | false | [] |