id | number | title | body | state | html_url | created_at | updated_at | closed_at | user | labels | is_pull_request | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,844,991,583 | 6,139 | Offline dataset viewer | ### Feature request
The dataset viewer feature is very nice. It enables the user to easily view the dataset. However, when working for private companies we cannot always upload the dataset to the Hub. Is there a way to create a dataset viewer offline? I.e. to run code that will open some kind of html or something t... | closed | https://github.com/huggingface/datasets/issues/6139 | 2023-08-10T11:30:00 | 2024-09-24T18:36:35 | 2023-09-29T13:10:22 | {
"login": "yuvalkirstain",
"id": 57996478,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
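A low-tech workaround for the offline-viewer request above, sketched with only the standard library: dump a sample of rows (assumed here to be plain dicts) to a static HTML table and open it in a browser. This is not part of `datasets`; it is just an illustrative stand-in for a local viewer.

```python
import html

def rows_to_html(rows, limit=100):
    """Render a list of dict rows as a minimal static HTML table."""
    if not rows:
        return "<p>empty dataset</p>"
    cols = list(rows[0])
    head = "".join(f"<th>{html.escape(str(c))}</th>" for c in cols)
    body = "".join(
        "<tr>" + "".join(f"<td>{html.escape(str(r.get(c, '')))}</td>" for c in cols) + "</tr>"
        for r in rows[:limit]
    )
    return f"<table><thead><tr>{head}</tr></thead><tbody>{body}</tbody></table>"

# The rows could come from any list of dicts (e.g. a dataset sample).
preview = rows_to_html([{"text": "hello", "label": 0}, {"text": "world", "label": 1}])
# with open("preview.html", "w") as f: f.write(preview)
```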
1,844,952,496 | 6,138 | Ignore CI lint rule violation in Pickler.memoize | This PR ignores the violation of the lint rule E721 in `Pickler.memoize`.
The lint rule violation was introduced in this PR:
- #3182
@lhoestq is there a reason you did not use `isinstance` instead?
As a hotfix, we just ignore the violation of the lint rule.
Fix #6136. | closed | https://github.com/huggingface/datasets/pull/6138 | 2023-08-10T11:03:15 | 2023-08-10T11:31:45 | 2023-08-10T11:22:56 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,844,952,312 | 6,137 | (`from_spark()`) Unable to connect HDFS in pyspark YARN setting | ### Describe the bug
related issue: https://github.com/apache/arrow/issues/37057#issue-1841013613
---
Hello. I'm trying to interact with HDFS storage from the driver and workers of a pyspark YARN cluster. Precisely, I'm using **huggingface's `datasets`** ([link](https://github.com/huggingface/datasets)) library tha... | open | https://github.com/huggingface/datasets/issues/6137 | 2023-08-10T11:03:08 | 2023-08-10T11:03:08 | null | {
"login": "kyoungrok0517",
"id": 1051900,
"type": "User"
} | [] | false | [] |
1,844,887,866 | 6,136 | CI check_code_quality error: E721 Do not compare types, use `isinstance()` | After latest release of `ruff` (https://pypi.org/project/ruff/0.0.284/), we get the following CI error:
```
src/datasets/utils/py_utils.py:689:12: E721 Do not compare types, use `isinstance()`
``` | closed | https://github.com/huggingface/datasets/issues/6136 | 2023-08-10T10:19:50 | 2023-08-10T11:22:58 | 2023-08-10T11:22:58 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "maintenance",
"color": "d4c5f9"
}
] | false | [] |
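For context on the rule this entry deals with, E721 flags direct comparison of type objects. A minimal illustration of the flagged pattern and the two lint-clean alternatives (`type(...) is ...` keeps exact-type semantics; `isinstance` also accepts subclasses, which may or may not be what `Pickler.memoize` wants, since the hotfix only silences the rule):

```python
class MyInt(int):
    pass

x = MyInt(5)

# The pattern ruff's E721 flags: comparing type objects with ==
print(type(x) == int)          # False: MyInt is not exactly int

# Lint-clean, same exact-type semantics:
print(type(x) is int)          # False

# isinstance also accepts subclasses:
print(isinstance(x, int))      # True
```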
1,844,870,943 | 6,135 | Remove unused allowed_extensions param | This PR removes unused `allowed_extensions` parameter from `create_builder_configs_from_metadata_configs`. | closed | https://github.com/huggingface/datasets/pull/6135 | 2023-08-10T10:09:54 | 2023-08-10T12:08:38 | 2023-08-10T12:00:02 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,844,535,142 | 6,134 | `datasets` cannot be installed alongside `apache-beam` | ### Describe the bug
If one installs `apache-beam` alongside `datasets` (which is required for the [wikipedia](https://huggingface.co/datasets/wikipedia#dataset-summary) dataset) in certain environments (such as a Google Colab notebook), they appear to install successfully, however, actually trying to do something s... | closed | https://github.com/huggingface/datasets/issues/6134 | 2023-08-10T06:54:32 | 2023-09-01T03:19:49 | 2023-08-10T15:22:10 | {
"login": "boyleconnor",
"id": 6520892,
"type": "User"
} | [] | false | [] |
1,844,511,519 | 6,133 | Dataset is slower after calling `to_iterable_dataset` | ### Describe the bug
Can anyone explain why looping over a dataset becomes slower after calling `to_iterable_dataset` to convert it to an `IterableDataset`?
### Steps to reproduce the bug
Any dataset after converting to `IterableDataset`
### Expected behavior
Maybe it should be faster on a big dataset? I only tested on small... | open | https://github.com/huggingface/datasets/issues/6133 | 2023-08-10T06:36:23 | 2023-08-16T09:18:54 | null | {
"login": "npuichigo",
"id": 11533479,
"type": "User"
} | [] | false | [] |
1,843,491,020 | 6,132 | to_iterable_dataset is missing in document | ### Describe the bug
to_iterable_dataset is missing in document
### Steps to reproduce the bug
to_iterable_dataset is missing in document
### Expected behavior
document enhancement
### Environment info
unrelated | closed | https://github.com/huggingface/datasets/issues/6132 | 2023-08-09T15:15:03 | 2023-08-16T04:43:36 | 2023-08-16T04:43:29 | {
"login": "npuichigo",
"id": 11533479,
"type": "User"
} | [] | false | [] |
1,843,158,846 | 6,130 | default config name doesn't work when config kwargs are specified. | ### Describe the bug
https://github.com/huggingface/datasets/blob/12cfc1196e62847e2e8239fbd727a02cbc86ddec/src/datasets/builder.py#L518-L522
If `config_name` is `None`, `DEFAULT_CONFIG_NAME` should be selected. But once users pass `config_kwargs` to their customized `BuilderConfig`, the logic is ignored, and dataset ... | closed | https://github.com/huggingface/datasets/issues/6130 | 2023-08-09T12:43:15 | 2023-11-22T11:50:49 | 2023-11-22T11:50:48 | {
"login": "npuichigo",
"id": 11533479,
"type": "User"
} | [] | false | [] |
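The selection logic described in this report can be sketched in isolation (a simplified stand-in with a hypothetical helper, not the actual `DatasetBuilder` code); the point is that the default config name should still apply when only `config_kwargs` are passed:

```python
def pick_config(config_name, config_kwargs, default_name, configs):
    """Simplified sketch of builder-config selection (hypothetical helper).

    The reported bug amounts to skipping the default-name fallback whenever
    config_kwargs is non-empty; the fallback below applies it unconditionally.
    """
    if config_name is None:
        config_name = default_name  # apply the default even with kwargs
    return {**configs[config_name], **config_kwargs}

configs = {"small": {"size": 10}, "large": {"size": 1000}}
print(pick_config(None, {"shuffle": True}, "small", configs))
# {'size': 10, 'shuffle': True}
```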
1,841,563,517 | 6,129 | Release 2.14.4 | null | closed | https://github.com/huggingface/datasets/pull/6129 | 2023-08-08T15:43:56 | 2023-08-08T16:08:22 | 2023-08-08T15:49:06 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,841,545,493 | 6,128 | IndexError: Invalid key: 88 is out of bounds for size 0 | ### Describe the bug
This bug occurs when I use torch.compile(model) in my code, which seems to trigger an error in the datasets lib.
### Steps to reproduce the bug
I use the following code to fine-tune Falcon on my private dataset.
```python
import transformers
from transformers import (
AutoModelForCausalLM... | closed | https://github.com/huggingface/datasets/issues/6128 | 2023-08-08T15:32:08 | 2023-12-26T07:51:57 | 2023-08-11T13:35:09 | {
"login": "TomasAndersonFang",
"id": 38727343,
"type": "User"
} | [] | false | [] |
1,839,746,721 | 6,127 | Fix authentication issues | This PR fixes 3 authentication issues:
- Fix authentication when passing `token`.
- Fix authentication in `Audio.decode_example` and `Image.decode_example`.
- Fix authentication to resolve `data_files` in repositories without script.
This PR also fixes our CI so that we properly test when passing `token` and we d... | closed | https://github.com/huggingface/datasets/pull/6127 | 2023-08-07T15:41:25 | 2023-08-08T15:24:59 | 2023-08-08T15:16:22 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,839,675,320 | 6,126 | Private datasets do not load when passing token | ### Describe the bug
Since the release of `datasets` 2.14, private/gated datasets do not load when passing `token`: they raise `EmptyDatasetError`.
This is an unplanned, backward-incompatible breaking change.
Note that private datasets do load if instead `download_config` is passed:
```python
from datasets i... | closed | https://github.com/huggingface/datasets/issues/6126 | 2023-08-07T15:06:47 | 2023-08-08T15:16:23 | 2023-08-08T15:16:23 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,837,980,986 | 6,125 | Reinforcement Learning and Robotics are not task categories in HF datasets metadata | ### Describe the bug
In https://huggingface.co/models there are task categories for RL and robotics but none in https://huggingface.co/datasets
Our lab is currently moving our datasets over to Hugging Face and would like to be able to add those two tags.
Moreover we see some older datasets that do have that tag, bu... | closed | https://github.com/huggingface/datasets/issues/6125 | 2023-08-05T23:59:42 | 2023-08-18T12:28:42 | 2023-08-18T12:28:42 | {
"login": "StoneT2000",
"id": 35373228,
"type": "User"
} | [] | false | [] |
1,837,868,112 | 6,124 | Datasets crashing runs due to KeyError | ### Describe the bug
Hi all,
I have been running into a pretty persistent issue recently when trying to load datasets.
```python
train_dataset = load_dataset(
'llama-2-7b-tokenized',
split = 'train'
)
```
I receive a KeyError which crashes the runs.
```
Traceback (most recent call... | closed | https://github.com/huggingface/datasets/issues/6124 | 2023-08-05T17:48:56 | 2023-11-30T16:28:57 | 2023-11-30T16:28:57 | {
"login": "conceptofmind",
"id": 25208228,
"type": "User"
} | [] | false | [] |
1,837,789,294 | 6,123 | Inaccurate Bounding Boxes in "wildreceipt" Dataset | ### Describe the bug
I would like to bring to your attention an issue related to the accuracy of bounding boxes within the "wildreceipt" dataset, which is made available through the Hugging Face API. Specifically, I have identified a discrepancy between the bounding boxes generated by the dataset loading commands, n... | closed | https://github.com/huggingface/datasets/issues/6123 | 2023-08-05T14:34:13 | 2023-08-17T14:25:27 | 2023-08-17T14:25:26 | {
"login": "HamzaGbada",
"id": 50714796,
"type": "User"
} | [] | false | [] |
1,837,335,721 | 6,122 | Upload README via `push_to_hub` | ### Feature request
`push_to_hub` now allows users to upload datasets programmatically. However, based on the latest doc, we still need to open the dataset page to add a README file manually.
That said, I did discover snippets to initialize a README for every `push_to_hub`:
```
dataset_card = (
DatasetCard(
... | closed | https://github.com/huggingface/datasets/issues/6122 | 2023-08-04T21:00:27 | 2023-08-21T18:18:54 | 2023-08-21T18:18:54 | {
"login": "liyucheng09",
"id": 27999909,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,836,761,712 | 6,121 | Small typo in the code example of create imagefolder dataset | Fix typo in the code example of loading an imagefolder dataset | closed | https://github.com/huggingface/datasets/pull/6121 | 2023-08-04T13:36:59 | 2023-08-04T13:45:32 | 2023-08-04T13:41:43 | {
"login": "WangXin93",
"id": 19688994,
"type": "User"
} | [] | true | [] |
1,836,026,938 | 6,120 | Lookahead streaming support? | ### Feature request
From what I understand, a streaming dataset currently pulls the data and processes it as it is requested.
This can introduce significant latency when data is loaded into the training process, as it must wait for each segment.
While the delays might be dataset specific (or even mappi... | open | https://github.com/huggingface/datasets/issues/6120 | 2023-08-04T04:01:52 | 2023-08-17T17:48:42 | null | {
"login": "PicoCreator",
"id": 17175484,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
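The lookahead idea in this request can be prototyped generically with a bounded queue filled by a background thread; this sketch is independent of the `datasets` API, with the queue size playing the role of the proposed lookahead buffer:

```python
import queue
import threading

def prefetch(iterable, lookahead=8):
    """Yield from `iterable` while a background thread fetches ahead."""
    buf = queue.Queue(maxsize=lookahead)  # the lookahead buffer
    _end = object()

    def producer():
        for item in iterable:
            buf.put(item)  # blocks once `lookahead` items are waiting
        buf.put(_end)

    threading.Thread(target=producer, daemon=True).start()
    while (item := buf.get()) is not _end:
        yield item

# Wrap any (possibly slow, e.g. streaming) iterator:
print(list(prefetch(range(5))))  # [0, 1, 2, 3, 4]
```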
1,835,996,350 | 6,119 | [Docs] Add description of `select_columns` to guide | Closes #6116 | closed | https://github.com/huggingface/datasets/pull/6119 | 2023-08-04T03:13:30 | 2023-08-16T10:13:02 | 2023-08-16T10:02:52 | {
"login": "unifyh",
"id": 18213435,
"type": "User"
} | [] | true | [] |
1,835,940,417 | 6,118 | IterableDataset.from_generator() fails with pickle error when provided a generator or iterator | ### Describe the bug
**Description**
IterableDataset.from_generator() fails with `TypeError: cannot pickle 'generator' object` when its generator argument is supplied with a generator object.
**Code example**
```
def line_generator(files: List[Path]):
if isinstance(f... | open | https://github.com/huggingface/datasets/issues/6118 | 2023-08-04T01:45:04 | 2024-12-18T18:30:57 | null | {
"login": "finkga",
"id": 1281051,
"type": "User"
} | [] | false | [] |
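The pickling limitation reported here is Python's own: generator objects can never be pickled. A minimal reproduction, plus the usual workaround pattern of handing over a generator *function* to be called later (which, as I read the `from_generator` docs, is what the API expects; treat that reading as an assumption):

```python
import pickle

def line_gen():
    yield "a"
    yield "b"

try:
    pickle.dumps(line_gen())  # a generator *object*: never picklable
except TypeError as err:
    print(err)                # cannot pickle 'generator' object

# Workaround pattern: hand over the callable (plus its args) instead,
# and let each consumer call it to build a fresh generator.
factory = line_gen
assert list(factory()) == ["a", "b"]
```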
1,835,213,848 | 6,117 | Set dev version | null | closed | https://github.com/huggingface/datasets/pull/6117 | 2023-08-03T14:46:04 | 2023-08-03T14:56:59 | 2023-08-03T14:46:18 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,835,098,484 | 6,116 | [Docs] The "Process" how-to guide lacks description of `select_columns` function | ### Feature request
The [how to process dataset guide](https://huggingface.co/docs/datasets/main/en/process) currently does not mention the [`select_columns`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.select_columns) function. It would be nice to include it in the gui... | closed | https://github.com/huggingface/datasets/issues/6116 | 2023-08-03T13:45:10 | 2023-08-16T10:02:53 | 2023-08-16T10:02:53 | {
"login": "unifyh",
"id": 18213435,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,834,765,485 | 6,115 | Release: 2.14.3 | null | closed | https://github.com/huggingface/datasets/pull/6115 | 2023-08-03T10:18:32 | 2023-08-03T15:08:02 | 2023-08-03T10:24:57 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,834,015,584 | 6,114 | Cache not being used when loading commonvoice 8.0.0 | ### Describe the bug
I have commonvoice 8.0.0 downloaded in `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a`. The folder contains all the arrow files etc, and was used as the cached version last time I touched the ec2 ins... | closed | https://github.com/huggingface/datasets/issues/6114 | 2023-08-02T23:18:11 | 2023-08-18T23:59:00 | 2023-08-18T23:59:00 | {
"login": "clabornd",
"id": 31082141,
"type": "User"
} | [] | false | [] |
1,833,854,030 | 6,113 | load_dataset() fails with streamlit caching inside docker | ### Describe the bug
When calling `load_dataset` in a streamlit application running within a docker container, I get a failure with the error message:
EmptyDatasetError: The directory at hf://datasets/fetch-rewards/inc-rings-2000@bea27cf60842b3641eae418f38864a2ec4cde684 doesn't contain any data files
Traceback:
Fil... | closed | https://github.com/huggingface/datasets/issues/6113 | 2023-08-02T20:20:26 | 2023-08-21T18:18:27 | 2023-08-21T18:18:27 | {
"login": "fierval",
"id": 987574,
"type": "User"
} | [] | false | [] |
1,833,693,299 | 6,112 | yaml error using push_to_hub with generated README.md | ### Describe the bug
When I construct a dataset with the following features:
```
features = Features(
{
"pixel_values": Array3D(dtype="float64", shape=(3, 224, 224)),
"input_ids": Sequence(feature=Value(dtype="int64")),
"attention_mask": Sequence(Value(dtype="int64")),
"token... | closed | https://github.com/huggingface/datasets/issues/6112 | 2023-08-02T18:21:21 | 2023-12-12T15:00:44 | 2023-12-12T15:00:44 | {
"login": "kevintee",
"id": 1643887,
"type": "User"
} | [] | false | [] |
1,832,781,654 | 6,111 | raise FileNotFoundError("Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory." ) | ### Describe the bug
For researchers in some countries or regions, it is usually the case that the download ability of `load_dataset` is disabled due to the complex network environment. People in these regions often prefer to use git clone or other programming tricks to manually download the files to the disk (for exa... | closed | https://github.com/huggingface/datasets/issues/6111 | 2023-08-02T09:17:29 | 2023-08-29T02:00:28 | 2023-08-29T02:00:28 | {
"login": "2catycm",
"id": 41530341,
"type": "User"
} | [] | false | [] |
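For readers hitting this error: `load_from_disk` expects a directory produced by `save_to_disk`, not a raw `git clone` of a Hub repo. A hedged sketch of the kind of marker-file check involved (the file names reflect my reading of the library and may differ across versions):

```python
import json
import tempfile
from pathlib import Path

def looks_like_saved_dataset(path):
    """Heuristic mirror of load_from_disk's directory check (not the real code)."""
    p = Path(path)
    if (p / "dataset_info.json").is_file() and (p / "state.json").is_file():
        return "Dataset"
    if (p / "dataset_dict.json").is_file():
        return "DatasetDict"
    return None

with tempfile.TemporaryDirectory() as d:
    print(looks_like_saved_dataset(d))  # None: a plain repo clone won't pass
    (Path(d) / "dataset_dict.json").write_text(json.dumps({"splits": ["train"]}))
    print(looks_like_saved_dataset(d))  # DatasetDict
```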
1,831,110,633 | 6,110 | [BUG] Dataset initialized from in-memory data does not create cache. | ### Describe the bug
`Dataset` initialized from in-memory data (a dictionary in my case; I haven't tested other types) does not create a cache when processed with the `map` method, unlike a `Dataset` initialized by other methods such as `load_dataset`.
### Steps to reproduce the bug
```python
# below code was ru... | closed | https://github.com/huggingface/datasets/issues/6110 | 2023-08-01T11:58:58 | 2023-08-17T14:03:01 | 2023-08-17T14:03:00 | {
"login": "MattYoon",
"id": 57797966,
"type": "User"
} | [] | false | [] |
1,830,753,793 | 6,109 | Problems in downloading Amazon reviews from HF | ### Describe the bug
I have a script downloading `amazon_reviews_multi`.
When the download starts, I get
```
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]
Downloading data: 243B [00:00, 1.43MB/s]
Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.54s/it]
Extracting data files: 100%... | closed | https://github.com/huggingface/datasets/issues/6109 | 2023-08-01T08:38:29 | 2025-07-18T17:47:30 | 2023-08-02T07:12:07 | {
"login": "610v4nn1",
"id": 52964960,
"type": "User"
} | [] | false | [] |
1,830,347,187 | 6,108 | Loading local datasets got strangely stuck | ### Describe the bug
I try to use `load_dataset()` to load several local `.jsonl` files as a dataset. Every line of these files is a json structure containing only one key, `text` (yeah, it is a dataset for an NLP model). The code snippet is:
```python
ds = load_dataset("json", data_files=LIST_OF_FILE_PATHS, num_proc=... | open | https://github.com/huggingface/datasets/issues/6108 | 2023-08-01T02:28:06 | 2024-12-31T16:01:00 | null | {
"login": "LoveCatc",
"id": 48412571,
"type": "User"
} | [] | false | [] |
1,829,625,320 | 6,107 | Fix deprecation of use_auth_token in file_utils | Fix issues with the deprecation of `use_auth_token` introduced by:
- #5996
in functions:
- `get_authentication_headers_for_url`
- `request_etag`
- `get_from_cache`
Currently, `TypeError` is raised: https://github.com/huggingface/datasets-server/actions/runs/5711650666/job/15484685570?pr=1588
```
FAILED tes... | closed | https://github.com/huggingface/datasets/pull/6107 | 2023-07-31T16:32:01 | 2023-08-03T10:13:32 | 2023-08-03T10:04:18 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,829,131,223 | 6,106 | load local json_file as dataset | ### Describe the bug
I tried to load a local json file as a dataset but failed to parse the json file because some columns are 'float' type.
### Steps to reproduce the bug
1. Load a json file in which certain columns are 'float' type. For example `data = load_dataset("json", data_files=JSON_PATH)`
2. Then, the error will be trigg... | closed | https://github.com/huggingface/datasets/issues/6106 | 2023-07-31T12:53:49 | 2023-08-18T01:46:35 | 2023-08-18T01:46:35 | {
"login": "CiaoHe",
"id": 39040787,
"type": "User"
} | [] | false | [] |
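One plausible source of such failures (an assumption on my part, since the traceback is truncated) is per-column type inference: Arrow infers one type per column, and a column first seen as integers can later collide with floats. A toy sketch of the int-to-float promotion a reader might expect:

```python
import json

def infer_column_type(values):
    """Toy schema unification: promote mixed int/float columns to float."""
    types = {type(v) for v in values if v is not None}
    if types <= {int}:
        return int
    if types <= {int, float}:
        return float
    return object  # mixed beyond numbers: no clean promotion

rows = [json.loads(s) for s in ['{"score": 1}', '{"score": 2.5}']]
print(infer_column_type([r["score"] for r in rows]))  # <class 'float'>
```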
1,829,008,430 | 6,105 | Fix error when loading from GCP bucket | Fix `resolve_pattern` for filesystems with tuple protocol.
Fix #6100.
The bug code lines were introduced by:
- #6028 | closed | https://github.com/huggingface/datasets/pull/6105 | 2023-07-31T11:44:46 | 2023-08-01T10:48:52 | 2023-08-01T10:38:54 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,828,959,107 | 6,104 | HF Datasets data access is extremely slow even when in memory | ### Describe the bug
Doing a simple `some_dataset[:10]` can take more than a minute.
Profiling it:
<img width="1280" alt="image" src="https://github.com/huggingface/datasets/assets/36224762/e641fb95-ff02-4072-9016-5416a65f75ab">
`some_dataset` is completely in memory with no disk cache.
This is proving fat... | open | https://github.com/huggingface/datasets/issues/6104 | 2023-07-31T11:12:19 | 2023-08-01T11:22:43 | null | {
"login": "NightMachinery",
"id": 36224762,
"type": "User"
} | [] | false | [] |
1,828,515,165 | 6,103 | Set dev version | null | closed | https://github.com/huggingface/datasets/pull/6103 | 2023-07-31T06:44:05 | 2023-07-31T06:55:58 | 2023-07-31T06:45:41 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,828,494,896 | 6,102 | Release 2.14.2 | null | closed | https://github.com/huggingface/datasets/pull/6102 | 2023-07-31T06:27:47 | 2023-07-31T06:48:09 | 2023-07-31T06:32:58 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,828,469,648 | 6,101 | Release 2.14.2 | null | closed | https://github.com/huggingface/datasets/pull/6101 | 2023-07-31T06:05:36 | 2023-07-31T06:33:00 | 2023-07-31T06:18:17 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,828,118,930 | 6,100 | TypeError when loading from GCP bucket | ### Describe the bug
Loading a dataset from a GCP bucket raises a type error. This bug was introduced recently (either in 2.14 or 2.14.1), and appeared during a migration from 2.13.1.
### Steps to reproduce the bug
Load any file from a GCP bucket:
```python
import datasets
datasets.load_dataset("json", data_f... | closed | https://github.com/huggingface/datasets/issues/6100 | 2023-07-30T23:03:00 | 2023-08-03T10:00:48 | 2023-08-01T10:38:55 | {
"login": "bilelomrani1",
"id": 16692099,
"type": "User"
} | [] | false | [] |
1,827,893,576 | 6,099 | How do i get "amazon_us_reviews | ### Feature request
I have been trying to load 'amazon_us_reviews' but have been unable to do so.
`amazon_us_reviews = load_dataset('amazon_us_reviews')`
`print(amazon_us_reviews)`
> [ValueError: Config name is missing.
Please pick one among the available configs: ['Wireless_v1_00', 'Watches_v1_00', 'Video_Games_v1... | closed | https://github.com/huggingface/datasets/issues/6099 | 2023-07-30T11:02:17 | 2023-08-21T05:08:08 | 2023-08-10T05:02:35 | {
"login": "IqraBaluch",
"id": 57810189,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,827,655,071 | 6,098 | Expanduser in save_to_disk() | Fixes #5651. The same problem occurs when loading from disk so I fixed it there too.
I am not sure why the case distinction between local and remote filesystems is even necessary for `DatasetDict` when saving to disk. Imo this could be removed (leaving only `fs.makedirs(dataset_dict_path, exist_ok=True)`). | closed | https://github.com/huggingface/datasets/pull/6098 | 2023-07-29T20:50:45 | 2023-10-27T14:14:11 | 2023-10-27T14:04:36 | {
"login": "Unknown3141592",
"id": 51715864,
"type": "User"
} | [] | true | [] |
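The fix referenced above boils down to normalizing user-supplied paths before any filesystem call; without it, a `makedirs("~/...")` creates a literal `~` directory under the current working directory. A minimal sketch of the normalization step:

```python
import os

def normalize_local_path(path):
    """Sketch of the fix: expand '~' and absolutize before any mkdir/open."""
    return os.path.abspath(os.path.expanduser(path))

# Without expansion, fs.makedirs("~/data") creates a literal "~" directory
# inside the current working directory instead of using the home directory.
print(normalize_local_path("~/my_dataset"))
```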
1,827,054,143 | 6,097 | Dataset.get_nearest_examples does not return all feature values for the k most similar datapoints - side effect of Dataset.set_format | ### Describe the bug
Hi team!
I observe that there seems to be a side effect of `Dataset.set_format`: after setting a format and creating a FAISS index, the method `get_nearest_examples` from the `Dataset` class, fails to retrieve anything else but the embeddings themselves - not super useful. This is not the case ... | closed | https://github.com/huggingface/datasets/issues/6097 | 2023-07-28T20:31:59 | 2023-07-28T20:49:58 | 2023-07-28T20:49:58 | {
"login": "aschoenauer-sebag",
"id": 2538048,
"type": "User"
} | [] | false | [] |
1,826,731,091 | 6,096 | Add `fsspec` support for `to_json`, `to_csv`, and `to_parquet` | Hi to whoever is reading this! 🤗 (Most likely @mariosasko)
## What's in this PR?
This PR replaces the `open` from Python with `fsspec.open` and adds the argument `storage_options` for the methods `to_json`, `to_csv`, and `to_parquet`, to allow users to export any 🤗`Dataset` into a file in a file-system as reque... | closed | https://github.com/huggingface/datasets/pull/6096 | 2023-07-28T16:36:59 | 2024-05-28T07:40:30 | 2024-03-06T11:12:42 | {
"login": "alvarobartt",
"id": 36760800,
"type": "User"
} | [] | true | [] |
1,826,496,967 | 6,095 | Fix deprecation of errors in TextConfig | This PR fixes an issue with the deprecation of `errors` in `TextConfig` introduced by:
- #5974
```python
In [1]: ds = load_dataset("text", data_files="test.txt", errors="strict")
---------------------------------------------------------------------------
TypeError Traceback (most ... | closed | https://github.com/huggingface/datasets/pull/6095 | 2023-07-28T14:08:37 | 2023-07-31T05:26:32 | 2023-07-31T05:17:38 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,826,293,414 | 6,094 | Fix deprecation of use_auth_token in DownloadConfig | This PR fixes an issue with the deprecation of `use_auth_token` in `DownloadConfig` introduced by:
- #5996
```python
In [1]: from datasets import DownloadConfig
In [2]: DownloadConfig(use_auth_token=False)
---------------------------------------------------------------------------
TypeError ... | closed | https://github.com/huggingface/datasets/pull/6094 | 2023-07-28T11:52:21 | 2023-07-31T05:08:41 | 2023-07-31T04:59:50 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,826,210,490 | 6,093 | Deprecate `download_custom` | Deprecate `DownloadManager.download_custom`. Users should use `fsspec` URLs (cacheable) or make direct requests with `fsspec`/`requests` (not cacheable) instead.
We should deprecate this method as it's not compatible with streaming, and implementing the streaming version of it is hard/impossible. There have been req... | closed | https://github.com/huggingface/datasets/pull/6093 | 2023-07-28T10:49:06 | 2023-08-21T17:51:34 | 2023-07-28T11:30:02 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,826,111,806 | 6,092 | Minor fix in `iter_files` for hidden files | Fix #6090 | closed | https://github.com/huggingface/datasets/pull/6092 | 2023-07-28T09:50:12 | 2023-07-28T10:59:28 | 2023-07-28T10:50:10 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,826,086,487 | 6,091 | Bump fsspec from 2021.11.1 to 2022.3.0 | Fix https://github.com/huggingface/datasets/issues/6087
(Colab installs 2023.6.0, so we should be good) | closed | https://github.com/huggingface/datasets/pull/6091 | 2023-07-28T09:37:15 | 2023-07-28T10:16:11 | 2023-07-28T10:07:02 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,825,865,043 | 6,090 | FilesIterable skips all the files after a hidden file | ### Describe the bug
When initializing `FilesIterable` with a list of file paths using `FilesIterable.from_paths`, it will discard all the files after a hidden file.
The problem is in [this line](https://github.com/huggingface/datasets/blob/88896a7b28610ace95e444b94f9a4bc332cc1ee3/src/datasets/download/download_manag... | closed | https://github.com/huggingface/datasets/issues/6090 | 2023-07-28T07:25:57 | 2023-07-28T10:51:14 | 2023-07-28T10:50:11 | {
"login": "dkrivosic",
"id": 10785413,
"type": "User"
} | [] | false | [] |
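The reported behavior is consistent with an early `return` where a `continue` was intended (my reading of the report, not the exact library code). A toy reproduction of both variants:

```python
def iter_visible_buggy(paths):
    for p in paths:
        if p.rsplit("/", 1)[-1].startswith("."):
            return            # bug: stops iteration at the first hidden file
        yield p

def iter_visible_fixed(paths):
    for p in paths:
        if p.rsplit("/", 1)[-1].startswith("."):
            continue          # fix: skip only the hidden file itself
        yield p

files = ["a.txt", ".hidden", "b.txt"]
print(list(iter_visible_buggy(files)))  # ['a.txt']
print(list(iter_visible_fixed(files)))  # ['a.txt', 'b.txt']
```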
1,825,761,476 | 6,089 | AssertionError: daemonic processes are not allowed to have children | ### Describe the bug
When I call load_dataset with num_proc > 0 in a daemon process, I get an error:
```python
File "/Users/codingl2k1/Work/datasets/src/datasets/download/download_manager.py", line 564, in download_and_extract
return self.extract(self.download(url_or_urls))
^^^^^^^^^^^^^^^^^
File "/Users... | open | https://github.com/huggingface/datasets/issues/6089 | 2023-07-28T06:04:00 | 2023-07-31T02:34:02 | null | {
"login": "codingl2k1",
"id": 138426806,
"type": "User"
} | [] | false | [] |
1,825,665,235 | 6,088 | Loading local data files initiates web requests | As documented in the [official docs](https://huggingface.co/docs/datasets/v2.14.0/en/package_reference/loading_methods#datasets.load_dataset.example-2), I tried to load datasets from local files by
```python
# Load a JSON file
from datasets import load_dataset
ds = load_dataset('json', data_files='path/to/local/my_... | closed | https://github.com/huggingface/datasets/issues/6088 | 2023-07-28T04:06:26 | 2023-07-28T05:02:22 | 2023-07-28T05:02:22 | {
"login": "lytning98",
"id": 23375707,
"type": "User"
} | [] | false | [] |
1,825,133,741 | 6,087 | fsspec dependency is set too low | ### Describe the bug
fsspec.callbacks.TqdmCallback (used in https://github.com/huggingface/datasets/blob/73bed12ecda17d1573fd3bf73ed5db24d3622f86/src/datasets/utils/file_utils.py#L338) was first released in fsspec [2022.3.0](https://github.com/fsspec/filesystem_spec/releases/tag/2022.3.0, commit where it was added: ht... | closed | https://github.com/huggingface/datasets/issues/6087 | 2023-07-27T20:08:22 | 2023-07-28T10:07:56 | 2023-07-28T10:07:03 | {
"login": "iXce",
"id": 1085885,
"type": "User"
} | [] | false | [] |
1,825,009,268 | 6,086 | Support `fsspec` in `Dataset.to_<format>` methods | Supporting this should be fairly easy.
Requested on the forum [here](https://discuss.huggingface.co/t/how-can-i-convert-a-loaded-dataset-in-to-a-parquet-file-and-save-it-to-the-s3/48353). | closed | https://github.com/huggingface/datasets/issues/6086 | 2023-07-27T19:08:37 | 2024-03-07T07:22:43 | 2024-03-07T07:22:42 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,824,985,188 | 6,085 | Fix `fsspec` download | Testing `ds = load_dataset("audiofolder", data_files="s3://datasets.huggingface.co/SpeechCommands/v0.01/v0.01_test.tar.gz", storage_options={"anon": True})` and trying to fix the issues raised by `fsspec` ...
TODO: fix
```
self.session = aiobotocore.session.AioSession(**self.kwargs)
TypeError: __init__() got ... | open | https://github.com/huggingface/datasets/pull/6085 | 2023-07-27T18:54:47 | 2023-07-27T19:06:13 | null | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,824,896,761 | 6,084 | Changing pixel values of images in the Winoground dataset | Hi, as I followed the instructions, with lasted "datasets" version:
"
from datasets import load_dataset
examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>)
"
I got slightly different datasets in Colab and in my HPC environment. Specifically, the pixel values of images are slight... | open | https://github.com/huggingface/datasets/issues/6084 | 2023-07-27T17:55:35 | 2023-07-27T17:55:35 | null | {
"login": "ZitengWangNYU",
"id": 90359895,
"type": "User"
} | [] | false | [] |
1,824,832,348 | 6,083 | set dev version | null | closed | https://github.com/huggingface/datasets/pull/6083 | 2023-07-27T17:10:41 | 2023-07-27T17:22:05 | 2023-07-27T17:11:01 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,824,819,672 | 6,082 | Release: 2.14.1 | null | closed | https://github.com/huggingface/datasets/pull/6082 | 2023-07-27T17:05:54 | 2023-07-31T06:32:16 | 2023-07-27T17:08:38 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,824,486,278 | 6,081 | Deprecate `Dataset.export` | Deprecate `Dataset.export` that generates a TFRecord file from a dataset as this method is undocumented, and the usage seems low. Users should use [TFRecordWriter](https://www.tensorflow.org/api_docs/python/tf/io/TFRecordWriter#write) or the official [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord) t... | closed | https://github.com/huggingface/datasets/pull/6081 | 2023-07-27T14:22:18 | 2023-07-28T11:09:54 | 2023-07-28T11:01:04 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,822,667,554 | 6,080 | Remove README link to deprecated Colab notebook | null | closed | https://github.com/huggingface/datasets/pull/6080 | 2023-07-26T15:27:49 | 2023-07-26T16:24:43 | 2023-07-26T16:14:34 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,822,597,471 | 6,079 | Iterating over DataLoader based on HF datasets is stuck forever | ### Describe the bug
I am using an Amazon SageMaker notebook (Amazon Linux 2) with a Python 3.10-based Conda environment.
I have a dataset in parquet format locally. When I try to iterate over it, the loader is stuck forever. Note that the same code works seamlessly in a Python 3.6-based conda environment. What shou... | closed | https://github.com/huggingface/datasets/issues/6079 | 2023-07-26T14:52:37 | 2024-02-07T17:46:52 | 2023-07-30T14:09:06 | {
"login": "arindamsarkar93",
"id": 5454868,
"type": "User"
} | [] | false | [] |
1,822,501,472 | 6,078 | resume_download with streaming=True | ### Describe the bug
I used:
```
dataset = load_dataset(
"oscar-corpus/OSCAR-2201",
token=True,
language="fr",
streaming=True,
split="train"
)
```
Unfortunately, the server had a problem during the training process. I saved the step my training stopped at.
But how can I resume download f... | closed | https://github.com/huggingface/datasets/issues/6078 | 2023-07-26T14:08:22 | 2023-07-28T11:05:03 | 2023-07-28T11:05:03 | {
"login": "NicolasMICAUX",
"id": 72763959,
"type": "User"
} | [] | false | [] |
1,822,486,810 | 6,077 | Mapping gets stuck at 99% | ### Describe the bug
Hi !
I'm currently working with a large (~150GB) unnormalized dataset at work.
The dataset is available on a read-only filesystem internally, and I use a [loading script](https://huggingface.co/docs/datasets/dataset_script) to retreive it.
I want to normalize the features of the dataset, ... | open | https://github.com/huggingface/datasets/issues/6077 | 2023-07-26T14:00:40 | 2024-07-22T12:28:06 | null | {
"login": "Laurent2916",
"id": 21087104,
"type": "User"
} | [] | false | [] |
1,822,345,597 | 6,076 | No gzip encoding from github | Don't accept gzip encoding from github, otherwise some files are not streamable + seekable.
fix https://huggingface.co/datasets/code_x_glue_cc_code_to_code_trans/discussions/2#64c0e0c1a04a514ba6303e84
and making sure https://github.com/huggingface/datasets/issues/2918 works as well | closed | https://github.com/huggingface/datasets/pull/6076 | 2023-07-26T12:46:07 | 2023-07-27T16:15:11 | 2023-07-27T16:14:40 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,822,341,398 | 6,075 | Error loading music files using `load_dataset` | ### Describe the bug
I tried to load a music file using `datasets.load_dataset()` from the repository - https://huggingface.co/datasets/susnato/pop2piano_real_music_test
I got the following error -
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/susnato/anaconda3/en... | closed | https://github.com/huggingface/datasets/issues/6075 | 2023-07-26T12:44:05 | 2023-07-26T13:08:08 | 2023-07-26T13:08:08 | {
"login": "susnato",
"id": 56069179,
"type": "User"
} | [] | false | [] |
1,822,299,128 | 6,074 | Misc doc improvements | Removes the warning about requiring to write a dataset loading script to define multiple configurations, as the README YAML can be used instead (for simple cases). Also, deletes the section about using the `BatchSampler` in `torch<=1.12.1` to speed up loading, as `torch 1.12.1` is over a year old (and `torch 2.0` has b... | closed | https://github.com/huggingface/datasets/pull/6074 | 2023-07-26T12:20:54 | 2023-07-27T16:16:28 | 2023-07-27T16:16:02 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,822,167,804 | 6,073 | version2.3.2 load_dataset()data_files can't include .xxxx in path | ### Describe the bug
First, I cd workdir.
Then, I just use load_dataset("json", data_file={"train":"/a/b/c/.d/train/train.json", "test":"/a/b/c/.d/train/test.json"})
that couldn't work and
<FileNotFoundError: Unable to find
'/a/b/c/.d/train/train.jsonl' at
/a/b/c/.d/>
And I debug, it is fine in version2.1.2... | closed | https://github.com/huggingface/datasets/issues/6073 | 2023-07-26T11:09:31 | 2023-08-29T15:53:59 | 2023-08-29T15:53:59 | {
"login": "BUAAChuanWang",
"id": 45893496,
"type": "User"
} | [] | false | [] |
1,822,123,560 | 6,072 | Fix fsspec storage_options from load_dataset | close https://github.com/huggingface/datasets/issues/6071 | closed | https://github.com/huggingface/datasets/pull/6072 | 2023-07-26T10:44:23 | 2023-07-27T12:51:51 | 2023-07-27T12:42:57 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,821,990,749 | 6,071 | storage_options provided to load_dataset not fully piping through since datasets 2.14.0 | ### Describe the bug
Since the latest release of `datasets` (`2.14.0`), custom filesystem `storage_options` passed to `load_dataset()` do not seem to propagate through all the way - leading to problems if loading data files that need those options to be set.
I think this is because of the new `_prepare_path_and_sto... | closed | https://github.com/huggingface/datasets/issues/6071 | 2023-07-26T09:37:20 | 2023-07-27T12:42:58 | 2023-07-27T12:42:58 | {
"login": "exs-avianello",
"id": 128361578,
"type": "User"
} | [] | false | [] |
1,820,836,330 | 6,070 | Fix Quickstart notebook link | Reported in https://github.com/huggingface/datasets/pull/5902#issuecomment-1649885621 (cc @alvarobartt) | closed | https://github.com/huggingface/datasets/pull/6070 | 2023-07-25T17:48:37 | 2023-07-25T18:19:01 | 2023-07-25T18:10:16 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,820,831,535 | 6,069 | KeyError: dataset has no key "image" | ### Describe the bug
I've loaded a local image dataset with:
`ds = load_dataset("imagefolder", data_dir=path-to-data)`
And defined a transform to process the data, following the Datasets docs.
However, I get a keyError error, indicating there's no "image" key in my dataset. When I printed out the example_batch ... | closed | https://github.com/huggingface/datasets/issues/6069 | 2023-07-25T17:45:50 | 2024-09-06T08:16:16 | 2023-07-27T12:42:17 | {
"login": "etetteh",
"id": 28512232,
"type": "User"
} | [] | false | [] |
1,820,106,952 | 6,068 | fix tqdm lock deletion | related to https://github.com/huggingface/datasets/issues/6066 | closed | https://github.com/huggingface/datasets/pull/6068 | 2023-07-25T11:17:25 | 2023-07-25T15:29:39 | 2023-07-25T15:17:50 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,819,919,025 | 6,067 | fix tqdm lock | close https://github.com/huggingface/datasets/issues/6066 | closed | https://github.com/huggingface/datasets/pull/6067 | 2023-07-25T09:32:16 | 2023-07-25T10:02:43 | 2023-07-25T09:54:12 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,819,717,542 | 6,066 | AttributeError: '_tqdm_cls' object has no attribute '_lock' | ### Describe the bug
```python
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/load.py", line 1034, in get_module
data_files = DataFilesDict.from_patterns(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-p... | closed | https://github.com/huggingface/datasets/issues/6066 | 2023-07-25T07:24:36 | 2023-07-26T10:56:25 | 2023-07-26T10:56:24 | {
"login": "codingl2k1",
"id": 138426806,
"type": "User"
} | [] | false | [] |
1,819,334,932 | 6,065 | Add column type guessing from map return function | As discussed [here](https://github.com/huggingface/datasets/issues/5965), there are some cases where datasets is unable to automatically promote columns during mapping. The fix is to explicitly provide a `features` definition so pyarrow can configure itself with the right column types from the outset.
This PR provid... | closed | https://github.com/huggingface/datasets/pull/6065 | 2023-07-25T00:34:17 | 2023-07-26T15:13:45 | 2023-07-26T15:13:44 | {
"login": "piercefreeman",
"id": 1712066,
"type": "User"
} | [] | true | [] |
1,818,703,725 | 6,064 | set dev version | null | closed | https://github.com/huggingface/datasets/pull/6064 | 2023-07-24T15:56:00 | 2023-07-24T16:05:19 | 2023-07-24T15:56:10 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,818,679,485 | 6,063 | Release: 2.14.0 | null | closed | https://github.com/huggingface/datasets/pull/6063 | 2023-07-24T15:41:19 | 2023-07-24T16:05:16 | 2023-07-24T15:47:51 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,818,341,584 | 6,062 | Improve `Dataset.from_list` docstring | null | closed | https://github.com/huggingface/datasets/pull/6062 | 2023-07-24T12:36:38 | 2023-07-24T14:43:48 | 2023-07-24T14:34:43 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,818,337,136 | 6,061 | Dill 3.7 support | Adds support for dill 3.7. | closed | https://github.com/huggingface/datasets/pull/6061 | 2023-07-24T12:33:58 | 2023-07-24T14:13:20 | 2023-07-24T14:04:36 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,816,614,120 | 6,060 | Dataset.map() execute twice when in PyTorch DDP mode | ### Describe the bug
I use `torchrun --standalone --nproc_per_node=2 train.py` to start training. And write the code following the [docs](https://huggingface.co/docs/datasets/process#distributed-usage). The trick about using `torch.distributed.barrier()` to only execute map at the main process doesn't always work. W... | closed | https://github.com/huggingface/datasets/issues/6060 | 2023-07-22T05:06:43 | 2024-01-22T18:35:12 | 2024-01-22T18:35:12 | {
"login": "wanghaoyucn",
"id": 39429965,
"type": "User"
} | [] | false | [] |
1,816,537,176 | 6,059 | Provide ability to load label mappings from file | ### Feature request
My task is classification of a dataset containing a large label set that includes a hierarchy. Even ignoring the hierarchy I'm not able to find an example using `datasets` where the label names aren't hard-coded. This works find for classification of a handful of labels but ideally there would be... | open | https://github.com/huggingface/datasets/issues/6059 | 2023-07-22T02:04:19 | 2024-04-16T08:07:55 | null | {
"login": "david-waterworth",
"id": 5028974,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,815,131,397 | 6,058 | laion-coco download error | ### Describe the bug
The full trace:
```
/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/load.py:1744: FutureWarning: 'ignore_verifications' was de
precated in favor of 'verification_mode' in version 2.9.1 and will be removed in 3.0.0.
You can remove this warning by passing 'verification_mode=no... | closed | https://github.com/huggingface/datasets/issues/6058 | 2023-07-21T04:24:15 | 2023-07-22T01:42:06 | 2023-07-22T01:42:06 | {
"login": "yangyijune",
"id": 54424110,
"type": "User"
} | [] | false | [] |
1,815,100,151 | 6,057 | Why is the speed difference of gen example so big? | ```python
def _generate_examples(self, metadata_path, images_dir, conditioning_images_dir):
with open(metadata_path, 'r') as file:
metadata = json.load(file)
for idx, item in enumerate(metadata):
image_path = item.get('image_path')
text_content = item.get('tex... | closed | https://github.com/huggingface/datasets/issues/6057 | 2023-07-21T03:34:49 | 2023-10-04T18:06:16 | 2023-10-04T18:06:15 | {
"login": "pixeli99",
"id": 46072190,
"type": "User"
} | [] | false | [] |
1,815,086,963 | 6,056 | Implement proper checkpointing for dataset uploading with resume function that does not require remapping shards that have already been uploaded | Context: issue #5990
In order to implement the checkpointing, I introduce a metadata folder that keeps one yaml file for each set that one is uploading. This yaml keeps track of what shards have already been uploaded, and which one the idx of the latest one was. Using this information I am then able to easily get th... | open | https://github.com/huggingface/datasets/pull/6056 | 2023-07-21T03:13:21 | 2023-08-17T08:26:53 | null | {
"login": "AntreasAntoniou",
"id": 10792502,
"type": "User"
} | [] | true | [] |
1,813,524,145 | 6,055 | Fix host URL in The Pile datasets | ### Describe the bug
In #3627 and #5543, you tried to fix the host URL in The Pile datasets. But both URLs are not working now:
`HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst`
And
`ConnectTimeout: HTTPSCo... | open | https://github.com/huggingface/datasets/issues/6055 | 2023-07-20T09:08:52 | 2023-07-20T09:09:37 | null | {
"login": "nickovchinnikov",
"id": 7540752,
"type": "User"
} | [] | false | [] |
1,813,271,304 | 6,054 | Multi-processed `Dataset.map` slows down a lot when `import torch` | ### Describe the bug
When using `Dataset.map` with `num_proc > 1`, the speed slows down much if I add `import torch` to the start of the script even though I don't use it.
I'm not sure if it's `torch` only or if any other package that is "large" will also cause the same result.
BTW, `import lightning` also slows i... | closed | https://github.com/huggingface/datasets/issues/6054 | 2023-07-20T06:36:14 | 2023-07-21T15:19:37 | 2023-07-21T15:19:37 | {
"login": "ShinoharaHare",
"id": 47121592,
"type": "User"
} | [
{
"name": "duplicate",
"color": "cfd3d7"
}
] | false | [] |
1,812,635,902 | 6,053 | Change package name from "datasets" to something less generic | ### Feature request
I'm repeatedly finding myself in situations where I want to have a package called `datasets.py` or `evaluate.py` in my code and can't because those names are being taken up by Huggingface packages. While I can understand how (even from the user's perspective) it's aesthetically pleasing to have n... | closed | https://github.com/huggingface/datasets/issues/6053 | 2023-07-19T19:53:28 | 2024-11-20T21:22:36 | 2023-10-03T16:04:09 | {
"login": "jack-jjm",
"id": 2124157,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,812,145,100 | 6,052 | Remove `HfFileSystem` and deprecate `S3FileSystem` | Remove the legacy `HfFileSystem` and deprecate `S3FileSystem`
cc @philschmid for the SageMaker scripts/notebooks that still use `datasets`' `S3FileSystem` | closed | https://github.com/huggingface/datasets/pull/6052 | 2023-07-19T15:00:01 | 2023-07-19T17:39:11 | 2023-07-19T17:27:17 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,811,549,650 | 6,051 | Skipping shard in the remote repo and resume upload | ### Describe the bug
For some reason when I try to resume the upload of my dataset, it is very slow to reach the index of the shard from which to resume the uploading.
From my understanding, the problem is in this part of the code:
arrow_dataset.py
```python
for index, shard in logging.tqdm(
enume... | closed | https://github.com/huggingface/datasets/issues/6051 | 2023-07-19T09:25:26 | 2023-07-20T18:16:01 | 2023-07-20T18:16:00 | {
"login": "rs9000",
"id": 9029817,
"type": "User"
} | [] | false | [] |
1,810,378,706 | 6,049 | Update `ruff` version in pre-commit config | so that it corresponds to the one that is being run in CI | closed | https://github.com/huggingface/datasets/pull/6049 | 2023-07-18T17:13:50 | 2023-12-01T14:26:19 | 2023-12-01T14:26:19 | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [] | true | [] |
1,809,629,346 | 6,048 | when i use datasets.load_dataset, i encounter the http connect error! | ### Describe the bug
`common_voice_test = load_dataset("audiofolder", data_dir="./dataset/",cache_dir="./cache",split=datasets.Split.TEST)`
when i run the code above, i got the error as below:
--------------------------------------------
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/... | closed | https://github.com/huggingface/datasets/issues/6048 | 2023-07-18T10:16:34 | 2023-07-18T16:18:39 | 2023-07-18T16:18:39 | {
"login": "yangy1992",
"id": 137855591,
"type": "User"
} | [] | false | [] |
1,809,627,947 | 6,047 | Bump dev version | workaround to fix an issue with transformers CI
https://github.com/huggingface/transformers/pull/24867#discussion_r1266519626 | closed | https://github.com/huggingface/datasets/pull/6047 | 2023-07-18T10:15:39 | 2023-07-18T10:28:01 | 2023-07-18T10:15:52 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,808,154,414 | 6,046 | Support proxy and user-agent in fsspec calls | Since we switched to the new HfFileSystem we no longer apply user's proxy and user-agent.
Using the HTTP_PROXY and HTTPS_PROXY environment variables works though since we use aiohttp to call the HF Hub.
This can be implemented in `_prepare_single_hop_path_and_storage_options`.
Though ideally the `HfFileSystem`... | open | https://github.com/huggingface/datasets/issues/6046 | 2023-07-17T16:39:26 | 2025-06-26T18:26:27 | null | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "good second issue",
"color": "BDE59C"
}
] | false | [] |
1,808,072,270 | 6,045 | Check if column names match in Parquet loader only when config `features` are specified | Fix #6039 | closed | https://github.com/huggingface/datasets/pull/6045 | 2023-07-17T15:50:15 | 2023-07-24T14:45:56 | 2023-07-24T14:35:03 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,808,057,906 | 6,044 | Rename "pattern" to "path" in YAML data_files configs | To make it easier to understand for users.
They can use "path" to specify a single path, <s>or "paths" to use a list of paths.</s>
Glob patterns are still supported though
| closed | https://github.com/huggingface/datasets/pull/6044 | 2023-07-17T15:41:16 | 2023-07-19T16:59:55 | 2023-07-19T16:48:06 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,807,771,750 | 6,043 | Compression kwargs have no effect when saving datasets as csv | ### Describe the bug
Attempting to save a dataset as a compressed csv file, the compression kwargs provided to `.to_csv()` that get piped to panda's `pandas.DataFrame.to_csv` do not have any effect - resulting in the dataset not getting compressed.
A warning is raised if explicitly providing a `compression` kwarg, ... | open | https://github.com/huggingface/datasets/issues/6043 | 2023-07-17T13:19:21 | 2023-07-22T17:34:18 | null | {
"login": "exs-avianello",
"id": 128361578,
"type": "User"
} | [] | false | [] |
1,807,516,762 | 6,042 | Fix unused DatasetInfosDict code in push_to_hub | null | closed | https://github.com/huggingface/datasets/pull/6042 | 2023-07-17T11:03:09 | 2023-07-18T16:17:52 | 2023-07-18T16:08:42 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,807,441,055 | 6,041 | Flatten repository_structure docs on yaml | To have Splits, Configurations and Builder parameters at the same doc level | closed | https://github.com/huggingface/datasets/pull/6041 | 2023-07-17T10:15:10 | 2023-07-17T10:24:51 | 2023-07-17T10:16:22 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,807,410,238 | 6,040 | Fix legacy_dataset_infos | was causing transformers CI to fail
https://circleci.com/gh/huggingface/transformers/855105 | closed | https://github.com/huggingface/datasets/pull/6040 | 2023-07-17T09:56:21 | 2023-07-17T10:24:34 | 2023-07-17T10:16:03 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,806,508,451 | 6,039 | Loading column subset from parquet file produces error since version 2.13 | ### Describe the bug
`load_dataset` allows loading a subset of columns from a parquet file with the `columns` argument. Since version 2.13, this produces the following error:
```
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/datasets/builder.py", line 1879, in ... | closed | https://github.com/huggingface/datasets/issues/6039 | 2023-07-16T09:13:07 | 2023-07-24T14:35:04 | 2023-07-24T14:35:04 | {
"login": "kklemon",
"id": 1430243,
"type": "User"
} | [] | false | [] |
1,805,960,244 | 6,038 | File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 992, in _download_and_prepare if str(split_generator.split_info.name).lower() == "all": AttributeError: 'str' object has no attribute 'split_info'. Did you mean: 'splitlines'? | Hi, I use the code below to load local file
```
def _split_generators(self, dl_manager):
# TODO: This method is tasked with downloading/extracting the data and defining the splits depending on the configuration
# If several configurations are possible (listed in BUILDER_CONFIGS), the configurati... | closed | https://github.com/huggingface/datasets/issues/6038 | 2023-07-15T07:58:08 | 2023-07-24T11:54:15 | 2023-07-24T11:54:15 | {
"login": "BaiMeiyingxue",
"id": 53547009,
"type": "User"
} | [] | false | [] |
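The rows above follow the column schema given in the header (id, number, title, body, state, html_url, created/updated/closed timestamps, user, labels, is_pull_request, comments). As a minimal, self-contained sketch — using plain Python with no external dependencies, and abridged copies of a few records shown in this dump — filtering the rows the way a typical viewer query might looks like:

```python
# Model a few abridged rows from this dump and filter them.
# Field names follow the schema in the header; the records are
# shortened copies of rows shown above.
from dataclasses import dataclass


@dataclass
class IssueRecord:
    number: int
    title: str
    state: str
    user_login: str
    is_pull_request: bool


rows = [
    IssueRecord(6083, "set dev version", "closed", "lhoestq", True),
    IssueRecord(
        6079,
        "Iterating over DataLoader based on HF datasets is stuck forever",
        "closed",
        "arindamsarkar93",
        False,
    ),
    IssueRecord(
        6046,
        "Support proxy and user-agent in fsspec calls",
        "open",
        "lhoestq",
        False,
    ),
]

# Keep only open issues that are not pull requests.
open_issues = [r for r in rows if r.state == "open" and not r.is_pull_request]
print([r.number for r in open_issues])  # -> [6046]
```

With the real dataset loaded through the `datasets` library, the same selection would be a `Dataset.filter` call over the `state` and `is_pull_request` columns.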