| id | number | title | body | state | html_url | created_at | updated_at | closed_at | user | labels | is_pull_request | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,577,976,608 | 5,517 | `with_format("numpy")` silently downcasts float64 to float32 features | ### Describe the bug
When I create a dataset with a `float64` feature and then apply numpy formatting, the returned numpy arrays are silently downcast to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print(... | open | https://github.com/huggingface/datasets/issues/5517 | 2023-02-09T14:18:00 | 2024-01-18T08:42:17 | null | {
"login": "ernestum",
"id": 1250234,
"type": "User"
} | [] | false | [] |
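A minimal sketch of the report above, plus a possible workaround, assuming the downcast happens only in the numpy formatter while the underlying Arrow column stays `float64`:
```python
import numpy as np
import datasets

ds = datasets.Dataset.from_dict({"a": [1.0, 2.0, 3.0]})
print(ds.features["a"].dtype)              # float64 in the Arrow schema
print(ds.with_format("numpy")["a"].dtype)  # reportedly comes back as float32

# Workaround sketch: fetch the column in plain Python format and cast explicitly.
a64 = np.asarray(ds["a"], dtype=np.float64)
print(a64.dtype)  # float64
```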
1,577,661,640 | 5,516 | Reload features from Parquet metadata | Resolves #5482.
Attaches feature metadata to parquet files serialised using `Dataset.to_parquet`.
This allows retrieving data with "rich" feature types (e.g., `datasets.features.image.Image` or `datasets.features.audio.Audio`) from parquet files without cumbersome casting (for an example, see #5482).
@lhoest... | closed | https://github.com/huggingface/datasets/pull/5516 | 2023-02-09T10:52:15 | 2023-02-12T16:00:00 | 2023-02-12T15:57:01 | {
"login": "MFreidank",
"id": 6368040,
"type": "User"
} | [] | true | [] |
1,577,590,611 | 5,515 | Unify `load_from_cache_file` type and logic | * Updating type annotations for #`load_from_cache_file`
* Added logic for cache checking if needed
* Updated documentation following the wording of `Dataset.map` | closed | https://github.com/huggingface/datasets/pull/5515 | 2023-02-09T10:04:46 | 2023-02-14T15:38:13 | 2023-02-14T14:26:42 | {
"login": "HallerPatrick",
"id": 22773355,
"type": "User"
} | [] | true | [] |
1,576,453,837 | 5,514 | Improve inconsistency of `Dataset.map` interface for `load_from_cache_file` | ### Feature request
1. Replace the `load_from_cache_file` default value to `True`.
2. Remove or alter checks from `is_caching_enabled` logic.
### Motivation
I stumbled over an inconsistency in the `Dataset.map` interface. The documentation (and source) states for the parameter `load_from_cache_file`:
```
load_... | closed | https://github.com/huggingface/datasets/issues/5514 | 2023-02-08T16:40:44 | 2023-02-14T14:26:44 | 2023-02-14T14:26:44 | {
"login": "HallerPatrick",
"id": 22773355,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
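A short sketch of the flag under discussion; passing `load_from_cache_file` explicitly to `Dataset.map` sidesteps whatever the default resolves to:
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))})

def double(example):
    return {"x": example["x"] * 2}

ds.map(double, load_from_cache_file=True)   # reuse an existing cache file
ds.map(double, load_from_cache_file=False)  # force recomputation
```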
1,576,300,803 | 5,513 | Some functions use a param named `type` shouldn't that be avoided since it's a Python reserved name? | Hi @mariosasko, @lhoestq, or whoever reads this! :)
After going through `ArrowDataset.set_format` I found out that the `type` param is actually named `type` which is a Python reserved name as you may already know, shouldn't that be renamed to `format_type` before the 3.0.0 is released?
Just wanted to get your inp... | closed | https://github.com/huggingface/datasets/issues/5513 | 2023-02-08T15:13:46 | 2023-07-24T16:02:18 | 2023-07-24T14:27:59 | {
"login": "alvarobartt",
"id": 36760800,
"type": "User"
} | [] | false | [] |
1,576,142,432 | 5,512 | Speed up batched PyTorch DataLoader | I implemented `__getitems__` to speed up batched data loading in PyTorch
close https://github.com/huggingface/datasets/issues/5505 | closed | https://github.com/huggingface/datasets/pull/5512 | 2023-02-08T13:38:59 | 2023-02-19T18:35:09 | 2023-02-19T18:27:29 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,575,851,768 | 5,511 | Creating a dummy dataset from a bigger one | ### Describe the bug
I often want to create a dummy dataset from a bigger dataset for fast iteration when training. However, I'm having a hard time doing this, especially when trying to upload the dataset to the Hub.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset... | closed | https://github.com/huggingface/datasets/issues/5511 | 2023-02-08T10:18:41 | 2023-12-28T18:21:01 | 2023-02-08T10:35:48 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | false | [] |
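A sketch of the usual recipe for carving out a dummy subset before uploading; the repo id is hypothetical:
```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")
dummy = ds.shuffle(seed=42).select(range(100))      # small random sample for fast iteration
dummy.push_to_hub("my-user/rotten-tomatoes-dummy")  # hypothetical repo id
```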
1,575,191,549 | 5,510 | Milvus integration for search | Signed-off-by: Filip Haltmayer <filip.haltmayer@zilliz.com> | open | https://github.com/huggingface/datasets/pull/5510 | 2023-02-07T23:30:26 | 2023-02-24T16:45:09 | null | {
"login": "filip-halt",
"id": 81822489,
"type": "User"
} | [] | true | [] |
1,574,177,320 | 5,509 | Add a static `__all__` to `__init__.py` for typecheckers | This adds a static `__all__` field to `__init__.py`, allowing typecheckers to know which symbols are accessible from `datasets` at runtime. In particular [Pyright](https://github.com/microsoft/pylance-release/issues/2328#issuecomment-1029381258) seems to rely on this. At this point I have added all (modulo oversight) t... | open | https://github.com/huggingface/datasets/pull/5509 | 2023-02-07T11:42:40 | 2023-02-08T17:48:24 | null | {
"login": "LoicGrobol",
"id": 14248012,
"type": "User"
} | [] | true | [] |
1,573,290,359 | 5,508 | Saving a dataset after setting format to torch doesn't work, but only if filtering | ### Describe the bug
Saving a dataset after setting format to torch doesn't work, but only if filtering
### Steps to reproduce the bug
```
a = Dataset.from_dict({"b": [1, 2]})
a.set_format('torch')
a.save_to_disk("test_save") # saves successfully
a.filter(None).save_to_disk("test_save_filter") # does not
>> [..... | closed | https://github.com/huggingface/datasets/issues/5508 | 2023-02-06T21:08:58 | 2023-02-09T14:55:26 | 2023-02-09T14:55:26 | {
"login": "joebhakim",
"id": 13984157,
"type": "User"
} | [] | false | [] |
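A hedged workaround sketch for the row above, assuming the failure is tied to the torch formatter being active during `save_to_disk`: reset the format before saving and re-apply it after reloading.
```python
from datasets import Dataset, load_from_disk

a = Dataset.from_dict({"b": [1, 2]})
a.set_format("torch")
filtered = a.filter(lambda x: True)

# Save with the plain (python) format, then restore the torch format on load.
filtered.with_format(None).save_to_disk("test_save_filter")
reloaded = load_from_disk("test_save_filter").with_format("torch")
```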
1,572,667,036 | 5,507 | Optimise behaviour in respect to indices mapping | _Originally [posted](https://huggingface.slack.com/archives/C02V51Q3800/p1675443873878489?thread_ts=1675418893.373479&cid=C02V51Q3800) on Slack_
Considering all this, perhaps for Datasets 3.0, we can do the following:
* [ ] have `continuous=True` by default in `.shard` (requested in the survey and makes more sense... | open | https://github.com/huggingface/datasets/issues/5507 | 2023-02-06T14:25:55 | 2023-02-28T18:19:18 | null | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,571,838,641 | 5,506 | IterableDataset and Dataset return different batch sizes when using Trainer with multiple GPUs | ### Describe the bug
I am training a Roberta model using 2 GPUs and the `Trainer` API with a batch size of 256.
Initially I used a standard `Dataset`, but had issues with slow data loading. After reading [this issue](https://github.com/huggingface/datasets/issues/2252), I swapped to loading my dataset as contiguous... | closed | https://github.com/huggingface/datasets/issues/5506 | 2023-02-06T03:26:03 | 2023-02-08T18:30:08 | 2023-02-08T18:30:07 | {
"login": "kheyer",
"id": 38166299,
"type": "User"
} | [] | false | [] |
1,571,720,814 | 5,505 | PyTorch BatchSampler still loads from Dataset one-by-one | ### Describe the bug
In [the docs here](https://huggingface.co/docs/datasets/use_with_pytorch#use-a-batchsampler), it mentions the issue of the Dataset being read one-by-one, then states that using a BatchSampler resolves the issue.
I'm not sure if this is a mistake in the docs or the code, but it seems that the on... | closed | https://github.com/huggingface/datasets/issues/5505 | 2023-02-06T01:14:55 | 2023-02-19T18:27:30 | 2023-02-19T18:27:30 | {
"login": "davidgilbertson",
"id": 4443482,
"type": "User"
} | [] | false | [] |
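The pattern the linked docs describe, sketched below; whether it avoids per-example reads depends on the dataset accepting a list of indices in one call (which is what `__getitems__` from PR #5512 above enables):
```python
from torch.utils.data import BatchSampler, DataLoader, RandomSampler
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(1000))}).with_format("torch")
batch_sampler = BatchSampler(RandomSampler(ds), batch_size=32, drop_last=False)
# Passing the batch sampler as `sampler` with batch_size=None makes the loader
# hand the dataset a list of indices, i.e. ds[[i0, i1, ...]], one read per batch.
dataloader = DataLoader(ds, sampler=batch_sampler, batch_size=None)
next(iter(dataloader))
```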
1,570,621,242 | 5,504 | don't zero copy timestamps | Fixes https://github.com/huggingface/datasets/issues/5495
I'm not sure whether we prefer a test here or if timestamps are known to be unsupported (like booleans). The current test at least covers the bug | closed | https://github.com/huggingface/datasets/pull/5504 | 2023-02-03T23:39:04 | 2023-02-08T17:28:50 | 2023-02-08T14:33:17 | {
"login": "dwyatte",
"id": 2512762,
"type": "User"
} | [] | true | [] |
1,570,091,225 | 5,502 | Added functionality: sort datasets by multiple keys | Added functionality implementation: sort datasets by multiple keys/columns as discussed in https://github.com/huggingface/datasets/issues/5425. | closed | https://github.com/huggingface/datasets/pull/5502 | 2023-02-03T16:17:00 | 2023-02-21T14:46:49 | 2023-02-21T14:39:23 | {
"login": "MichlF",
"id": 7805682,
"type": "User"
} | [] | true | [] |
1,569,644,159 | 5,501 | Increase chunk size for speeding up file downloads | Original fix: https://github.com/huggingface/huggingface_hub/pull/1267
Not sure this function is actually still called though.
I haven't benchmarked this. Is there a dataset whose files are hosted on the Hub through CloudFront, so we can have the same setup as in `hf_hub`? | open | https://github.com/huggingface/datasets/pull/5501 | 2023-02-03T10:50:10 | 2023-02-09T11:04:11 | null | {
"login": "Narsil",
"id": 204321,
"type": "User"
} | [] | true | [] |
1,569,257,240 | 5,500 | WMT19 custom download checksum error | ### Describe the bug
I use the following scripts to download data from WMT19:
```python
import datasets
from datasets import inspect_dataset, load_dataset_builder
from wmt19.wmt_utils import _TRAIN_SUBSETS,_DEV_SUBSETS
## this is a must due to: https://discuss.huggingface.co/t/load-dataset-hangs-with-local-fi... | closed | https://github.com/huggingface/datasets/issues/5500 | 2023-02-03T05:45:37 | 2023-02-03T05:52:56 | 2023-02-03T05:52:56 | {
"login": "Hannibal046",
"id": 38466901,
"type": "User"
} | [] | false | [] |
1,568,937,026 | 5,499 | `load_dataset` has ~4 seconds of overhead for cached data | ### Feature request
When loading a dataset that has been cached locally, the `load_dataset` function takes a lot longer than it should to fetch the dataset from disk (or memory).
This is particularly noticeable for smaller datasets. For example, wikitext-2, comparing `load_dataset` (once cached) and `load_from_disk... | open | https://github.com/huggingface/datasets/issues/5499 | 2023-02-02T23:34:50 | 2023-02-07T19:35:11 | null | {
"login": "davidgilbertson",
"id": 4443482,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
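A rough timing harness for the comparison described above, assuming the dataset is already in the local cache; exact numbers will vary by machine:
```python
import time
from datasets import load_dataset, load_from_disk

t0 = time.perf_counter()
ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
print(f"load_dataset (cached): {time.perf_counter() - t0:.2f}s")

ds.save_to_disk("wikitext2_local")
t0 = time.perf_counter()
ds2 = load_from_disk("wikitext2_local")
print(f"load_from_disk:        {time.perf_counter() - t0:.2f}s")
```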
1,568,190,529 | 5,498 | TypeError: 'bool' object is not iterable when filtering a datasets.arrow_dataset.Dataset | ### Describe the bug
Hi,
Thanks for the amazing work on the library!
**Describe the bug**
I think I might have noticed a small bug in the filter method.
Having loaded a dataset using `load_dataset`, when I try to filter out empty entries with `batched=True`, I get a TypeError.
### Steps to reproduce the ... | closed | https://github.com/huggingface/datasets/issues/5498 | 2023-02-02T14:46:49 | 2023-10-08T06:12:47 | 2023-02-04T17:19:36 | {
"login": "vmuel",
"id": 91255010,
"type": "User"
} | [] | false | [] |
1,567,601,264 | 5,497 | Improved error message for gated/private repos | Using `use_auth_token=True` is not needed anymore. If a user logged in, the token will be automatically retrieved. Also include a mention for gated repos
See https://github.com/huggingface/huggingface_hub/pull/1064 | closed | https://github.com/huggingface/datasets/pull/5497 | 2023-02-02T08:56:15 | 2023-02-02T11:26:08 | 2023-02-02T11:17:15 | {
"login": "osanseviero",
"id": 7246357,
"type": "User"
} | [] | true | [] |
1,567,301,765 | 5,496 | Add a `reduce` method | ### Feature request
Right now the `Dataset` class implements `map()` and `filter()`, but leaves out the third functional idiom popular among Python users: `reduce`.
### Motivation
A `reduce` method is often useful when calculating dataset statistics, for example, the occurrence of a particular n-gram or the average... | closed | https://github.com/huggingface/datasets/issues/5496 | 2023-02-02T04:30:22 | 2024-11-12T05:58:14 | 2023-07-21T14:24:32 | {
"login": "zhangir-azerbayev",
"id": 59542043,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
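In the absence of a built-in `Dataset.reduce`, a plain fold over a column already covers the statistics use case the request mentions; a minimal sketch:
```python
from functools import reduce
from datasets import Dataset

ds = Dataset.from_dict({"n": [1, 2, 3, 4]})
total = reduce(lambda acc, x: acc + x, ds["n"], 0)
print(total)  # 10
```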
1,566,803,452 | 5,495 | to_tf_dataset fails with datetime UTC columns even if not included in columns argument | ### Describe the bug
There appears to be some eager behavior in `to_tf_dataset` that runs against every column in a dataset even if they aren't included in the columns argument. This is problematic with datetime UTC columns due to them not working with zero copy. If I don't have UTC information in my datetime column... | closed | https://github.com/huggingface/datasets/issues/5495 | 2023-02-01T20:47:33 | 2023-02-08T14:33:19 | 2023-02-08T14:33:19 | {
"login": "dwyatte",
"id": 2512762,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
},
{
"name": "good first issue",
"color": "7057ff"
}
] | false | [] |
1,566,655,348 | 5,494 | Update audio installation doc page | Our [installation documentation page](https://huggingface.co/docs/datasets/installation#audio) says that one can use Datasets for mp3 only with `torchaudio<0.12`. `torchaudio>0.12` is actually supported too but requires a specific version of ffmpeg which is not easily installed on all linux versions but there is a cust... | closed | https://github.com/huggingface/datasets/issues/5494 | 2023-02-01T19:07:50 | 2023-03-02T16:08:17 | 2023-03-02T16:08:17 | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | false | [] |
1,566,637,806 | 5,493 | Remove unused `load_from_cache_file` arg from `Dataset.shard()` docstring | null | closed | https://github.com/huggingface/datasets/pull/5493 | 2023-02-01T18:57:48 | 2023-02-08T15:10:46 | 2023-02-08T15:03:50 | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [] | true | [] |
1,566,604,216 | 5,492 | Push_to_hub in a pull request | Right now `ds.push_to_hub()` can push a dataset on `main` or on a new branch with `branch=`, but there is no way to open a pull request. Even passing `branch=refs/pr/x` doesn't seem to work: it tries to create a branch with that name
cc @nateraw
It should be possible to tweak the use of `huggingface_hub` in `pus... | closed | https://github.com/huggingface/datasets/issues/5492 | 2023-02-01T18:32:14 | 2023-10-16T13:30:48 | 2023-10-16T13:30:48 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "good first issue",
"color": "7057ff"
}
] | false | [] |
1,566,235,012 | 5,491 | [MINOR] Typo | null | closed | https://github.com/huggingface/datasets/pull/5491 | 2023-02-01T14:39:39 | 2023-02-02T07:42:28 | 2023-02-02T07:35:14 | {
"login": "cakiki",
"id": 3664563,
"type": "User"
} | [] | true | [] |
1,565,842,327 | 5,490 | Do not add index column by default when exporting to CSV | As pointed out by @merveenoyan, default behavior of `Dataset.to_csv` adds the index as an additional column without name.
This PR changes the default behavior, so that now the index column is not written.
To add the index column, now you need to pass `index=True` and also `index_label=<name of the index colum>` t... | closed | https://github.com/huggingface/datasets/pull/5490 | 2023-02-01T10:20:55 | 2023-02-09T09:29:08 | 2023-02-09T09:22:23 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
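A sketch of the behavior this PR describes; `index` and `index_label` are forwarded to pandas' `DataFrame.to_csv`:
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3]})
ds.to_csv("out.csv")                                         # no index column (new default)
ds.to_csv("out_indexed.csv", index=True, index_label="idx")  # opt back in
```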
1,565,761,705 | 5,489 | Pin dill lower version | Pin `dill` lower version compatible with `datasets`.
Related to:
- #5487
- #288
Note that the required `dill._dill` module was introduced in dill-0.2.8; however, we have heuristically tested that datasets can only be installed with dill>=0.3.0 (otherwise pip hangs indefinitely while preparing metadata for multip... | closed | https://github.com/huggingface/datasets/pull/5489 | 2023-02-01T09:33:42 | 2023-02-02T07:48:09 | 2023-02-02T07:40:43 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,565,025,262 | 5,488 | Error loading MP3 files from CommonVoice | ### Describe the bug
When loading a CommonVoice dataset with `datasets==2.9.0` and `torchaudio>=0.12.0`, I get an error reading the audio arrays:
```python
---------------------------------------------------------------------------
LibsndfileError Traceback (most recent call last)
~/.l... | closed | https://github.com/huggingface/datasets/issues/5488 | 2023-01-31T21:25:33 | 2023-03-02T16:25:14 | 2023-03-02T16:25:13 | {
"login": "kradonneoh",
"id": 110259722,
"type": "User"
} | [] | false | [] |
1,564,480,121 | 5,487 | Incorrect filepath for dill module | ### Describe the bug
I installed the `datasets` package and when I try to `import` it, I get the following error:
```
Traceback (most recent call last):
File "/var/folders/jt/zw5g74ln6tqfdzsl8tx378j00000gn/T/ipykernel_3805/3458380017.py", line 1, in <module>
import datasets
File "/Users/avivbrokman/... | closed | https://github.com/huggingface/datasets/issues/5487 | 2023-01-31T15:01:08 | 2023-02-24T16:18:36 | 2023-02-24T16:18:36 | {
"login": "avivbrokman",
"id": 35349273,
"type": "User"
} | [] | false | [] |
1,564,059,749 | 5,486 | Adding `sep` to TextConfig | I have a local a `.txt` file that follows the `CONLL2003` format which I need to load using `load_script`. However, by using `sample_by='line'`, one can only split the dataset into lines without splitting each line into columns. Would it be reasonable to add a `sep` argument in combination with `sample_by='paragraph'` ... | open | https://github.com/huggingface/datasets/issues/5486 | 2023-01-31T10:39:53 | 2023-01-31T14:50:18 | null | {
"login": "omar-araboghli",
"id": 29576434,
"type": "User"
} | [] | false | [] |
1,563,002,829 | 5,485 | Add section in tutorial for IterableDataset | Introduces an `IterableDataset` and how to access it in the tutorial section. It also adds a brief next step section at the end to provide a path for users who want more explanation and a path for users who want something more practical and learn how to preprocess these dataset types. It'll complement the awesome new d... | closed | https://github.com/huggingface/datasets/pull/5485 | 2023-01-30T18:43:04 | 2023-02-01T18:15:38 | 2023-02-01T18:08:46 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [] | true | [] |
1,562,877,070 | 5,484 | Update docs for `nyu_depth_v2` dataset | This PR will fix the issue mentioned in #5461. Here is brief overview,
## Bug:
Discrepancy between depth map of `nyu_depth_v2` dataset [here](https://huggingface.co/docs/datasets/main/en/depth_estimation) and actual depth map. Depth values somehow got **discretized/clipped** resulting in depth maps that are diffe... | closed | https://github.com/huggingface/datasets/pull/5484 | 2023-01-30T17:37:08 | 2023-09-29T06:43:11 | 2023-02-05T14:15:04 | {
"login": "awsaf49",
"id": 36858976,
"type": "User"
} | [] | true | [] |
1,560,894,690 | 5,483 | Unable to upload dataset | ### Describe the bug
Uploading a simple dataset ends with an exception
### Steps to reproduce the bug
I created a new conda env with python 3.10, pip installed datasets and:
```python
>>> from datasets import load_dataset, load_from_disk, Dataset
>>> d = Dataset.from_dict({"text": ["hello"] * 2})
>>> d.pus... | closed | https://github.com/huggingface/datasets/issues/5483 | 2023-01-28T15:18:26 | 2023-01-29T08:09:49 | 2023-01-29T08:09:49 | {
"login": "yuvalkirstain",
"id": 57996478,
"type": "User"
} | [] | false | [] |
1,560,853,137 | 5,482 | Reload features from Parquet metadata | The idea would be to allow this :
```python
ds.to_parquet("my_dataset/ds.parquet")
reloaded = load_dataset("my_dataset")
assert ds.features == reloaded.features
```
And it should also work with Image and Audio types (right now they're reloaded as a dict type)
This can be implemented by storing and reading th... | closed | https://github.com/huggingface/datasets/issues/5482 | 2023-01-28T13:12:31 | 2023-02-12T15:57:02 | 2023-02-12T15:57:02 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "good second issue",
"color": "BDE59C"
}
] | false | [] |
1,560,468,195 | 5,481 | Load a cached dataset as iterable | The idea would be to allow something like
```python
ds = load_dataset("c4", "en", as_iterable=True)
```
To be used to train models. It would load an IterableDataset from the cached Arrow files.
Cc @stas00
Edit : from the discussions we may load from cache when streaming=True | open | https://github.com/huggingface/datasets/issues/5481 | 2023-01-27T21:43:51 | 2025-06-19T19:30:52 | null | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "good second issue",
"color": "BDE59C"
}
] | false | [] |
1,560,364,866 | 5,480 | Select columns of Dataset or DatasetDict | Close #5474 and #5468. | closed | https://github.com/huggingface/datasets/pull/5480 | 2023-01-27T20:06:16 | 2023-02-13T11:10:13 | 2023-02-13T09:59:35 | {
"login": "daskol",
"id": 9336514,
"type": "User"
} | [] | true | [] |
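A usage sketch of the API this PR introduces; the method name is taken from the PR title, so treat the exact signature as an assumption:
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1], "b": [2], "c": [3]})
subset = ds.select_columns(["a", "b"])  # assumed name of the new method
print(subset.column_names)  # ['a', 'b']
```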
1,560,357,590 | 5,479 | audiofolder works on local env, but creates empty dataset in a remote one, what dependencies could I be missing/outdated | ### Describe the bug
I'm using a custom audio dataset (400+ audio files) in the correct format for audiofolder. Although loading the dataset with audiofolder works in one local setup, it doesn't in a remote one (it just creates an empty dataset). I have both ffmpeg and libsndfile installed on both computers, what cou... | closed | https://github.com/huggingface/datasets/issues/5479 | 2023-01-27T20:01:22 | 2023-01-29T05:23:14 | 2023-01-29T05:23:14 | {
"login": "jcho19",
"id": 107211437,
"type": "User"
} | [] | false | [] |
1,560,357,583 | 5,478 | Tip for recomputing metadata | From this [feedback](https://discuss.huggingface.co/t/nonmatchingsplitssizeserror/30033) on the forum, thought I'd include a tip for recomputing the metadata numbers if it is your own dataset. | closed | https://github.com/huggingface/datasets/pull/5478 | 2023-01-27T20:01:22 | 2023-01-30T19:22:21 | 2023-01-30T19:15:26 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [] | true | [] |
1,559,909,892 | 5,477 | Unpin sqlalchemy once issue is fixed | Once the source issue is fixed:
- pandas-dev/pandas#51015
we should revert the pin introduced in:
- #5476 | closed | https://github.com/huggingface/datasets/issues/5477 | 2023-01-27T15:01:55 | 2024-01-26T14:50:45 | 2024-01-26T14:50:45 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | false | [] |
1,559,594,684 | 5,476 | Pin sqlalchemy | Since the sqlalchemy update to 2.0.0, the CI started to fail: https://github.com/huggingface/datasets/actions/runs/4023742457/jobs/6914976514
the error comes from pandas: https://github.com/pandas-dev/pandas/issues/51015 | closed | https://github.com/huggingface/datasets/pull/5476 | 2023-01-27T11:26:38 | 2023-01-27T12:06:51 | 2023-01-27T11:57:48 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,559,030,149 | 5,475 | Dataset scan time is much slower than using native arrow | ### Describe the bug
I'm basically running the same scanning experiment from the tutorials https://huggingface.co/course/chapter5/4?fw=pt except now I'm comparing to a native pyarrow version.
I'm finding that the native pyarrow approach is much faster (2 orders of magnitude). Is there something I'm missing that exp... | closed | https://github.com/huggingface/datasets/issues/5475 | 2023-01-27T01:32:25 | 2023-01-30T16:17:11 | 2023-01-30T16:17:11 | {
"login": "jonny-cyberhaven",
"id": 121845112,
"type": "User"
} | [] | false | [] |
1,558,827,155 | 5,474 | Column project operation on `datasets.Dataset` | ### Feature request
There is no operation to select a subset of columns of the original dataset. The expected API follows.
```python
a = Dataset.from_dict({
    'int': [0, 1, 2],
'char': ['a', 'b', 'c'],
'none': [None] * 3,
})
b = a.project('int', 'char') # usually, .select()
print(a.column_names) # std... | closed | https://github.com/huggingface/datasets/issues/5474 | 2023-01-26T21:47:53 | 2023-02-13T09:59:37 | 2023-02-13T09:59:37 | {
"login": "daskol",
"id": 9336514,
"type": "User"
} | [
{
"name": "duplicate",
"color": "cfd3d7"
},
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,558,668,197 | 5,473 | Set dev version | null | closed | https://github.com/huggingface/datasets/pull/5473 | 2023-01-26T19:34:44 | 2023-01-26T19:47:34 | 2023-01-26T19:38:30 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,558,662,251 | 5,472 | Release: 2.9.0 | null | closed | https://github.com/huggingface/datasets/pull/5472 | 2023-01-26T19:29:42 | 2023-01-26T19:40:44 | 2023-01-26T19:33:00 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,558,557,545 | 5,471 | Add num_test_batches option | `to_tf_dataset` calls can be very costly because of the number of test batches drawn during `_get_output_signature`. The test batches are draw in order to estimate the shapes when creating the tensorflow dataset. This is necessary when the shapes can be irregular, but not in cases when the tensor shapes are the same ac... | closed | https://github.com/huggingface/datasets/pull/5471 | 2023-01-26T18:09:40 | 2023-01-27T18:16:45 | 2023-01-27T18:08:36 | {
"login": "amyeroberts",
"id": 22614925,
"type": "User"
} | [] | true | [] |
1,558,542,611 | 5,470 | Update dataset card creation | Encourages users to create a dataset card on the Hub directly with the new metadata ui + import dataset card template instead of telling users to manually create and upload one. | closed | https://github.com/huggingface/datasets/pull/5470 | 2023-01-26T17:57:51 | 2023-01-27T16:27:00 | 2023-01-27T16:20:10 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [] | true | [] |
1,558,346,906 | 5,469 | Remove deprecated `shard_size` arg from `.push_to_hub()` | The docstrings say that it was supposed to be deprecated since version 2.4.0, can we remove it? | closed | https://github.com/huggingface/datasets/pull/5469 | 2023-01-26T15:40:56 | 2023-01-26T17:37:51 | 2023-01-26T17:30:59 | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [] | true | [] |
1,558,066,625 | 5,468 | Allow opposite of remove_columns on Dataset and DatasetDict | ### Feature request
In this blog post https://huggingface.co/blog/audio-datasets, I noticed the following code:
```python
COLUMNS_TO_KEEP = ["text", "audio"]
all_columns = gigaspeech["train"].column_names
columns_to_remove = set(all_columns) - set(COLUMNS_TO_KEEP)
gigaspeech = gigaspeech.remove_columns(column... | closed | https://github.com/huggingface/datasets/issues/5468 | 2023-01-26T12:28:09 | 2023-02-13T09:59:38 | 2023-02-13T09:59:38 | {
"login": "hollance",
"id": 346853,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "good first issue",
"color": "7057ff"
}
] | false | [] |
1,557,898,273 | 5,467 | Fix conda command in readme | The [conda forge channel](https://anaconda.org/conda-forge/datasets) is lagging behind (as of right now, only 2.7.1 is available), we should recommend using the [Hugging face channel](https://anaconda.org/HuggingFace/datasets) that we are maintaining
```
conda install -c huggingface datasets
``` | closed | https://github.com/huggingface/datasets/pull/5467 | 2023-01-26T10:03:01 | 2023-09-24T10:06:59 | 2023-01-26T18:29:37 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,557,584,845 | 5,466 | remove pathlib.Path with URIs | Pathlib will convert "//" to "/" which causes retry errors when downloading from cloud storage | closed | https://github.com/huggingface/datasets/pull/5466 | 2023-01-26T03:25:45 | 2023-01-26T17:08:57 | 2023-01-26T16:59:11 | {
"login": "jonny-cyberhaven",
"id": 121845112,
"type": "User"
} | [] | true | [] |
1,557,510,618 | 5,465 | audiofolder creates empty dataset even though the dataset passed in follows the correct structure | ### Describe the bug
The structure of my dataset folder called "my_dataset" is: a `data` folder and a `metadata.csv` file.
The data folder consists of all mp3 files, and metadata.csv consists of file locations like 'data/...mp3' and transcriptions. There are 400+ mp3 files and corresponding transcriptions for my dataset.
When I run the follo... | closed | https://github.com/huggingface/datasets/issues/5465 | 2023-01-26T01:45:45 | 2023-01-26T08:48:45 | 2023-01-26T08:48:45 | {
"login": "jcho19",
"id": 107211437,
"type": "User"
} | [] | false | [] |
1,557,462,104 | 5,464 | NonMatchingChecksumError for hendrycks_test | ### Describe the bug
The checksum of the file has likely changed on the remote host.
### Steps to reproduce the bug
`dataset = nlp.load_dataset("hendrycks_test", "anatomy")`
### Expected behavior
no error thrown
### Environment info
- `datasets` version: 2.2.1
- Platform: macOS-13.1-arm64-arm-64bit
- Pyt... | closed | https://github.com/huggingface/datasets/issues/5464 | 2023-01-26T00:43:23 | 2023-01-27T05:44:31 | 2023-01-26T07:41:58 | {
"login": "sarahwie",
"id": 8027676,
"type": "User"
} | [] | false | [] |
1,557,021,041 | 5,463 | Imagefolder docs: mention support of CSV and ZIP | null | closed | https://github.com/huggingface/datasets/pull/5463 | 2023-01-25T17:24:01 | 2023-01-25T18:33:35 | 2023-01-25T18:26:15 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,556,572,144 | 5,462 | Concatenate on axis=1 with misaligned blocks | Allow to concatenate on axis 1 two tables made of misaligned blocks.
For example, if the first table has 2 row blocks of 3 rows each, and the second table has 3 row blocks of 2 rows each.
To do that, I slice the row blocks to re-align the blocks.
Fix https://github.com/huggingface/datasets/issues/5413 | closed | https://github.com/huggingface/datasets/pull/5462 | 2023-01-25T12:33:22 | 2023-01-26T09:37:00 | 2023-01-26T09:27:19 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,555,532,719 | 5,461 | Discrepancy in `nyu_depth_v2` dataset | ### Describe the bug
I think there is a discrepancy between depth map of `nyu_depth_v2` dataset [here](https://huggingface.co/docs/datasets/main/en/depth_estimation) and actual depth map. Depth values somehow got **discretized/clipped** resulting in depth maps that are different from actual ones. Here is a side-by-sid... | open | https://github.com/huggingface/datasets/issues/5461 | 2023-01-24T19:15:46 | 2023-02-06T20:52:00 | null | {
"login": "awsaf49",
"id": 36858976,
"type": "User"
} | [] | false | [] |
1,555,387,532 | 5,460 | Document that removing all the columns returns an empty document and the num_row is lost | null | closed | https://github.com/huggingface/datasets/pull/5460 | 2023-01-24T17:33:38 | 2023-01-25T16:11:10 | 2023-01-25T16:04:03 | {
"login": "thomasw21",
"id": 24695242,
"type": "User"
} | [] | true | [] |
1,555,367,504 | 5,459 | Disable aiohttp requoting of redirection URL | The library `aiohttp` performs a requoting of redirection URLs that unquotes the single quotation mark character: `%27` => `'`
This is a problem for our Hugging Face Hub, which requires the exact URL from the location header.
Specifically, in the query component of the URL (`https://netloc/path?query`), the value for `re... | closed | https://github.com/huggingface/datasets/pull/5459 | 2023-01-24T17:18:59 | 2024-09-01T18:08:31 | 2023-01-31T08:37:54 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,555,054,737 | 5,458 | slice split while streaming | ### Describe the bug
When using the `load_dataset` function with streaming set to True, slicing splits is apparently not supported.
Did I miss this in the documentation?
### Steps to reproduce the bug
`load_dataset("lhoestq/demo1",revision=None, streaming=True, split="train[:3]")`
causes ValueError: Bad split:... | closed | https://github.com/huggingface/datasets/issues/5458 | 2023-01-24T14:08:17 | 2023-01-24T15:11:47 | 2023-01-24T15:11:47 | {
"login": "SvenDS9",
"id": 122370631,
"type": "User"
} | [] | false | [] |
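Since `split="train[:3]"` is rejected with `streaming=True`, the streaming-native equivalent is `take` (and `skip`); a sketch using the repo from the report:
```python
from datasets import load_dataset

ds = load_dataset("lhoestq/demo1", streaming=True, split="train")
for example in ds.take(3):  # streaming equivalent of split="train[:3]"
    print(example)
```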
1,554,171,264 | 5,457 | prebuilt dataset relies on `downloads/extracted` | ### Describe the bug
I pre-built the dataset:
```
python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing
```
and it can be used just fine.
now I wipe out `downloads/extracted` and it no longer works.
```
rm -r ~/.cache/huggingface... | open | https://github.com/huggingface/datasets/issues/5457 | 2023-01-24T02:09:32 | 2024-11-18T07:43:51 | null | {
"login": "stas00",
"id": 10676103,
"type": "User"
} | [] | false | [] |
1,553,905,148 | 5,456 | feat: tqdm for `to_parquet` | As described in #5418
I also noticed that the `to_json` function supports multiple workers whereas `to_parquet` does not. Is that not possible/not needed with Parquet, or is it something that hasn't been implemented yet? | closed | https://github.com/huggingface/datasets/pull/5456 | 2023-01-23T22:05:38 | 2023-01-24T11:26:47 | 2023-01-24T11:17:12 | {
"login": "zanussbaum",
"id": 33707069,
"type": "User"
} | [] | true | [] |
1,553,040,080 | 5,455 | Single TQDM bar in multi-proc map | Use the "shard generator approach with periodic progress updates" (used in `save_to_disk` and multi-proc `load_dataset`) in `Dataset.map` to enable having a single TQDM progress bar in the multi-proc mode.
Closes https://github.com/huggingface/datasets/issues/771, closes https://github.com/huggingface/datasets/issue... | closed | https://github.com/huggingface/datasets/pull/5455 | 2023-01-23T12:49:40 | 2023-02-13T20:23:34 | 2023-02-13T20:16:38 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,552,890,419 | 5,454 | Save and resume the state of a DataLoader | It would be nice when using `datasets` with a PyTorch DataLoader to be able to resume a training from a DataLoader state (e.g. to resume a training that crashed)
What I have in mind (but lmk if you have other ideas or comments):
For map-style datasets, this requires to have a PyTorch Sampler state that can be sav... | open | https://github.com/huggingface/datasets/issues/5454 | 2023-01-23T10:58:54 | 2024-11-27T01:19:21 | null | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "generic discussion",
"color": "c5def5"
}
] | false | [] |
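One way to picture the map-style half of this request is a sampler that exposes its position; a hypothetical sketch, not an existing `datasets` or PyTorch API:
```python
from torch.utils.data import Sampler

class ResumableSequentialSampler(Sampler):
    """Sequential sampler whose position can be saved and restored."""

    def __init__(self, data_len: int, start: int = 0):
        self.data_len = data_len
        self.pos = start

    def __iter__(self):
        while self.pos < self.data_len:
            yield self.pos
            self.pos += 1

    def __len__(self):
        return self.data_len - self.pos  # remaining examples

    def state_dict(self):
        return {"pos": self.pos}

    def load_state_dict(self, state):
        self.pos = state["pos"]
```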
1,552,727,425 | 5,453 | Fix base directory while extracting insecure TAR files | This PR fixes the extraction of insecure TAR files by changing the base path against which TAR members are compared:
- from: "."
- to: `output_path`
This PR also adds tests for extracting insecure TAR files.
Related to:
- #5441
- #5452
@stas00 please note this PR addresses just one of the issues you pointe... | closed | https://github.com/huggingface/datasets/pull/5453 | 2023-01-23T08:57:40 | 2023-01-24T01:34:20 | 2023-01-23T10:10:42 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,552,655,939 | 5,452 | Swap log messages for symbolic/hard links in tar extractor | The log messages do not match their if-condition. This PR swaps them.
Found while investigating:
- #5441
CC: @lhoestq | closed | https://github.com/huggingface/datasets/pull/5452 | 2023-01-23T07:53:38 | 2023-01-23T09:40:55 | 2023-01-23T08:31:17 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,552,336,300 | 5,451 | ImageFolder BadZipFile: Bad offset for central directory | ### Describe the bug
I'm getting the following exception:
```
lib/python3.10/zipfile.py:1353 in _RealGetContents
1350   # self.start_dir: Position of start of central directory ... | closed | https://github.com/huggingface/datasets/issues/5451 | 2023-01-22T23:50:12 | 2023-05-23T10:35:48 | 2023-02-10T16:31:36 | {
"login": "hmartiro",
"id": 1524208,
"type": "User"
} | [] | false | [] |
1,551,109,365 | 5,450 | to_tf_dataset with a TF collator causes bizarrely persistent slowdown | ### Describe the bug
This will make more sense if you take a look at [a Colab notebook that reproduces this issue.](https://colab.research.google.com/drive/1rxyeciQFWJTI0WrZ5aojp4Ls1ut18fNH?usp=sharing)
Briefly, there are several datasets that, when you iterate over them with `to_tf_dataset` **and** a data colla... | closed | https://github.com/huggingface/datasets/issues/5450 | 2023-01-20T16:08:37 | 2023-02-13T14:13:34 | 2023-02-13T14:13:34 | {
"login": "Rocketknight1",
"id": 12866554,
"type": "User"
} | [] | false | [] |
1,550,801,453 | 5,449 | Support fsspec 2023.1.0 in CI | Support fsspec 2023.1.0 in CI.
In the 2023.1.0 fsspec release, they replaced the type of `fsspec.registry`:
- from `ReadOnlyRegistry`, with an attribute called `target`
- to `MappingProxyType`, without that attribute
Consequently, we need to change our `mock_fsspec` fixtures, that were using the `target` attrib... | closed | https://github.com/huggingface/datasets/pull/5449 | 2023-01-20T12:53:17 | 2023-01-20T13:32:50 | 2023-01-20T13:26:03 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,550,618,514 | 5,448 | Support fsspec 2023.1.0 in CI | Once we find out the root cause of:
- #5445
we should revert the temporary pin on fsspec introduced by:
- #5447 | closed | https://github.com/huggingface/datasets/issues/5448 | 2023-01-20T10:26:31 | 2023-01-20T13:26:05 | 2023-01-20T13:26:05 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,550,599,193 | 5,447 | Fix CI by temporarily pinning fsspec < 2023.1.0 | Temporarily pin fsspec < 2023.1.0
Fix #5445. | closed | https://github.com/huggingface/datasets/pull/5447 | 2023-01-20T10:11:02 | 2023-01-20T10:38:13 | 2023-01-20T10:28:43 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,550,591,588 | 5,446 | test v0.12.0.rc0 | DO NOT MERGE.
Only to test the CI.
cc @lhoestq @albertvillanova | closed | https://github.com/huggingface/datasets/pull/5446 | 2023-01-20T10:05:19 | 2023-01-20T10:43:22 | 2023-01-20T10:13:48 | {
"login": "Wauplin",
"id": 11801849,
"type": "User"
} | [] | true | [] |
1,550,588,703 | 5,445 | CI tests are broken: AttributeError: 'mappingproxy' object has no attribute 'target' | CI tests are broken, raising `AttributeError: 'mappingproxy' object has no attribute 'target'`. See: https://github.com/huggingface/datasets/actions/runs/3966497597/jobs/6797384185
```
...
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://top_level-date=2019-10-0[1-4]/*-expected_path... | closed | https://github.com/huggingface/datasets/issues/5445 | 2023-01-20T10:03:10 | 2023-01-20T10:28:44 | 2023-01-20T10:28:44 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,550,185,071 | 5,444 | info messages logged as warnings | ### Describe the bug
Code in `datasets` is using `logger.warning` when it should be using `logger.info`.
Some of these are probably a matter of opinion, but I think anything starting with `logger.warning(f"Loading cached` clearly falls into the info category.
Definitions from the Python docs for reference:
* I... | closed | https://github.com/huggingface/datasets/issues/5444 | 2023-01-20T01:19:18 | 2023-07-12T17:19:31 | 2023-07-12T17:19:31 | {
"login": "davidgilbertson",
"id": 4443482,
"type": "User"
} | [] | false | [] |
1,550,178,914 | 5,443 | Update share tutorial | Based on feedback from discussion #5423, this PR updates the sharing tutorial with a mention of writing your own dataset loading script to support more advanced dataset creation options like multiple configs.
I'll open a separate PR to update the *Create a Dataset card* with the new Hub metadata UI update 😄 | closed | https://github.com/huggingface/datasets/pull/5443 | 2023-01-20T01:09:14 | 2023-01-20T15:44:45 | 2023-01-20T15:37:30 | {
"login": "stevhliu",
"id": 59462357,
"type": "User"
} | [] | true | [] |
1,550,084,450 | 5,442 | OneDrive Integrations with HF Datasets | ### Feature request
First of all, I would like to thank the whole community who developed dataset storage and made it freely available.
How can we integrate our OneDrive account, or any other possible cloud storage (like Google Drive), with the **HF** datasets section?
For example, if I have **50GB** on my **Onedrive*... | closed | https://github.com/huggingface/datasets/issues/5442 | 2023-01-19T23:12:08 | 2023-02-24T16:17:51 | 2023-02-24T16:17:51 | {
"login": "Mohammed20201991",
"id": 59222637,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,548,417,594 | 5,441 | resolving a weird tar extract issue | ok, every so often, I have been getting a strange failure on dataset install:
```
$ python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing
No config specified, defaulting to: general-pmd-synthetic-testing/100.unique
Downloading and prep... | open | https://github.com/huggingface/datasets/pull/5441 | 2023-01-19T02:17:21 | 2023-01-20T16:49:22 | null | {
"login": "stas00",
"id": 10676103,
"type": "User"
} | [] | true | [] |
1,538,361,143 | 5,440 | Fix documentation about batch samplers | null | closed | https://github.com/huggingface/datasets/pull/5440 | 2023-01-18T17:04:27 | 2023-01-18T17:57:29 | 2023-01-18T17:50:04 | {
"login": "thomasw21",
"id": 24695242,
"type": "User"
} | [] | true | [] |
1,537,973,564 | 5,439 | [dataset request] Add Common Voice 12.0 | ### Feature request
Please add the common voice 12_0 datasets. Apart from English, a significant amount of audio-data has been added to the other minor-language datasets.
### Motivation
The dataset link:
https://commonvoice.mozilla.org/en/datasets
| closed | https://github.com/huggingface/datasets/issues/5439 | 2023-01-18T13:07:05 | 2023-07-21T14:26:10 | 2023-07-21T14:26:09 | {
"login": "MohammedRakib",
"id": 31034499,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,537,489,730 | 5,438 | Update actions/checkout in CD Conda release | This PR updates the "checkout" GitHub Action to its latest version, as previous ones are deprecated: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/ | closed | https://github.com/huggingface/datasets/pull/5438 | 2023-01-18T06:53:15 | 2023-01-18T13:49:51 | 2023-01-18T13:42:49 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,536,837,144 | 5,437 | Can't load png dataset with 4 channel (RGBA) | I try to create a dataset which contains about 9000 png images, 64x64 in size, and they are all 4-channel (RGBA). When trying to use load_dataset(), a dataset is created from only 2 images. What exactly interferes I cannot understand. | ⌀ | https://github.com/huggingface/datasets/issues/5437 | ⌀ | ⌀ | ⌀ | ⌀ | [] | false | [] |
⌀ | 5,436 | ⌀ | ...for extra speed
* Updates `actions/checkout` to `v3` (note that `v2` is [deprecated](https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead... | closed | https://github.com/huggingface/datasets/pull/5436 | 2023-01-17T15:59:50 | 2023-01-18T09:05:49 | 2023-01-18T06:29:06 | {
"login": "0x2b3bfa0",
"id": 11387611,
"type": "User"
} | [] | true | [] |
1,536,099,300 | 5,435 | Wrong statement in "Load a Dataset in Streaming mode" leads to data leakage | ### Describe the bug
In the [Split your dataset with take and skip](https://huggingface.co/docs/datasets/v1.10.2/dataset_streaming.html#split-your-dataset-with-take-and-skip), it states:
> Using take (or skip) prevents future calls to shuffle from shuffling the dataset shards order, otherwise the taken examples cou... | closed | https://github.com/huggingface/datasets/issues/5435 | 2023-01-17T10:04:16 | 2023-01-19T09:56:03 | 2023-01-19T09:56:03 | {
"login": "DanielYang59",
"id": 80093591,
"type": "User"
} | [] | false | [] |
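A sketch of the safe ordering the issue argues for with streaming datasets: shuffle first, then split with `take`/`skip`, so a later reshuffle cannot leak taken examples into the skipped set:
```python
from datasets import load_dataset

ds = load_dataset("c4", "en", split="train", streaming=True)
ds = ds.shuffle(seed=42, buffer_size=10_000)  # shuffle BEFORE splitting
val_ds = ds.take(1000)
train_ds = ds.skip(1000)
```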
1,536,090,042 | 5,434 | sample_dataset module not found | null | closed | https://github.com/huggingface/datasets/issues/5434 | 2023-01-17T09:57:54 | 2023-01-19T13:52:12 | 2023-01-19T07:55:11 | {
"login": "nickums",
"id": 15816213,
"type": "User"
} | [] | false | [] |
1,536,017,901 | 5,433 | Support latest Docker image in CI benchmarks | Once we find out the root cause of:
- #5431
we should revert the temporary pin on the Docker image version introduced by:
- #5432 | closed | https://github.com/huggingface/datasets/issues/5433 | 2023-01-17T09:06:08 | 2023-01-18T06:29:08 | 2023-01-18T06:29:08 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,535,893,019 | 5,432 | Fix CI benchmarks by temporarily pinning Docker image version | This PR fixes CI benchmarks, by temporarily pinning Docker image version, instead of "latest" tag.
It also updates the deprecated `cml-send-comment` command, using `cml comment create` instead.
Fix #5431. | closed | https://github.com/huggingface/datasets/pull/5432 | 2023-01-17T07:15:31 | 2023-01-17T08:58:22 | 2023-01-17T08:51:17 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,535,862,621 | 5,431 | CI benchmarks are broken: Unknown arguments: runnerPath, path | Our CI benchmarks are broken, raising `Unknown arguments` error: https://github.com/huggingface/datasets/actions/runs/3932397079/jobs/6724905161
```
Unknown arguments: runnerPath, path
```
Stack trace:
```
100%|██████████| 500/500 [00:01<00:00, 338.98ba/s]
Updating lock file 'dvc.lock'
To track the changes ... | closed | https://github.com/huggingface/datasets/issues/5431 | 2023-01-17T06:49:57 | 2023-01-18T06:33:24 | 2023-01-17T08:51:18 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "maintenance",
"color": "d4c5f9"
}
] | false | [] |
1,535,856,503 | 5,430 | Support Apache Beam >= 2.44.0 | Once we find out the root cause of:
- #5426
we should revert the temporary pin on apache-beam introduced by:
- #5429 | closed | https://github.com/huggingface/datasets/issues/5430 | 2023-01-17T06:42:12 | 2024-02-06T19:24:21 | 2024-02-06T19:24:21 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,535,192,687 | 5,429 | Fix CI by temporarily pinning apache-beam < 2.44.0 | Temporarily pin apache-beam < 2.44.0
Fix #5426. | closed | https://github.com/huggingface/datasets/pull/5429 | 2023-01-16T16:20:09 | 2023-01-16T16:51:42 | 2023-01-16T16:49:03 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,535,166,139 | 5,428 | Load/Save FAISS index using fsspec | ### Feature request
From what I understand `faiss` already support this [link](https://github.com/facebookresearch/faiss/wiki/Index-IO,-cloning-and-hyper-parameter-tuning#generic-io-support)
I would like to use a stream as input to `Dataset.load_faiss_index` and `Dataset.save_faiss_index`.
### Motivation
In... | closed | https://github.com/huggingface/datasets/issues/5428 | 2023-01-16T16:08:12 | 2023-03-27T15:18:22 | 2023-03-27T15:18:22 | {
"login": "Dref360",
"id": 8976546,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,535,162,889 | 5,427 | Unable to download dataset id_clickbait | ### Describe the bug
I tried to download dataset `id_clickbait`, but receive this error message.
```
FileNotFoundError: Couldn't find file at https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/k42j7x2kpn-1.zip
```
When i open the link using browser, i got this XML data.
```xml
<?xml versi... | closed | https://github.com/huggingface/datasets/issues/5427 | 2023-01-16T16:05:36 | 2023-01-18T09:51:28 | 2023-01-18T09:25:19 | {
"login": "ilos-vigil",
"id": 45941585,
"type": "User"
} | [] | false | [] |
1,535,158,555 | 5,426 | CI tests are broken: SchemaInferenceError | CI test (unit, ubuntu-latest, deps-minimum) is broken, raising a `SchemaInferenceError`: see https://github.com/huggingface/datasets/actions/runs/3930901593/jobs/6721492004
```
FAILED tests/test_beam.py::BeamBuilderTest::test_download_and_prepare_sharded - datasets.arrow_writer.SchemaInferenceError: Please pass `feat... | closed | https://github.com/huggingface/datasets/issues/5426 | 2023-01-16T16:02:07 | 2023-06-02T06:40:32 | 2023-01-16T16:49:04 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,534,581,850 | 5,425 | Sort on multiple keys with datasets.Dataset.sort() | ### Feature request
From discussion on forum: https://discuss.huggingface.co/t/datasets-dataset-sort-does-not-preserve-ordering/29065/1
`sort()` does not preserve ordering, and it does not support sorting on multiple columns, nor a key function.
The suggested solution:
> ... having something similar to panda... | closed | https://github.com/huggingface/datasets/issues/5425 | 2023-01-16T09:22:26 | 2023-02-24T16:15:11 | 2023-02-24T16:15:11 | {
"login": "rocco-fortuna",
"id": 101344863,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "good first issue",
"color": "7057ff"
}
] | false | [] |
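Until multi-column `sort` landed (see PR #5502 above), the pandas round-trip suggested in the thread looks like this sketch:
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [2, 1, 2], "b": [3, 9, 1]})
df = ds.to_pandas().sort_values(["a", "b"])
ds_sorted = Dataset.from_pandas(df, preserve_index=False)
print(ds_sorted["a"], ds_sorted["b"])  # [1, 2, 2] [9, 1, 3]
```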
1,534,394,756 | 5,424 | When applying `ReadInstruction` to custom load it's not DatasetDict but list of Dataset? | ### Describe the bug
I am loading datasets from custom `tsv` files stored locally and applying split instructions for each split. The ReadInstruction is applied correctly, but while I was expecting a `DatasetDict`, I instead get a list of `Dataset` objects.
### Steps to reproduce the bug
Steps to reproduc... | closed | https://github.com/huggingface/datasets/issues/5424 | 2023-01-16T06:54:28 | 2023-02-24T16:19:00 | 2023-02-24T16:19:00 | {
"login": "macabdul9",
"id": 25720695,
"type": "User"
} | [] | false | [] |
1,533,385,239 | 5,422 | Datasets load error for saved github issues | ### Describe the bug
Loading a previously downloaded & saved dataset as described in the HuggingFace course:
issues_dataset = load_dataset("json", data_files="issues/datasets-issues.jsonl", split="train")
Gives this error:
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset... | open | https://github.com/huggingface/datasets/issues/5422 | 2023-01-14T17:29:38 | 2023-09-14T11:39:57 | null | {
"login": "folterj",
"id": 7360564,
"type": "User"
} | [] | false | [] |
1,532,278,307 | 5,421 | Support case-insensitive Hub dataset name in load_dataset | ### Feature request
The dataset name on the Hub is case-insensitive (see https://github.com/huggingface/moon-landing/pull/2399, internal issue), i.e., https://huggingface.co/datasets/GLUE redirects to https://huggingface.co/datasets/glue.
Ideally, we could load the glue dataset using the following:
```
from d... | closed | https://github.com/huggingface/datasets/issues/5421 | 2023-01-13T13:07:07 | 2023-01-13T20:12:32 | 2023-01-13T20:12:32 | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,532,265,742 | 5,420 | ci: 🎡 remove two obsolete issue templates | add-dataset is not needed anymore since the "canonical" datasets are on the Hub. And dataset-viewer is managed within the datasets-server project.
See https://github.com/huggingface/datasets/issues/new/choose
<img width="1245" alt="Capture d’écran 2023-01-13 à 13 59 58" src="https://user-images.githubuserconten... | closed | https://github.com/huggingface/datasets/pull/5420 | 2023-01-13T12:58:43 | 2023-01-13T13:36:00 | 2023-01-13T13:29:01 | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [] | true | [] |
1,531,999,850 | 5,419 | label_column='labels' in datasets.TextClassification and 'label' or 'label_ids' in transformers.DataColator | ### Describe the bug
When preparing a dataset for a task using `datasets.TextClassification`, the output feature is named `labels`. When preparing the trainer using `transformers.DataCollator`, the default column name is `label` for a binary problem or `label_ids` for a multi-class problem.
It is required to rename the column... | closed | https://github.com/huggingface/datasets/issues/5419 | 2023-01-13T09:40:07 | 2023-07-21T14:27:08 | 2023-07-21T14:27:08 | {
"login": "CreatixEA",
"id": 172385,
"type": "User"
} | [] | false | [] |
1,530,111,184 | 5,418 | Add ProgressBar for `to_parquet` | ### Feature request
Add a progress bar for `Dataset.to_parquet`, similar to how `to_json` works.
### Motivation
It's a bit frustrating to not know how long a dataset will take to write to file and if it's stuck or not without a progress bar
### Your contribution
Sure I can help if needed | closed | https://github.com/huggingface/datasets/issues/5418 | 2023-01-12T05:06:20 | 2023-01-24T18:18:24 | 2023-01-24T18:18:24 | {
"login": "zanussbaum",
"id": 33707069,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,526,988,113 | 5,416 | Fix RuntimeError: Sharding is ambiguous for this dataset | This PR fixes the RuntimeError: Sharding is ambiguous for this dataset.
The error for ambiguous sharding will be raised only if num_proc > 1.
Fix #5415, fix #5414.
Fix https://huggingface.co/datasets/ami/discussions/3. | closed | https://github.com/huggingface/datasets/pull/5416 | 2023-01-10T08:43:19 | 2023-01-18T17:12:17 | 2023-01-18T14:09:02 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,526,904,861 | 5,415 | RuntimeError: Sharding is ambiguous for this dataset | ### Describe the bug
When loading some datasets, a RuntimeError is raised.
For example, for "ami" dataset: https://huggingface.co/datasets/ami/discussions/3
```
.../huggingface/datasets/src/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size)
... | closed | https://github.com/huggingface/datasets/issues/5415 | 2023-01-10T07:36:11 | 2023-01-18T14:09:04 | 2023-01-18T14:09:03 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | false | [] |