| id (int64) | number (int64) | title (string, length 1–290) | body (string, length 0–228k, nullable) | state (string, 2 classes) | html_url (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s], nullable) | user (dict) | labels (list, length 0–4) | is_pull_request (bool, 2 classes) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,425,460,168 | 7,067 | Convert_to_parquet fails for datasets with multiple configs | If the dataset has multiple configs, when using the `datasets-cli convert_to_parquet` command (to avoid issues with the data viewer caused by loading scripts), the conversion process only successfully converts the data corresponding to the first config. When it starts converting the second config, it throws an error:
... | closed | https://github.com/huggingface/datasets/issues/7067 | 2024-07-23T15:09:33 | 2024-07-30T10:51:02 | 2024-07-30T10:51:02 | {
"login": "HuangZhen02",
"id": 97585031,
"type": "User"
} | [] | false | [] |
2,425,125,160 | 7,066 | One subset per file in repo ? | Right now we consider all the files of a dataset to be the same data, e.g.
```
single_subset_dataset/
├── train0.jsonl
├── train1.jsonl
└── train2.jsonl
```
but in cases like this, each file is actually a different subset of the dataset and should be loaded separately
```
many_subsets_dataset/
├── animals.jso... | open | https://github.com/huggingface/datasets/issues/7066 | 2024-07-23T12:43:59 | 2025-06-26T08:24:50 | null | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | false | [] |
2,424,734,953 | 7,065 | Cannot get item after loading from disk and then converting to iterable. | ### Describe the bug
The dataset generated from local files works fine.
```py
root = "/home/data/train"
file_list1 = glob(os.path.join(root, "*part1.flac"))
file_list2 = glob(os.path.join(root, "*part2.flac"))
ds = (
Dataset.from_dict({"part1": file_list1, "part2": file_list2})
.cast_column("part1", Au... | open | https://github.com/huggingface/datasets/issues/7065 | 2024-07-23T09:37:56 | 2024-07-23T09:37:56 | null | {
"login": "happyTonakai",
"id": 21305646,
"type": "User"
} | [] | false | [] |
2,424,613,104 | 7,064 | Add `batch` method to `Dataset` class | This PR introduces a new `batch` method to the `Dataset` class, aligning its functionality with the `IterableDataset.batch()` method (implemented in #7054). The implementation also uses the existing `map` method for efficient batching of examples.
Key changes:
- Add `batch` method to `Dataset` class in `arrow_da... | closed | https://github.com/huggingface/datasets/pull/7064 | 2024-07-23T08:40:43 | 2024-07-25T13:51:25 | 2024-07-25T13:45:20 | {
"login": "lappemic",
"id": 61876623,
"type": "User"
} | [] | true | [] |
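The batching the PR above describes can be sketched in plain Python — a hypothetical illustration of grouping examples into fixed-size batches, not the library's actual `map`-based implementation:

```python
# Group a sequence of examples into fixed-size batches; the trailing batch
# may be smaller when the length is not a multiple of batch_size.
def batch(examples, batch_size):
    return [examples[i:i + batch_size] for i in range(0, len(examples), batch_size)]

print(batch([1, 2, 3, 4, 5], 2))  # -> [[1, 2], [3, 4], [5]]
```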
2,424,488,648 | 7,063 | Add `batch` method to `Dataset` | ### Feature request
Add a `batch` method to the Dataset class, similar to the one recently implemented for `IterableDataset` in PR #7054.
### Motivation
A batched iteration speeds up data loading significantly (see e.g. #6279)
### Your contribution
I plan to open a PR to implement this. | closed | https://github.com/huggingface/datasets/issues/7063 | 2024-07-23T07:36:59 | 2024-07-25T13:45:21 | 2024-07-25T13:45:21 | {
"login": "lappemic",
"id": 61876623,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,424,467,484 | 7,062 | Avoid calling http_head for non-HTTP URLs | Avoid calling `http_head` for non-HTTP URLs, by adding an `else` statement.
Currently, it makes an unnecessary HTTP call (which adds latency) for non-HTTP protocols, like FTP, S3,...
I discovered this while working on an unrelated issue. | closed | https://github.com/huggingface/datasets/pull/7062 | 2024-07-23T07:25:09 | 2024-07-23T14:28:27 | 2024-07-23T14:21:08 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,423,786,881 | 7,061 | Custom Dataset | Still Raise Error while handling errors in _generate_examples | ### Describe the bug
I followed this [example](https://discuss.huggingface.co/t/error-handling-in-iterabledataset/72827/3) to handle errors in a custom dataset. I am writing a dataset script which reads jsonl files, and I need to handle errors and continue reading files without raising an exception and exiting the execution.
`... | open | https://github.com/huggingface/datasets/issues/7061 | 2024-07-22T21:18:12 | 2024-09-09T14:48:07 | null | {
"login": "hahmad2008",
"id": 68266028,
"type": "User"
} | [] | false | [] |
2,423,188,419 | 7,060 | WebDataset BuilderConfig | This PR adds `WebDatasetConfig`.
Closes #7055 | closed | https://github.com/huggingface/datasets/pull/7060 | 2024-07-22T15:41:07 | 2024-07-23T13:28:44 | 2024-07-23T13:28:44 | {
"login": "hlky",
"id": 106811348,
"type": "User"
} | [] | true | [] |
2,422,827,892 | 7,059 | None values are skipped when reading jsonl in subobjects | ### Describe the bug
I have been fighting against my machine since this morning only to find out this is some kind of a bug.
When loading a dataset composed of `metadata.jsonl`, if you have nullable values (Optional[str]), they can be ignored by the parser, shifting things around.
E.g., let's take this example
... | open | https://github.com/huggingface/datasets/issues/7059 | 2024-07-22T13:02:42 | 2024-07-22T13:02:53 | null | {
"login": "PonteIneptique",
"id": 1929830,
"type": "User"
} | [] | false | [] |
2,422,560,355 | 7,058 | New feature type: Document | It would be useful for PDF.
https://github.com/huggingface/dataset-viewer/issues/2991#issuecomment-2242656069 | open | https://github.com/huggingface/datasets/issues/7058 | 2024-07-22T10:49:20 | 2024-07-22T10:49:20 | null | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [] | false | [] |
2,422,498,520 | 7,057 | Update load_hub.mdx | null | closed | https://github.com/huggingface/datasets/pull/7057 | 2024-07-22T10:17:46 | 2024-07-22T10:34:14 | 2024-07-22T10:28:10 | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [] | true | [] |
2,422,192,257 | 7,056 | Make `BufferShuffledExamplesIterable` resumable | This PR aims to implement a resumable `BufferShuffledExamplesIterable`.
Instead of saving the entire buffer content, which is very memory-intensive, the newly implemented `BufferShuffledExamplesIterable` saves only the minimal state necessary for recovery, e.g., the random generator states and the state of the first e... | closed | https://github.com/huggingface/datasets/pull/7056 | 2024-07-22T07:50:02 | 2025-01-31T05:34:20 | 2025-01-31T05:34:19 | {
"login": "yzhangcs",
"id": 18402347,
"type": "User"
} | [] | true | [] |
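The minimal-state idea in the PR above can be illustrated with a plain-Python sketch (hypothetical; the real implementation checkpoints more than just the RNG state, e.g. the position of the first unconsumed example):

```python
import random

# Checkpoint only the random generator's state instead of the memory-heavy
# shuffle buffer, then restore it and reproduce the same stream of draws.
rng = random.Random(42)
checkpoint = rng.getstate()  # small, serializable state

first_run = [rng.randint(0, 9) for _ in range(5)]

rng.setstate(checkpoint)     # "resume" from the checkpoint
second_run = [rng.randint(0, 9) for _ in range(5)]

print(first_run == second_run)  # -> True
```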
2,421,708,891 | 7,055 | WebDataset with different prefixes are unsupported | ### Describe the bug
Consider a WebDataset with multiple images for each item where the number of images may vary: [example](https://huggingface.co/datasets/bigdata-pw/fashion-150k)
Due to this [code](https://github.com/huggingface/datasets/blob/87f4c2088854ff33e817e724e75179e9975c1b02/src/datasets/packaged_modules... | closed | https://github.com/huggingface/datasets/issues/7055 | 2024-07-22T01:14:19 | 2024-07-24T13:26:30 | 2024-07-23T13:28:46 | {
"login": "hlky",
"id": 106811348,
"type": "User"
} | [] | false | [] |
2,418,548,995 | 7,054 | Add batching to `IterableDataset` | I've taken a try at implementing a batched `IterableDataset` as requested in issue #6279. This PR adds a new `BatchedExamplesIterable` class and a `.batch()` method to the `IterableDataset` class.
The main changes are:
1. A new `BatchedExamplesIterable` that groups examples into batches.
2. A `.batch()` method for... | closed | https://github.com/huggingface/datasets/pull/7054 | 2024-07-19T10:11:47 | 2024-07-23T13:25:13 | 2024-07-23T10:34:28 | {
"login": "lappemic",
"id": 61876623,
"type": "User"
} | [] | true | [] |
2,416,423,791 | 7,053 | Datasets.datafiles resolve_pattern `TypeError: can only concatenate tuple (not "str") to tuple` | ### Describe the bug
in data_files.py, line 332,
`fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options)`
If we run the code on AWS, as fs.protocol will be a tuple like: `('file', 'local')`
So, `isinstance(fs.protocol, str) == False` and
`protocol_prefix = fs.protocol + "://" if fs.protocol != ... | closed | https://github.com/huggingface/datasets/issues/7053 | 2024-07-18T13:42:35 | 2024-07-18T15:17:42 | 2024-07-18T15:16:18 | {
"login": "MatthewYZhang",
"id": 48289218,
"type": "User"
} | [] | false | [] |
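A minimal sketch of the fix direction described in issue 7053 (hypothetical helper name; fsspec filesystems may report `protocol` as either a string or a tuple like `('file', 'local')`, so concatenating `"://"` directly raises a TypeError):

```python
# Normalize fs.protocol to a string before building the prefix.
def protocol_prefix(protocol):
    if isinstance(protocol, tuple):
        protocol = protocol[0]  # take the primary protocol name
    return "" if protocol == "file" else protocol + "://"

print(protocol_prefix(("file", "local")))  # -> ""
print(protocol_prefix("s3"))               # -> "s3://"
```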
2,411,682,730 | 7,052 | Adding `Music` feature for symbolic music modality (MIDI, abc) | ⚠️ (WIP) ⚠️
### What this PR does
This PR adds a `Music` feature for the symbolic music modality, in particular [MIDI](https://en.wikipedia.org/wiki/Musical_Instrument_Digital_Interface) and [abc](https://en.wikipedia.org/wiki/ABC_notation) files.
### Motivations
These two file formats are widely used in th... | closed | https://github.com/huggingface/datasets/pull/7052 | 2024-07-16T17:26:04 | 2024-07-29T06:47:55 | 2024-07-29T06:47:55 | {
"login": "Natooz",
"id": 56734983,
"type": "User"
} | [] | true | [] |
2,409,353,929 | 7,051 | How to set_epoch with interleave_datasets? | Let's say I have dataset A which has 100k examples, and dataset B which has 100m examples.
I want to train on an interleaved dataset of A+B, with stopping_strategy='all_exhausted' so dataset B doesn't repeat any examples. But every time A is exhausted I want it to be reshuffled (e.g. by calling set_epoch)
Of course I... | closed | https://github.com/huggingface/datasets/issues/7051 | 2024-07-15T18:24:52 | 2024-08-05T20:58:04 | 2024-08-05T20:58:04 | {
"login": "jonathanasdf",
"id": 511073,
"type": "User"
} | [] | false | [] |
2,409,048,733 | 7,050 | add checkpoint and resume title in docs | (minor) just to make it more prominent in the docs page for the soon-to-be-released new torchdata | closed | https://github.com/huggingface/datasets/pull/7050 | 2024-07-15T15:38:04 | 2024-07-15T16:06:15 | 2024-07-15T15:59:56 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,408,514,366 | 7,049 | Save nparray as list | ### Describe the bug
When I use the `map` function to convert images into features, `datasets` saves the NumPy array as a list. Some people use the `set_format` function to convert the column back, but doesn't this lose precision?
### Steps to reproduce the bug
the map function
```python
def convert_image_to_features(inst, ... | closed | https://github.com/huggingface/datasets/issues/7049 | 2024-07-15T11:36:11 | 2024-07-18T11:33:34 | 2024-07-18T11:33:34 | {
"login": "Sakurakdx",
"id": 48399040,
"type": "User"
} | [] | false | [] |
2,408,487,547 | 7,048 | ImportError: numpy.core.multiarray when using `filter` | ### Describe the bug
I can't apply the filter method on my dataset.
### Steps to reproduce the bug
The following snippet generates a bug:
```python
from datasets import load_dataset
ami = load_dataset('kamilakesbi/ami', 'ihm')
ami['train'].filter(
lambda example: example["file_name"] == 'EN2001a'
... | closed | https://github.com/huggingface/datasets/issues/7048 | 2024-07-15T11:21:04 | 2024-07-16T10:11:25 | 2024-07-16T10:11:25 | {
"login": "kamilakesbi",
"id": 45195979,
"type": "User"
} | [] | false | [] |
2,406,495,084 | 7,047 | Save Dataset as Sharded Parquet | ### Feature request
`to_parquet` currently saves the dataset as one massive, monolithic parquet file, rather than as several small parquet files. It should shard large datasets automatically.
### Motivation
This default behavior makes me very sad because a program I ran for 6 hours saved its results using `to_... | open | https://github.com/huggingface/datasets/issues/7047 | 2024-07-12T23:47:51 | 2024-07-17T12:07:08 | null | {
"login": "tom-p-reichel",
"id": 43631024,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
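The sharded-parquet request above boils down to partitioning rows into contiguous ranges, one output file per range. The boundary arithmetic can be sketched in plain Python (hypothetical helper, not the library's code; `Dataset.shard(..., contiguous=True)` partitions similarly):

```python
# Split n_rows into num_shards nearly equal contiguous (start, stop) ranges;
# one parquet file could then be written per range.
def shard_bounds(n_rows, num_shards):
    div, mod = divmod(n_rows, num_shards)
    bounds, start = [], 0
    for i in range(num_shards):
        size = div + (1 if i < mod else 0)  # spread the remainder over the first shards
        bounds.append((start, start + size))
        start += size
    return bounds

print(shard_bounds(10, 3))  # -> [(0, 4), (4, 7), (7, 10)]
```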
2,405,485,582 | 7,046 | Support librosa and numpy 2.0 for Python 3.10 | Support librosa and numpy 2.0 for Python 3.10 by installing soxr 0.4.0b1 pre-release:
- https://github.com/dofuuz/python-soxr/releases/tag/v0.4.0b1
- https://github.com/dofuuz/python-soxr/issues/28 | closed | https://github.com/huggingface/datasets/pull/7046 | 2024-07-12T12:42:47 | 2024-07-12T13:04:40 | 2024-07-12T12:58:17 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,405,447,858 | 7,045 | Fix tensorflow min version depending on Python version | Fix tensorflow min version depending on Python version.
Related to:
- #6991 | closed | https://github.com/huggingface/datasets/pull/7045 | 2024-07-12T12:20:23 | 2024-07-12T12:38:53 | 2024-07-12T12:33:00 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,405,002,987 | 7,044 | Mark tests that require librosa | Mark tests that require `librosa`.
Note that `librosa` is an optional dependency (installed with `audio` option) and we should be able to test environments without that library installed. This is the case if we want to test Numpy 2.0, which is currently incompatible with `librosa` due to its dependency on `soxr`:
-... | closed | https://github.com/huggingface/datasets/pull/7044 | 2024-07-12T08:06:59 | 2024-07-12T09:06:32 | 2024-07-12T09:00:09 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,404,951,714 | 7,043 | Add decorator as explicit test dependency | Add decorator as explicit test dependency.
We use the `decorator` library in our CI tests since PR:
- #4845
However, we did not add it as an explicit test requirement; we depended on it indirectly through other libraries' dependencies.
I discovered this while testing Numpy 2.0 and removing incompatible librarie... | closed | https://github.com/huggingface/datasets/pull/7043 | 2024-07-12T07:35:23 | 2024-07-12T08:12:55 | 2024-07-12T08:07:10 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,404,605,836 | 7,042 | Improved the tutorial by adding a link for loading datasets | Improved the tutorial by letting readers know about loading datasets with common files and including a link. I left the local files section alone because the methods were already listed with code snippets. | closed | https://github.com/huggingface/datasets/pull/7042 | 2024-07-12T03:49:54 | 2024-08-15T10:07:44 | 2024-08-15T10:01:59 | {
"login": "AmboThom",
"id": 41874659,
"type": "User"
} | [] | true | [] |
2,404,576,038 | 7,041 | `sort` after `filter` unreasonably slow | ### Describe the bug
as the title says ...
### Steps to reproduce the bug
`sort` seems to be normal.
```python
from datasets import Dataset
import random
nums = [{"k":random.choice(range(0,1000))} for _ in range(100000)]
ds = Dataset.from_list(nums)
print("start sort")
ds = ds.sort("k")
print("f... | closed | https://github.com/huggingface/datasets/issues/7041 | 2024-07-12T03:29:27 | 2025-04-29T09:49:25 | 2025-04-29T09:49:25 | {
"login": "Tobin-rgb",
"id": 56711045,
"type": "User"
} | [] | false | [] |
2,402,918,335 | 7,040 | load `streaming=True` dataset with downloaded cache | ### Describe the bug
We built a dataset which contains several hdf5 files and wrote a script using `h5py` to generate the dataset. The hdf5 files are large and the processed dataset cache takes even more disk space, so we hope to try a streaming iterable dataset. Unfortunately, `h5py` can't convert a remote URL into a hdf5 f... | open | https://github.com/huggingface/datasets/issues/7040 | 2024-07-11T11:14:13 | 2024-07-11T14:11:56 | null | {
"login": "wanghaoyucn",
"id": 39429965,
"type": "User"
} | [] | false | [] |
2,402,403,390 | 7,039 | Fix export to JSON when dataset larger than batch size | Fix export to JSON (`lines=False`) when dataset larger than batch size.
Fix #7037. | open | https://github.com/huggingface/datasets/pull/7039 | 2024-07-11T06:52:22 | 2024-09-28T06:10:00 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,400,192,419 | 7,037 | A bug of Dataset.to_json() function | ### Describe the bug
When using the Dataset.to_json() function, an unexpected error occurs if the parameter is set to lines=False. The stored data should be in the form of a list, but it actually turns into multiple lists, which causes an error when reading the data again.
The reason is that to_json() writes to the f... | open | https://github.com/huggingface/datasets/issues/7037 | 2024-07-10T09:11:22 | 2024-09-22T13:16:07 | null | {
"login": "LinglingGreat",
"id": 26499566,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
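The failure mode described in issue 7037 can be reproduced in miniature without `datasets` — a hypothetical illustration of batch-wise versus merged JSON writing:

```python
import json

# Writing each batch as its own JSON list yields concatenated lists like
# "[...][...]", which is not valid JSON; a single merged list is.
batches = [[{"a": 1}], [{"a": 2}]]

broken = "".join(json.dumps(b) for b in batches)          # "[...][...]"
merged = json.dumps([row for b in batches for row in b])  # one list

print(merged)  # -> [{"a": 1}, {"a": 2}]
try:
    json.loads(broken)
except json.JSONDecodeError:
    print("batch-wise output is not valid JSON")
```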
2,400,035,672 | 7,036 | Fix doc generation when NamedSplit is used as parameter default value | Fix doc generation when `NamedSplit` is used as parameter default value.
Fix #7035. | closed | https://github.com/huggingface/datasets/pull/7036 | 2024-07-10T07:58:46 | 2024-07-26T07:58:00 | 2024-07-26T07:51:52 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,400,021,225 | 7,035 | Docs are not generated when a parameter defaults to a NamedSplit value | While generating the docs, we get an error when some parameter defaults to a `NamedSplit` value, like:
```python
def call_function(split=Split.TRAIN):
...
```
The error is: ValueError: Equality not supported between split train and <class 'inspect._empty'>
See: https://github.com/huggingface/datasets/action... | closed | https://github.com/huggingface/datasets/issues/7035 | 2024-07-10T07:51:24 | 2024-07-26T07:51:53 | 2024-07-26T07:51:53 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "maintenance",
"color": "d4c5f9"
}
] | false | [] |
2,397,525,974 | 7,034 | chore: fix typos in docs | null | closed | https://github.com/huggingface/datasets/pull/7034 | 2024-07-09T08:35:05 | 2024-08-13T08:22:25 | 2024-08-13T08:16:22 | {
"login": "hattizai",
"id": 150505746,
"type": "User"
} | [] | true | [] |
2,397,419,768 | 7,033 | `from_generator` does not allow to specify the split name | ### Describe the bug
I'm building train, dev, and test using `from_generator`; however, in all three cases, the logger prints `Generating train split:`
It's not possible to change the split name since it seems to be hardcoded: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/generator/g... | closed | https://github.com/huggingface/datasets/issues/7033 | 2024-07-09T07:47:58 | 2024-07-26T12:56:16 | 2024-07-26T09:31:56 | {
"login": "pminervini",
"id": 227357,
"type": "User"
} | [] | false | [] |
2,395,531,699 | 7,032 | Register `.zstd` extension for zstd-compressed files | For example, https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0 dataset files have `.zstd` extension which is currently ignored (only `.zst` is registered). | closed | https://github.com/huggingface/datasets/pull/7032 | 2024-07-08T12:39:50 | 2024-07-12T15:07:03 | 2024-07-12T15:07:03 | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [] | true | [] |
2,395,401,692 | 7,031 | CI quality is broken: use ruff check instead | CI quality is broken: https://github.com/huggingface/datasets/actions/runs/9838873879/job/27159697027
```
error: `ruff <path>` has been removed. Use `ruff check <path>` instead.
``` | closed | https://github.com/huggingface/datasets/issues/7031 | 2024-07-08T11:42:24 | 2024-07-08T11:47:29 | 2024-07-08T11:47:29 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | false | [] |
2,393,411,631 | 7,030 | Add option to disable progress bar when reading a dataset ("Loading dataset from disk") | ### Feature request
Add an option in load_from_disk to disable the progress bar even if the number of files is larger than 16.
### Motivation
I am reading a lot of datasets, which creates lots of logs.
<img width="1432" alt="image" src="https://github.com/huggingface/datasets/assets/57996478/8d4bbf03-6b89-... | closed | https://github.com/huggingface/datasets/issues/7030 | 2024-07-06T05:43:37 | 2024-07-13T14:35:59 | 2024-07-13T14:35:59 | {
"login": "yuvalkirstain",
"id": 57996478,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,391,366,696 | 7,029 | load_dataset on AWS lambda throws OSError(30, 'Read-only file system') error | ### Describe the bug
I'm using AWS lambda to run a python application. I run the `load_dataset` function with cache_dir="/tmp" and it still throws the OSError(30, 'Read-only file system') error. I even updated all the HF envs to point to the /tmp dir but the issue still persists. I can confirm that I can write to /... | open | https://github.com/huggingface/datasets/issues/7029 | 2024-07-04T19:15:16 | 2024-07-17T12:44:03 | null | {
"login": "sugam-nexusflow",
"id": 171606538,
"type": "User"
} | [] | false | [] |
2,391,077,531 | 7,028 | Fix ci | ...after last pr errors | closed | https://github.com/huggingface/datasets/pull/7028 | 2024-07-04T15:11:08 | 2024-07-04T15:26:35 | 2024-07-04T15:19:16 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,391,013,330 | 7,027 | Missing line from previous pr | null | closed | https://github.com/huggingface/datasets/pull/7027 | 2024-07-04T14:34:29 | 2024-07-04T14:40:46 | 2024-07-04T14:34:36 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,390,983,889 | 7,026 | Fix check_library_imports | move it to after the `trust_remote_code` check
Note that it only affects local datasets that already exist on disk, not datasets loaded from HF directly | closed | https://github.com/huggingface/datasets/pull/7026 | 2024-07-04T14:18:38 | 2024-07-04T14:28:36 | 2024-07-04T14:20:02 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,390,488,546 | 7,025 | feat: support non-streamable arrow file binary format | Support Arrow files (`.arrow`) that are in non-streamable binary file formats. | closed | https://github.com/huggingface/datasets/pull/7025 | 2024-07-04T10:11:12 | 2024-07-31T06:15:50 | 2024-07-31T06:09:31 | {
"login": "kmehant",
"id": 15800200,
"type": "User"
} | [] | true | [] |
2,390,141,626 | 7,024 | Streaming dataset not returning data | ### Describe the bug
I'm deciding to post here because I'm still not sure what the issue is, or if I am using IterableDatasets wrongly.
I'm following the guide on here https://huggingface.co/learn/cookbook/en/fine_tuning_code_llm_on_single_gpu pretty much to a tee and have verified that it works when I'm fine-tuning ... | open | https://github.com/huggingface/datasets/issues/7024 | 2024-07-04T07:21:47 | 2024-07-04T07:21:47 | null | {
"login": "johnwee1",
"id": 91670254,
"type": "User"
} | [] | false | [] |
2,388,090,424 | 7,023 | Remove dead code for pyarrow < 15.0.0 | Remove dead code for pyarrow < 15.0.0.
Code is dead since the merge of:
- #6892
Fix #7022. | closed | https://github.com/huggingface/datasets/pull/7023 | 2024-07-03T09:05:03 | 2024-07-03T09:24:46 | 2024-07-03T09:17:35 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,388,064,650 | 7,022 | There is dead code after we require pyarrow >= 15.0.0 | There are code lines specific for pyarrow versions < 15.0.0.
However, we require pyarrow >= 15.0.0 since the merge of PR:
- #6892
Those code lines are now dead code and should be removed. | closed | https://github.com/huggingface/datasets/issues/7022 | 2024-07-03T08:52:57 | 2024-07-03T09:17:36 | 2024-07-03T09:17:36 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "maintenance",
"color": "d4c5f9"
}
] | false | [] |
2,387,948,935 | 7,021 | Fix casting list array to fixed size list | Fix casting list array to fixed size list.
This bug was introduced in [datasets-2.17.0](https://github.com/huggingface/datasets/releases/tag/2.17.0) by PR: https://github.com/huggingface/datasets/pull/6283/files#diff-1cb2b66aa9311d729cfd83013dad56cf5afcda35b39dfd0bfe9c3813a049eab0R1899
- #6283
Fix #7020. | closed | https://github.com/huggingface/datasets/pull/7021 | 2024-07-03T07:58:57 | 2024-07-03T08:47:49 | 2024-07-03T08:41:55 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,387,940,990 | 7,020 | Casting list array to fixed size list raises error | When trying to cast a list array to fixed size list, an AttributeError is raised:
> AttributeError: 'pyarrow.lib.FixedSizeListType' object has no attribute 'length'
Steps to reproduce the bug:
```python
import pyarrow as pa
from datasets.table import array_cast
arr = pa.array([[0, 1]])
array_cast(arr, pa.lis... | closed | https://github.com/huggingface/datasets/issues/7020 | 2024-07-03T07:54:49 | 2024-07-03T08:41:56 | 2024-07-03T08:41:56 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
2,385,793,897 | 7,019 | Support pyarrow large_list | Allow Polars round trip by supporting pyarrow large list.
Fix #6834, fix #6984.
Supersede and close #4800, close #6835, close #6986. | closed | https://github.com/huggingface/datasets/pull/7019 | 2024-07-02T09:52:52 | 2024-08-12T14:49:45 | 2024-08-12T14:43:45 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,383,700,286 | 7,018 | `load_dataset` fails to load dataset saved by `save_to_disk` | ### Describe the bug
This code fails to load the dataset it just saved:
```python
from datasets import load_dataset
from transformers import AutoTokenizer
MODEL = "google-bert/bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
dataset = load_dataset("yelp_review_full")
def tokenize_functi... | open | https://github.com/huggingface/datasets/issues/7018 | 2024-07-01T12:19:19 | 2025-05-24T05:21:12 | null | {
"login": "sliedes",
"id": 2307997,
"type": "User"
} | [] | false | [] |
2,383,647,419 | 7,017 | Support fsspec 2024.6.1 | Support fsspec 2024.6.1. | closed | https://github.com/huggingface/datasets/pull/7017 | 2024-07-01T11:57:15 | 2024-07-01T12:12:32 | 2024-07-01T12:06:24 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,383,262,608 | 7,016 | `drop_duplicates` method | ### Feature request
`drop_duplicates` method for huggingface datasets (similar in simplicity to the `pandas` one)
### Motivation
Ease of use
### Your contribution
I don't think I am good enough to help | open | https://github.com/huggingface/datasets/issues/7016 | 2024-07-01T09:01:06 | 2024-07-20T06:51:58 | null | {
"login": "MohamedAliRashad",
"id": 26205298,
"type": "User"
} | [
{
"name": "duplicate",
"color": "cfd3d7"
},
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
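A minimal sketch of what the requested `drop_duplicates` could do (hypothetical function; in practice it could be built on `Dataset.filter` with a set of already-seen key values):

```python
# Keep the first occurrence of each key value, pandas-style.
def drop_duplicates(rows, key):
    seen, out = set(), []
    for row in rows:
        if row[key] not in seen:
            seen.add(row[key])
            out.append(row)
    return out

rows = [{"text": "a"}, {"text": "b"}, {"text": "a"}]
print(drop_duplicates(rows, "text"))  # -> [{'text': 'a'}, {'text': 'b'}]
```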
2,383,151,220 | 7,015 | add split argument to Generator | ## Actual
When creating a multi-split dataset using generators like
```python
datasets.DatasetDict({
"val": datasets.Dataset.from_generator(
generator=generator_val,
features=features
),
"test": datasets.Dataset.from_generator(
generator=generator_test,
features=features,
... | closed | https://github.com/huggingface/datasets/pull/7015 | 2024-07-01T08:09:25 | 2024-07-26T09:37:51 | 2024-07-26T09:31:56 | {
"login": "piercus",
"id": 156736,
"type": "User"
} | [] | true | [] |
2,382,985,847 | 7,014 | Skip faiss tests on Windows to avoid running CI for 360 minutes | Skip faiss tests on Windows to avoid running CI for 360 minutes.
Fix #7013.
Revert once the underlying issue is fixed. | closed | https://github.com/huggingface/datasets/pull/7014 | 2024-07-01T06:45:35 | 2024-07-01T07:16:36 | 2024-07-01T07:10:27 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,382,976,738 | 7,013 | CI is broken for faiss tests on Windows: node down: Not properly terminated | Faiss tests on Windows make the CI run indefinitely until maximum execution time (360 minutes) is reached.
See: https://github.com/huggingface/datasets/actions/runs/9712659783
```
test (integration, windows-latest, deps-minimum)
The job running on runner GitHub Actions 60 has exceeded the maximum execution time o... | closed | https://github.com/huggingface/datasets/issues/7013 | 2024-07-01T06:40:03 | 2024-07-01T07:10:28 | 2024-07-01T07:10:28 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "maintenance",
"color": "d4c5f9"
}
] | false | [] |
2,380,934,047 | 7,012 | Raise an error when a nested object is expected to be a mapping that displays the object | null | closed | https://github.com/huggingface/datasets/pull/7012 | 2024-06-28T18:10:59 | 2024-07-11T02:06:16 | 2024-07-11T02:06:16 | {
"login": "sebbyjp",
"id": 22511797,
"type": "User"
} | [] | true | [] |
Re-enable raising error from huggingface-hub FutureWarning in tests, now that the fix in transformers
- https://github.com/huggingface/transformers/pull/31007
was just released yesterday in transformers-4.42.0: https://github.com/huggingface/transformers/releases/tag/v4.42.0
Fix #7010. | closed | https://github.com/huggingface/datasets/pull/7011 | 2024-06-28T07:28:32 | 2024-06-28T12:25:25 | 2024-06-28T12:19:28 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,379,777,480 | 7,010 | Re-enable raising error from huggingface-hub FutureWarning in CI | Re-enable raising error from huggingface-hub FutureWarning in CI, which was disabled by PR:
- #6876
Note that this can only be done once transformers releases the fix:
- https://github.com/huggingface/transformers/pull/31007 | closed | https://github.com/huggingface/datasets/issues/7010 | 2024-06-28T07:23:40 | 2024-06-28T12:19:30 | 2024-06-28T12:19:29 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "maintenance",
"color": "d4c5f9"
}
] | false | [] |
2,379,619,132 | 7,009 | Support ruff 0.5.0 in CI | Support ruff 0.5.0 in CI and revert:
- #7007
Fix #7008. | closed | https://github.com/huggingface/datasets/pull/7009 | 2024-06-28T05:37:36 | 2024-06-28T07:17:26 | 2024-06-28T07:11:17 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,379,591,141 | 7,008 | Support ruff 0.5.0 in CI | Support ruff 0.5.0 in CI.
Also revert:
- #7007 | closed | https://github.com/huggingface/datasets/issues/7008 | 2024-06-28T05:11:26 | 2024-06-28T07:11:18 | 2024-06-28T07:11:18 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "maintenance",
"color": "d4c5f9"
}
] | false | [] |
2,379,588,676 | 7,007 | Fix CI by temporarily pinning ruff < 0.5.0 | As a hotfix for CI, temporarily pin ruff upper version < 0.5.0.
Fix #7006.
Revert once root cause is fixed. | closed | https://github.com/huggingface/datasets/pull/7007 | 2024-06-28T05:09:17 | 2024-06-28T05:31:21 | 2024-06-28T05:25:17 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,379,581,543 | 7,006 | CI is broken after ruff-0.5.0: E721 | After ruff-0.5.0 release (https://github.com/astral-sh/ruff/releases/tag/0.5.0), our CI is broken due to E721 rule.
See: https://github.com/huggingface/datasets/actions/runs/9707641618/job/26793170961?pr=6983
> src/datasets/features/features.py:844:12: E721 Use `is` and `is not` for type comparisons, or `isinstanc... | closed | https://github.com/huggingface/datasets/issues/7006 | 2024-06-28T05:03:28 | 2024-06-28T05:25:18 | 2024-06-28T05:25:18 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "maintenance",
"color": "d4c5f9"
}
] | false | [] |
2,378,424,349 | 7,005 | EmptyDatasetError: The directory at /metadata.jsonl doesn't contain any data files | ### Describe the bug
While trying to load a custom dataset from a jsonl file, I get the error: "metadata.jsonl doesn't contain any data files"
### Steps to reproduce the bug
This is my [metadata_v2.jsonl](https://github.com/user-attachments/files/16016011/metadata_v2.json) file. I have this file in the folder with all ... | closed | https://github.com/huggingface/datasets/issues/7005 | 2024-06-27T15:08:26 | 2024-06-28T09:56:19 | 2024-06-28T09:56:19 | {
"login": "Aki1991",
"id": 117731544,
"type": "User"
} | [] | false | [] |
2,376,064,264 | 7,004 | Fix WebDatasets KeyError for user-defined Features when a field is missing in an example | Fixes: https://github.com/huggingface/datasets/issues/6900
Not sure if this needs any additional stuff before merging | closed | https://github.com/huggingface/datasets/pull/7004 | 2024-06-26T18:58:05 | 2024-06-29T00:15:49 | 2024-06-28T09:30:12 | {
"login": "ProGamerGov",
"id": 10626398,
"type": "User"
} | [] | true | [] |
2,373,084,132 | 7,003 | minor fix for bfloat16 | null | closed | https://github.com/huggingface/datasets/pull/7003 | 2024-06-25T16:10:04 | 2024-06-25T16:16:11 | 2024-06-25T16:10:10 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,373,010,351 | 7,002 | Fix dump of bfloat16 torch tensor | close https://github.com/huggingface/datasets/issues/7000 | closed | https://github.com/huggingface/datasets/pull/7002 | 2024-06-25T15:38:09 | 2024-06-25T16:10:16 | 2024-06-25T15:51:52 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,372,930,879 | 7,001 | Datasetbuilder Local Download FileNotFoundError | ### Describe the bug
So I was trying to download a dataset and save it as Parquet, following the [tutorial](https://huggingface.co/docs/datasets/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage) from Hugging Face. However, during the execution I face a FileNotFoundError.
I debugged the code and it seems... | open | https://github.com/huggingface/datasets/issues/7001 | 2024-06-25T15:02:34 | 2024-06-25T15:21:19 | null | {
"login": "purefall",
"id": 12601271,
"type": "User"
} | [] | false | [] |
2,372,887,585 | 7,000 | IterableDataset: Unsupported ScalarType BFloat16 | ### Describe the bug
`IterableDataset.from_generator` crashes when using BFloat16:
```
File "/usr/local/lib/python3.11/site-packages/datasets/utils/_dill.py", line 169, in _save_torchTensor
args = (obj.detach().cpu().numpy(),)
^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Got unsupported ScalarType ... | closed | https://github.com/huggingface/datasets/issues/7000 | 2024-06-25T14:43:26 | 2024-06-25T16:04:00 | 2024-06-25T15:51:53 | {
"login": "stoical07",
"id": 170015089,
"type": "User"
} | [] | false | [] |
2,372,124,589 | 6,999 | Remove tasks | Remove tasks, as part of the 3.0 release. | closed | https://github.com/huggingface/datasets/pull/6999 | 2024-06-25T09:06:16 | 2024-08-21T09:07:07 | 2024-08-21T09:01:18 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,371,973,926 | 6,998 | Fix tests using hf-internal-testing/librispeech_asr_dummy | Fix tests using hf-internal-testing/librispeech_asr_dummy once that dataset has been converted to Parquet.
Fix #6997. | closed | https://github.com/huggingface/datasets/pull/6998 | 2024-06-25T07:59:44 | 2024-06-25T08:22:38 | 2024-06-25T08:13:42 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,371,966,127 | 6,997 | CI is broken for tests using hf-internal-testing/librispeech_asr_dummy | CI is broken: https://github.com/huggingface/datasets/actions/runs/9657882317/job/26637998686?pr=6996
```
FAILED tests/test_inspect.py::test_get_dataset_config_names[hf-internal-testing/librispeech_asr_dummy-expected4] - AssertionError: assert ['clean'] == ['clean', 'other']
Right contains one more item: 'othe... | closed | https://github.com/huggingface/datasets/issues/6997 | 2024-06-25T07:55:44 | 2024-06-25T08:13:43 | 2024-06-25T08:13:43 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "maintenance",
"color": "d4c5f9"
}
] | false | [] |
2,371,841,671 | 6,996 | Remove deprecated code | Remove deprecated code, as part of the 3.0 release.
First merge:
- [x] #6983
- [x] #6987
- [x] #6999 | closed | https://github.com/huggingface/datasets/pull/6996 | 2024-06-25T06:54:40 | 2024-08-21T09:42:52 | 2024-08-21T09:35:06 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,370,713,475 | 6,995 | ImportError when importing datasets.load_dataset | ### Describe the bug
I encountered an ImportError while trying to import `load_dataset` from the `datasets` module in Hugging Face. The error message indicates a problem with importing 'CommitInfo' from 'huggingface_hub'.
### Steps to reproduce the bug
1. pip install git+https://github.com/huggingface/datasets
2. f... | closed | https://github.com/huggingface/datasets/issues/6995 | 2024-06-24T17:07:22 | 2024-11-14T01:42:09 | 2024-06-25T06:11:37 | {
"login": "Leo-Lsc",
"id": 124846947,
"type": "User"
} | [] | false | [] |
2,370,491,689 | 6,994 | Fix incorrect rank value in data splitting | Fix #6990. | closed | https://github.com/huggingface/datasets/pull/6994 | 2024-06-24T15:07:47 | 2024-06-26T04:37:35 | 2024-06-25T16:19:17 | {
"login": "yzhangcs",
"id": 18402347,
"type": "User"
} | [] | true | [] |
2,370,444,104 | 6,993 | less script docs | + mark as legacy in some parts of the docs since we'll not build new features for script datasets | closed | https://github.com/huggingface/datasets/pull/6993 | 2024-06-24T14:45:28 | 2024-07-08T13:10:53 | 2024-06-27T09:31:21 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,367,890,622 | 6,992 | Dataset with streaming doesn't work with proxy | ### Describe the bug
I'm currently trying to stream data using datasets since the dataset is too big, but it hangs indefinitely without loading the first batch. I use AIMOS, which is a supercomputer that uses a proxy to connect to the internet. I assume it has to do with the network configuration. I've already set up both... | open | https://github.com/huggingface/datasets/issues/6992 | 2024-06-22T16:12:08 | 2024-06-25T15:43:05 | null | {
"login": "YHL04",
"id": 57779173,
"type": "User"
} | [] | false | [] |
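The body above is truncated, so the exact setup is unknown, but in proxied environments the usual first step is exporting the standard proxy variables before loading anything; note that the async HTTP stack used for streaming may additionally need `trust_env` enabled to honor them. A sketch with a placeholder proxy URL:

```python
import os

# Hypothetical proxy endpoint; replace with the cluster's real address.
proxy = "http://proxy.example.org:8080"

# requests reads the upper-case variants; some tools read the lower-case ones.
for var in ("HTTP_PROXY", "HTTPS_PROXY", "http_proxy", "https_proxy"):
    os.environ[var] = proxy
```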
2,367,711,094 | 6,991 | Unblock NumPy 2.0 | Fixes https://github.com/huggingface/datasets/issues/6980 | closed | https://github.com/huggingface/datasets/pull/6991 | 2024-06-22T09:19:53 | 2024-12-25T17:57:34 | 2024-07-12T12:04:53 | {
"login": "NeilGirdhar",
"id": 730137,
"type": "User"
} | [] | true | [] |
2,366,660,785 | 6,990 | Problematic rank after calling `split_dataset_by_node` twice | ### Describe the bug
I'm trying to split an `IterableDataset` with `split_dataset_by_node`.
But when splitting an already split dataset, the resulting `rank` is greater than `world_size`.
### Steps to reproduce the bug
Here is the minimal code for reproduction:
```py
>>> from datasets import load_dataset
>>... | closed | https://github.com/huggingface/datasets/issues/6990 | 2024-06-21T14:25:26 | 2024-06-25T16:19:19 | 2024-06-25T16:19:19 | {
"login": "yzhangcs",
"id": 18402347,
"type": "User"
} | [] | false | [] |
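The invariant the issue describes can be stated arithmetically: nesting a `(rank2, world2)` split inside a `(rank1, world1)` split should behave like a single flat split over `world1 * world2` shards, with every flat rank strictly below the flat world size. A stdlib illustration of that composition (an illustration of the expected behavior, not the library's internal code):

```python
def compose(rank1, world1, rank2, world2):
    """Flatten two nested (rank, world_size) splits into a single one."""
    return rank1 * world2 + rank2, world1 * world2

# Splitting 2 ways, then each half 3 ways, must cover ranks 0..5 exactly once.
flat = sorted(compose(r1, 2, r2, 3) for r1 in range(2) for r2 in range(3))
```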
2,365,556,449 | 6,989 | cache in nfs error | ### Describe the bug
- When reading a dataset, a cache is generated in the `~/.cache/huggingface/datasets` directory
- When using `.map` and `.filter` operations, a runtime cache is generated in the `/tmp/hf_datasets-*` directory
- The default is to use the path from `tempfile.tempdir`
- If I modify this path to the N... | open | https://github.com/huggingface/datasets/issues/6989 | 2024-06-21T02:09:22 | 2025-01-29T11:44:04 | null | {
"login": "simplew2011",
"id": 66729924,
"type": "User"
} | [] | false | [] |
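Since the list above is cut off, the exact NFS failure is unknown, but both cache locations it mentions are controllable via environment variables, which is the usual way to move them off NFS. A sketch with hypothetical local scratch paths:

```python
import os

# Hypothetical local (non-NFS) scratch locations.
os.environ["HF_DATASETS_CACHE"] = "/local/scratch/hf_datasets"

# tempfile caches its directory on first use, so set TMPDIR early.
os.environ["TMPDIR"] = "/local/scratch/tmp"
```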
2,364,129,918 | 6,988 | [`feat`] Move dataset card creation to method for easier overriding | Hello!
## Pull Request overview
* Move dataset card creation to method for easier overriding
## Details
It's common for me to fully automatically download, reformat, and upload a dataset (e.g. see https://huggingface.co/datasets?other=sentence-transformers), but one aspect that I cannot easily automate is the d... | open | https://github.com/huggingface/datasets/pull/6988 | 2024-06-20T10:47:57 | 2024-06-21T16:04:58 | null | {
"login": "tomaarsen",
"id": 37621491,
"type": "User"
} | [] | true | [] |
2,363,728,190 | 6,987 | Remove beam | Remove beam, as part of the 3.0 release. | closed | https://github.com/huggingface/datasets/pull/6987 | 2024-06-20T07:27:14 | 2024-06-26T19:41:55 | 2024-06-26T19:35:42 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,362,584,179 | 6,986 | Add large_list type support in string_to_arrow | add large_list type support in string_to_arrow() and _arrow_to_datasets_dtype() in features.py
Fix #6984
| closed | https://github.com/huggingface/datasets/pull/6986 | 2024-06-19T14:54:25 | 2024-08-12T14:43:48 | 2024-08-12T14:43:47 | {
"login": "arthasking123",
"id": 16257131,
"type": "User"
} | [] | true | [] |
2,362,378,276 | 6,985 | AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType' | ### Describe the bug
I have been struggling with this for two days; any help would be appreciated. Python 3.10
```
from setfit import SetFitModel
from huggingface_hub import login
access_token_read = "cccxxxccc"
# Authenticate with the Hugging Face Hub
login(token=access_token_read)
# Load the models fr... | closed | https://github.com/huggingface/datasets/issues/6985 | 2024-06-19T13:22:28 | 2025-03-14T18:47:53 | 2024-06-25T05:40:51 | {
"login": "firmai",
"id": 26666267,
"type": "User"
} | [] | false | [] |
2,362,143,554 | 6,984 | Convert polars DataFrame back to datasets | ### Feature request
This returns an error:
```python
from datasets import Dataset
dsdf = Dataset.from_dict({"x": [[1, 2], [3, 4, 5]], "y": ["a", "b"]})
Dataset.from_polars(dsdf.to_polars())
```
ValueError: Arrow type large_list<item: int64> does not have a datasets dtype equivalent.
### Motivation
When datasets... | closed | https://github.com/huggingface/datasets/issues/6984 | 2024-06-19T11:38:48 | 2024-08-12T14:43:46 | 2024-08-12T14:43:46 | {
"login": "ljw20180420",
"id": 38550511,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,361,806,201 | 6,983 | Remove metrics | Remove all metrics, as part of the 3.0 release.
Note that they have been deprecated since version 2.5.0. | closed | https://github.com/huggingface/datasets/pull/6983 | 2024-06-19T09:08:55 | 2024-06-28T06:57:38 | 2024-06-28T06:51:30 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,361,661,469 | 6,982 | cannot split dataset when using load_dataset | ### Describe the bug
When I use the load_dataset method to load mozilla-foundation/common_voice_7_0, it successfully downloads and extracts the dataset, but it cannot generate the Arrow file.
This bug happens on my server and my laptop, as in #6906, but it does not happen in Google Colab. I have worked on it for da... | closed | https://github.com/huggingface/datasets/issues/6982 | 2024-06-19T08:07:16 | 2024-07-08T06:20:16 | 2024-07-08T06:20:16 | {
"login": "cybest0608",
"id": 17721894,
"type": "User"
} | [] | false | [] |
2,361,520,022 | 6,981 | Update docs on trust_remote_code defaults to False | Update docs on trust_remote_code defaults to False.
The docs needed to be updated due to this PR:
- #6954 | closed | https://github.com/huggingface/datasets/pull/6981 | 2024-06-19T07:12:21 | 2024-06-19T14:32:59 | 2024-06-19T14:26:37 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,360,909,930 | 6,980 | Support NumPy 2.0 | ### Feature request
Support NumPy 2.0.
### Motivation
NumPy introduces the Array API, which bridges the gap between machine learning libraries. Many clients of HuggingFace are eager to start using the Array API.
Besides that, NumPy 2 provides a cleaner interface than NumPy 1.
### Tasks
NumPy 2.0 was ... | closed | https://github.com/huggingface/datasets/issues/6980 | 2024-06-18T23:30:22 | 2024-07-12T12:04:54 | 2024-07-12T12:04:53 | {
"login": "NeilGirdhar",
"id": 730137,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,360,175,363 | 6,979 | How can I load partial parquet files only? | I have a HUGE dataset about 14TB, I unable to download all parquet all. I just take about 100 from it.
dataset = load_dataset("xx/", data_files="data/train-001*-of-00314.parquet")
How can I just using 000 - 100 from a 00314 from all partially?
I search whole net didn't found a solution, **this is stupid if the... | closed | https://github.com/huggingface/datasets/issues/6979 | 2024-06-18T15:44:16 | 2024-06-21T17:09:32 | 2024-06-21T13:32:50 | {
"login": "lucasjinreal",
"id": 21303438,
"type": "User"
} | [] | false | [] |
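One way to do this is to pass an explicit list of shard file names to `data_files` instead of a glob, so only those files are downloaded. Building the list is plain string formatting; the `data/train-XXXXX-of-00314.parquet` pattern below mirrors the question, and the repo path stays a placeholder:

```python
# First 100 of the 314 shards.
shards = [f"data/train-{i:05d}-of-00314.parquet" for i in range(100)]

# Hypothetical usage (needs network access and a real repo id):
# from datasets import load_dataset
# dataset = load_dataset("xx/", data_files=shards)
```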
2,359,511,469 | 6,978 | Fix regression for pandas < 2.0.0 in JSON loader | A regression was introduced for pandas < 2.0.0 in PR:
- #6914
As described in pandas docs, the `dtype_backend` parameter was first added in pandas 2.0.0: https://pandas.pydata.org/docs/reference/api/pandas.read_json.html
This PR fixes the regression by passing (or not) the `dtype_backend` parameter depending on ... | closed | https://github.com/huggingface/datasets/pull/6978 | 2024-06-18T10:26:34 | 2024-06-19T06:23:24 | 2024-06-19T05:50:18 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
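The pattern behind the fix — pass a keyword only on pandas versions that accept it — can be sketched without pandas itself; the helper below is illustrative, not the PR's actual code:

```python
def read_json_kwargs(pandas_version: str) -> dict:
    """Build pandas.read_json kwargs; dtype_backend only exists in pandas >= 2.0.0."""
    major = int(pandas_version.split(".")[0])
    kwargs = {"lines": True}
    if major >= 2:
        kwargs["dtype_backend"] = "pyarrow"
    return kwargs
```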
2,359,295,045 | 6,977 | load json file error with v2.20.0 | ### Describe the bug
```
load_dataset(path="json", data_files="./test.json")
```
```
Generating train split: 0 examples [00:00, ? examples/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/json/json.py", line 132, in _generate_tables
pa_table = p... | closed | https://github.com/huggingface/datasets/issues/6977 | 2024-06-18T08:41:01 | 2024-06-18T10:06:10 | 2024-06-18T10:06:09 | {
"login": "xiaoyaolangzhi",
"id": 15037766,
"type": "User"
} | [] | false | [] |
2,357,107,203 | 6,976 | Ensure compatibility with numpy 2.0.0 | Following the conversion guide, copy=False is no longer required and will result in an error: https://numpy.org/devdocs/numpy_2_0_migration_guide.html#adapting-to-changes-in-the-copy-keyword.
The following fix should resolve the issue.
The error was found during testing on the MTEB repository, e.g. [here](https://github.c... | closed | https://github.com/huggingface/datasets/pull/6976 | 2024-06-17T11:29:22 | 2024-06-19T14:30:32 | 2024-06-19T14:04:34 | {
"login": "KennethEnevoldsen",
"id": 23721977,
"type": "User"
} | [] | true | [] |
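The migration rule at stake: under NumPy 2, `np.array(x, copy=False)` raises if a copy cannot be avoided, so code that merely wants to skip unnecessary copies should use `np.asarray`. A small sketch, assuming numpy is installed:

```python
import numpy as np

data = [1.0, 2.0, 3.0]

# Copies only when it has to, and never raises; this expresses the intent
# that np.array(data, copy=False) carried under NumPy 1.x.
arr = np.asarray(data, dtype=np.float64)
```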
2,357,003,959 | 6,975 | Set temporary numpy upper version < 2.0.0 to fix CI | Set temporary numpy upper version < 2.0.0 to fix CI. See: https://github.com/huggingface/datasets/actions/runs/9546031216/job/26308072017
```
A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.0 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.... | closed | https://github.com/huggingface/datasets/pull/6975 | 2024-06-17T10:36:54 | 2024-06-17T12:49:53 | 2024-06-17T12:43:56 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,355,517,362 | 6,973 | IndexError during training with Squad dataset and T5-small model | ### Describe the bug
I am encountering an IndexError while training a T5-small model on the Squad dataset using the transformers and datasets libraries. The error occurs even with a minimal reproducible example, suggesting a potential bug or incompatibility.
### Steps to reproduce the bug
1. Install the required libr... | closed | https://github.com/huggingface/datasets/issues/6973 | 2024-06-16T07:53:54 | 2024-07-01T11:25:40 | 2024-07-01T11:25:40 | {
"login": "ramtunguturi36",
"id": 151521233,
"type": "User"
} | [] | false | [] |
2,353,531,912 | 6,972 | Fix webdataset pickling | ...by making tracked iterables picklable.
This is important to make streaming datasets compatible with multiprocessing e.g. for parallel data loading | closed | https://github.com/huggingface/datasets/pull/6972 | 2024-06-14T14:43:02 | 2024-06-14T15:43:43 | 2024-06-14T15:37:35 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,351,830,856 | 6,971 | packaging: Remove useless dependencies | Revert changes in #6396 and #6404. CVE-2023-47248 has been fixed since PyArrow v14.0.1. Meanwhile the Python requirements require `pyarrow>=15.0.0`. | closed | https://github.com/huggingface/datasets/pull/6971 | 2024-06-13T18:43:43 | 2024-06-14T14:03:34 | 2024-06-14T13:57:24 | {
"login": "daskol",
"id": 9336514,
"type": "User"
} | [] | true | [] |
2,351,380,029 | 6,970 | Set dev version | null | closed | https://github.com/huggingface/datasets/pull/6970 | 2024-06-13T14:59:45 | 2024-06-13T15:06:18 | 2024-06-13T14:59:56 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,351,351,436 | 6,969 | Release: 2.20.0 | null | closed | https://github.com/huggingface/datasets/pull/6969 | 2024-06-13T14:48:20 | 2024-06-13T15:04:39 | 2024-06-13T14:55:53 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
2,351,331,417 | 6,968 | Use `HF_HUB_OFFLINE` instead of `HF_DATASETS_OFFLINE` | To use `datasets` offline, one can use the `HF_DATASETS_OFFLINE` environment variable. This PR makes `HF_HUB_OFFLINE` the recommended environment variable for offline training. Goal is to be more consistent with the rest of HF ecosystem and have a single config value to set.
The changes are backward-compatible meani... | closed | https://github.com/huggingface/datasets/pull/6968 | 2024-06-13T14:39:40 | 2024-06-13T17:31:37 | 2024-06-13T17:25:37 | {
"login": "Wauplin",
"id": 11801849,
"type": "User"
} | [] | true | [] |
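In practice the change means one switch for the whole HF stack, set before any Hub access happens:

```python
import os

# Preferred single flag; must be set before the libraries hit the Hub.
os.environ["HF_HUB_OFFLINE"] = "1"

# The datasets-specific HF_DATASETS_OFFLINE flag keeps working for
# backward compatibility, but is no longer the recommended spelling.
offline = os.environ["HF_HUB_OFFLINE"] == "1"
```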
2,349,146,398 | 6,967 | Method to load Laion400m | ### Feature request
Large datasets like Laion400m are provided as embeddings. The methods provided in load_dataset are not straightforward for loading embedding files, e.g. img_emb_XX.npy; XX = 0 to 99.
### Motivation
Trial and experimentation are the key pivot of HF. It would be great if HF could load embeddings... | open | https://github.com/huggingface/datasets/issues/6967 | 2024-06-12T16:04:04 | 2024-06-12T16:04:04 | null | {
"login": "humanely",
"id": 6862868,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
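While there is no dedicated loader, `.npy` embedding shards like `img_emb_XX.npy` can at least be opened lazily with numpy's memory mapping, which keeps trial and experimentation cheap even on large shards. A self-contained sketch with a synthetic stand-in shard (numpy assumed installed):

```python
import os
import tempfile

import numpy as np

# Synthetic stand-in for one img_emb_XX.npy shard.
path = os.path.join(tempfile.mkdtemp(), "img_emb_00.npy")
np.save(path, np.ones((4, 512), dtype=np.float32))

# mmap_mode="r" maps the file instead of reading it fully into RAM.
emb = np.load(path, mmap_mode="r")
```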
2,348,934,466 | 6,966 | Remove underlines between badges | ## Before:
<img width="935" alt="image" src="https://github.com/huggingface/datasets/assets/35881688/93666e72-059b-4180-9e1d-ff176a3d9dac">
## After:
<img width="956" alt="image" src="https://github.com/huggingface/datasets/assets/35881688/75df7c3e-f473-44f0-a872-eeecf6a85fe2">
| closed | https://github.com/huggingface/datasets/pull/6966 | 2024-06-12T14:32:11 | 2024-06-19T14:16:21 | 2024-06-19T14:10:11 | {
"login": "andrewhong04",
"id": 35881688,
"type": "User"
} | [] | true | [] |