Dataset schema (column, type, observed min / max). Each record below lists its fields in this order:

column           type           min                  max
id               int64          599M                 3.26B
number           int64          1                    7.7k
title            string         1 char               290 chars
body             string         0 chars              228k chars
state            string         2 classes
html_url         string         46 chars             51 chars
created_at       timestamp[s]   2020-04-14 10:18:02  2025-07-23 08:04:53
updated_at       timestamp[s]   2020-04-27 16:04:17  2025-07-23 18:53:44
closed_at        timestamp[s]   2020-04-14 12:01:40  2025-07-23 16:44:42
user             dict
labels           list           0 items              4 items
is_pull_request  bool           2 classes
comments         list           0 items              0 items
1,148,186,272
3,780
Add ElkarHizketak v1.0 dataset
null
closed
https://github.com/huggingface/datasets/pull/3780
2022-02-23T14:44:17
2022-03-04T19:04:29
2022-03-04T19:04:29
{ "login": "antxa", "id": 7646055, "type": "User" }
[]
true
[]
1,148,050,636
3,779
Update manual download URL in newsroom dataset
Fix #3778.
closed
https://github.com/huggingface/datasets/pull/3779
2022-02-23T12:49:07
2022-02-23T13:26:41
2022-02-23T13:26:40
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,147,898,946
3,778
Not be able to download dataset - "Newsroom"
Hello, I tried to download the **newsroom** dataset but it didn't work for me. It told me to **download it manually**, but the manual-download link also didn't work: it shows some ad or something. If anybody has solved this issue, please help me out, or if somebody has this dataset, please share your google driv...
closed
https://github.com/huggingface/datasets/issues/3778
2022-02-23T10:15:50
2022-02-23T17:05:04
2022-02-23T13:26:40
{ "login": "Darshan2104", "id": 61326242, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,147,232,875
3,777
Start removing canonical datasets logic
I updated the source code and the documentation to start removing the "canonical datasets" logic. Indeed this makes the documentation confusing and we don't want this distinction anymore in the future. Ideally users should share their datasets on the Hub directly. ### Changes - the documentation about dataset ...
closed
https://github.com/huggingface/datasets/pull/3777
2022-02-22T18:23:30
2022-02-24T15:04:37
2022-02-24T15:04:36
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,146,932,871
3,776
Allow download only some files from the Wikipedia dataset
**Is your feature request related to a problem? Please describe.** The Wikipedia dataset can be really big. This is a problem if you want to use it locally on a laptop with the Apache Beam `DirectRunner`, even if your laptop has a considerable amount of memory (e.g. 32 GB). **Describe the solution you'd like** I...
open
https://github.com/huggingface/datasets/issues/3776
2022-02-22T13:46:41
2022-02-22T14:50:02
null
{ "login": "jvanz", "id": 1514798, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,146,849,454
3,775
Update gigaword card and info
Reported on the forum: https://discuss.huggingface.co/t/error-loading-dataset/14999
closed
https://github.com/huggingface/datasets/pull/3775
2022-02-22T12:27:16
2022-02-28T11:35:24
2022-02-28T11:35:24
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,146,843,177
3,774
Fix reddit_tifu data URL
Fix #3773.
closed
https://github.com/huggingface/datasets/pull/3774
2022-02-22T12:21:15
2022-02-22T12:38:45
2022-02-22T12:38:44
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,146,758,335
3,773
Checksum mismatch for the reddit_tifu dataset
## Describe the bug A checksum mismatch occurs when downloading the reddit_tifu data (both long & short). ## Steps to reproduce the bug reddit_tifu_dataset = load_dataset('reddit_tifu', 'long') ## Expected results The expected result is for the dataset to be downloaded and cached locally. ## Actual results File "...
closed
https://github.com/huggingface/datasets/issues/3773
2022-02-22T10:57:07
2022-02-25T19:27:49
2022-02-22T12:38:44
{ "login": "anna-kay", "id": 56791604, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,146,718,630
3,772
Fix: dataset name is stored in keys
null
closed
https://github.com/huggingface/datasets/pull/3772
2022-02-22T10:20:37
2022-02-22T11:08:34
2022-02-22T11:08:33
{ "login": "thomasw21", "id": 24695242, "type": "User" }
[]
true
[]
1,146,561,140
3,771
Fix DuplicatedKeysError on msr_sqa dataset
Fix #3770.
closed
https://github.com/huggingface/datasets/pull/3771
2022-02-22T07:44:24
2022-02-22T08:12:40
2022-02-22T08:12:39
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,146,336,667
3,770
DuplicatedKeysError on msr_sqa dataset
### Describe the bug Failure to generate dataset msr_sqa because of duplicate keys. ### Steps to reproduce the bug ``` from datasets import load_dataset load_dataset("msr_sqa") ``` ### Expected results The examples keys should be unique. **Actual results** ``` >>> load_dataset("msr_sqa") Downloading: 6...
closed
https://github.com/huggingface/datasets/issues/3770
2022-02-22T00:43:33
2022-02-22T08:12:39
2022-02-22T08:12:39
{ "login": "kolk", "id": 9049591, "type": "User" }
[]
false
[]
1,146,258,023
3,769
`dataset = dataset.map()` causes faiss index lost
## Describe the bug Assigning the resulting dataset back to the original dataset causes loss of the faiss index. ## Steps to reproduce the bug `my_dataset` is a regular loaded dataset. It's part of a custom dataset structure ```python self.dataset.add_faiss_index('embeddings') self.dataset.list_indexes() # ['embeddin...
open
https://github.com/huggingface/datasets/issues/3769
2022-02-21T21:59:23
2022-06-27T14:56:29
null
{ "login": "Oaklight", "id": 13076552, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
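The reassignment pitfall in #3769 can be shown with a toy stand-in (plain Python, not the actual `datasets` implementation): `map()` builds a new object, so an in-memory index attached to the old object does not survive `dataset = dataset.map(...)`.

```python
class ToyDataset:
    """Toy stand-in for a dataset object; not the real datasets API."""

    def __init__(self, rows):
        self.rows = rows
        self.indexes = {}          # in-memory search indexes (e.g. faiss)

    def add_index(self, name):
        self.indexes[name] = object()

    def map(self, fn):
        # returns a *fresh* object: in-memory indexes are not copied over
        return ToyDataset([fn(r) for r in self.rows])

ds = ToyDataset([1, 2, 3])
ds.add_index("embeddings")
ds = ds.map(lambda r: r + 1)       # reassignment drops the index
print(ds.indexes)                  # -> {}
```

Keeping a separate name for the mapped dataset (`mapped = ds.map(...)`) leaves the indexed original intact.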
1,146,102,442
3,768
Fix HfFileSystem docstring
null
closed
https://github.com/huggingface/datasets/pull/3768
2022-02-21T18:14:40
2022-02-22T09:13:03
2022-02-22T09:13:02
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,146,036,648
3,767
Expose method and fix param
A fix + expose a new method, following https://github.com/huggingface/datasets/pull/3670
closed
https://github.com/huggingface/datasets/pull/3767
2022-02-21T16:57:47
2022-02-22T08:35:03
2022-02-22T08:35:02
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
1,145,829,289
3,766
Fix head_qa data URL
Fix #3758.
closed
https://github.com/huggingface/datasets/pull/3766
2022-02-21T13:52:50
2022-02-21T14:39:20
2022-02-21T14:39:19
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,145,126,881
3,765
Update URL for tagging app
This PR updates the URL for the tagging app to be the one on Spaces.
closed
https://github.com/huggingface/datasets/pull/3765
2022-02-20T20:34:31
2022-02-20T20:36:10
2022-02-20T20:36:06
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
1,145,107,050
3,764
!
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
closed
https://github.com/huggingface/datasets/issues/3764
2022-02-20T19:05:43
2022-02-21T08:55:58
2022-02-21T08:55:58
{ "login": "LesiaFedorenko", "id": 77545307, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,145,099,878
3,763
It's not possible download `20200501.pt` dataset
## Describe the bug The dataset `20200501.pt` is broken. The available datasets: https://dumps.wikimedia.org/ptwiki/ ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner') ``` ## Expected results I expect t...
closed
https://github.com/huggingface/datasets/issues/3763
2022-02-20T18:34:58
2022-02-21T12:06:12
2022-02-21T09:25:06
{ "login": "jvanz", "id": 1514798, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,144,849,557
3,762
`Dataset.class_encode` should support custom class names
I can make a PR, just wanted approval before starting. **Is your feature request related to a problem? Please describe.** It is often the case that classes are not ordered in alphabetical order. The current `class_encode_column` sorts the classes before indexing. https://github.com/huggingface/datasets/blob/master/sr...
closed
https://github.com/huggingface/datasets/issues/3762
2022-02-19T21:21:45
2022-02-21T12:16:35
2022-02-21T12:16:35
{ "login": "Dref360", "id": 8976546, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
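The difference #3762 asks about can be sketched standalone (toy functions, not the `datasets` implementation): sorting class names before indexing versus accepting a caller-supplied order.

```python
def encode_sorted(values):
    # current behavior described above: class names are sorted before indexing
    names = sorted(set(values))
    index = {n: i for i, n in enumerate(names)}
    return [index[v] for v in values], names

def encode_with_names(values, names):
    # requested behavior: the caller supplies the class order
    index = {n: i for i, n in enumerate(names)}
    return [index[v] for v in values], names

ids, names = encode_sorted(["low", "high", "medium"])
print(names)    # -> ['high', 'low', 'medium'] (alphabetical, not label order)

ids2, names2 = encode_with_names(["low", "high", "medium"],
                                 names=["low", "medium", "high"])
print(ids2)     # -> [0, 2, 1]
```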
1,144,830,702
3,761
Know your data for HF hub
**Is your feature request related to a problem? Please describe.** Would be great to see be able to understand datasets with the goal of improving data quality, and helping mitigate fairness and bias issues. **Describe the solution you'd like** Something like https://knowyourdata.withgoogle.com/ for HF hub
closed
https://github.com/huggingface/datasets/issues/3761
2022-02-19T19:48:47
2022-02-21T14:15:23
2022-02-21T14:15:23
{ "login": "Muhtasham", "id": 20128202, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,144,804,558
3,760
Unable to view the Gradio flagged call back dataset
## Dataset viewer issue for '*savtadepth-flags*' **Link:** *[savtadepth-flags](https://huggingface.co/datasets/kingabzpro/savtadepth-flags)* *With Gradio 2.8.1 the dataset viewer stopped working. I tried to add values manually but it's not working. The dataset is also not showing the link with the app https://h...
closed
https://github.com/huggingface/datasets/issues/3760
2022-02-19T17:45:08
2022-03-22T07:12:11
2022-03-22T07:12:11
{ "login": "kingabzpro", "id": 36753484, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,143,400,770
3,759
Rename GenerateMode to DownloadMode
This PR: - Renames `GenerateMode` to `DownloadMode` - Implements `DeprecatedEnum` - Deprecates `GenerateMode` Close #769.
closed
https://github.com/huggingface/datasets/pull/3759
2022-02-18T16:53:53
2022-02-22T13:57:24
2022-02-22T12:22:52
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,143,366,393
3,758
head_qa file missing
## Describe the bug A file for the `head_qa` dataset is missing (https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t/HEAD_EN/train_HEAD_EN.json) ## Steps to reproduce the bug ```python >>> from datasets import load_dataset >>> load_dataset("head_qa", name="en") ``` ## Expec...
closed
https://github.com/huggingface/datasets/issues/3758
2022-02-18T16:32:43
2022-02-28T14:29:18
2022-02-21T14:39:19
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,143,300,880
3,757
Add perplexity to metrics
Adding perplexity metric This code differs from the code in [this](https://huggingface.co/docs/transformers/perplexity) HF blog post because the blogpost code fails in at least the following circumstances: - returns nans whenever the stride = 1 - hits a runtime error when the stride is significantly larger than th...
closed
https://github.com/huggingface/datasets/pull/3757
2022-02-18T15:52:23
2022-02-25T17:13:34
2022-02-25T17:13:34
{ "login": "emibaylor", "id": 27527747, "type": "User" }
[]
true
[]
1,143,273,825
3,756
Images get decoded when using `map()` with `input_columns` argument on a dataset
## Describe the bug The `datasets.features.Image` feature class decodes image data by default. Expectedly, when indexing a dataset or using the `map()` method, images are returned as PIL Image instances. However, when calling `map()` and setting a specific data column with the `input_columns` argument, the image ...
closed
https://github.com/huggingface/datasets/issues/3756
2022-02-18T15:35:38
2022-12-13T16:59:06
2022-12-13T16:59:06
{ "login": "kklemon", "id": 1430243, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,143,032,961
3,755
Cannot preview dataset
## Dataset viewer issue for '*rubrix/news*' **Link:https://huggingface.co/datasets/rubrix/news** *link to the dataset viewer page* Cannot see the dataset preview: ``` Status code: 400 Exception: Status400Error Message: Not found. Cache is waiting to be refreshed. ``` Am I the one who added thi...
closed
https://github.com/huggingface/datasets/issues/3755
2022-02-18T13:06:45
2022-02-19T14:30:28
2022-02-18T15:41:33
{ "login": "frascuchon", "id": 2518789, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,142,886,536
3,754
Overflowing indices in `select`
## Describe the bug The `Dataset.select` function seems to accept indices that are larger than the dataset size and seems to effectively use `index %len(ds)`. ## Steps to reproduce the bug ```python from datasets import Dataset ds = Dataset.from_dict({"test": [1,2,3]}) ds = ds.select(range(5)) print(ds) p...
closed
https://github.com/huggingface/datasets/issues/3754
2022-02-18T11:30:52
2022-02-18T11:38:23
2022-02-18T11:38:23
{ "login": "lvwerra", "id": 8264887, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,142,821,144
3,753
Expanding streaming capabilities
Some ideas for a few features that could be useful when working with large datasets in streaming mode. ## `filter` for `IterableDataset` Adding filtering to streaming datasets would be useful in several scenarios: - filter a dataset with many languages for a subset of languages - filter a dataset for specific li...
open
https://github.com/huggingface/datasets/issues/3753
2022-02-18T10:45:41
2025-03-19T14:50:14
null
{ "login": "lvwerra", "id": 8264887, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
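The `filter` feature requested in #3753 amounts to lazy row filtering over a stream. A minimal generator sketch (not the `IterableDataset` API) shows the idea: rows are tested one at a time as they are consumed, so nothing is materialized up front.

```python
def filter_stream(rows, predicate):
    # lazily yield matching rows; works on any iterable, including one
    # backed by remote files
    for row in rows:
        if predicate(row):
            yield row

rows = iter([{"lang": "en", "text": "hello"},
             {"lang": "fr", "text": "bonjour"},
             {"lang": "en", "text": "bye"}])
english = [r["text"] for r in filter_stream(rows, lambda r: r["lang"] == "en")]
print(english)   # -> ['hello', 'bye']
```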
1,142,627,889
3,752
Update metadata JSON for cats_vs_dogs dataset
Note that the number of examples in the train split was already fixed in the dataset card. Fix #3750.
closed
https://github.com/huggingface/datasets/pull/3752
2022-02-18T08:32:53
2022-02-18T14:56:12
2022-02-18T14:56:11
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,142,609,327
3,751
Fix typo in train split name
In the README guide (and consequently in many datasets) there was a typo in the train split name: ``` | Tain | Valid | Test | ``` This PR: - fixes the typo in the train split name - fixes the column alignment of the split tables in the README guide and in all datasets.
closed
https://github.com/huggingface/datasets/pull/3751
2022-02-18T08:18:04
2022-02-18T14:28:52
2022-02-18T14:28:52
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,142,408,331
3,750
`NonMatchingSplitsSizesError` for cats_vs_dogs dataset
## Describe the bug Cannot download cats_vs_dogs dataset due to `NonMatchingSplitsSizesError`. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("cats_vs_dogs") ``` ## Expected results Loading is successful. ## Actual results ``` NonMatchingSplitsSiz...
closed
https://github.com/huggingface/datasets/issues/3750
2022-02-18T05:46:39
2022-02-18T14:56:11
2022-02-18T14:56:11
{ "login": "jaketae", "id": 25360440, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,142,156,678
3,749
Add tqdm arguments
This PR allows tqdm arguments to be passed to map() and similar functions, for more flexibility.
closed
https://github.com/huggingface/datasets/pull/3749
2022-02-18T01:34:46
2022-03-08T09:38:48
2022-03-08T09:38:48
{ "login": "penguinwang96825", "id": 28087825, "type": "User" }
[]
true
[]
1,142,128,763
3,748
Add tqdm arguments
In this PR, there are two changes. 1. Show the progress bar by providing the length of the iterator. 2. Pass in tqdm_kwargs to allow finer control of the tqdm library.
closed
https://github.com/huggingface/datasets/pull/3748
2022-02-18T00:47:55
2022-02-18T00:59:15
2022-02-18T00:59:15
{ "login": "penguinwang96825", "id": 28087825, "type": "User" }
[]
true
[]
1,141,688,854
3,747
Passing invalid subset should throw an error
## Describe the bug Only some datasets have a subset (as in `load_dataset(name, subset)`). If you pass an invalid subset, an error should be thrown. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('rotten_tomatoes', 'asdfasdfa') ``` ## Expected results This should break, since ...
open
https://github.com/huggingface/datasets/issues/3747
2022-02-17T18:16:11
2022-02-17T18:16:11
null
{ "login": "jxmorris12", "id": 13238952, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
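The validation #3747 asks for is straightforward to sketch (hypothetical helper and config names, not the `datasets` loader): reject configuration names that are not in the builder's known set instead of silently proceeding.

```python
KNOWN_CONFIGS = {"default"}   # hypothetical: a real builder would list its configs

def resolve_config(name):
    # raise early, with the valid options, instead of failing later
    if name is not None and name not in KNOWN_CONFIGS:
        raise ValueError(
            f"Unknown configuration {name!r}; expected one of {sorted(KNOWN_CONFIGS)}"
        )
    return name or "default"

print(resolve_config(None))        # -> default
```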
1,141,612,810
3,746
Use the same seed to shuffle shards and metadata in streaming mode
When shuffling in streaming mode, those two entangled lists are shuffled independently. In this PR I changed this to shuffle the lists of same length with the exact same seed, in order for the files and metadata to still be aligned. ```python gen_kwargs = { "files": [os.path.join(data_dir, filename) for filename...
closed
https://github.com/huggingface/datasets/pull/3746
2022-02-17T17:06:31
2022-02-23T15:00:59
2022-02-23T15:00:58
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
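The trick described in #3746 can be demonstrated with the standard library (file names here are made up): shuffling two equal-length lists with the same seed applies the identical permutation, so parallel lists stay aligned.

```python
import random

files = ["shard-0.txt", "shard-1.txt", "shard-2.txt", "shard-3.txt"]
metadata = ["meta-0", "meta-1", "meta-2", "meta-3"]

seed = 42
random.Random(seed).shuffle(files)
random.Random(seed).shuffle(metadata)   # same seed -> identical permutation

# each file is still paired with its own metadata
print(list(zip(files, metadata)))
```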
1,141,520,953
3,745
Add mIoU metric
This PR adds the mean Intersection-over-Union metric to the library, useful for tasks like semantic segmentation. It is entirely based on mmseg's [implementation](https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/core/evaluation/metrics.py). I've removed any PyTorch dependency, and rely on Numpy only...
closed
https://github.com/huggingface/datasets/pull/3745
2022-02-17T15:52:17
2022-03-08T13:20:26
2022-03-08T13:20:26
{ "login": "NielsRogge", "id": 48327001, "type": "User" }
[]
true
[]
1,141,461,165
3,744
Better shards shuffling in streaming mode
Sometimes a dataset script has a `_split_generators` that returns several files as well as the corresponding metadata of each file. It often happens that they end up in two separate lists in the `gen_kwargs`: ```python gen_kwargs = { "files": [os.path.join(data_dir, filename) for filename in all_files], "me...
closed
https://github.com/huggingface/datasets/issues/3744
2022-02-17T15:07:21
2022-02-23T15:00:58
2022-02-23T15:00:58
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "streaming", "color": "fef2c0" } ]
false
[]
1,141,176,011
3,743
initial monash time series forecasting repository
null
closed
https://github.com/huggingface/datasets/pull/3743
2022-02-17T10:51:31
2022-03-21T09:54:41
2022-03-21T09:50:16
{ "login": "kashif", "id": 8100, "type": "User" }
[]
true
[]
1,141,174,549
3,742
Fix ValueError message formatting in int2str
Hi! I bumped into this particular `ValueError` during my work (because an instance of `np.int64` was passed instead of regular Python `int`), and so I had to `print(type(values))` myself. Apparently, it's just the missing `f` to make message an f-string. It ain't much for a contribution, but it's honest work. Hop...
closed
https://github.com/huggingface/datasets/pull/3742
2022-02-17T10:50:08
2022-02-17T15:32:02
2022-02-17T15:32:02
{ "login": "aaakulchyk", "id": 41182803, "type": "User" }
[]
true
[]
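The class of bug fixed in #3742 is easy to reproduce: without the `f` prefix, the braces in the message are printed literally instead of interpolated.

```python
value = 7
broken = "Invalid value: {value}"   # missing f prefix: placeholder not filled
fixed = f"Invalid value: {value}"
print(broken)   # -> Invalid value: {value}
print(fixed)    # -> Invalid value: 7
```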
1,141,132,649
3,741
Rm sphinx doc
Checklist - [x] Update circle ci yaml - [x] Delete sphinx static & python files in docs dir - [x] Update readme in docs dir - [ ] Update docs config in setup.py
closed
https://github.com/huggingface/datasets/pull/3741
2022-02-17T10:11:37
2022-02-17T10:15:17
2022-02-17T10:15:12
{ "login": "mishig25", "id": 11827707, "type": "User" }
[]
true
[]
1,140,720,739
3,740
Support streaming for pubmed
This PR makes some minor changes to the `pubmed` dataset to allow for `streaming=True`. Fixes #3739. Basically, I followed the C4 dataset which works in streaming mode as an example, and made the following changes: * Change URL prefix from `ftp://` to `https://` * Explicitly `open` the filename and pass the XML ...
closed
https://github.com/huggingface/datasets/pull/3740
2022-02-17T00:18:22
2022-02-18T14:42:13
2022-02-18T14:42:13
{ "login": "abhi-mosaic", "id": 77638579, "type": "User" }
[]
true
[]
1,140,329,189
3,739
Pubmed dataset does not work in streaming mode
## Describe the bug Trying to use the `pubmed` dataset with `streaming=True` fails. ## Steps to reproduce the bug ```python import datasets pubmed_train = datasets.load_dataset('pubmed', split='train', streaming=True) print (next(iter(pubmed_train))) ``` ## Expected results I would expect to see the first ...
closed
https://github.com/huggingface/datasets/issues/3739
2022-02-16T17:13:37
2022-02-18T14:42:13
2022-02-18T14:42:13
{ "login": "abhi-mosaic", "id": 77638579, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,140,164,253
3,738
For data-only datasets, streaming and non-streaming don't behave the same
See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files. In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys: ```python import datasets as ds iterable_dataset = ds.load_dataset("huggingface/transformers-metadat...
open
https://github.com/huggingface/datasets/issues/3738
2022-02-16T15:20:57
2022-02-21T14:24:55
null
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
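The streaming behavior described in #3738 can be illustrated with plain Python (file contents are made up): concatenating the files yields rows that keep their per-file key sets, unlike a loader that unifies the schema first.

```python
from itertools import chain

# two "JSON files" with different schemas
file_a = [{"model": "bert", "pipeline": "fill-mask"}]
file_b = [{"framework": "pytorch"}]

# streaming-style concatenation: rows keep their original keys
streamed_rows = list(chain(file_a, file_b))
keys_per_row = [set(r) for r in streamed_rows]
print(keys_per_row)   # the two rows have different key sets
```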
1,140,148,050
3,737
Make RedCaps streamable
Make RedCaps streamable. @lhoestq Using `data/redcaps_v1.0_annotations.zip` as a download URL gives an error locally when running `datasets-cli test` (will investigate this another time)
closed
https://github.com/huggingface/datasets/pull/3737
2022-02-16T15:12:23
2022-02-16T15:28:38
2022-02-16T15:28:37
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,140,134,483
3,736
Local paths in common voice
Continuation of https://github.com/huggingface/datasets/pull/3664: - pass the `streaming` parameter to _split_generator - update @anton-l's code to use this parameter for `common_voice` - add a comment to explain why we use `download_and_extract` in non-streaming and `iter_archive` in streaming Now the `common_...
closed
https://github.com/huggingface/datasets/pull/3736
2022-02-16T15:01:29
2022-09-21T14:58:38
2022-02-22T09:13:43
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,140,087,891
3,735
Performance of `datasets` at scale
# Performance of `datasets` at 1TB scale ## What is this? During the processing of a large dataset I monitored the performance of the `datasets` library to see if there are any bottlenecks. The insights of this analysis could guide the decision making to improve the performance of the library. ## Dataset The da...
open
https://github.com/huggingface/datasets/issues/3735
2022-02-16T14:23:32
2024-06-27T01:17:48
null
{ "login": "lvwerra", "id": 8264887, "type": "User" }
[]
false
[]
1,140,050,336
3,734
Fix bugs in NewsQA dataset
Fix #3733.
closed
https://github.com/huggingface/datasets/pull/3734
2022-02-16T13:51:28
2022-02-17T07:54:26
2022-02-17T07:54:25
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,140,011,378
3,733
Bugs in NewsQA dataset
## Describe the bug NewsQA dataset has the following bugs: - the field `validated_answers` is an exact copy of the field `answers` but with the addition of `'count': [0]` to each dict - the field `badQuestion` does not appear in `answers` nor `validated_answers` ## Steps to reproduce the bug By inspecting the da...
closed
https://github.com/huggingface/datasets/issues/3733
2022-02-16T13:17:37
2022-02-17T07:54:25
2022-02-17T07:54:25
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,140,004,022
3,732
Support streaming in size estimation function in `push_to_hub`
This PR adds the streamable version of `os.path.getsize` (`fsspec` can return `None`, so we fall back to `fs.open` to make it more robust) to account for possible streamable paths in the nested `extra_nbytes_visitor` function inside `push_to_hub`.
closed
https://github.com/huggingface/datasets/pull/3732
2022-02-16T13:10:48
2022-02-21T18:18:45
2022-02-21T18:18:44
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
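The fallback strategy described in #3732 can be sketched generically (`get_size` and `open_stream` are hypothetical callables, not the `fsspec` API): prefer a cheap size lookup, and fall back to seeking to the end of the stream when no size is reported.

```python
import io

def stream_size(get_size, open_stream):
    size = get_size()
    if size is not None:
        return size                # fast path: the filesystem knows the size
    with open_stream() as f:       # fallback: seek to the end of the stream
        f.seek(0, io.SEEK_END)
        return f.tell()

print(stream_size(lambda: None, lambda: io.BytesIO(b"abcde")))   # -> 5
```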
1,139,626,362
3,731
Fix Multi-News dataset metadata and card
Fix #3730.
closed
https://github.com/huggingface/datasets/pull/3731
2022-02-16T07:14:57
2022-02-16T08:48:47
2022-02-16T08:48:47
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,139,545,613
3,730
Checksum Error when loading multi-news dataset
## Describe the bug When using the load_dataset function from the datasets module to load the Multi-News dataset, it does not load the dataset but throws a checksum error instead. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("multi_news") ``` ## Expected results ...
closed
https://github.com/huggingface/datasets/issues/3730
2022-02-16T05:11:08
2022-02-16T20:05:06
2022-02-16T08:48:46
{ "login": "byw2", "id": 60560991, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,139,398,442
3,729
Wrong number of examples when loading a text dataset
## Describe the bug When I use load_dataset to read a txt file, I find that the number of samples is incorrect ## Steps to reproduce the bug ``` fr = open('train.txt','r',encoding='utf-8').readlines() print(len(fr)) # 1199637 datasets = load_dataset('text', data_files={'train': ['train.txt']}, streaming...
closed
https://github.com/huggingface/datasets/issues/3729
2022-02-16T01:13:31
2022-03-15T16:16:09
2022-03-15T16:16:09
{ "login": "kg-nlp", "id": 58376804, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
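One common way such line counts diverge (illustrative only, not the confirmed cause of #3729): a reader that treats a bare `\r` as a row break reports more rows than one that splits on `\n` alone.

```python
text = "line one\rstill line one?\nline two\n"

newline_only = text.split("\n")[:-1]   # split on \n alone: 2 rows
universal = text.splitlines()          # also splits on bare \r: 3 rows
print(len(newline_only), len(universal))   # -> 2 3
```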
1,139,303,614
3,728
VoxPopuli
## Adding a Dataset - **Name:** VoxPopuli - **Description:** A Large-Scale Multilingual Speech Corpus - **Paper:** https://arxiv.org/pdf/2101.00390.pdf - **Data:** https://github.com/facebookresearch/voxpopuli - **Motivation:** one of the largest (if not the largest) multilingual speech corpus: 400K hours of multi...
closed
https://github.com/huggingface/datasets/issues/3728
2022-02-15T23:04:55
2022-02-16T18:49:12
2022-02-16T18:49:12
{ "login": "VictorSanh", "id": 16107619, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,138,979,732
3,727
Patch all module attributes in its namespace
When patching module attributes, only those defined in its `__all__` variable were considered by default (only falling back to `__dict__` if `__all__` was None). However those are only a subset of all the module attributes in its namespace (`__dict__` variable). This PR fixes the problem of modules that have non-...
closed
https://github.com/huggingface/datasets/pull/3727
2022-02-15T17:12:27
2022-02-17T17:06:18
2022-02-17T17:06:17
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
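The distinction #3727 fixes can be shown with a toy module: `__all__` (when defined) lists only the public names, while the module's `__dict__` holds every attribute in its namespace, so patching only `__all__` members misses the rest.

```python
import types

mod = types.ModuleType("toy")
mod.public_fn = lambda: "public"
mod._private_fn = lambda: "private"
mod.__all__ = ["public_fn"]

exported = set(mod.__all__)
everything = {n for n in vars(mod) if not n.startswith("__")}
print(everything - exported)   # {'_private_fn'}: missed if only __all__ is patched
```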
1,138,870,362
3,726
Use config pandas version in CSV dataset builder
Fix #3724.
closed
https://github.com/huggingface/datasets/pull/3726
2022-02-15T15:47:49
2022-02-15T16:55:45
2022-02-15T16:55:44
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,138,835,625
3,725
Pin pandas to avoid bug in streaming mode
Temporarily pin pandas version to avoid bug in streaming mode (patching no longer works). Related to #3724.
closed
https://github.com/huggingface/datasets/pull/3725
2022-02-15T15:21:00
2022-02-15T15:52:38
2022-02-15T15:52:37
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,138,827,681
3,724
Bug while streaming CSV dataset with pandas 1.4
## Describe the bug If we upgrade to pandas `1.4`, the patching of the pandas module is no longer working ``` AttributeError: '_PatchedModuleObj' object has no attribute '__version__' ``` ## Steps to reproduce the bug ``` pip install pandas==1.4 ``` ```python from datasets import load_dataset ds = load_dat...
closed
https://github.com/huggingface/datasets/issues/3724
2022-02-15T15:16:19
2022-02-15T16:55:44
2022-02-15T16:55:44
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,138,789,493
3,723
Fix flatten of complex feature types
Fix `flatten` for the following feature types: Image/Audio, Translation, and TranslationVariableLanguages. Inspired by `cast`/`table_cast`, I've introduced a `table_flatten` function to handle the Image/Audio types. CC: @SBrandeis Fix #3686.
closed
https://github.com/huggingface/datasets/pull/3723
2022-02-15T14:45:33
2022-03-18T17:32:26
2022-03-18T17:28:14
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,138,770,211
3,722
added electricity load diagram dataset
Initial Electricity Load Diagram time series dataset.
closed
https://github.com/huggingface/datasets/pull/3722
2022-02-15T14:29:29
2022-02-16T18:53:21
2022-02-16T18:48:07
{ "login": "kashif", "id": 8100, "type": "User" }
[]
true
[]
1,137,617,108
3,721
Multi-GPU support for `FaissIndex`
Per #3716 , current implementation does not take into consideration that `faiss` can run on multiple GPUs. In this commit, I provided multi-GPU support for `FaissIndex` by modifying the device management in `IndexableMixin.add_faiss_index` and `FaissIndex.load`. Now users are able to pass in 1. a positive intege...
closed
https://github.com/huggingface/datasets/pull/3721
2022-02-14T17:26:51
2022-03-07T16:28:57
2022-03-07T16:28:56
{ "login": "rentruewang", "id": 32859905, "type": "User" }
[]
true
[]
1,137,537,080
3,720
Builder Configuration Update Required on Common Voice Dataset
Missing language in Common Voice dataset **Link:** https://huggingface.co/datasets/common_voice I tried to call the Urdu dataset using `load_dataset("common_voice", "ur", split="train+validation")` but couldn't because the builder configuration was not found. I checked the source file here for language support: ht...
closed
https://github.com/huggingface/datasets/issues/3720
2022-02-14T16:21:41
2024-04-28T18:03:08
2024-04-28T18:03:08
{ "login": "aasem", "id": 12482065, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,137,237,622
3,719
Check if indices values in `Dataset.select` are within bounds
Fix #3707 Instead of reusing `_check_valid_index_key` from `datasets.formatting`, I defined a new function to provide a more meaningful error message.
closed
https://github.com/huggingface/datasets/pull/3719
2022-02-14T12:31:41
2022-02-14T19:19:22
2022-02-14T19:19:22
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
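The bounds check introduced in #3719 can be sketched as a standalone helper (hypothetical, not the actual `datasets` function): valid indices lie in `[-size, size)`, matching Python's sequence indexing, and anything else raises a descriptive error.

```python
def check_indices_within_bounds(indices, size):
    for i in indices:
        if not -size <= i < size:
            raise IndexError(
                f"Index {i} out of range for dataset of size {size}."
            )

check_indices_within_bounds([0, 1, -3], size=3)   # all within bounds
```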
1,137,196,388
3,718
Fix Evidence Infer Treatment dataset
This PR: - fixes a bug in the script, by removing an unnamed column with the row index: fix KeyError - fix the metadata JSON, by adding both configurations (1.1 and 2.0): fix ExpectedMoreDownloadedFiles - updates the dataset card Fix #3515.
closed
https://github.com/huggingface/datasets/pull/3718
2022-02-14T11:58:07
2022-02-14T13:21:45
2022-02-14T13:21:44
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,137,183,015
3,717
wrong condition in `Features ClassLabel encode_example`
## Describe the bug The `encode_example` function in *features.py* seems to have a wrong condition. ```python if not -1 <= example_data < self.num_classes: raise ValueError(f"Class label {example_data:d} greater than configured num_classes {self.num_classes}") ``` ## Expected results The `not - 1` co...
closed
https://github.com/huggingface/datasets/issues/3717
2022-02-14T11:44:35
2022-02-14T15:09:36
2022-02-14T15:07:43
{ "login": "Tudyx", "id": 56633664, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
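The condition quoted in #3717 can be reproduced standalone. Note that `-1` is accepted (it is used to mean "no label"), so an error message saying only "greater than num_classes" does not describe every rejected value.

```python
def validate_label(example_data, num_classes):
    # the quoted check, isolated: valid labels lie in [-1, num_classes)
    if not -1 <= example_data < num_classes:
        raise ValueError(
            f"Class label {example_data:d} outside the range [-1, {num_classes})"
        )
    return example_data

print(validate_label(-1, 3))   # accepted: -1 means "no label"
print(validate_label(2, 3))    # accepted
```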
1,136,831,092
3,716
`FaissIndex` to support multiple GPU and `custom_index`
**Is your feature request related to a problem? Please describe.** Currently, because `device` is of the type `int | None`, to leverage `faiss-gpu`'s multi-gpu support, you need to create a `custom_index`. However, if using a `custom_index` created by e.g. `faiss.index_cpu_to_all_gpus`, then `FaissIndex.save` does not ...
closed
https://github.com/huggingface/datasets/issues/3716
2022-02-14T06:21:43
2022-03-07T16:28:56
2022-03-07T16:28:56
{ "login": "rentruewang", "id": 32859905, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,136,107,879
3,715
Fix bugs in msr_sqa dataset
The last version has several problems: 1) errors in table loading (splitting on a single comma instead of using pandas is wrong); 2) duplicated ids in the _generate_examples function; 3) missing history-question information, which makes the dataset hard to use. I fixed these referring to https://github.com/HKUNLP/UnifiedSKG. And we test ...
closed
https://github.com/huggingface/datasets/pull/3715
2022-02-13T16:37:30
2022-10-03T09:10:02
2022-10-03T09:08:06
{ "login": "Timothyxxx", "id": 47296835, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,136,105,530
3,714
tatoeba_mt: File not found error and key error
## Dataset viewer issue for 'tatoeba_mt' **Link:** https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt My data loader script does not seem to work. The files are part of the local repository but cannot be found. An example where it should work is the subset for "afr-eng". Another problem is that I do not ...
closed
https://github.com/huggingface/datasets/issues/3714
2022-02-13T16:35:45
2022-02-13T20:44:04
2022-02-13T20:44:04
{ "login": "jorgtied", "id": 614718, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,135,692,572
3,713
Rm sphinx doc
Checklist - [x] Update circle ci yaml - [x] Delete sphinx static & python files in docs dir - [x] Update readme in docs dir - [ ] Update docs config in setup.py
closed
https://github.com/huggingface/datasets/pull/3713
2022-02-13T11:26:31
2022-02-17T10:18:46
2022-02-17T10:12:09
{ "login": "mishig25", "id": 11827707, "type": "User" }
[]
true
[]
1,134,252,505
3,712
Fix the error of msr_sqa dataset
Fixes the error in the _load_table_data function of the msr_sqa dataset: it is wrong to split each row on a comma.
closed
https://github.com/huggingface/datasets/pull/3712
2022-02-12T16:27:54
2022-02-13T11:21:05
2022-02-13T11:21:05
{ "login": "Timothyxxx", "id": 47296835, "type": "User" }
[]
true
[]
1,134,050,545
3,711
Fix the error of _load_table_data function in msr_sqa dataset
The _load_table_data function from the last version is wrong: splitting each row on a comma is incorrect.
closed
https://github.com/huggingface/datasets/pull/3711
2022-02-12T13:20:53
2022-02-12T13:30:43
2022-02-12T13:30:43
{ "login": "Timothyxxx", "id": 47296835, "type": "User" }
[]
true
[]
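The bug both msr_sqa PRs describe (splitting table rows on a bare comma) can be reproduced without the dataset itself; the sample row below is illustrative:

```python
import csv
import io

# Illustrative sketch, not the actual msr_sqa loader: naively splitting a CSV
# row on commas breaks when a field itself contains a comma, which is the bug
# the PR describes. A CSV parser respects the quoting.
row = '"Smith, John",42,"New York"'

naive = row.split(",")                       # splits inside the quoted name
parsed = next(csv.reader(io.StringIO(row)))  # respects the quotes

print(naive)   # ['"Smith', ' John"', '42', '"New York"']
print(parsed)  # ['Smith, John', '42', 'New York']
```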
1,133,955,393
3,710
Fix CI code quality issue
Fix CI code quality issue introduced by #3695.
closed
https://github.com/huggingface/datasets/pull/3710
2022-02-12T12:05:39
2022-02-12T12:58:05
2022-02-12T12:58:04
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,132,997,904
3,709
Set base path to hub url for canonical datasets
This should allow canonical datasets to use relative paths to download data files from the Hub cc @polinaeterna this will be useful if we have audio datasets that are canonical and for which you'd like to host data files
closed
https://github.com/huggingface/datasets/pull/3709
2022-02-11T19:23:20
2022-02-16T14:02:28
2022-02-16T14:02:27
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,132,968,402
3,708
Loading JSON gets stuck with many workers/threads
## Describe the bug Loading a JSON dataset with `load_dataset` can get stuck when running on a machine with many CPUs. This is especially an issue when loading a large dataset on a large machine. ## Steps to reproduce the bug I originally created the following script to reproduce the issue: ```python from dat...
open
https://github.com/huggingface/datasets/issues/3708
2022-02-11T18:50:48
2023-06-16T11:24:12
null
{ "login": "lvwerra", "id": 8264887, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,132,741,903
3,707
`.select`: unexpected behavior with `indices`
## Describe the bug The `.select` method will not throw when sending `indices` bigger than the dataset length; `indices` will be wrapped instead. This behavior is not documented anywhere, and is not intuitive. ## Steps to reproduce the bug ```python from datasets import Dataset ds = Dataset.from_dict({"text": [...
closed
https://github.com/huggingface/datasets/issues/3707
2022-02-11T15:20:01
2022-02-14T19:19:21
2022-02-14T19:19:21
{ "login": "gabegma", "id": 36087158, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
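The surprising behavior the issue reports can be sketched without the library; `select_wrapping` and `select_strict` are hypothetical helpers, not the actual `Dataset.select` implementation:

```python
# Minimal sketch of the behavior reported in the issue: indices passed to
# `.select` were wrapped modulo the dataset length instead of raising.
data = ["a", "b", "c"]

def select_wrapping(rows, indices):
    # reproduces the surprising behavior: index 4 wraps around to index 1
    return [rows[i % len(rows)] for i in indices]

def select_strict(rows, indices):
    # the behavior the reporter expected: out-of-range indices raise
    for i in indices:
        if not 0 <= i < len(rows):
            raise IndexError(f"index {i} out of range for dataset of size {len(rows)}")
    return [rows[i] for i in indices]

print(select_wrapping(data, [0, 4]))  # ['a', 'b']
```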
1,132,218,874
3,706
Unable to load dataset 'big_patent'
## Describe the bug Unable to load the "big_patent" dataset ## Steps to reproduce the bug ```python load_dataset('big_patent', 'd', 'validation') ``` ## Expected results Download big_patent's validation split from the 'd' subset ## Getting an error saying: {FileNotFoundError}Local file ..\huggingface\dat...
closed
https://github.com/huggingface/datasets/issues/3706
2022-02-11T09:48:34
2022-02-14T15:26:03
2022-02-14T15:26:03
{ "login": "ankitk2109", "id": 26432753, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,132,053,226
3,705
Raise informative error when loading a save_to_disk dataset
People recurrently report errors when trying to load a dataset (using `load_dataset`) that was previously saved using `save_to_disk`. This PR raises an informative error message telling them they should use `load_from_disk` instead. Close #3700.
closed
https://github.com/huggingface/datasets/pull/3705
2022-02-11T08:21:03
2022-02-11T22:56:40
2022-02-11T22:56:39
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,132,042,631
3,704
OSCAR-2109 datasets are misaligned and truncated
## Describe the bug The `oscar-corpus/OSCAR-2109` data appears to be misaligned and truncated by the dataset builder for subsets that contain more than one part and for cases where the texts contain non-unix newlines. ## Steps to reproduce the bug A few examples, although I'm not sure how deterministic the par...
closed
https://github.com/huggingface/datasets/issues/3704
2022-02-11T08:14:59
2022-03-17T18:01:04
2022-03-16T16:21:28
{ "login": "adrianeboyd", "id": 5794899, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,131,882,772
3,703
ImportError: To be able to use this metric, you need to install the following dependencies['seqeval'] using 'pip install seqeval' for instance'
Hi: I want to use the seqeval metric, but when calling load_metric('seqeval') directly, it reports that the network connection fails. So I downloaded seqeval.py to load it locally. Loading code: metric = load_metric(path='mymetric/seqeval/seqeval.py') But it raises: Traceback (most recent call last): File...
closed
https://github.com/huggingface/datasets/issues/3703
2022-02-11T06:38:42
2023-07-11T09:31:59
2023-07-11T09:31:59
{ "login": "zhangyifei1", "id": 28425091, "type": "User" }
[]
false
[]
1,130,666,707
3,702
Update data URL of lm1b dataset
The HTTP address doesn't work anymore
closed
https://github.com/huggingface/datasets/pull/3702
2022-02-10T18:46:30
2022-09-23T11:52:39
2022-09-23T11:52:39
{ "login": "yazdanbakhsh", "id": 7105134, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,130,498,738
3,701
Pin ElasticSearch
Until we manage to support ES 8.0, I'm setting the version to `<8.0.0`. Currently we're getting this error on 8.0: ```python ValueError: Either 'hosts' or 'cloud_id' must be specified ``` when instantiating an `Elasticsearch()` object
closed
https://github.com/huggingface/datasets/pull/3701
2022-02-10T17:15:26
2022-02-10T17:31:13
2022-02-10T17:31:12
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
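The pin described in the PR amounts to a one-line version constraint; the variable name and exact bounds below are illustrative, not the actual setup.py contents:

```python
# Illustrative version pin; the actual bounds in the PR may differ.
# Restricting the client to pre-8.0 releases avoids the breaking change in
# how Elasticsearch() is instantiated in 8.x.
TESTS_REQUIRE = [
    "elasticsearch<8.0.0",
]
```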
1,130,200,593
3,699
Add dev-only config to Natural Questions dataset
As suggested by @lhoestq and @thomwolf, a new config has been added to Natural Questions dataset, so that only dev split can be downloaded. Fix #413.
closed
https://github.com/huggingface/datasets/pull/3699
2022-02-10T14:42:24
2022-02-11T09:50:22
2022-02-11T09:50:21
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,129,864,282
3,698
Add finetune-data CodeFill
null
closed
https://github.com/huggingface/datasets/pull/3698
2022-02-10T11:12:51
2022-10-03T09:36:18
2022-10-03T09:36:18
{ "login": "rgismondi", "id": 49989029, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,129,795,724
3,697
Add code-fill datasets for pretraining/finetuning/evaluating
null
closed
https://github.com/huggingface/datasets/pull/3697
2022-02-10T10:31:48
2022-07-06T15:19:58
2022-07-06T15:19:58
{ "login": "rgismondi", "id": 49989029, "type": "User" }
[]
true
[]
1,129,764,534
3,696
Force unique keys in newsqa dataset
Currently, it may raise `DuplicatedKeysError`. Fix #3630.
closed
https://github.com/huggingface/datasets/pull/3696
2022-02-10T10:09:19
2022-02-14T08:37:20
2022-02-14T08:37:19
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
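One common way to force unique keys in a `_generate_examples` generator is to fold a running index into the key; the record fields below are illustrative, not the actual newsqa schema:

```python
# Hedged sketch of one way to guarantee unique example keys when a source
# field repeats, which would otherwise trigger DuplicatedKeysError.
records = [{"story_id": "s1"}, {"story_id": "s1"}, {"story_id": "s2"}]

def generate_examples(recs):
    for idx, rec in enumerate(recs):
        # combining the running index with the source id forces uniqueness
        yield f"{rec['story_id']}_{idx}", rec

keys = [k for k, _ in generate_examples(records)]
assert len(keys) == len(set(keys))
```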
1,129,730,148
3,695
Fix ClassLabel to/from dict when passed names_file
Currently, `names_file` is a field of the data class `ClassLabel`, thus appearing when transforming it to dict (when saving infos). Afterwards, when trying to read it from infos, it conflicts with the other field `names`. This PR removes `names_file` as a field of the data class `ClassLabel`. - it is only used at ...
closed
https://github.com/huggingface/datasets/pull/3695
2022-02-10T09:47:10
2022-02-11T23:02:32
2022-02-11T23:02:31
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,128,554,365
3,693
Standardize to `Example::`
null
closed
https://github.com/huggingface/datasets/pull/3693
2022-02-09T13:37:13
2022-02-17T10:20:55
2022-02-17T10:20:52
{ "login": "mishig25", "id": 11827707, "type": "User" }
[]
true
[]
1,128,320,004
3,692
Update data URL in pubmed dataset
Fix #3655.
closed
https://github.com/huggingface/datasets/pull/3692
2022-02-09T10:06:21
2022-02-14T14:15:42
2022-02-14T14:15:41
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,127,629,306
3,691
Upgrade black to version ~=22.0
Upgrades the `datasets` library quality tool `black` to use the first stable release of `black`, version 22.0.
closed
https://github.com/huggingface/datasets/pull/3691
2022-02-08T18:45:19
2022-02-08T19:56:40
2022-02-08T19:56:39
{ "login": "LysandreJik", "id": 30755778, "type": "User" }
[]
true
[]
1,127,493,538
3,690
Update docs to new frontend/UI
### TLDR: Update `datasets` `docs` to the new syntax (markdown and mdx files) & frontend (as how it looks on [hf.co/transformers](https://huggingface.co/docs/transformers/index)) | Light mode | Dark mode ...
closed
https://github.com/huggingface/datasets/pull/3690
2022-02-08T16:38:09
2022-03-03T20:04:21
2022-03-03T20:04:20
{ "login": "mishig25", "id": 11827707, "type": "User" }
[]
true
[]
1,127,422,478
3,689
Fix streaming for servers not supporting HTTP range requests
Some servers do not support HTTP range requests, whereas this is required to stream some file formats (like ZIP). ~~This PR implements a workaround for those cases, by downloading the files locally in a temporary directory (cleaned up by the OS once the process is finished).~~ This PR raises a custom error explaining ...
closed
https://github.com/huggingface/datasets/pull/3689
2022-02-08T15:41:05
2022-02-10T16:51:25
2022-02-10T16:51:25
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,127,218,321
3,688
Pyarrow version error
## Describe the bug I installed datasets (versions 1.17.0, 1.18.0, 1.18.3) but I'm currently not able to import it because of pyarrow. When I try to import it, I get the following error: `To use datasets, the module pyarrow>=3.0.0 is required, and the current version of pyarrow doesn't match this condition`. I tried w...
closed
https://github.com/huggingface/datasets/issues/3688
2022-02-08T12:53:59
2022-02-09T06:35:33
2022-02-09T06:35:32
{ "login": "Zaker237", "id": 49993443, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,127,154,766
3,687
Can't get the text data when calling to_tf_dataset
I am working with the SST2 dataset, and am using TensorFlow 2.5 I'd like to convert it to a `tf.data.Dataset` by calling the `to_tf_dataset` method. The following snippet is what I am using to achieve this: ``` from datasets import load_dataset from transformers import DefaultDataCollator data_collator = Defa...
closed
https://github.com/huggingface/datasets/issues/3687
2022-02-08T11:52:10
2023-01-19T14:55:18
2023-01-19T14:55:18
{ "login": "phrasenmaeher", "id": 82086367, "type": "User" }
[]
false
[]
1,127,137,290
3,686
`Translation` features cannot be `flatten`ed
## Describe the bug [`Dataset.flatten`](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L1265) fails for columns with feature [`Translation`](https://github.com/huggingface/datasets/blob/3edbeb0ec6519b79f1119adc251a1a6b379a2c12/src/datasets/features/translation.py#L8) ## Steps to...
closed
https://github.com/huggingface/datasets/issues/3686
2022-02-08T11:33:48
2022-03-18T17:28:13
2022-03-18T17:28:13
{ "login": "SBrandeis", "id": 33657802, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
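What "flattening" a nested translation feature means can be sketched in plain Python; this mirrors the behavior the issue asks for (one column per language), not the library internals:

```python
# Illustrative sketch: a column of dicts keyed by language becomes one
# column per language, named "<column>.<language>".
rows = [{"translation": {"en": "hello", "fr": "bonjour"}}]

def flatten(batch, column):
    flat = []
    for row in batch:
        nested = row.pop(column)
        flat.append({**row, **{f"{column}.{k}": v for k, v in nested.items()}})
    return flat

print(flatten(rows, "translation"))
# [{'translation.en': 'hello', 'translation.fr': 'bonjour'}]
```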
1,126,240,444
3,685
Add support for `Audio` and `Image` feature in `push_to_hub`
Add support for the `Audio` and the `Image` feature in `push_to_hub`. The idea is to remove local path information and store file content under "bytes" in the Arrow table before the push. My initial approach (https://github.com/huggingface/datasets/commit/34c652afeff9686b6b8bf4e703c84d2205d670aa) was to use a ma...
closed
https://github.com/huggingface/datasets/pull/3685
2022-02-07T16:47:16
2022-02-14T18:14:57
2022-02-14T18:04:58
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,125,133,664
3,684
[fix]: iwslt2017 download urls
Fixes #2076.
closed
https://github.com/huggingface/datasets/pull/3684
2022-02-06T07:56:55
2022-09-22T16:20:19
2022-09-22T16:20:18
{ "login": "msarmi9", "id": 48395294, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,124,458,371
3,683
added told-br (brazilian hate speech) dataset
Hey, Adding ToLD-Br. Feel free to ask for modifications. Thanks!!
closed
https://github.com/huggingface/datasets/pull/3683
2022-02-04T17:44:32
2022-02-07T21:14:52
2022-02-07T21:14:52
{ "login": "joaoaleite", "id": 26556320, "type": "User" }
[]
true
[]
1,124,434,330
3,682
adding told-br for toxic/abusive hatespeech detection
Hey, I'm adding our dataset from our paper published at AACL 2020. Feel free to ask for modifications. Thanks!
closed
https://github.com/huggingface/datasets/pull/3682
2022-02-04T17:18:29
2022-02-07T03:23:24
2022-02-04T17:36:40
{ "login": "joaoaleite", "id": 26556320, "type": "User" }
[]
true
[]
1,124,237,458
3,681
Fix TestCommand to move dataset_infos instead of copying
Why do we copy instead of moving the file? CC: @lhoestq @lvwerra
closed
https://github.com/huggingface/datasets/pull/3681
2022-02-04T14:01:52
2023-09-24T10:00:11
2023-09-24T09:59:55
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,124,213,416
3,680
Fix TestCommand to copy dataset_infos to local dir with only data files
Currently this case is missed. CC: @lvwerra
closed
https://github.com/huggingface/datasets/pull/3680
2022-02-04T13:36:46
2022-02-08T10:32:55
2022-02-08T10:32:55
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,124,062,133
3,679
Download datasets from a private hub
In the context of a private hub deployment, customers would like to use load_dataset() to load datasets from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature. The obvious workaround is to clone the repo first and then load it from local s...
closed
https://github.com/huggingface/datasets/issues/3679
2022-02-04T10:49:06
2022-02-22T11:08:07
2022-02-22T11:08:07
{ "login": "juliensimon", "id": 3436143, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "private-hub", "color": "A929D8" } ]
false
[]