| column | type | range / values |
| --- | --- | --- |
| id | int64 | 599M – 3.26B |
| number | int64 | 1 – 7.7k |
| title | string | lengths 1 – 290 |
| body | string | lengths 0 – 228k |
| state | string | 2 classes |
| html_url | string | lengths 46 – 51 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 – 2025-07-23 08:04:53 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 – 2025-07-23 18:53:44 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 – 2025-07-23 16:44:42 |
| user | dict | – |
| labels | list | lengths 0 – 4 |
| is_pull_request | bool | 2 classes |
| comments | list | lengths 0 – 0 |
1,007,340,089
2,970
Magnet’s
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
closed
https://github.com/huggingface/datasets/issues/2970
2021-09-26T09:50:29
2021-09-26T10:38:59
2021-09-26T10:38:59
{ "login": "rcacho172", "id": 90449239, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,007,217,867
2,969
medical-dialog error
## Describe the bug A clear and concise description of what the bug is. When I attempt to download the huggingface dataset medical_dialog it errors out midway through ## Steps to reproduce the bug ```python raw_datasets = load_dataset("medical_dialog", "en", split="train", download_mode="force_redownload", data_d...
closed
https://github.com/huggingface/datasets/issues/2969
2021-09-25T23:08:44
2024-01-08T09:55:12
2021-10-11T07:46:42
{ "login": "smeyerhot", "id": 43877130, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,007,209,488
2,968
`DatasetDict` cannot be exported to parquet if the splits have different features
## Describe the bug I'm trying to use parquet as a means of serialization for both `Dataset` and `DatasetDict` objects. Using `to_parquet` alongside `from_parquet` or `load_dataset` for a `Dataset` works perfectly. For `DatasetDict`, I use `to_parquet` on each split to save the parquet files in individual folder...
closed
https://github.com/huggingface/datasets/issues/2968
2021-09-25T22:18:39
2021-10-07T22:47:42
2021-10-07T22:47:26
{ "login": "LysandreJik", "id": 30755778, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,007,194,837
2,967
Adding vision-and-language datasets (e.g., VQA, VCR) to Datasets
**Is your feature request related to a problem? Please describe.** Would you like to add any vision-and-language datasets (e.g., VQA, VCR) to Huggingface Datasets? **Describe the solution you'd like** N/A **Describe alternatives you've considered** N/A **Additional context** This is Da Yin at UCLA. Recentl...
closed
https://github.com/huggingface/datasets/issues/2967
2021-09-25T20:58:15
2021-10-03T20:34:22
2021-10-03T20:34:22
{ "login": "WadeYin9712", "id": 42200725, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,007,142,233
2,966
Upload greek-legal-code dataset
null
closed
https://github.com/huggingface/datasets/pull/2966
2021-09-25T16:52:15
2021-10-13T13:37:30
2021-10-13T13:37:30
{ "login": "christospi", "id": 9130406, "type": "User" }
[]
true
[]
1,007,084,153
2,965
Invalid download URL of WMT17 `zh-en` data
## Describe the bug Partial data (wmt17 zh-en) cannot be downloaded due to an invalid URL. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('wmt17','zh-en') ``` ## Expected results ConnectionError: Couldn't reach ftp://cwmt-wmt:cwmt-wmt@datasets.nju.edu.cn/pa...
closed
https://github.com/huggingface/datasets/issues/2965
2021-09-25T13:17:32
2022-08-31T06:47:11
2022-08-31T06:47:10
{ "login": "Ririkoo", "id": 3339950, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,006,605,904
2,964
Error when calculating Matthews Correlation Coefficient loaded with `load_metric`
## Describe the bug After loading the metric named "[Matthews Correlation Coefficient](https://huggingface.co/metrics/matthews_correlation)" from `🤗datasets`, the `.compute` method fails with the following exception `AttributeError: 'float' object has no attribute 'item'` (complete stack trace can be provided if re...
closed
https://github.com/huggingface/datasets/issues/2964
2021-09-24T15:55:21
2024-02-16T10:14:35
2021-09-25T08:06:07
{ "login": "alvarobartt", "id": 36760800, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
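The `AttributeError` in #2964 arises from calling `.item()` on a plain Python `float`: only NumPy-style scalars expose that method. A minimal defensive sketch (the helper name is hypothetical, not the metric's actual fix):

```python
def as_python_float(score):
    # NumPy scalars expose .item() to unwrap to a Python number;
    # built-in floats do not, so guard before calling it.
    return score.item() if hasattr(score, "item") else float(score)

# A plain float passes through unchanged.
assert as_python_float(0.5391) == 0.5391
```

The same guard works for both return types of a correlation computation, whichever backend produced the score.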
1,006,588,605
2,963
raise TypeError( TypeError: Provided `function` which is applied to all elements of table returns a variable of type <class 'list'>. Make sure provided `function` returns a variable of type `dict` to update the dataset or `None` if you are only interested in side effects.
## Describe the bug A clear and concise description of what the bug is. I am trying to use Dataset to load my file in order to use a Bert embeddings model, but when I finish loading with dataset and want to pass it to the tokenizer using the map function, I get the following error: raise TypeError( TypeError: Provi...
closed
https://github.com/huggingface/datasets/issues/2963
2021-09-24T15:35:11
2021-09-24T15:38:24
2021-09-24T15:38:24
{ "login": "keloemma", "id": 40454218, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,006,557,666
2,962
Enable splits during streaming the dataset
## Describe the Problem I'd like to stream only a specific percentage or part of a dataset, i.e. to apply split slicing while streaming as well. ## Solution Enable splits when `streaming=True`, e.g. `dataset = load_dataset('dataset', split='train[:100]', streaming=True)`. ## Alternativ...
open
https://github.com/huggingface/datasets/issues/2962
2021-09-24T15:01:29
2025-07-17T04:53:20
null
{ "login": "merveenoyan", "id": 53175384, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
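Until split slicing is supported with `streaming=True`, the effect of `train[:100]` can be approximated client-side. A minimal sketch using `itertools.islice`, with a plain generator standing in for the streamed dataset:

```python
from itertools import islice

def streamed_examples():
    # stand-in for iterating a dataset loaded with streaming=True
    for i in range(10_000):
        yield {"id": i, "text": f"example {i}"}

# take only the first 100 examples, analogous to split='train[:100]'
first_100 = list(islice(streamed_examples(), 100))
assert len(first_100) == 100
```

Later releases of `datasets` added `IterableDataset.take(n)` and `.skip(n)`, which serve the same purpose directly on the stream.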
1,006,453,781
2,961
Fix CI doc build
Pin `fsspec`. Before the issue: `fsspec-2021.8.1`, `s3fs-2021.8.1`. Generating the issue: `fsspec-2021.9.0`, `s3fs-0.5.1`.
closed
https://github.com/huggingface/datasets/pull/2961
2021-09-24T13:13:28
2021-09-24T13:18:07
2021-09-24T13:18:07
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,006,222,850
2,960
Support pandas 1.3 new `read_csv` parameters
Support two new arguments introduced in pandas v1.3.0: - `encoding_errors` - `on_bad_lines` `read_csv` reference: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
closed
https://github.com/huggingface/datasets/pull/2960
2021-09-24T08:37:24
2021-09-24T11:22:31
2021-09-24T11:22:30
{ "login": "SBrandeis", "id": 33657802, "type": "User" }
[]
true
[]
1,005,547,632
2,959
Added computer vision tasks
Added various image processing/computer vision tasks.
closed
https://github.com/huggingface/datasets/pull/2959
2021-09-23T15:07:27
2022-03-01T17:41:51
2022-03-01T17:41:51
{ "login": "merveenoyan", "id": 53175384, "type": "User" }
[]
true
[]
1,005,144,601
2,958
Add security policy to the project
Add security policy to the project, as recommended by GitHub: https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository Close #2953.
closed
https://github.com/huggingface/datasets/pull/2958
2021-09-23T08:20:55
2021-10-21T15:16:44
2021-10-21T15:16:43
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,004,868,337
2,957
MultiWOZ Dataset NonMatchingChecksumError
## Describe the bug The checksums for the downloaded MultiWOZ dataset and source MultiWOZ dataset aren't matching. ## Steps to reproduce the bug Both of the below dataset versions yield the checksum error: ```python from datasets import load_dataset dataset = load_dataset('multi_woz_v22', 'v2.2') dataset = loa...
closed
https://github.com/huggingface/datasets/issues/2957
2021-09-22T23:45:00
2022-03-15T16:07:02
2022-03-15T16:07:02
{ "login": "bradyneal", "id": 8754873, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,004,306,367
2,956
Cache problem in the `load_dataset` method for local compressed file(s)
## Describe the bug Cache problem in the `load_dataset` method: when modifying a compressed file in a local folder `load_dataset` doesn't detect the change and load the previous version. ## Steps to reproduce the bug To test it directly, I have prepared a [Google Colaboratory notebook](https://colab.research.g...
open
https://github.com/huggingface/datasets/issues/2956
2021-09-22T13:34:32
2023-08-31T16:49:01
null
{ "login": "SaulLu", "id": 55560583, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,003,999,469
2,955
Update legacy Python image for CI tests in Linux
Instead of legacy, use next-generation convenience images, built from the ground up with CI, efficiency, and determinism in mind. Here are some of the highlights: - Faster spin-up time - In Docker terminology, these next-gen images will generally have fewer and smaller layers. Using these new images will lead to fas...
closed
https://github.com/huggingface/datasets/pull/2955
2021-09-22T08:25:27
2021-09-24T10:36:05
2021-09-24T10:36:05
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,003,904,803
2,954
Run tests in parallel
Run CI tests in parallel to speed up the test suite. Speed up results: - Linux: from `7m 30s` to `5m 32s` - Windows: from `13m 52s` to `11m 10s`
closed
https://github.com/huggingface/datasets/pull/2954
2021-09-22T07:00:44
2021-09-28T06:55:51
2021-09-28T06:55:51
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,002,766,517
2,953
Trying to get in touch regarding a security issue
Hey there! I'd like to report a security issue but cannot find contact instructions on your repository. If not a hassle, might you kindly add a `SECURITY.md` file with an email, or another contact method? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-rep...
closed
https://github.com/huggingface/datasets/issues/2953
2021-09-21T15:58:13
2021-10-21T15:16:43
2021-10-21T15:16:43
{ "login": "JamieSlome", "id": 55323451, "type": "User" }
[]
false
[]
1,002,704,096
2,952
Fix missing conda deps
`aiohttp` was added as a dependency in #2662 but was missing for the conda build, which causes the 1.12.0 and 1.12.1 conda builds to fail. Fix #2932.
closed
https://github.com/huggingface/datasets/pull/2952
2021-09-21T15:23:01
2021-09-22T04:39:59
2021-09-21T15:30:44
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,001,267,888
2,951
Dummy labels no longer on by default in `to_tf_dataset`
After more experimentation, I think I have a way to do things that doesn't depend on adding `dummy_labels` - they were quite a hacky solution anyway!
closed
https://github.com/huggingface/datasets/pull/2951
2021-09-20T18:26:59
2021-09-21T14:00:57
2021-09-21T10:14:32
{ "login": "Rocketknight1", "id": 12866554, "type": "User" }
[]
true
[]
1,001,085,353
2,950
Fix fn kwargs in filter
#2836 broke the `fn_kwargs` parameter of `filter`, as mentioned in https://github.com/huggingface/datasets/issues/2927 I fixed that and added a test to make sure it doesn't happen again (for either map or filter) Fix #2927
closed
https://github.com/huggingface/datasets/pull/2950
2021-09-20T15:10:26
2021-09-20T16:22:59
2021-09-20T15:28:01
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,001,026,680
2,949
Introduce web and wiki config in triviaqa dataset
The TriviaQA paper suggests that the two subsets (Wikipedia and Web) should be treated differently. There are also different leaderboards for the two sets on CodaLab. For that reason, introduce additional builder configs in the trivia_qa dataset.
closed
https://github.com/huggingface/datasets/pull/2949
2021-09-20T14:17:23
2021-10-05T13:20:52
2021-10-01T15:39:29
{ "login": "shirte", "id": 1706443, "type": "User" }
[]
true
[]
1,000,844,077
2,948
Fix minor URL format in scitldr dataset
While investigating issue #2918, I found these minor format issues in the URLs (when run on a Windows machine).
closed
https://github.com/huggingface/datasets/pull/2948
2021-09-20T11:11:32
2021-09-20T13:18:28
2021-09-20T13:18:28
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,000,798,338
2,947
Don't use old, incompatible cache for the new `filter`
#2836 changed `Dataset.filter` and the resulting data that are stored in the cache are different and incompatible with the ones of the previous `filter` implementation. However the caching mechanism wasn't able to differentiate between the old and the new implementation of filter (only the method name was taken into...
closed
https://github.com/huggingface/datasets/pull/2947
2021-09-20T10:18:59
2021-09-20T16:25:09
2021-09-20T13:43:02
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
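The fix in #2947 requires the cache key to distinguish the old and new `filter` implementations, not just the method name. A minimal sketch of version-salting a fingerprint (hypothetical names, not the library's internals):

```python
import hashlib

def transform_fingerprint(dataset_fingerprint, method, impl_version):
    # Including an implementation version in the hashed payload guarantees
    # that caches written by the old filter() are never reused by the new one.
    payload = f"{dataset_fingerprint}::{method}::v{impl_version}"
    return hashlib.md5(payload.encode()).hexdigest()

# Same dataset, same method name, different implementation -> different key.
assert transform_fingerprint("a1b2", "filter", 1) != transform_fingerprint("a1b2", "filter", 2)
```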
1,000,754,824
2,946
Update meteor score from nltk update
It looks like there were issues in NLTK in the way the METEOR score was computed. A fix was added in NLTK at https://github.com/nltk/nltk/pull/2763, and therefore the scoring function no longer returns the same values. I updated the score of the example in the docs.
closed
https://github.com/huggingface/datasets/pull/2946
2021-09-20T09:28:46
2021-09-20T09:35:59
2021-09-20T09:35:59
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,000,624,883
2,945
Protect master branch
After accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into `datasets` master branch, all commits present in the feature branch were permanently added to `datasets` master branch history, as e.g.: - 00cc036fea7c7745cfe722360036ed306796a3f2 - 13ae8c98602bbad8197de3b9b425f4c78f582af1 - ... I propo...
closed
https://github.com/huggingface/datasets/issues/2945
2021-09-20T06:47:01
2021-09-20T12:01:27
2021-09-20T12:00:16
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,000,544,370
2,944
Add `remove_columns` to `IterableDataset`
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. ```python from datasets import load_dataset dataset = load_dataset("c4", 'realnewslike', streaming =True, split='train') dataset = dataset.remove_columns('url') ``` ``` AttributeError: 'I...
closed
https://github.com/huggingface/datasets/issues/2944
2021-09-20T04:01:00
2021-10-08T15:31:53
2021-10-08T15:31:53
{ "login": "changjonathanc", "id": 31893406, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "good first issue", "color": "7057ff" } ]
false
[]
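Before `remove_columns` landed on `IterableDataset` (#2944), the usual workaround was dropping keys inside a `map` function; the per-example logic is just a dict comprehension:

```python
def drop_columns(example, columns=("url",)):
    # keep every field except the ones being removed
    return {k: v for k, v in example.items() if k not in columns}

example = {"text": "some document", "url": "https://example.com", "timestamp": "2021-01-01"}
assert drop_columns(example) == {"text": "some document", "timestamp": "2021-01-01"}
```

On a streamed dataset this would be applied per example via `dataset.map(drop_columns)`; the sketch above only shows the transformation itself.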
1,000,355,115
2,943
Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with `ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='in...
closed
https://github.com/huggingface/datasets/issues/2943
2021-09-19T16:16:37
2021-09-20T16:25:43
2021-09-20T16:25:42
{ "login": "anton-l", "id": 26864830, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,000,309,765
2,942
Add SEDE dataset
This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card. Please see our paper for more details: https://arxiv.org/abs/2106.05006
closed
https://github.com/huggingface/datasets/pull/2942
2021-09-19T13:11:24
2021-09-24T10:39:55
2021-09-24T10:39:54
{ "login": "Hazoom", "id": 13545154, "type": "User" }
[]
true
[]
1,000,000,711
2,941
OSCAR unshuffled_original_ko: NonMatchingSplitsSizesError
## Describe the bug Cannot download OSCAR `unshuffled_original_ko` due to `NonMatchingSplitsSizesError`. ## Steps to reproduce the bug ```python >>> dataset = datasets.load_dataset('oscar', 'unshuffled_original_ko') NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=25292102197, num...
open
https://github.com/huggingface/datasets/issues/2941
2021-09-18T10:39:13
2022-01-19T14:10:07
null
{ "login": "ayaka14732", "id": 68557794, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "dataset bug", "color": "2edb81" } ]
false
[]
999,680,796
2,940
add swedish_medical_ner dataset
Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021"
closed
https://github.com/huggingface/datasets/pull/2940
2021-09-17T20:03:05
2021-10-05T12:13:34
2021-10-05T12:13:33
{ "login": "bwang482", "id": 6764450, "type": "User" }
[]
true
[]
999,639,630
2,939
MENYO-20k repo has moved, updating URL
Dataset repo moved to https://github.com/uds-lsv/menyo-20k_MT, now editing URL to match. https://github.com/uds-lsv/menyo-20k_MT/blob/master/data/train.tsv is the file we're looking for
closed
https://github.com/huggingface/datasets/pull/2939
2021-09-17T19:01:54
2021-09-21T15:31:37
2021-09-21T15:31:36
{ "login": "cdleong", "id": 4109253, "type": "User" }
[]
true
[]
999,552,263
2,938
Take namespace into account in caching
Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset taking only the dataset name into account, ignoring the username. Because of this, if a user later loaded "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing. I...
closed
https://github.com/huggingface/datasets/pull/2938
2021-09-17T16:57:33
2021-12-17T10:52:18
2021-09-29T13:01:31
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
999,548,277
2,937
load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied
## Describe the bug Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset('wiki_bio') ``` ## Expected results It is expected that the dataset downloads without any er...
closed
https://github.com/huggingface/datasets/issues/2937
2021-09-17T16:52:10
2022-08-24T13:09:08
2022-08-24T13:09:08
{ "login": "daqieq", "id": 40532020, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
999,521,647
2,936
Check that array is not Float as nan != nan
The Exception is meant to catch issues with StructArrays/ListArrays, but it also catches FloatArrays containing NaN values, since nan != nan. Pass on FloatArrays, as we should not raise an Exception for them.
closed
https://github.com/huggingface/datasets/pull/2936
2021-09-17T16:16:41
2021-09-21T09:39:05
2021-09-21T09:39:04
{ "login": "Iwontbecreative", "id": 494951, "type": "User" }
[]
true
[]
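The float case in #2936 exists because IEEE 754 defines `NaN != NaN`, so an equality-based consistency check always "fails" on float arrays containing NaN. A dependency-free illustration of the problem and a NaN-aware comparison:

```python
import math

nan = float("nan")
assert nan != nan  # the comparison that triggers the spurious Exception

def values_match(a, b):
    # treat two NaNs as equal for validation purposes
    if isinstance(a, float) and isinstance(b, float) and math.isnan(a) and math.isnan(b):
        return True
    return a == b

assert values_match(nan, nan)
```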
999,518,469
2,935
Add Jigsaw unintended Bias
Hi, Here's a first attempt at this dataset. Would be great if it could be merged relatively quickly as it is needed for Bigscience-related stuff. This requires manual download, and I had some trouble generating dummy_data in this setting, so welcoming feedback there.
closed
https://github.com/huggingface/datasets/pull/2935
2021-09-17T16:12:31
2021-09-24T10:41:52
2021-09-24T10:41:52
{ "login": "Iwontbecreative", "id": 494951, "type": "User" }
[]
true
[]
999,477,413
2,934
to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows
To reproduce: ```python import datasets as ds import weakref import gc d = ds.load_dataset("mnist", split="train") ref = weakref.ref(d._data.table) tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="label") del tfd, d gc.collect() assert ref() is None, "Error: there is at least one refe...
closed
https://github.com/huggingface/datasets/issues/2934
2021-09-17T15:26:53
2021-10-13T09:03:23
2021-10-13T09:03:23
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
999,392,566
2,933
Replace script_version with revision
As discussed in https://github.com/huggingface/datasets/pull/2718#discussion_r707013278, the parameter name `script_version` is no longer applicable to datasets without loading script (i.e., datasets only with raw data files). This PR replaces the parameter name `script_version` with `revision`. This way, we are ...
closed
https://github.com/huggingface/datasets/pull/2933
2021-09-17T14:04:39
2021-09-20T09:52:10
2021-09-20T09:52:10
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
999,317,750
2,932
Conda build fails
## Describe the bug Current `datasets` version in conda is 1.9 instead of 1.12. The build of the conda package fails.
closed
https://github.com/huggingface/datasets/issues/2932
2021-09-17T12:49:22
2021-09-21T15:31:10
2021-09-21T15:31:10
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
998,326,359
2,931
Fix bug in to_tf_dataset
Replace `set_format()` with `with_format()` so that we don't alter the original dataset in `to_tf_dataset()`.
closed
https://github.com/huggingface/datasets/pull/2931
2021-09-16T15:08:03
2021-09-16T17:01:38
2021-09-16T17:01:37
{ "login": "Rocketknight1", "id": 12866554, "type": "User" }
[]
true
[]
998,154,311
2,930
Mutable columns argument breaks set_format
## Describe the bug If you pass a mutable list to the `columns` argument of `set_format` and then change the list afterwards, the returned columns also change. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("glue", "cola") column_list = ["idx", "label"] datas...
closed
https://github.com/huggingface/datasets/issues/2930
2021-09-16T12:27:22
2021-09-16T13:50:53
2021-09-16T13:50:53
{ "login": "Rocketknight1", "id": 12866554, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
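The bug in #2930 is plain Python aliasing: storing the caller's list means later mutations show through. A stand-in sketch of the pre-fix behavior and the defensive copy that resolves it (no `datasets` required; the function names are illustrative):

```python
def set_format_buggy(state, columns):
    state["columns"] = columns          # stores a reference (pre-fix behavior)

def set_format_fixed(state, columns):
    state["columns"] = list(columns)    # defensive copy decouples the caller

column_list = ["idx", "label"]
buggy, fixed = {}, {}
set_format_buggy(buggy, column_list)
set_format_fixed(fixed, column_list)
column_list[1] = "sentence"             # caller mutates the list afterwards
assert buggy["columns"] == ["idx", "sentence"]  # leaked through the alias
assert fixed["columns"] == ["idx", "label"]     # unaffected by the mutation
```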
997,960,024
2,929
Add regression test for null Sequence
Relates to #2892 and #2900.
closed
https://github.com/huggingface/datasets/pull/2929
2021-09-16T08:58:33
2021-09-17T08:23:59
2021-09-17T08:23:59
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
997,941,506
2,928
Update BibTeX entry
Update BibTeX entry.
closed
https://github.com/huggingface/datasets/pull/2928
2021-09-16T08:39:20
2021-09-16T12:35:34
2021-09-16T12:35:34
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
997,654,680
2,927
Datasets 1.12 dataset.filter TypeError: get_indices_from_mask_function() got an unexpected keyword argument
## Describe the bug Upgrading to 1.12 caused the `dataset.filter` call to fail with > get_indices_from_mask_function() got an unexpected keyword argument valid_rel_labels ## Steps to reproduce the bug ```python def filter_good_rows( ex: Dict, valid_rel_labels: Set[str], valid_ner_labels: Set[st...
closed
https://github.com/huggingface/datasets/issues/2927
2021-09-16T01:14:02
2021-09-20T16:23:22
2021-09-20T16:23:21
{ "login": "timothyjlaurent", "id": 2000204, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
997,463,277
2,926
Error when downloading datasets to non-traditional cache directories
## Describe the bug When the cache directory is linked (soft link) to a directory on a NetApp device, the download fails. ## Steps to reproduce the bug ```bash ln -s /path/to/netapp/.cache ~/.cache ``` ```python load_dataset("imdb") ``` ## Expected results Successfully loading IMDB dataset ## Actual...
open
https://github.com/huggingface/datasets/issues/2926
2021-09-15T19:59:46
2021-11-24T21:42:31
null
{ "login": "dar-tau", "id": 45885627, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
997,407,034
2,925
Add tutorial for no-code dataset upload
This PR is for a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, introduces the online tagging tool for creating tags, and the Dataset card template to get a head start on filling it out. The addition of this tutorial should make it easier for beginners to upload a dat...
closed
https://github.com/huggingface/datasets/pull/2925
2021-09-15T18:54:42
2021-09-27T17:51:55
2021-09-27T17:51:55
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
true
[]
997,378,113
2,924
"File name too long" error for file locks
## Describe the bug Getting the following error when calling `load_dataset("gar1t/test")`: ``` OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.inc...
closed
https://github.com/huggingface/datasets/issues/2924
2021-09-15T18:16:50
2023-12-08T13:39:51
2021-10-29T09:42:24
{ "login": "gar1t", "id": 184949, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
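Lock files named after the full cache path can exceed the filesystem's typical 255-byte filename limit, which is what #2924 hits. One common remedy is truncating the name and appending a digest so the shortened names stay unique; a sketch, not the library's exact fix:

```python
import hashlib

def safe_lock_name(name, max_len=255):
    # Names within the limit pass through untouched; longer ones are
    # truncated, with a short hash of the full name keeping them unique.
    if len(name) <= max_len:
        return name
    digest = hashlib.sha256(name.encode()).hexdigest()[:16]
    return name[: max_len - len(digest) - 1] + "_" + digest

long_name = "a" * 300 + ".lock"
assert len(safe_lock_name(long_name)) == 255
assert safe_lock_name("short.lock") == "short.lock"
```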
997,351,590
2,923
Loading an autonlp dataset raises in normal mode but not in streaming mode
## Describe the bug The same dataset (from autonlp) raises an error in normal mode, but does not raise in streaming mode ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("severo/autonlp-data-sentiment_detection-3c8bcd36", split="train", streaming=False) ## raises an err...
closed
https://github.com/huggingface/datasets/issues/2923
2021-09-15T17:44:38
2022-04-12T10:09:40
2022-04-12T10:09:39
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
997,332,662
2,922
Fix conversion of multidim arrays in list to arrow
Arrow only supports 1-dim arrays. Previously we were converting all the numpy arrays to python list before instantiating arrow arrays to workaround this limitation. However in #2361 we started to keep numpy arrays in order to keep their dtypes. It works when we pass any multi-dim numpy array (the conversion to arrow ...
closed
https://github.com/huggingface/datasets/pull/2922
2021-09-15T17:21:36
2021-09-15T17:22:52
2021-09-15T17:21:45
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
997,325,424
2,921
Using a list of multi-dim numpy arrays raises an error "can only convert 1-dimensional array values"
This error has been introduced in https://github.com/huggingface/datasets/pull/2361 To reproduce: ```python import numpy as np from datasets import Dataset d = Dataset.from_dict({"a": [np.zeros((2, 2))]}) ``` raises ```python Traceback (most recent call last): File "playground/ttest.py", line 5, in <mod...
closed
https://github.com/huggingface/datasets/issues/2921
2021-09-15T17:12:11
2021-09-15T17:21:45
2021-09-15T17:21:45
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
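Arrow arrays are one-dimensional, so a `(2, 2)` NumPy array has to become a nested list-of-lists (`list<list<double>>`) before conversion, which is what `np.zeros((2, 2)).tolist()` produces. A dependency-free sketch of that reshaping (the helper is illustrative, not the library's code):

```python
def to_nested_lists(flat, shape):
    # recursively rebuild nested Python lists from a flat buffer
    if len(shape) == 1:
        return list(flat[: shape[0]])
    step = len(flat) // shape[0]
    return [to_nested_lists(flat[i * step:(i + 1) * step], shape[1:])
            for i in range(shape[0])]

assert to_nested_lists([0.0, 0.0, 0.0, 0.0], (2, 2)) == [[0.0, 0.0], [0.0, 0.0]]
```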
997,323,014
2,920
Fix unwanted tqdm bar when accessing examples
A change in #2814 added unwanted progress bars in `map_nested`. Now they're disabled by default. Fix #2919.
closed
https://github.com/huggingface/datasets/pull/2920
2021-09-15T17:09:11
2021-09-15T17:18:24
2021-09-15T17:18:24
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
997,127,487
2,919
Unwanted progress bars when accessing examples
When accessing examples from a dataset formatted for pytorch, some progress bars appear when accessing examples: ```python In [1]: import datasets as ds In [2]: d = ds.Dataset.from_dict({"a": [0, 1, 2]}).with_format("torch") ...
closed
https://github.com/huggingface/datasets/issues/2919
2021-09-15T14:05:10
2021-09-15T17:21:49
2021-09-15T17:18:23
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
997,063,347
2,918
`Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming
## Describe the bug Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`: ```python ClientPayloadError: 400, message='Can not decode content-encoding: gzip' ``` cc @lhoestq ## Steps to reproduce the bug ```python from datasets import load_...
closed
https://github.com/huggingface/datasets/issues/2918
2021-09-15T13:06:07
2021-12-01T08:15:00
2021-12-01T08:15:00
{ "login": "SBrandeis", "id": 33657802, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "streaming", "color": "fef2c0" } ]
false
[]
997,041,658
2,917
windows download abnormal
## Describe the bug The script clearly exists (it is accessible from the browser), but the script download fails on Windows. When I tried again on Linux, it downloaded normally. Why? ## Steps to reproduce the bug ```python3.7 + windows ![image](https://user-images.githubusercontent.com/52347799/133436174-43...
closed
https://github.com/huggingface/datasets/issues/2917
2021-09-15T12:45:35
2021-09-16T17:17:48
2021-09-16T17:17:48
{ "login": "wei1826676931", "id": 52347799, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
997,003,661
2,916
Add OpenAI's pass@k code evaluation metric
This PR introduces the `code_eval` metric which implements [OpenAI's code evaluation harness](https://github.com/openai/human-eval) introduced in the [Codex paper](https://arxiv.org/abs/2107.03374). It is heavily based on the original implementation and just adapts the interface to follow the `predictions`/`references`...
closed
https://github.com/huggingface/datasets/pull/2916
2021-09-15T12:05:43
2021-11-12T14:19:51
2021-11-12T14:19:50
{ "login": "lvwerra", "id": 8264887, "type": "User" }
[]
true
[]
996,870,071
2,915
Fix fsspec AbstractFileSystem access
This addresses the issue from #2914 by changing the way fsspec's AbstractFileSystem is accessed.
closed
https://github.com/huggingface/datasets/pull/2915
2021-09-15T09:39:20
2021-09-15T11:35:24
2021-09-15T11:35:24
{ "login": "pierre-godard", "id": 3969168, "type": "User" }
[]
true
[]
996,770,168
2,914
Having a dependency defining fsspec entrypoint raises an AttributeError when importing datasets
## Describe the bug In one of my project, I defined a custom fsspec filesystem with an entrypoint. My guess is that by doing so, a variable named `spec` is created in the module `fsspec` (created by entering a for loop as there are entrypoints defined, see the loop in question [here](https://github.com/intake/filesys...
closed
https://github.com/huggingface/datasets/issues/2914
2021-09-15T07:54:06
2021-09-15T16:49:17
2021-09-15T16:49:16
{ "login": "pierre-godard", "id": 3969168, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
996,436,368
2,913
timit_asr dataset only includes one text phrase
## Describe the bug The 'timit_asr' dataset includes only one text phrase: the transcription "Would such an act of refusal be useful?" is repeated multiple times instead of varying across examples. ## Steps to reproduce the bug Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-englis...
closed
https://github.com/huggingface/datasets/issues/2913
2021-09-14T21:06:07
2021-09-15T08:05:19
2021-09-15T08:05:18
{ "login": "margotwagner", "id": 39107794, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
996,256,005
2,912
Update link to Blog in docs footer
Update link.
closed
https://github.com/huggingface/datasets/pull/2912
2021-09-14T17:23:14
2021-09-15T07:59:23
2021-09-15T07:59:23
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
996,202,598
2,911
Fix exception chaining
Fix exception chaining to avoid tracebacks with message: `During handling of the above exception, another exception occurred:`
closed
https://github.com/huggingface/datasets/pull/2911
2021-09-14T16:19:29
2021-09-16T15:04:44
2021-09-16T15:04:44
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
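The traceback noise that #2911 targets is controlled by Python's `raise ... from ...` forms: `from err` marks the original exception as the explicit cause, while `from None` suppresses the "During handling of the above exception, another exception occurred" section entirely. A small self-contained illustration:

```python
def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError as err:
        # Explicit chaining: the traceback shows FileNotFoundError as the
        # direct cause ("The above exception was the direct cause of ...").
        raise RuntimeError(f"could not load config at {path!r}") from err

try:
    load_config("/nonexistent/config.json")
except RuntimeError as exc:
    assert isinstance(exc.__cause__, FileNotFoundError)
```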
996,149,632
2,910
feat: 🎸 pass additional arguments to get private configs + info
`use_auth_token` can now be passed to the functions to get the configs or infos of private datasets on the hub
closed
https://github.com/huggingface/datasets/pull/2910
2021-09-14T15:24:19
2021-09-15T16:19:09
2021-09-15T16:19:06
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
996,002,180
2,909
fix anli splits
I can't run the tests for dummy data, facing this error `ImportError while loading conftest '/home/zaid/tmp/fix_anli_splits/datasets/tests/conftest.py'. tests/conftest.py:10: in <module> from datasets import config E ImportError: cannot import name 'config' from 'datasets' (unknown location)`
closed
https://github.com/huggingface/datasets/pull/2909
2021-09-14T13:10:35
2021-10-13T11:27:49
2021-10-13T11:27:49
{ "login": "zaidalyafeai", "id": 15667714, "type": "User" }
[]
true
[]
995,970,612
2,908
Update Zenodo metadata with creator names and affiliation
This PR helps in prefilling author data when automatically generating the DOI after each release.
closed
https://github.com/huggingface/datasets/pull/2908
2021-09-14T12:39:37
2021-09-14T14:29:25
2021-09-14T14:29:25
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
995,968,152
2,907
add story_cloze dataset
@lhoestq I have spent some time but I still I can't succeed in correctly testing the dummy_data.
closed
https://github.com/huggingface/datasets/pull/2907
2021-09-14T12:36:53
2021-10-08T21:41:42
2021-10-08T21:41:41
{ "login": "zaidalyafeai", "id": 15667714, "type": "User" }
[]
true
[]
995,962,905
2,906
feat: 🎸 add a function to get a dataset config's split names
Also: pass additional arguments (use_auth_token) to get private configs + info of private datasets on the hub Questions: - [x] I'm not sure how the versions work: I changed 1.12.1.dev0 to 1.12.1.dev1, was it correct? -> no: reverted - [x] Should I add a section in https://github.com/huggingface/datasets/blo...
closed
https://github.com/huggingface/datasets/pull/2906
2021-09-14T12:31:22
2021-10-04T09:55:38
2021-10-04T09:55:37
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
995,843,964
2,905
Update BibTeX entry
Update BibTeX entry.
closed
https://github.com/huggingface/datasets/pull/2905
2021-09-14T10:16:17
2021-09-14T12:25:37
2021-09-14T12:25:37
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
995,814,222
2,904
FORCE_REDOWNLOAD does not work
## Describe the bug With GenerateMode.FORCE_REDOWNLOAD, the documentation says +------------------------------------+-----------+---------+ | | Downloads | Dataset | +====================================+===========+=========+ | `REUSE_DATASET_IF_EXISTS` (default...
open
https://github.com/huggingface/datasets/issues/2904
2021-09-14T09:45:26
2021-10-06T09:37:19
null
{ "login": "anoopkatti", "id": 5278299, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
995,715,191
2,903
Fix xpathopen to accept positional arguments
Fix `xpathopen()` so that it also accepts positional arguments. Fix #2901.
closed
https://github.com/huggingface/datasets/pull/2903
2021-09-14T08:02:50
2021-09-14T08:51:21
2021-09-14T08:40:47
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
995,254,216
2,902
Add WIT Dataset
## Adding a Dataset - **Name:** *WIT* - **Description:** *Wikipedia-based Image Text Dataset* - **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning ](https://arxiv.org/abs/2103.01913)* - **Data:** *https://github.com/google-research-datasets/wit* - **Motivation:** (e...
closed
https://github.com/huggingface/datasets/issues/2902
2021-09-13T19:38:49
2024-10-02T15:37:48
2022-06-01T17:28:40
{ "login": "nateraw", "id": 32437151, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
995,232,844
2,901
Incompatibility with pytest
## Describe the bug pytest complains about xpathopen / path.open("w") ## Steps to reproduce the bug Create a test file, `test.py`: ```python import datasets as ds def load_dataset(): ds.load_dataset("counter", split="train", streaming=True) ``` And launch it with pytest: ```bash python -m pyt...
closed
https://github.com/huggingface/datasets/issues/2901
2021-09-13T19:12:17
2021-09-14T08:40:47
2021-09-14T08:40:47
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
994,922,580
2,900
Fix null sequence encoding
The Sequence feature encoding was failing when a `None` sequence was used in a dataset. Fix https://github.com/huggingface/datasets/issues/2892
closed
https://github.com/huggingface/datasets/pull/2900
2021-09-13T13:55:08
2021-09-13T14:17:43
2021-09-13T14:17:42
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
994,082,432
2,899
Dataset
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
closed
https://github.com/huggingface/datasets/issues/2899
2021-09-12T07:38:53
2021-09-12T16:12:15
2021-09-12T16:12:15
{ "login": "rcacho172", "id": 90449239, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
994,032,814
2,898
Hug emoji
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
closed
https://github.com/huggingface/datasets/issues/2898
2021-09-12T03:27:51
2021-09-12T16:13:13
2021-09-12T16:13:13
{ "login": "Jackg-08", "id": 90539794, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
993,798,386
2,897
Add OpenAI's HumanEval dataset
This PR adds OpenAI's [HumanEval](https://github.com/openai/human-eval) dataset. The dataset consists of 164 handcrafted programming problems with solutions and unittests to verify solution. This dataset is useful to evaluate code generation models.
closed
https://github.com/huggingface/datasets/pull/2897
2021-09-11T09:37:47
2021-09-16T15:02:11
2021-09-16T15:02:11
{ "login": "lvwerra", "id": 8264887, "type": "User" }
[]
true
[]
993,613,113
2,896
add multi-proc in `to_csv`
This PR extends the multi-proc method used in #2747 for`to_json` to `to_csv` as well. Results on my machine post benchmarking on `ascent_kb` dataset (giving ~45% improvement when compared to num_proc = 1): ``` Time taken on 1 num_proc, 10000 batch_size 674.2055702209473 Time taken on 4 num_proc, 10000 batch_siz...
closed
https://github.com/huggingface/datasets/pull/2896
2021-09-10T21:35:09
2021-10-28T05:47:33
2021-10-26T16:00:42
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
993,462,274
2,895
Use pyarrow.Table.replace_schema_metadata instead of pyarrow.Table.cast
This PR partially addresses #2252. ``update_metadata_with_features`` uses ``Table.cast`` which slows down ``load_from_disk`` (and possibly other methods that use it) for very large datasets. Since ``update_metadata_with_features`` is only updating the schema metadata, it makes more sense to use ``pyarrow.Table.repla...
closed
https://github.com/huggingface/datasets/pull/2895
2021-09-10T17:56:57
2021-09-21T22:50:01
2021-09-21T08:18:35
{ "login": "arsarabi", "id": 12345848, "type": "User" }
[]
true
[]
993,375,654
2,894
Fix COUNTER dataset
Fix filename generating `FileNotFoundError`. Related to #2866. CC: @severo.
closed
https://github.com/huggingface/datasets/pull/2894
2021-09-10T16:07:29
2021-09-10T16:27:45
2021-09-10T16:27:44
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
993,342,781
2,893
add mbpp dataset
This PR adds the mbpp dataset introduced by Google [here](https://github.com/google-research/google-research/tree/master/mbpp) as mentioned in #2816. The dataset contains two versions: a full and a sanitized one. They have a slightly different schema and in its current state the loading preserves the original schema. ...
closed
https://github.com/huggingface/datasets/pull/2893
2021-09-10T15:27:30
2021-09-16T09:35:42
2021-09-16T09:35:42
{ "login": "lvwerra", "id": 8264887, "type": "User" }
[]
true
[]
993,274,572
2,892
Error when encoding a dataset with None objects with a Sequence feature
There is an error when encoding a dataset with None objects with a Sequence feature To reproduce: ```python from datasets import Dataset, Features, Value, Sequence data = {"a": [[0], None]} features = Features({"a": Sequence(Value("int32"))}) dataset = Dataset.from_dict(data, features=features) ``` raises ...
closed
https://github.com/huggingface/datasets/issues/2892
2021-09-10T14:11:43
2021-09-13T14:18:13
2021-09-13T14:17:42
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
993,161,984
2,891
Allow dynamic first dimension for ArrayXD
Add support for dynamic first dimension for ArrayXD features. See issue [#887](https://github.com/huggingface/datasets/issues/887). Following changes allow for `to_pylist` method of `ArrayExtensionArray` to return a list of numpy arrays where fist dimension can vary. @lhoestq Could you suggest how you want to exten...
closed
https://github.com/huggingface/datasets/pull/2891
2021-09-10T11:52:52
2021-11-23T15:33:13
2021-10-29T09:37:17
{ "login": "rpowalski", "id": 10357417, "type": "User" }
[]
true
[]
993,074,102
2,890
0x290B112ED1280537B24Ee6C268a004994a16e6CE
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
closed
https://github.com/huggingface/datasets/issues/2890
2021-09-10T09:51:17
2021-09-10T11:45:29
2021-09-10T11:45:29
{ "login": "rcacho172", "id": 90449239, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
992,968,382
2,889
Coc
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
closed
https://github.com/huggingface/datasets/issues/2889
2021-09-10T07:32:07
2021-09-10T11:45:54
2021-09-10T11:45:54
{ "login": "Bwiggity", "id": 90444264, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
992,676,535
2,888
v1.11.1 release date
Hello, I need to use the latest features in one of my packages but there have been no new datasets releases since 2 months ago. When do you plan to publish the v1.11.1 release?
closed
https://github.com/huggingface/datasets/issues/2888
2021-09-09T21:53:15
2021-09-12T20:18:35
2021-09-12T16:15:39
{ "login": "fcakyon", "id": 34196005, "type": "User" }
[ { "name": "question", "color": "d876e3" } ]
false
[]
992,576,305
2,887
#2837 Use cache folder for lockfile
Fixes #2837 Use a cache folder directory to store the FileLock. The issue was that the lock file was in a readonly folder.
closed
https://github.com/huggingface/datasets/pull/2887
2021-09-09T19:55:56
2021-10-05T17:58:22
2021-10-05T17:58:22
{ "login": "Dref360", "id": 8976546, "type": "User" }
[]
true
[]
992,534,632
2,886
Hj
null
closed
https://github.com/huggingface/datasets/issues/2886
2021-09-09T18:58:52
2021-09-10T11:46:29
2021-09-10T11:46:29
{ "login": "Noorasri", "id": 90416328, "type": "User" }
[]
false
[]
992,160,544
2,885
Adding an Elastic Search index to a Dataset
## Describe the bug When trying to index documents from the squad dataset, the connection to ElasticSearch seems to break: Reusing dataset squad (/Users/andreasmotz/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453) 90%|████████████████████████████...
open
https://github.com/huggingface/datasets/issues/2885
2021-09-09T12:21:39
2021-10-20T18:57:11
null
{ "login": "MotzWanted", "id": 36195371, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
992,135,698
2,884
Add IC, SI, ER tasks to SUPERB
This PR adds 3 additional classification tasks to SUPERB #### Intent Classification Dataset URL seems to be down at the moment :( See the note below. S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/...
closed
https://github.com/huggingface/datasets/pull/2884
2021-09-09T11:56:03
2021-09-20T09:17:58
2021-09-20T09:00:49
{ "login": "anton-l", "id": 26864830, "type": "User" }
[]
true
[]
991,969,875
2,883
Fix data URLs and metadata in DocRED dataset
The host of `docred` dataset has updated the `dev` data file. This PR: - Updates the dev URL - Updates dataset metadata This PR also fixes the URL of the `train_distant` split, which was wrong. Fix #2882.
closed
https://github.com/huggingface/datasets/pull/2883
2021-09-09T08:55:34
2021-09-13T11:24:31
2021-09-13T11:24:31
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
991,800,141
2,882
`load_dataset('docred')` results in a `NonMatchingChecksumError`
## Describe the bug I get consistent `NonMatchingChecksumError: Checksums didn't match for dataset source files` errors when trying to execute `datasets.load_dataset('docred')`. ## Steps to reproduce the bug It is quasi only this code: ```python import datasets data = datasets.load_dataset('docred') ``` ## ...
closed
https://github.com/huggingface/datasets/issues/2882
2021-09-09T05:55:02
2021-09-13T11:24:30
2021-09-13T11:24:30
{ "login": "tmpr", "id": 51313597, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
991,639,142
2,881
Add BIOSSES dataset
Adding the biomedical semantic sentence similarity dataset, BIOSSES, listed in "Biomedical Datasets - BigScience Workshop 2021"
closed
https://github.com/huggingface/datasets/pull/2881
2021-09-09T00:35:36
2021-09-13T14:20:40
2021-09-13T14:20:40
{ "login": "bwang482", "id": 6764450, "type": "User" }
[]
true
[]
990,877,940
2,880
Extend support for streaming datasets that use pathlib.Path stem/suffix
This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the properties `pathlib.Path.stem` and `pathlib.Path.suffix`. Related to #2876, #2874, #2866. CC: @severo
closed
https://github.com/huggingface/datasets/pull/2880
2021-09-08T08:42:43
2021-09-09T13:13:29
2021-09-09T13:13:29
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
990,257,404
2,879
In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?"
## Describe the bug Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same. ## Steps to reproduce the bug I was following this tutorial - https://huggingface.co/blog/fine-tune-wav2vec2-english But here's a distilled repro: ```python !pip install datasets==1.4.1 from datasets import load_datas...
closed
https://github.com/huggingface/datasets/issues/2879
2021-09-07T18:53:45
2021-09-08T16:55:19
2021-09-08T09:12:28
{ "login": "rcgale", "id": 2279700, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
990,093,316
2,878
NotADirectoryError: [WinError 267] During load_from_disk
## Describe the bug Trying to load saved dataset or dataset directory from Amazon S3 on a Windows machine fails. Performing the same operation succeeds on non-windows environment (AWS Sagemaker). ## Steps to reproduce the bug ```python # Followed https://huggingface.co/docs/datasets/filesystems.html#loading-a-pr...
open
https://github.com/huggingface/datasets/issues/2878
2021-09-07T15:15:05
2021-09-07T15:15:05
null
{ "login": "Grassycup", "id": 1875064, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
990,027,249
2,877
Don't keep the dummy data folder or dataset_infos.json when resolving data files
When there's no dataset script, all the data files of a folder or a repository on the Hub are loaded as data files. There are already a few exceptions: - files starting with "." are ignored - the dataset card "README.md" is ignored - any file named "config.json" is ignored (currently it isn't used anywhere, but i...
closed
https://github.com/huggingface/datasets/issues/2877
2021-09-07T14:09:04
2021-09-29T09:05:38
2021-09-29T09:05:38
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
990,001,079
2,876
Extend support for streaming datasets that use pathlib.Path.glob
This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the method `pathlib.Path.glob`. Related to #2874, #2866. CC: @severo
closed
https://github.com/huggingface/datasets/pull/2876
2021-09-07T13:43:45
2021-09-10T09:50:49
2021-09-10T09:50:48
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
989,919,398
2,875
Add Congolese Swahili speech datasets
## Adding a Dataset - **Name:** Congolese Swahili speech corpora - **Data:** https://gamayun.translatorswb.org/data/ Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Also related: https://mobile.twitter.com/OktemAlp/status/14351963936...
open
https://github.com/huggingface/datasets/issues/2875
2021-09-07T12:13:50
2021-09-07T12:13:50
null
{ "login": "osanseviero", "id": 7246357, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "speech", "color": "d93f0b" } ]
false
[]
989,685,328
2,874
Support streaming datasets that use pathlib
This PR extends the support in streaming mode for datasets that use `pathlib.Path`. Related to: #2866. CC: @severo
closed
https://github.com/huggingface/datasets/pull/2874
2021-09-07T07:35:49
2021-09-07T18:25:22
2021-09-07T11:41:15
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
989,587,695
2,873
adding swedish_medical_ner
Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021" Code refactored
closed
https://github.com/huggingface/datasets/pull/2873
2021-09-07T04:44:53
2021-09-17T20:47:37
2021-09-17T20:47:37
{ "login": "bwang482", "id": 6764450, "type": "User" }
[]
true
[]
989,453,069
2,872
adding swedish_medical_ner
Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021"
closed
https://github.com/huggingface/datasets/pull/2872
2021-09-06T22:00:52
2021-09-07T04:36:32
2021-09-07T04:36:32
{ "login": "bwang482", "id": 6764450, "type": "User" }
[]
true
[]
989,436,088
2,871
datasets.config.PYARROW_VERSION has no attribute 'major'
In the test_dataset_common.py script, line 288-289 ``` if datasets.config.PYARROW_VERSION.major < 3: packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"] ``` which throws the error below. `datasets.config.PYARROW_VERSION` itself return the string '4.0.1'. I have tested thi...
closed
https://github.com/huggingface/datasets/issues/2871
2021-09-06T21:06:57
2021-09-08T08:51:52
2021-09-08T08:51:52
{ "login": "bwang482", "id": 6764450, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]