| column | type | min | max |
|---|---|---|---|
| id | int64 | 599M | 3.26B |
| number | int64 | 1 | 7.7k |
| title | string (length) | 1 | 290 |
| body | string (length) | 0 | 228k |
| state | string (2 classes) | | |
| html_url | string (length) | 46 | 51 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-07-23 08:04:53 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-07-23 18:53:44 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-07-23 16:44:42 |
| user | dict | | |
| labels | list (length) | 0 | 4 |
| is_pull_request | bool (2 classes) | | |
| comments | list (length) | 0 | 0 |
890,439,523
2,354
Document DatasetInfo attributes
**Is your feature request related to a problem? Please describe.** As noted in PR #2255, the attributes of `DatasetInfo` are not documented in the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=datasetinfo#datasetinfo). It would be nice to do so :)
closed
https://github.com/huggingface/datasets/issues/2354
2021-05-12T20:01:29
2021-05-22T09:26:14
2021-05-22T09:26:14
{ "login": "lewtun", "id": 26859204, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
890,296,262
2,353
Update README validation rules
This PR allows unexpected subsections under third-level headings. All except `Contributions`. @lhoestq
closed
https://github.com/huggingface/datasets/pull/2353
2021-05-12T16:57:26
2021-05-14T08:56:06
2021-05-14T08:56:06
{ "login": "gchhablani", "id": 29076344, "type": "User" }
[]
true
[]
889,810,100
2,352
Set to_json default to JSON lines
With this PR, the method `Dataset.to_json`: - is added to the docs - defaults to JSON lines
closed
https://github.com/huggingface/datasets/pull/2352
2021-05-12T08:19:25
2021-05-21T09:01:14
2021-05-21T09:01:13
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
889,584,953
2,351
Simplify Faiss index save
Fixes #2350. In some cases, Faiss GPU index objects have neither "device" nor "getDevice". Possibly this happens when some part of the index is computed on CPU. In particular, this would happen with the index `OPQ16_128,IVF512,PQ32` (issue #2350). I did check it, but it is likely that `OPQ` or `PQ` transfor...
closed
https://github.com/huggingface/datasets/pull/2351
2021-05-12T03:54:10
2021-05-17T13:41:41
2021-05-17T13:41:41
{ "login": "Guitaricet", "id": 2821124, "type": "User" }
[]
true
[]
889,580,247
2,350
`FaissIndex.save` throws error on GPU
## Describe the bug After training an index with a factory string `OPQ16_128,IVF512,PQ32` on GPU, `.save_faiss_index` throws this error. ``` File "index_wikipedia.py", line 119, in <module> data["train"].save_faiss_index("text_emb", index_save_path) File "/home/vlialin/miniconda3/envs/cat/lib/python3.8...
closed
https://github.com/huggingface/datasets/issues/2350
2021-05-12T03:41:56
2021-05-17T13:41:41
2021-05-17T13:41:41
{ "login": "Guitaricet", "id": 2821124, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
888,586,018
2,349
Update task_ids for Ascent KB
This "other-other-knowledge-base" task is better suited for the dataset.
closed
https://github.com/huggingface/datasets/pull/2349
2021-05-11T20:44:33
2021-05-17T10:53:14
2021-05-17T10:48:34
{ "login": "phongnt570", "id": 6749421, "type": "User" }
[]
true
[]
887,927,737
2,348
Add tests for dataset cards
Adding tests for dataset cards This PR will potentially remove the scripts being used for dataset tags and readme validation. Additionally, this will allow testing dataset readmes by providing the name as follows: ```bash pytest tests/test_dataset_cards.py::test_dataset_tags[fashion_mnist] ``` and ```bas...
closed
https://github.com/huggingface/datasets/pull/2348
2021-05-11T17:14:27
2021-05-21T12:10:47
2021-05-21T12:10:47
{ "login": "gchhablani", "id": 29076344, "type": "User" }
[]
true
[]
887,404,868
2,347
Add an API to access the language and pretty name of a dataset
It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Transformers examples scripts.
closed
https://github.com/huggingface/datasets/issues/2347
2021-05-11T14:10:08
2022-10-05T17:16:54
2022-10-05T17:16:53
{ "login": "sgugger", "id": 35901082, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
886,632,114
2,346
Add Qasper Dataset
[Question Answering on Scientific Research Papers](https://allenai.org/project/qasper/home) Doing NLP on NLP papers to do NLP ♻️ I had to add it~ - [x] Add README (just gotta fill out some more ) - [x] Dataloader code - [x] Make dummy dataset - [x] generate dataset infos - [x] Tests
closed
https://github.com/huggingface/datasets/pull/2346
2021-05-11T09:25:44
2021-05-18T12:28:28
2021-05-18T12:28:28
{ "login": "cceyda", "id": 15624271, "type": "User" }
[]
true
[]
886,586,872
2,345
[Question] How to move and reuse preprocessed dataset?
Hi, I am training a GPT-2 from scratch using run_clm.py. I want to move and reuse the preprocessed dataset (it takes 2 hours to preprocess). I tried to: copy path_to_cache_dir/datasets to new_cache_dir/datasets and set export HF_DATASETS_CACHE="new_cache_dir/", but the program still re-preprocesses the whole dataset...
closed
https://github.com/huggingface/datasets/issues/2345
2021-05-11T09:09:17
2021-06-11T04:39:11
2021-06-11T04:39:11
{ "login": "AtmaHou", "id": 15045402, "type": "User" }
[]
false
[]
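One likely explanation (an assumption, not confirmed by the issue thread) is that the cache location is resolved when `datasets` is imported, so the environment variable must be exported before the import; persisting the processed dataset explicitly sidesteps the cache entirely:

```python
import os

# Set the cache dir *before* importing datasets, or the old default is used.
os.environ["HF_DATASETS_CACHE"] = "new_cache_dir"
# import datasets  # picks up new_cache_dir only from this point on

# More robust alternative: save the preprocessed dataset once, reload anywhere.
# tokenized.save_to_disk("shared/tokenized")            # on the old machine
# tokenized = datasets.load_from_disk("shared/tokenized")  # on the new one
```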
885,331,505
2,344
Is there a way to join multiple datasets in one?
**Is your feature request related to a problem? Please describe.** I need to join 2 datasets, one that is in the hub and another I've created from my files. Is there an easy way to join these 2? **Describe the solution you'd like** Id like to join them with a merge or join method, just like pandas dataframes. **Add...
open
https://github.com/huggingface/datasets/issues/2344
2021-05-10T23:16:10
2022-10-05T17:27:05
null
{ "login": "avacaondata", "id": 35173563, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
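`datasets.concatenate_datasets` covers stacking rows; a pandas-style keyed join was not built in as of this issue, so a common workaround is a round-trip through pandas, e.g. `Dataset.from_pandas(a.to_pandas().merge(b.to_pandas(), on="id"))`. The core of such an inner join, sketched over plain dicts:

```python
def inner_join(left, right, key):
    # Index the right-hand rows by key, then extend each matching left row;
    # this mirrors what the pandas merge in the workaround does.
    index = {row[key]: row for row in right}
    return [{**row, **index[row[key]]} for row in left if row[key] in index]
```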
883,208,539
2,343
Columns are removed before or after map function applied?
## Describe the bug According to the documentation, when applying the map function the columns listed in [remove_columns](https://huggingface.co/docs/datasets/processing.html#removing-columns) are removed after they are passed to the function, but in the [source code](https://huggingface.co/docs/datasets/package_reference/main_classes....
open
https://github.com/huggingface/datasets/issues/2343
2021-05-10T02:36:20
2022-10-24T11:31:55
null
{ "login": "taghizad3h", "id": 8199406, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
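The documented behaviour (columns are handed to the mapped function and only dropped from its output) can be pinned down with a small stand-in for `Dataset.map`; `map_with_remove` is a hypothetical helper, not the library's implementation:

```python
def map_with_remove(batch, fn, remove_columns):
    # The full batch, including columns slated for removal, reaches fn...
    out = {**batch, **fn(batch)}
    # ...and the listed columns are dropped only afterwards.
    for col in remove_columns:
        out.pop(col, None)
    return out
```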
882,981,420
2,342
Docs - CER above 1
CER can actually be greater than 1.
closed
https://github.com/huggingface/datasets/pull/2342
2021-05-09T23:41:00
2021-05-10T13:34:00
2021-05-10T13:34:00
{ "login": "borisdayma", "id": 715491, "type": "User" }
[]
true
[]
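Why CER can exceed 1: it is edit distance divided by the *reference* length, and a long wrong hypothesis can require more edits than the reference has characters. A minimal sketch (not the metric's actual implementation):

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def cer(prediction, reference):
    # Edits are counted against the reference length, so nothing caps
    # the ratio at 1 when the prediction is much longer than the reference.
    return levenshtein(prediction, reference) / len(reference)
```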
882,370,933
2,341
Added the Ascent KB
Added the Ascent Commonsense KB of 8.9M assertions. - Paper: [Advanced Semantics for Commonsense Knowledge Extraction (WWW'21)](https://arxiv.org/abs/2011.00905) - Website: https://ascent.mpi-inf.mpg.de/ (I am the author of the dataset)
closed
https://github.com/huggingface/datasets/pull/2341
2021-05-09T14:17:39
2021-05-11T09:16:59
2021-05-11T09:16:59
{ "login": "phongnt570", "id": 6749421, "type": "User" }
[]
true
[]
882,370,824
2,340
More consistent copy logic
Use `info.copy()` instead of `copy.deepcopy(info)`. `Features.copy` now creates a deep copy.
closed
https://github.com/huggingface/datasets/pull/2340
2021-05-09T14:17:33
2021-05-11T08:58:33
2021-05-11T08:58:33
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
882,046,077
2,338
fixed download link for web_science
Fixes #2337. Should work with: `dataset = load_dataset("web_of_science", "WOS11967", ignore_verifications=True)`
closed
https://github.com/huggingface/datasets/pull/2338
2021-05-09T09:12:20
2021-05-10T13:35:53
2021-05-10T13:35:53
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
881,610,567
2,337
NonMatchingChecksumError for web_of_science dataset
NonMatchingChecksumError when trying to download the web_of_science dataset. >NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://data.mendeley.com/datasets/9rw3vkcfy4/6/files/c9ea673d-5542-44c0-ab7b-f1311f7d61df/WebOfScience.zip?dl=1'] Setting `ignore_verifications=True` results...
closed
https://github.com/huggingface/datasets/issues/2337
2021-05-09T02:02:02
2021-05-10T13:35:53
2021-05-10T13:35:53
{ "login": "nbroad1881", "id": 24982805, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
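The failure mode behind #2337: the library records one checksum per source file when the dataset is created, and raises when the host later serves different bytes. A stdlib sketch of that verification step (the `datasets` internals differ in detail):

```python
import hashlib

def verify_checksum(data: bytes, expected_sha256: str) -> None:
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        # datasets raises NonMatchingChecksumError here; callers can opt
        # out with load_dataset(..., ignore_verifications=True).
        raise ValueError(f"checksum mismatch: {actual} != {expected_sha256}")
```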
881,298,783
2,336
Fix overflow issue in interpolation search
Fixes #2335 More info about this error can be found [here](https://stackoverflow.com/questions/53239890/why-do-i-keep-getting-this-error-runtimewarning-overflow-encountered-in-int-sc/53240100).
closed
https://github.com/huggingface/datasets/pull/2336
2021-05-08T20:51:36
2021-05-10T13:29:07
2021-05-10T13:26:12
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
881,291,887
2,335
Index error in Dataset.map
The following code, if executed on master, raises an IndexError (due to overflow): ```python >>> from datasets import * >>> d = load_dataset("bookcorpus", split="train") Reusing dataset bookcorpus (C:\Users\Mario\.cache\huggingface\datasets\bookcorpus\plain_text\1.0.0\44662c4a114441c35200992bea923b170e6f13f2f0beb7c...
closed
https://github.com/huggingface/datasets/issues/2335
2021-05-08T20:44:57
2021-05-10T13:26:12
2021-05-10T13:26:12
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
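The overflow in #2335 comes from the interpolation step `(key - arr[low]) * (high - low)`, which can exceed the fixed-width integer range on very large arrays such as bookcorpus. A pure-Python sketch of the search (the library's version operates on numpy offset arrays, where this arithmetic had to be fixed; unbounded Python ints sidestep it):

```python
def interpolation_search(arr, key):
    """Return the index of key in sorted arr, or -1 if absent."""
    low, high = 0, len(arr) - 1
    while low <= high and arr[low] <= key <= arr[high]:
        if arr[high] == arr[low]:
            return low if arr[low] == key else -1
        # With fixed-width ints this product is where the overflow occurred.
        mid = low + (key - arr[low]) * (high - low) // (arr[high] - arr[low])
        if arr[mid] < key:
            low = mid + 1
        elif arr[mid] > key:
            high = mid - 1
        else:
            return mid
    return -1
```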
879,810,107
2,334
Updating the DART file checksums in GEM
The DART files were just updated on the source GitHub https://github.com/Yale-LILY/dart/commit/34b3c872da4811523e334f1631e54ca8105dffab
closed
https://github.com/huggingface/datasets/pull/2334
2021-05-07T21:53:44
2021-05-07T22:18:10
2021-05-07T22:18:10
{ "login": "yjernite", "id": 10469459, "type": "User" }
[]
true
[]
879,214,067
2,333
Fix duplicate keys
As noticed in https://github.com/huggingface/datasets/pull/2245, many datasets yield duplicate keys. Most of the time it was because the counter used for ids was reset at each new data file.
closed
https://github.com/huggingface/datasets/pull/2333
2021-05-07T15:28:08
2021-05-08T21:47:31
2021-05-07T15:57:08
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
879,041,608
2,332
Add note about indices mapping in save_to_disk docstring
closed
https://github.com/huggingface/datasets/pull/2332
2021-05-07T13:49:42
2021-05-07T17:20:48
2021-05-07T17:20:48
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
879,031,427
2,331
Add Topical-Chat
## Adding a Dataset - **Name:** Topical-Chat - **Description:** a knowledge-grounded human-human conversation dataset where the underlying knowledge spans 8 broad topics and conversation partners don’t have explicitly defined roles - **Paper:** https://www.isca-speech.org/archive/Interspeech_2019/pdfs/3079.pdf - **...
open
https://github.com/huggingface/datasets/issues/2331
2021-05-07T13:43:59
2021-05-07T13:43:59
null
{ "login": "ktangri", "id": 22266659, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
878,490,927
2,330
Allow passing `desc` to `tqdm` in `Dataset.map()`
It's normal to have many `map()` calls, and some of them can take a few minutes; it would be nice to have a description on the progress bar. Alternative solution: print the description before/after the `map()` call.
closed
https://github.com/huggingface/datasets/issues/2330
2021-05-07T05:52:54
2021-05-26T14:59:21
2021-05-26T14:59:21
{ "login": "changjonathanc", "id": 31893406, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "good first issue", "color": "7057ff" } ]
false
[]
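The requested call shape is a single keyword argument forwarded to `tqdm`, along the lines of `ds.map(tokenize, batched=True, desc="Tokenizing")` (treat the exact signature as whatever the maintainers merged). The alternative suggested in the issue is easy to wrap yourself; `map_with_banner` is a hypothetical helper:

```python
def map_with_banner(map_fn, fn, desc, **kwargs):
    # Print the description before delegating to the real map call, e.g.
    # map_with_banner(ds.map, tokenize, "Tokenizing", batched=True)
    print(desc)
    return map_fn(fn, **kwargs)
```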
877,924,198
2,329
Add cache dir for in-memory datasets
Adds the cache dir attribute to DatasetInfo as suggested by @lhoestq. Should fix #2322
closed
https://github.com/huggingface/datasets/pull/2329
2021-05-06T19:35:32
2021-06-08T19:46:48
2021-06-08T19:06:46
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
877,673,896
2,328
Add Matthews/Pearson/Spearman correlation metrics
Added three metrics: - The Matthews correlation coefficient (from sklearn) - The Pearson correlation coefficient (from scipy) - The Spearman correlation coefficient (from scipy) cc @sgugger
closed
https://github.com/huggingface/datasets/pull/2328
2021-05-06T16:09:27
2021-05-06T16:58:10
2021-05-06T16:58:10
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
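The PR wraps sklearn's `matthews_corrcoef` and scipy's `pearsonr`/`spearmanr`; for reference, the Pearson coefficient those metrics report is just covariance normalized by the two standard deviations. A dependency-free sketch:

```python
def pearson(x, y):
    # Covariance of x and y divided by the product of their spreads,
    # so the result lies in [-1, 1].
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```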
877,565,831
2,327
A syntax error in example
![image](https://user-images.githubusercontent.com/6883957/117315905-b47a5c00-aeba-11eb-91eb-b2a4a0212a56.png) Sorry to report with an image, I can't find the template source code of this snippet.
closed
https://github.com/huggingface/datasets/issues/2327
2021-05-06T14:34:44
2021-05-20T03:04:19
2021-05-20T03:04:19
{ "login": "mymusise", "id": 6883957, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
876,829,254
2,326
Enable auto-download for PAN-X / Wikiann domain in XTREME
This PR replaces the manual download of the `PAN-X.lang` domains with an auto-download from a Dropbox link provided by the Wikiann author. We also add the relevant dummy data for these domains. While re-generating `dataset_infos.json` I ran into a `KeyError` in the `udpos.Arabic` domain so have included a fix for th...
closed
https://github.com/huggingface/datasets/pull/2326
2021-05-05T20:58:38
2021-05-07T08:41:10
2021-05-07T08:41:10
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
876,653,121
2,325
Added the HLGD dataset
Added the Headline Grouping Dataset (HLGD), from the NAACL2021 paper: News Headline Grouping as a Challenging NLU Task Dataset Link: https://github.com/tingofurro/headline_grouping Paper link: https://people.eecs.berkeley.edu/~phillab/pdfs/NAACL2021_HLG.pdf
closed
https://github.com/huggingface/datasets/pull/2325
2021-05-05T16:53:29
2021-05-12T14:55:13
2021-05-12T14:16:38
{ "login": "tingofurro", "id": 2609265, "type": "User" }
[]
true
[]
876,602,064
2,324
Create Audio feature
Create `Audio` feature to handle raw audio files. Some decisions to be further discussed: - I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanc...
closed
https://github.com/huggingface/datasets/pull/2324
2021-05-05T15:55:22
2021-10-13T10:26:33
2021-10-13T10:26:33
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
876,438,507
2,323
load_dataset("timit_asr") gives back duplicates of just one sample text
## Describe the bug When you look up on key ["train"] and then ['text'], you get back a list with just one sentence duplicated 4620 times. Namely, the sentence "Would such an act of refusal be useful?". Similarly when you look up ['test'] and then ['text'], the list is one sentence repeated "The bungalow was pleasant...
closed
https://github.com/huggingface/datasets/issues/2323
2021-05-05T13:14:48
2021-05-07T10:32:30
2021-05-07T10:32:30
{ "login": "ekeleshian", "id": 33647474, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
876,383,853
2,322
Calls to map are not cached.
## Describe the bug Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed? ## Steps to reproduce the bug ```python import datasets datasets.set_caching_enabled(True) sst = datasets.load_dataset("sst") def foo(samples, i): print("executed", i[:10])...
closed
https://github.com/huggingface/datasets/issues/2322
2021-05-05T12:11:27
2021-06-08T19:10:02
2021-06-08T19:08:21
{ "login": "villmow", "id": 2743060, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
876,304,364
2,321
Set encoding in OSCAR dataset
Set explicit `utf-8` encoding in OSCAR dataset, to avoid using the system default `cp1252` on Windows platforms. Fix #2319.
closed
https://github.com/huggingface/datasets/pull/2321
2021-05-05T10:27:03
2021-05-05T10:50:55
2021-05-05T10:50:55
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
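The whole fix is passing an explicit encoding instead of relying on the platform default (`cp1252` on Windows cannot represent characters such as Afrikaans `ʼn`):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "oscar_sample.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write("ʼn voorbeeld in Afrikaans\n")  # U+0149 is not representable in cp1252

with open(path, encoding="utf-8") as f:  # explicit encoding, as in the PR
    text = f.read()
```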
876,257,026
2,320
Set default name in init_dynamic_modules
Set default value for the name of dynamic modules. Close #2318.
closed
https://github.com/huggingface/datasets/pull/2320
2021-05-05T09:30:03
2021-05-06T07:57:54
2021-05-06T07:57:54
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
876,251,376
2,319
UnicodeDecodeError for OSCAR (Afrikaans)
## Describe the bug When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("oscar", "unshuffled_deduplicated_af") ```...
closed
https://github.com/huggingface/datasets/issues/2319
2021-05-05T09:22:52
2021-05-05T10:57:31
2021-05-05T10:50:55
{ "login": "sgraaf", "id": 8904453, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
876,212,460
2,318
[api request] API to obtain "dataset_module" dynamic path?
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. This is an awesome library. It seems like the dynamic module path in this library has broken some of hyperparameter tuning functionality: https://discuss.huggingface.co/t/using-hyperparamet...
closed
https://github.com/huggingface/datasets/issues/2318
2021-05-05T08:40:48
2021-05-06T08:45:45
2021-05-06T07:57:54
{ "login": "richardliaw", "id": 4529381, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
875,767,318
2,317
Fix incorrect version specification for the pyarrow package
This PR addresses the bug in the pyarrow version specification, which is detailed in #2316 . Simply, I put a comma between the version bounds. Fix #2316.
closed
https://github.com/huggingface/datasets/pull/2317
2021-05-04T19:30:20
2021-05-05T10:09:16
2021-05-05T09:21:58
{ "login": "cemilcengiz", "id": 32267027, "type": "User" }
[]
true
[]
875,756,353
2,316
Incorrect version specification for pyarrow
## Describe the bug The pyarrow dependency is incorrectly specified in setup.py file, in [this line](https://github.com/huggingface/datasets/blob/3a3e5a4da20bfcd75f8b6a6869b240af8feccc12/setup.py#L77). Also as a snippet: ```python "pyarrow>=1.0.0<4.0.0", ``` ## Steps to reproduce the bug ```bash pip install...
closed
https://github.com/huggingface/datasets/issues/2316
2021-05-04T19:15:11
2021-05-05T10:10:03
2021-05-05T10:10:03
{ "login": "cemilcengiz", "id": 32267027, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
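The bug and the fix from #2317, side by side: without the comma, the intended upper bound on pyarrow is not applied as two separate constraints:

```python
bad  = "pyarrow>=1.0.0<4.0.0"   # one garbled specifier: the <4.0.0 cap is lost
good = "pyarrow>=1.0.0,<4.0.0"  # comma separates the lower and upper bounds
```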
875,742,200
2,315
Datasets cli improvements
This PR: * replaces the code from the `bug_report.md` that was used to get relevant system info with a dedicated command (a more elegant approach than copy-pasting the code IMO) * removes the `download` command (copied from the transformers repo?) * adds missing help messages to the cli commands
closed
https://github.com/huggingface/datasets/pull/2315
2021-05-04T18:55:11
2021-05-10T16:36:51
2021-05-10T16:36:50
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
875,729,271
2,314
Minor refactor prepare_module
Start to refactor `prepare_module` to try to decouple functionality. This PR does: - extract function `_initialize_dynamic_modules_namespace_package` - extract function `_find_module_in_github_or_s3` - some renaming of variables - use of f-strings
closed
https://github.com/huggingface/datasets/pull/2314
2021-05-04T18:37:26
2021-10-13T09:07:34
2021-10-13T09:07:34
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
875,475,367
2,313
Remove unused head_hf_s3 function
Currently, the function `head_hf_s3` is not used: - neither its returned result is used - nor does it raise any exception, as exceptions are caught and returned (not raised) This PR removes it.
closed
https://github.com/huggingface/datasets/pull/2313
2021-05-04T13:42:06
2021-05-07T09:31:42
2021-05-07T09:31:42
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
875,435,726
2,312
Add rename_columnS method
Cherry-picked from #2255
closed
https://github.com/huggingface/datasets/pull/2312
2021-05-04T12:57:53
2021-05-04T13:43:13
2021-05-04T13:43:12
{ "login": "SBrandeis", "id": 33657802, "type": "User" }
[]
true
[]
875,262,208
2,311
Add SLR52, SLR53 and SLR54 to OpenSLR
Add large speech datasets for Sinhala, Bengali and Nepali.
closed
https://github.com/huggingface/datasets/pull/2311
2021-05-04T09:08:03
2021-05-07T09:50:55
2021-05-07T09:50:55
{ "login": "cahya-wirawan", "id": 7669893, "type": "User" }
[]
true
[]
875,096,051
2,310
Update README.md
Provides description of data instances and dataset features
closed
https://github.com/huggingface/datasets/pull/2310
2021-05-04T04:38:01
2022-07-06T15:19:58
2022-07-06T15:19:58
{ "login": "cryoff", "id": 15029054, "type": "User" }
[]
true
[]
874,644,990
2,309
Fix conda release
There were a few issues with conda releases (they've been failing for a while now). To fix this I had to: - add the --single-version-externally-managed tag to the build stage (suggestion from [here](https://stackoverflow.com/a/64825075)) - set the python version of the conda build stage to 3.8 since 3.9 isn't suppor...
closed
https://github.com/huggingface/datasets/pull/2309
2021-05-03T14:52:59
2021-05-03T16:01:17
2021-05-03T16:01:17
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
873,961,435
2,302
Add SubjQA dataset
Hello datasetters 🙂! Here's an interesting dataset about extractive question-answering on _subjective_ product / restaurant reviews. It's quite challenging for models fine-tuned on SQuAD and provides a nice example of domain adaptation (i.e. fine-tuning a SQuAD model on this domain gives better performance). I f...
closed
https://github.com/huggingface/datasets/pull/2302
2021-05-02T14:51:20
2021-05-10T09:21:19
2021-05-10T09:21:19
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
873,941,266
2,301
Unable to setup dev env on Windows
Hi I tried installing the `".[dev]"` version on Windows 10 after cloning. Here is the error I'm facing: ```bat (env) C:\testing\datasets>pip install -e ".[dev]" Obtaining file:///C:/testing/datasets Requirement already satisfied: numpy>=1.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datas...
closed
https://github.com/huggingface/datasets/issues/2301
2021-05-02T13:20:42
2021-05-03T15:18:01
2021-05-03T15:17:34
{ "login": "gchhablani", "id": 29076344, "type": "User" }
[]
false
[]
873,928,169
2,300
Add VoxPopuli
## Adding a Dataset - **Name:** Voxpopuli - **Description:** VoxPopuli is raw data collected from 2009-2020 European Parliament event recordings - **Paper:** https://arxiv.org/abs/2101.00390 - **Data:** https://github.com/facebookresearch/voxpopuli - **Motivation:** biggest unlabeled speech dataset **Note**:...
closed
https://github.com/huggingface/datasets/issues/2300
2021-05-02T12:17:40
2023-02-28T17:43:52
2023-02-28T17:43:51
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "speech", "color": "d93f0b" } ]
false
[]
873,914,717
2,299
My iPhone
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
closed
https://github.com/huggingface/datasets/issues/2299
2021-05-02T11:11:11
2021-07-23T09:24:16
2021-05-03T08:17:38
{ "login": "Jasonbuchanan1983", "id": 82856229, "type": "User" }
[]
false
[]
873,771,942
2,298
Mapping in the distributed setting
The barrier trick for distributed mapping as discussed on Thursday with @lhoestq
closed
https://github.com/huggingface/datasets/pull/2298
2021-05-01T21:23:05
2021-05-03T13:54:53
2021-05-03T13:54:53
{ "login": "TevenLeScao", "id": 26709476, "type": "User" }
[]
true
[]
872,974,907
2,296
1
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
closed
https://github.com/huggingface/datasets/issues/2296
2021-04-30T17:53:49
2021-05-03T08:17:31
2021-05-03T08:17:31
{ "login": "zinnyi", "id": 82880142, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
872,902,867
2,295
Create ExtractManager
Perform refactoring to decouple extract functionality.
closed
https://github.com/huggingface/datasets/pull/2295
2021-04-30T17:13:34
2021-07-12T14:12:03
2021-07-08T08:11:49
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "refactoring", "color": "B67A40" } ]
true
[]
872,136,075
2,294
Slow #0 when using map to tokenize.
Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, num_proc=args.preprocessing_num_workers, remove_columns=column_names, loa...
open
https://github.com/huggingface/datasets/issues/2294
2021-04-30T08:00:33
2021-05-04T11:00:11
null
{ "login": "VerdureChen", "id": 31714566, "type": "User" }
[]
false
[]
872,079,385
2,293
imdb dataset from Don't Stop Pretraining Paper
closed
https://github.com/huggingface/datasets/pull/2293
2021-04-30T06:40:48
2021-04-30T06:54:25
2021-04-30T06:54:25
{ "login": "BobbyManion", "id": 52530809, "type": "User" }
[]
true
[]
871,230,183
2,292
Fixed typo seperate->separate
closed
https://github.com/huggingface/datasets/pull/2292
2021-04-29T16:40:53
2021-04-30T13:29:18
2021-04-30T13:03:12
{ "login": "laksh9950", "id": 32505743, "type": "User" }
[]
true
[]
871,216,757
2,291
Don't copy recordbatches in memory during a table deepcopy
Fix issue #2276 and hopefully #2134. The recordbatches of the `IndexedTableMixin` used to speed up queries to the table were copied in memory during a table deepcopy. This resulted in `concatenate_datasets`, `load_from_disk` and other methods always bringing the data into memory. I fixed the copy similarly to #2287...
closed
https://github.com/huggingface/datasets/pull/2291
2021-04-29T16:26:05
2021-04-29T16:34:35
2021-04-29T16:34:34
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
871,145,817
2,290
Bbaw egyptian
This is the "hieroglyph corpus" that I could unfortunately not contribute during the marathon. I re-extracted it again now, so that it is in the state as used in my paper (seee documentation). I hope it satiesfies your requirements and wish every scientist out their loads of fun deciphering a 5.000 years old language :...
closed
https://github.com/huggingface/datasets/pull/2290
2021-04-29T15:27:58
2021-05-06T17:25:25
2021-05-06T17:25:25
{ "login": "phiwi", "id": 54144149, "type": "User" }
[]
true
[]
871,118,573
2,289
Allow collaborators to self-assign issues
Allow collaborators (without write access to the repository) to self-assign issues. In order to self-assign an issue, they have to comment it with the word: `#take` or `#self-assign`.
closed
https://github.com/huggingface/datasets/pull/2289
2021-04-29T15:07:06
2021-04-30T18:28:16
2021-04-30T18:28:16
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
871,111,235
2,288
Load_dataset for local CSV files
The method load_dataset fails to correctly load a dataset from csv. Moreover, I am working on a token-classification task ( POS tagging) , where each row in my CSV contains two columns each of them having a list of strings. row example: ```tokens | labels ['I' , 'am', 'John'] | ['PRON', 'AUX', 'PROPN' ] ``...
closed
https://github.com/huggingface/datasets/issues/2288
2021-04-29T15:01:10
2021-06-15T13:49:26
2021-06-15T13:49:26
{ "login": "sstojanoska", "id": 17052700, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
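CSV cells are untyped strings, so list-valued columns like `tokens` arrive as the literal text `"['I', 'am', 'John']"`. One workaround (a hypothetical post-processing step, not a `datasets` feature) is to parse them back after loading, e.g. via `ds.map(parse_list_columns)`:

```python
import ast
import csv
import io

def parse_list_columns(row, columns=("tokens", "labels")):
    # Safely evaluate the stringified Python lists back into real lists.
    return {k: ast.literal_eval(v) if k in columns else v for k, v in row.items()}

raw = io.StringIO("tokens,labels\n\"['I', 'am', 'John']\",\"['PRON', 'AUX', 'PROPN']\"\n")
rows = [parse_list_columns(r) for r in csv.DictReader(raw)]
```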
871,063,374
2,287
Avoid copying table's record batches
Fixes #2276
closed
https://github.com/huggingface/datasets/pull/2287
2021-04-29T14:15:01
2021-04-29T16:34:23
2021-04-29T16:34:22
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
871,032,393
2,286
Fix metadata validation with config names
I noticed in https://github.com/huggingface/datasets/pull/2280 that the metadata validator doesn't parse the tags in the readme properly when they contain the tags per config.
closed
https://github.com/huggingface/datasets/pull/2286
2021-04-29T13:44:32
2021-04-29T14:07:29
2021-04-29T14:07:28
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
871,005,236
2,285
Help understanding how to build a dataset for language modeling as with the old TextDataset
Hello, I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file that has a whole document in each line, meaning that each line exceeds the usual 512-token limit of most tokenizers. I would like to understand what is the process to build a text datas...
closed
https://github.com/huggingface/datasets/issues/2285
2021-04-29T13:16:45
2021-05-19T07:22:45
2021-05-19T07:22:39
{ "login": "danieldiezmallo", "id": 46021411, "type": "User" }
[]
false
[]
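The usual replacement for the old `TextDataset` is tokenize-then-regroup: tokenize each document without truncation, concatenate everything, and cut fixed-size blocks, typically via `ds.map(tokenize)` followed by `ds.map(group_texts, batched=True)`. The regrouping step, sketched on plain lists of token ids:

```python
def group_texts(token_ids, block_size=512):
    # Concatenate all documents, then slice into fixed-size blocks so
    # lines longer than the tokenizer limit are split, not truncated.
    flat = [tok for ids in token_ids for tok in ids]
    usable = (len(flat) // block_size) * block_size  # drop the ragged tail
    return [flat[i : i + block_size] for i in range(0, usable, block_size)]
```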
870,932,710
2,284
Initialize Imdb dataset as used in Don't Stop Pretraining Paper
closed
https://github.com/huggingface/datasets/pull/2284
2021-04-29T11:52:38
2021-04-29T12:54:34
2021-04-29T12:54:34
{ "login": "BobbyManion", "id": 52530809, "type": "User" }
[]
true
[]
870,926,475
2,283
Initialize imdb dataset from don't stop pretraining paper
closed
https://github.com/huggingface/datasets/pull/2283
2021-04-29T11:44:54
2021-04-29T11:50:24
2021-04-29T11:50:24
{ "login": "BobbyManion", "id": 52530809, "type": "User" }
[]
true
[]
870,900,332
2,282
Initialize imdb dataset from don't stop pretraining paper
closed
https://github.com/huggingface/datasets/pull/2282
2021-04-29T11:17:56
2021-04-29T11:43:51
2021-04-29T11:43:51
{ "login": "BobbyManion", "id": 52530809, "type": "User" }
[]
true
[]
870,792,784
2,281
Update multi_woz_v22 checksum
Fix issue https://github.com/huggingface/datasets/issues/1876 The files were changed in https://github.com/budzianowski/multiwoz/pull/72
closed
https://github.com/huggingface/datasets/pull/2281
2021-04-29T09:09:11
2021-04-29T13:41:35
2021-04-29T13:41:34
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
870,780,431
2,280
Fixed typo seperate->separate
closed
https://github.com/huggingface/datasets/pull/2280
2021-04-29T08:55:46
2021-04-29T16:41:22
2021-04-29T16:41:16
{ "login": "laksh9950", "id": 32505743, "type": "User" }
[]
true
[]
870,431,662
2,279
Compatibility with Ubuntu 18 and GLIBC 2.27?
## Describe the bug For use on Ubuntu systems, it seems that datasets requires GLIBC 2.29. However, Ubuntu 18 runs with GLIBC 2.27 and it seems [non-trivial to upgrade GLIBC to 2.29 for Ubuntu 18 users](https://www.digitalocean.com/community/questions/how-install-glibc-2-29-or-higher-in-ubuntu-18-04). I'm not sure...
closed
https://github.com/huggingface/datasets/issues/2279
2021-04-28T22:08:07
2021-04-29T07:42:42
2021-04-29T07:42:42
{ "login": "tginart", "id": 11379648, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
870,088,059
2,278
Loss result in GPTNeoForCausalLM
Is there any way to get the "loss" and "logits" results in the GPT Neo API?
closed
https://github.com/huggingface/datasets/issues/2278
2021-04-28T15:39:52
2021-05-06T16:14:23
2021-05-06T16:14:23
{ "login": "Yossillamm", "id": 51174606, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
870,071,994
2,277
Create CacheManager
Perform refactoring to decouple cache functionality (method `as_dataset`).
open
https://github.com/huggingface/datasets/pull/2277
2021-04-28T15:23:42
2022-07-06T15:19:48
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "refactoring", "color": "B67A40" } ]
true
[]
870,010,511
2,276
concatenate_datasets loads all the data into memory
## Describe the bug When I try to concatenate 2 datasets (10GB each) , the entire data is loaded into memory instead of being written directly to disk. Interestingly, this happens when trying to save the new dataset to disk or concatenating it again. ![image](https://user-images.githubusercontent.com/7063207/116...
closed
https://github.com/huggingface/datasets/issues/2276
2021-04-28T14:27:21
2021-05-03T08:41:55
2021-05-03T08:41:55
{ "login": "chbensch", "id": 7063207, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
869,378,311
2,275
SNLI dataset has labels of -1
There are a number of rows with a label of -1 in the SNLI dataset. The dataset descriptions [here](https://nlp.stanford.edu/projects/snli/) and [here](https://github.com/huggingface/datasets/tree/master/datasets/snli) don't list -1 as a label possibility, and neither does the dataset viewer. As examples, see index 107...
closed
https://github.com/huggingface/datasets/issues/2275
2021-04-28T00:32:25
2021-05-17T13:34:18
2021-05-17T13:34:18
{ "login": "puzzler10", "id": 17426779, "type": "User" }
[]
false
[]
869,186,276
2,274
Always update metadata in arrow schema
We store a redundant copy of the features in the metadata of the schema of the arrow table. This is used to recover the features when doing `Dataset.from_file`. These metadata are updated after each transform that changes the feature types. For each function that transforms the feature types of the dataset, I added ...
closed
https://github.com/huggingface/datasets/pull/2274
2021-04-27T19:21:57
2022-06-03T08:31:19
2021-04-29T09:57:50
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
869,046,290
2,273
Added CUAD metrics
`EM`, `F1`, `AUPR`, `Precision@80%Recall`, and `Precision@90%Recall` metrics supported for CUAD
closed
https://github.com/huggingface/datasets/pull/2273
2021-04-27T16:49:12
2021-04-29T13:59:47
2021-04-29T13:59:47
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
869,017,977
2,272
Bug in Dataset.class_encode_column
## Describe the bug All the rest of the columns except the one passed to `Dataset.class_encode_column` are discarded. ## Expected results All the original columns should be kept. This needs regression tests.
closed
https://github.com/huggingface/datasets/issues/2272
2021-04-27T16:13:18
2021-04-30T12:54:27
2021-04-30T12:54:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
869,002,141
2,271
Synchronize table metadata with features
**Is your feature request related to a problem? Please describe.** As pointed out in this [comment](https://github.com/huggingface/datasets/pull/2145#discussion_r621326767): > Metadata stored in the schema is just redundant information regarding the feature types. It is used when calling Dataset.from_file to kno...
closed
https://github.com/huggingface/datasets/issues/2271
2021-04-27T15:55:13
2022-06-01T17:13:21
2022-06-01T17:13:21
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
868,913,660
2,270
Fix iterable interface expected by numpy
Numpy expects the old iterable interface with `__getitem__` instead of `__iter__`.
closed
https://github.com/huggingface/datasets/pull/2270
2021-04-27T14:35:56
2021-04-28T17:39:27
2021-04-28T17:39:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
868,878,468
2,269
Fix query table with iterable
The benchmark runs are failing on master because they try to use an iterable to query the dataset. However, there's currently an issue caused by the use of `np.array` instead of `np.fromiter` on the iterable. This PR fixes it.
closed
https://github.com/huggingface/datasets/pull/2269
2021-04-27T13:59:38
2021-04-27T14:21:57
2021-04-27T14:21:56
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
868,773,380
2,268
Don't use pyarrow 4.0.0 since it segfaults when casting a sliced ListArray of integers
This test `tests/test_table.py::test_concatenation_table_cast` segfaults with the latest update of pyarrow 4.0.0. Setting `pyarrow<4.0.0` for now. I'll open an issue on JIRA once I know more about the origin of the issue
closed
https://github.com/huggingface/datasets/pull/2268
2021-04-27T11:58:28
2021-06-12T12:44:49
2021-04-27T13:43:20
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
868,291,129
2,267
DatasetDict save load Failing test in 1.6 not in 1.5
## Describe the bug We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema. Downgrading to `<1.6` fixes the problem. ## Steps to reproduce the bug ```python ### Load a dataset dict from jsonl path = '/test/foo' ds_dict.s...
open
https://github.com/huggingface/datasets/issues/2267
2021-04-27T00:03:25
2021-05-28T15:27:34
null
{ "login": "timothyjlaurent", "id": 2000204, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
867,864,353
2,266
Make tests run faster
From 7min to 2min to run pytest. Ideally we should keep the whole CI run time below 10min. In this PR I removed the remote tests that were never used. I also replaced nested parametrized tests with unit tests. This makes me think that we could still add more high level tests to check for a few combinations of par...
closed
https://github.com/huggingface/datasets/pull/2266
2021-04-26T15:55:40
2021-04-29T10:00:13
2021-04-29T10:00:04
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
867,490,646
2,265
Update black
The latest black version, 21.4b0, requires reformatting most dataset scripts and also the core code of the lib. This makes the CI currently fail on master.
closed
https://github.com/huggingface/datasets/pull/2265
2021-04-26T09:35:09
2021-04-26T09:47:48
2021-04-26T09:47:47
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
867,476,228
2,264
Fix memory issue in multiprocessing: Don't pickle table index
The table index is currently being pickled when doing multiprocessing, which brings all the record batches of the dataset into memory. I fixed that by not pickling the index attributes. Therefore, each process has to rebuild the index when unpickling the table. Fixes issue #2256. We'll do a patch release asap!
closed
https://github.com/huggingface/datasets/pull/2264
2021-04-26T09:21:35
2021-04-26T10:30:28
2021-04-26T10:08:14
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
867,420,912
2,263
test data added, dataset_infos updated
Fixes #2262. Thanks for pointing out the issue with the dataset, @jinmang2!
closed
https://github.com/huggingface/datasets/pull/2263
2021-04-26T08:27:18
2021-04-29T09:30:21
2021-04-29T09:30:20
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
867,325,351
2,262
NewsPH NLI dataset script fails to access test data.
The Newsph-NLI dataset (#1192) fails to access its test data: according to the script below, the download manager downloads the train data when trying to download the test data. https://github.com/huggingface/datasets/blob/2a2dd6316af2cc7fdf24e4779312e8ee0c7ed98b/datasets/newsph_nli/newsph_nli.py#L71 If yo...
closed
https://github.com/huggingface/datasets/issues/2262
2021-04-26T06:44:41
2021-04-29T09:32:03
2021-04-29T09:30:20
{ "login": "jinmang2", "id": 37775784, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
867,088,818
2,261
Improve ReadInstruction logic and update docs
Improve ReadInstruction logic and docs.
closed
https://github.com/huggingface/datasets/pull/2261
2021-04-25T19:07:26
2021-05-17T18:24:44
2021-05-17T16:48:57
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
866,961,697
2,260
GooAQ dataset added
@lhoestq here the dataset is stored with Git LFS. Should I add an option for manually downloading the dataset using `git lfs pull` after cloning the repo, or can we accommodate this in the current `download_and_extract`?
closed
https://github.com/huggingface/datasets/pull/2260
2021-04-25T09:26:48
2021-05-07T08:36:17
2021-05-07T08:36:17
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
866,880,092
2,259
Add support for Split.ALL
The title says it all.
closed
https://github.com/huggingface/datasets/pull/2259
2021-04-25T01:45:42
2021-06-28T08:21:27
2021-06-28T08:21:27
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
866,870,588
2,258
Fix incorrect update_metadata_with_features calls in ArrowDataset
Fixes bugs in the `update_metadata_with_features` calls (caused by changes in #2151)
closed
https://github.com/huggingface/datasets/pull/2258
2021-04-25T00:48:38
2021-04-26T17:16:30
2021-04-26T16:54:04
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
866,755,203
2,257
added metrics for CUAD
For now I've added F1, AUPR, Precision at 80% recall, and Precision at 90% recall. The last 3 metrics were reported in the [paper](https://arxiv.org/pdf/2103.06268.pdf). Please let me know if we require the `exact_match` metric too here
closed
https://github.com/huggingface/datasets/pull/2257
2021-04-24T14:09:54
2021-04-29T09:53:38
2021-04-27T16:16:32
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
866,708,609
2,256
Running `dataset.map` with `num_proc > 1` uses a lot of memory
## Describe the bug Running `dataset.map` with `num_proc > 1` leads to a tremendous memory usage that requires swapping on disk and it becomes very slow. ## Steps to reproduce the bug ```python from datasets import load_dataset dstc8_dataset = load_dataset("roskoN/dstc8-reddit-corpus", keep_in_memory=False) ...
closed
https://github.com/huggingface/datasets/issues/2256
2021-04-24T09:56:20
2021-04-26T17:12:15
2021-04-26T17:12:15
{ "login": "roskoN", "id": 8143425, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
866,242,892
2,255
Task casting for text classification & question answering
This PR implements task preparation for a given task, as a continuation of #2143 Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines Edit by @lewtun: This PR implements support for the following tasks: * `text-clas...
closed
https://github.com/huggingface/datasets/pull/2255
2021-04-23T16:00:41
2021-05-18T13:31:36
2021-05-18T13:31:35
{ "login": "SBrandeis", "id": 33657802, "type": "User" }
[]
true
[]
866,169,312
2,254
Update format, fingerprint and indices after add_item
Added fingerprint and format update wrappers + update the indices by adding the index of the newly added item in the table.
closed
https://github.com/huggingface/datasets/pull/2254
2021-04-23T14:31:49
2021-04-27T16:30:49
2021-04-27T16:30:48
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
866,034,321
2,253
Perform minor refactoring: use config
Perform minor refactoring related to `config`.
closed
https://github.com/huggingface/datasets/pull/2253
2021-04-23T11:45:47
2021-05-27T09:12:45
2021-04-27T15:02:59
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "refactoring", "color": "B67A40" } ]
true
[]
865,870,710
2,252
Slow dataloading with big datasets issue persists
Hi, I reported slow data fetching when data is large (#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122). However, the problem seems to persist. Here are the profiling results: 1) Running with 60GB ``` Action | Mean duration (s) |Num calls | Total ...
closed
https://github.com/huggingface/datasets/issues/2252
2021-04-23T08:18:20
2024-01-26T15:10:28
2024-01-26T15:10:28
{ "login": "hwijeen", "id": 29157715, "type": "User" }
[]
false
[]
865,848,705
2,251
While running run_qa.py, ran into a ValueError
command: python3 run_qa.py --model_name_or_path hyunwoongko/kobart --dataset_name squad_kor_v2 --do_train --do_eval --per_device_train_batch_size 8 --learning_rate 3e-5 --num_train_epochs 3 --max_seq_length 512 --doc_stride 128 --output_dir /tmp/debug_squad/ error: ValueError: External fe...
open
https://github.com/huggingface/datasets/issues/2251
2021-04-23T07:51:03
2021-04-23T07:51:03
null
{ "login": "nlee0212", "id": 44570724, "type": "User" }
[]
false
[]
865,402,449
2,250
some issue in loading local txt file as Dataset for run_mlm.py
![image](https://user-images.githubusercontent.com/14968123/115773877-18cef300-a3c6-11eb-8e58-a9cbfd1001ec.png) First of all, I tried to load 3 .txt files as a dataset (I am sure the directory and permissions are OK), but I faced the error below. > FileNotFoundError: [Errno 2] No such file or directory: 'c' by ...
closed
https://github.com/huggingface/datasets/issues/2250
2021-04-22T19:39:13
2022-03-30T08:29:47
2022-03-30T08:29:47
{ "login": "alighofrani95", "id": 14968123, "type": "User" }
[]
false
[]
865,257,826
2,249
Allow downloading/processing/caching only specific splits
Allow downloading/processing/caching only specific splits without downloading/processing/caching the other splits. This PR implements two steps to handle only specific splits: - it allows processing/caching only specific splits into Arrow files - for some simple cases, it allows downloading only specific splits (w...
open
https://github.com/huggingface/datasets/pull/2249
2021-04-22T17:51:44
2022-07-06T15:19:48
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
true
[]
864,853,447
2,248
Implement Dataset to JSON
Implement `Dataset.to_json`.
closed
https://github.com/huggingface/datasets/pull/2248
2021-04-22T11:46:51
2021-04-27T15:29:21
2021-04-27T15:29:20
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
true
[]
864,817,520
2,247
Implement Dataset from Parquet
Implement instantiation of Dataset from Parquet file.
closed
https://github.com/huggingface/datasets/pull/2247
2021-04-22T11:01:38
2021-07-26T13:28:52
2021-07-26T13:28:51
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
true
[]