id: int64 (599M to 3.26B)
number: int64 (1 to 7.7k)
title: string (length 1 to 290)
body: string (length 0 to 228k)
state: string (2 values)
html_url: string (length 46 to 51)
created_at: timestamp[s] (2020-04-14 10:18:02 to 2025-07-23 08:04:53)
updated_at: timestamp[s] (2020-04-27 16:04:17 to 2025-07-23 18:53:44)
closed_at: timestamp[s] (2020-04-14 12:01:40 to 2025-07-23 16:44:42)
user: dict
labels: list (length 0 to 4)
is_pull_request: bool (2 classes)
comments: list (length 0 to 0)
988,276,859
2,870
Fix three typos in two files for documentation
Changed "bacth_size" to "batch_size" (2x); changed "intsructions" to "instructions".
closed
https://github.com/huggingface/datasets/pull/2870
2021-09-04T11:49:43
2021-09-06T08:21:21
2021-09-06T08:19:35
{ "login": "leny-mi", "id": 25124853, "type": "User" }
[]
true
[]
987,676,420
2,869
TypeError: 'NoneType' object is not callable
## Describe the bug

TypeError: 'NoneType' object is not callable

## Steps to reproduce the bug

```python
from datasets import load_dataset, load_metric
dataset = datasets.load_dataset("glue", 'cola')
```

## Expected results

A clear and concise description of the expected results.

## Actual results

Speci...
closed
https://github.com/huggingface/datasets/issues/2869
2021-09-03T11:27:39
2025-02-19T09:57:34
2021-09-08T09:24:55
{ "login": "Chenfei-Kang", "id": 40911446, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
987,139,146
2,868
Add Common Objects in 3D (CO3D)
## Adding a Dataset

- **Name:** *Common Objects in 3D (CO3D)*
- **Description:** *See blog post [here](https://ai.facebook.com/blog/common-objects-in-3d-dataset-for-3d-reconstruction)*
- **Paper:** *[link to paper](https://arxiv.org/abs/2109.00512)*
- **Data:** *[link to data](https://ai.facebook.com/datasets/co3d-...
open
https://github.com/huggingface/datasets/issues/2868
2021-09-02T20:36:12
2024-01-17T12:03:59
null
{ "login": "nateraw", "id": 32437151, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "vision", "color": "bfdadc" } ]
false
[]
986,971,224
2,867
Add CaSiNo dataset
Hi. I request you to add our dataset to the repository. This data was recently published at NAACL 2021: https://aclanthology.org/2021.naacl-main.254.pdf
closed
https://github.com/huggingface/datasets/pull/2867
2021-09-02T17:06:23
2021-09-16T15:12:54
2021-09-16T09:23:44
{ "login": "kushalchawla", "id": 8416863, "type": "User" }
[]
true
[]
986,706,676
2,866
"counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug

`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.

## Steps to reproduce the bug

```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Dow...
```
closed
https://github.com/huggingface/datasets/issues/2866
2021-09-02T13:10:53
2021-10-14T09:24:09
2021-10-14T09:24:09
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
986,460,698
2,865
Add MultiEURLEX dataset
**Add new MultiEURLEX Dataset**

MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publications Office of the EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is mult...
closed
https://github.com/huggingface/datasets/pull/2865
2021-09-02T09:42:24
2021-09-10T11:50:06
2021-09-10T11:50:06
{ "login": "iliaschalkidis", "id": 1626984, "type": "User" }
[]
true
[]
986,159,438
2,864
Fix data URL in ToTTo dataset
Data source host changed their data URL: google-research-datasets/ToTTo@cebeb43. Fix #2860.
closed
https://github.com/huggingface/datasets/pull/2864
2021-09-02T05:25:08
2021-09-02T06:47:40
2021-09-02T06:47:40
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
986,156,755
2,863
Update dataset URL
null
closed
https://github.com/huggingface/datasets/pull/2863
2021-09-02T05:22:18
2021-09-02T08:10:50
2021-09-02T08:10:50
{ "login": "mrm8488", "id": 3653789, "type": "User" }
[]
true
[]
985,081,871
2,861
fix: 🐛 be more specific when catching exceptions
The same specific exception is caught in other parts of the same function.
closed
https://github.com/huggingface/datasets/pull/2861
2021-09-01T12:18:12
2021-09-02T09:53:36
2021-09-02T09:52:03
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
985,013,339
2,860
Cannot download TOTTO dataset
Error: Couldn't find file at https://storage.googleapis.com/totto/totto_data.zip

`datasets version: 1.11.0`

# How to reproduce:

```py
from datasets import load_dataset
dataset = load_dataset('totto')
```
closed
https://github.com/huggingface/datasets/issues/2860
2021-09-01T11:04:10
2021-09-02T06:47:40
2021-09-02T06:47:40
{ "login": "mrm8488", "id": 3653789, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
984,324,500
2,859
Loading allenai/c4 in streaming mode does too many HEAD requests
This does 60,000+ HEAD requests to get all the ETags of all the data files:

```python
from datasets import load_dataset
load_dataset("allenai/c4", streaming=True)
```

It makes loading the dataset completely impractical. The ETags are used to compute the config id (it must depend on the data files being used). ...
closed
https://github.com/huggingface/datasets/issues/2859
2021-08-31T21:11:04
2021-10-12T07:35:52
2021-10-11T11:05:51
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "streaming", "color": "fef2c0" } ]
false
[]
984,145,568
2,858
Fix s3fs version in CI
The latest s3fs version has new constraints on aiobotocore, and therefore on boto3 and botocore. This PR changes the constraints to avoid the new conflicts. In particular, it pins the version of s3fs.
closed
https://github.com/huggingface/datasets/pull/2858
2021-08-31T18:05:43
2021-09-06T13:33:35
2021-08-31T21:29:51
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
984,093,938
2,857
Update: Openwebtext - update size
Update the size of the Openwebtext dataset. I also regenerated the dataset_infos.json, but the data file checksum didn't change, and neither did the number of examples (8013769 examples). Close #2839, close #726.
closed
https://github.com/huggingface/datasets/pull/2857
2021-08-31T17:11:03
2022-02-15T10:38:03
2021-09-07T09:44:32
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
983,876,734
2,856
fix: 🐛 remove URL's query string only if it's ?dl=1
A lot of URLs use query strings, for example http://opus.nlpl.eu/download.php?f=Bianet/v1/moses/en-ku.txt.zip, and we must not remove them when trying to detect the protocol. We thus remove the query string only when it is ?dl=1, which occurs on Dropbox and dl.orangedox.com. Also: add unit tests. See ht...
closed
https://github.com/huggingface/datasets/pull/2856
2021-08-31T13:40:07
2021-08-31T14:22:12
2021-08-31T14:22:12
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
983,858,229
2,855
Fix windows CI CondaError
From this thread: https://github.com/conda/conda/issues/6057

We can fix the conda error

```
CondaError: Cannot link a source that does not exist.
C:\Users\...\Anaconda3\Scripts\conda.exe
```

by doing

```bash
conda update conda
```

before doing any install in the Windows CI.
closed
https://github.com/huggingface/datasets/pull/2855
2021-08-31T13:22:02
2021-08-31T13:35:34
2021-08-31T13:35:33
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
983,726,084
2,854
Fix caching when moving script
When caching the result of a `map` function, the hash that is computed depends on many properties of this function, such as all the python objects it uses, its code and also the location of this code. Using the full path of the python script for the location of the code makes the hash change if a script like `run_ml...
closed
https://github.com/huggingface/datasets/pull/2854
2021-08-31T10:58:35
2021-08-31T13:13:36
2021-08-31T13:13:36
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
983,692,026
2,853
Add AMI dataset
This is an initial commit for the AMI dataset.
closed
https://github.com/huggingface/datasets/pull/2853
2021-08-31T10:19:01
2021-09-29T09:19:19
2021-09-29T09:19:19
{ "login": "cahya-wirawan", "id": 7669893, "type": "User" }
[]
true
[]
983,609,352
2,852
Fix: linnaeus - fix url
The URL was causing a `ConnectionError` because of the "/" at the end. Close https://github.com/huggingface/datasets/issues/2821
closed
https://github.com/huggingface/datasets/pull/2852
2021-08-31T08:51:13
2021-08-31T13:12:10
2021-08-31T13:12:09
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
982,789,593
2,851
Update `column_names` shown as `:func:` in exploring.st
Hi. One mention of `column_names` in exploring.st was showing it as `:func:` instead of `:attr:`.
closed
https://github.com/huggingface/datasets/pull/2851
2021-08-30T13:21:46
2021-09-01T08:42:11
2021-08-31T14:45:46
{ "login": "ClementRomac", "id": 8899812, "type": "User" }
[]
true
[]
982,654,644
2,850
Wound segmentation datasets
## Adding a Dataset

- **Name:** Wound segmentation datasets
- **Description:** annotated wound image dataset
- **Paper:** https://www.nature.com/articles/s41598-020-78799-w
- **Data:** https://github.com/uwm-bigdata/wound-segmentation
- **Motivation:** Interesting simple image dataset, useful for segmentation, wi...
open
https://github.com/huggingface/datasets/issues/2850
2021-08-30T10:44:32
2021-12-08T12:02:00
null
{ "login": "osanseviero", "id": 7246357, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "vision", "color": "bfdadc" } ]
false
[]
982,631,420
2,849
Add Open Catalyst Project Dataset
## Adding a Dataset

- **Name:** Open Catalyst 2020 (OC20) Dataset
- **Website:** https://opencatalystproject.org/
- **Data:** https://github.com/Open-Catalyst-Project/ocp/blob/master/DATASET.md

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATAS...
open
https://github.com/huggingface/datasets/issues/2849
2021-08-30T10:14:39
2021-08-30T10:14:39
null
{ "login": "osanseviero", "id": 7246357, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
981,953,908
2,848
Update README.md
Changed 'Tain' to 'Train'.
closed
https://github.com/huggingface/datasets/pull/2848
2021-08-28T23:58:26
2021-09-07T09:40:32
2021-09-07T09:40:32
{ "login": "odellus", "id": 4686956, "type": "User" }
[]
true
[]
981,589,693
2,847
fix regex to accept negative timezone
fix #2846
closed
https://github.com/huggingface/datasets/pull/2847
2021-08-27T20:54:05
2021-09-13T20:39:50
2021-09-07T09:34:23
{ "login": "jadermcs", "id": 7156771, "type": "User" }
[]
true
[]
981,587,590
2,846
Negative timezone
## Describe the bug

The `load_dataset` method does not accept a parquet file with a negative timezone, as it uses the following regex:

```
"^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+:]*)$"
```

So a valid timestamp ```timestamp[us, tz=-03:00]``` returns an error when loading parquet files.

## Steps to reproduce the bug ```py...
closed
https://github.com/huggingface/datasets/issues/2846
2021-08-27T20:50:33
2021-09-10T11:51:07
2021-09-10T11:51:07
{ "login": "jadermcs", "id": 7156771, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
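The regex quoted in the issue above can be checked directly. A minimal sketch of one plausible fix (adding `-` to the timezone character class; this is an assumption for illustration, not necessarily the exact change merged in #2847):

```python
import re

# Regex quoted in the issue: the character class after "tz=" has no "-",
# so negative offsets like "-03:00" never match.
OLD = r"^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+:]*)$"

# Hypothetical fix: allow a literal "-" inside the timezone character class.
NEW = r"^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+:-]*)$"

assert re.match(OLD, "us, tz=-03:00") is None     # rejected before the fix
m = re.match(NEW, "us, tz=-03:00")
assert m and m.group(2) == "-03:00"               # accepted after the fix
assert re.match(NEW, "s, tz=America/Sao_Paulo")   # named zones still work
```

The `-` must sit where it cannot be read as a range (here, last in the class) to be treated as a literal character.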
981,487,861
2,845
[feature request] adding easy to remember `datasets.cache_dataset()` + `datasets.is_dataset_cached()`
Often, there is a need to prepare a dataset but not use it immediately, e.g. in test-suite setup, so it'd be really useful to be able to do:

```
if not datasets.is_dataset_cached(ds):
    datasets.cache_dataset(ds)
```

This can already be done with:

```
builder = load_dataset_builder(ds)
if not os.path.idsi...
```
open
https://github.com/huggingface/datasets/issues/2845
2021-08-27T18:21:51
2021-08-27T18:24:05
null
{ "login": "stas00", "id": 10676103, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
981,382,806
2,844
Fix: wikicorpus - fix keys
As mentioned in https://github.com/huggingface/datasets/issues/2552, there is a duplicate keys error in `wikicorpus`. I fixed that by taking the file index into account in the keys.
closed
https://github.com/huggingface/datasets/pull/2844
2021-08-27T15:56:06
2021-09-06T14:07:28
2021-09-06T14:07:27
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
981,317,775
2,843
Fix extraction protocol inference from urls with params
Previously it was unable to infer the compression protocol for files at URLs like

```
https://foo.bar/train.json.gz?dl=1
```

because of the query parameters. I fixed that; this should allow 10+ datasets to work in streaming mode:

```
"discovery", "emotion", "grail_qa", "guardian_authorship", "pra...
```
closed
https://github.com/huggingface/datasets/pull/2843
2021-08-27T14:40:57
2021-08-30T17:11:49
2021-08-30T13:12:01
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
980,725,899
2,842
always requiring the username in the dataset name when there is one
By now another person and I have been bitten by `datasets`'s non-strictness about requiring a dataset creator's username when it's due. Both of us started with `stas/openwebtext-10k`, somewhere along the line lost `stas/`, and continued using `openwebtext-10k`, and all was good until we published the software an...
closed
https://github.com/huggingface/datasets/issues/2842
2021-08-26T23:31:53
2021-10-22T09:43:35
2021-10-22T09:43:35
{ "login": "stas00", "id": 10676103, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
980,497,321
2,841
Adding GLUECoS Hinglish and Spanglish code-switching benchmark
## Adding a Dataset

- **Name:** GLUECoS
- **Description:** a Microsoft Benchmark to evaluate code-switching for only two language pairs but a variety of tasks
- **Paper:** https://aclanthology.org/2020.acl-main.329/
- **Data:** https://github.com/microsoft/GLUECoS
- **Motivation:** We currently only have [one othe...
open
https://github.com/huggingface/datasets/issues/2841
2021-08-26T17:47:39
2021-10-20T18:41:20
null
{ "login": "yjernite", "id": 10469459, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
980,489,074
2,840
How can I compute the BLEU-4 score using `load_metric`?
I have found the sacrebleu metric, but I do not know the difference between it and BLEU-4. If I want to compute the BLEU-4 score, what can I do?
closed
https://github.com/huggingface/datasets/issues/2840
2021-08-26T17:36:37
2021-08-27T08:13:24
2021-08-27T08:13:24
{ "login": "Doragd", "id": 26213546, "type": "User" }
[]
false
[]
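For context on the question above: BLEU-4 is standard BLEU computed over 1- to 4-gram precisions, which is what sacrebleu reports by default. A minimal self-contained sketch of the clipped n-gram precision at its core (illustrative only; it omits sacrebleu's tokenization, smoothing, and brevity penalty):

```python
from collections import Counter

def ngram_precision(hyp, ref, n):
    # Clipped n-gram precision: hypothesis n-gram counts are capped by
    # how often each n-gram appears in the reference.
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    clipped = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
    total = sum(hyp_ngrams.values())
    return clipped / total if total else 0.0

hyp = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
# BLEU-4 combines these four precisions via a geometric mean, multiplied
# by a brevity penalty (both omitted in this sketch).
precisions = [ngram_precision(hyp, ref, n) for n in range(1, 5)]
```

In practice the sacrebleu metric (or the `sacrebleu` library) handles all of this, so for BLEU-4 the metric found in the question is the right tool.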
980,271,715
2,839
OpenWebText: NonMatchingSplitsSizesError
## Describe the bug

When downloading `openwebtext`, I'm getting:

```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', num_bytes=39611023912, num_examples=7982430...
```
closed
https://github.com/huggingface/datasets/issues/2839
2021-08-26T13:50:26
2021-09-21T14:12:40
2021-09-21T14:09:43
{ "login": "thomasw21", "id": 24695242, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
980,067,186
2,838
Add error_bad_chunk to the JSON loader
Add the `error_bad_chunk` parameter to the JSON loader. Setting `error_bad_chunk=False` allows skipping an unparsable chunk of JSON data without raising an error. Additional note: in case of an unparsable JSON chunk, the JSON loader no longer tries to load the full JSON (which could take a lot of time in stream...
open
https://github.com/huggingface/datasets/pull/2838
2021-08-26T10:07:32
2023-09-25T09:06:42
null
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
979,298,297
2,837
prepare_module issue when loading from read-only fs
## Describe the bug

When we use `prepare_module` from a read-only file system, we create a FileLock using the `local_path`. This path is not necessarily writable.

`lock_path = local_path + ".lock"`

## Steps to reproduce the bug

Run `load_dataset` on a read-only python loader file.

```python
ds = load_datas...
```
closed
https://github.com/huggingface/datasets/issues/2837
2021-08-25T15:21:26
2021-10-05T17:58:22
2021-10-05T17:58:22
{ "login": "Dref360", "id": 8976546, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
979,230,142
2,836
Optimize Dataset.filter to only compute the indices to keep
Optimize `Dataset.filter` to only compute the indices of the rows to keep, instead of creating a new Arrow table with the rows to keep. Creating a new table was an issue because it could take a lot of disk space. This will be useful for processing audio datasets, for example. cc @patrickvonplaten
closed
https://github.com/huggingface/datasets/pull/2836
2021-08-25T14:41:22
2021-09-14T14:51:53
2021-09-13T15:50:21
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
979,209,394
2,835
Update: timit_asr - make the dataset streamable
The TIMIT ASR dataset had two issues that were preventing it from being streamable:

1. it was missing a call to `open` before `pd.read_csv`
2. it was using `os.path.dirname`, which is not supported for streaming

I made the dataset streamable by using `open` to load the CSV, and by adding support for `os.path.d...
closed
https://github.com/huggingface/datasets/pull/2835
2021-08-25T14:22:49
2021-09-07T13:15:47
2021-09-07T13:15:46
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
978,309,749
2,834
Fix IndexError by ignoring empty RecordBatch
We need to ignore the empty record batches for the interpolation search to work correctly when querying arrow tables. Close #2833. cc @SaulLu
closed
https://github.com/huggingface/datasets/pull/2834
2021-08-24T17:06:13
2021-08-24T17:21:18
2021-08-24T17:21:18
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
978,296,140
2,833
IndexError when accessing first element of a Dataset if first RecordBatch is empty
The computation of the offsets of the underlying Table of a Dataset has some issues if the first RecordBatch is empty.

```python
from datasets import Dataset
import pyarrow as pa

pa_table = pa.Table.from_pydict({"a": [1]})
pa_table2 = pa.Table.from_pydict({"a": []}, schema=pa_table.schema)
ds_table = pa.conca...
```
closed
https://github.com/huggingface/datasets/issues/2833
2021-08-24T16:49:20
2021-08-24T17:21:17
2021-08-24T17:21:17
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
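The shape of the fix can be illustrated with a small sketch: an empty leading batch gives the cumulative offsets a duplicate boundary at 0, which confuses the lookup; dropping empty batches restores strictly increasing boundaries. This is an illustrative pure-Python stand-in using a simple bisect, not the library's interpolation search:

```python
from bisect import bisect_right
from itertools import accumulate

def batch_index_for_row(batch_lengths, row):
    # Ignore empty batches, mirroring the fix: empty RecordBatches
    # contribute no rows but would duplicate offset boundaries.
    lengths = [n for n in batch_lengths if n > 0]
    offsets = list(accumulate(lengths))   # cumulative end offsets
    return bisect_right(offsets, row)     # index among the non-empty batches

# First batch empty, as in the reproduction above: row 0 lives in the
# single non-empty batch, at index 0 after filtering.
batch_index_for_row([0, 1], 0)
```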
978,012,800
2,832
Logging levels not taken into account
## Describe the bug

The `logging` module isn't working as intended relative to the levels to set.

## Steps to reproduce the bug

```python
from datasets import logging

logging.set_verbosity_debug()
logger = logging.get_logger()
logger.error("ERROR")
logger.warning("WARNING")
logger.info("INFO")
logge...
```
closed
https://github.com/huggingface/datasets/issues/2832
2021-08-24T11:50:41
2023-07-12T17:19:30
2023-07-12T17:19:29
{ "login": "LysandreJik", "id": 30755778, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
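For reference, the level semantics the issue expects can be shown with the stdlib `logging` module directly (a generic sketch, not the `datasets.logging` wrapper):

```python
import logging

# A logger only emits records at or above its effective level; this is
# the behavior the verbosity setters are expected to control.
logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)
assert logger.isEnabledFor(logging.DEBUG)    # everything passes at DEBUG

logger.setLevel(logging.ERROR)
assert not logger.isEnabledFor(logging.INFO)  # INFO is now filtered out
```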
977,864,600
2,831
ArrowInvalid when mapping dataset with missing values
## Describe the bug I encountered an `ArrowInvalid` when mapping dataset with missing values. Here are the files for a minimal example. The exception is only thrown when the first line in the csv has a missing value (if you move the last line to the top it isn't thrown). [data_small.csv](https://github.com/huggingf...
open
https://github.com/huggingface/datasets/issues/2831
2021-08-24T08:50:42
2021-08-31T14:15:34
null
{ "login": "uniquefine", "id": 12694730, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
977,563,947
2,830
Add imagefolder dataset
A generic imagefolder dataset inspired by `torchvision.datasets.ImageFolder`. Resolves #2508 --- Example Usage: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/954fa8cba4ff806f6147a782fa9efd1a/imagefolder-official-example.ipynb)
closed
https://github.com/huggingface/datasets/pull/2830
2021-08-23T23:34:06
2022-03-01T16:29:44
2022-03-01T16:29:44
{ "login": "nateraw", "id": 32437151, "type": "User" }
[]
true
[]
977,233,360
2,829
Optimize streaming from TAR archives
Hi ! As you know, TAR has some constraints for data streaming. While it is optimized for buffering, the files in the TAR archive **need to be streamed in order**. It means that we can't choose which file to stream from, and this notation is to be avoided for TAR archives:

```
tar://books_large_p1.txt::https://storage....
```
closed
https://github.com/huggingface/datasets/issues/2829
2021-08-23T16:56:40
2022-09-21T14:29:46
2022-09-21T14:08:39
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "streaming", "color": "fef2c0" } ]
false
[]
977,181,517
2,828
Add code-mixed Kannada Hope speech dataset
## Adding a Dataset

- **Name:** *KanHope*
- **Description:** *A code-mixed English-Kannada dataset for Hope speech detection*
- **Paper:** *https://arxiv.org/abs/2108.04616*
- **Data:** *https://github.com/adeepH/KanHope/tree/main/dataset*
- **Motivation:** *The dataset is amongst the very few resources available...
closed
https://github.com/huggingface/datasets/pull/2828
2021-08-23T15:55:09
2021-10-01T17:21:03
2021-10-01T17:21:03
{ "login": "adeepH", "id": 46108405, "type": "User" }
[]
true
[]
976,976,552
2,827
add a text classification dataset
null
closed
https://github.com/huggingface/datasets/pull/2827
2021-08-23T12:24:41
2021-08-23T15:51:18
2021-08-23T15:51:18
{ "login": "adeepH", "id": 46108405, "type": "User" }
[]
true
[]
976,974,254
2,826
Add a Text Classification dataset: KanHope
## Adding a Dataset

- **Name:** *KanHope*
- **Description:** *A code-mixed English-Kannada dataset for Hope speech detection*
- **Paper:** *https://arxiv.org/abs/2108.04616* (I am the author of the paper)
- **Author:** *[AdeepH](https://github.com/adeepH)*
- **Data:** *https://github.com/adeepH/KanHope/tree/main/d...
closed
https://github.com/huggingface/datasets/issues/2826
2021-08-23T12:21:58
2021-10-01T18:06:59
2021-10-01T18:06:59
{ "login": "adeepH", "id": 46108405, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
976,584,926
2,825
The datasets.map function does not load cached dataset after moving python script
## Describe the bug

The `datasets.map` function caches the processed data to a certain directory. When the map function is called again with exactly the same parameters, the cached data is supposed to be reloaded instead of re-processed. However, it sometimes doesn't reuse the cached data. I use the common data pro...
closed
https://github.com/huggingface/datasets/issues/2825
2021-08-23T03:23:37
2024-07-29T11:25:50
2021-08-31T13:13:36
{ "login": "hobbitlzy", "id": 35392624, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
976,394,721
2,824
Fix defaults in cache_dir docstring in load.py
Fix defaults in the `cache_dir` docstring.
closed
https://github.com/huggingface/datasets/pull/2824
2021-08-22T14:48:37
2021-08-26T13:23:32
2021-08-26T11:55:16
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
976,135,355
2,823
HF_DATASETS_CACHE variable in Windows
I can't seem to use a custom cache directory in Windows. I have tried:

```
set HF_DATASETS_CACHE = "C:\Datasets"
set HF_DATASETS_CACHE = "C:/Datasets"
set HF_DATASETS_CACHE = "C:\\Datasets"
set HF_DATASETS_CACHE = "r'C:\Datasets'"
set HF_DATASETS_CACHE = "\Datasets"
set HF_DATASETS_CACHE = "/Datasets"
```

In each in...
closed
https://github.com/huggingface/datasets/issues/2823
2021-08-21T13:17:44
2021-08-21T13:20:11
2021-08-21T13:20:11
{ "login": "rp2839", "id": 8453798, "type": "User" }
[]
false
[]
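One likely culprit in the attempts above is the cmd.exe spacing: `set VAR = value` keeps the spaces, so the variable name ends up with a trailing space and the value with quotes baked in. A hedged sketch of setting the variable from Python instead (the env-var name is real; the path is just an example):

```python
import os

# Set the cache location before importing `datasets`: no spaces around "=",
# and no extra quotes embedded in the value.
os.environ["HF_DATASETS_CACHE"] = r"C:\Datasets"

assert os.environ["HF_DATASETS_CACHE"] == r"C:\Datasets"
```

From cmd.exe the equivalent is `set HF_DATASETS_CACHE=C:\Datasets`, with no spaces around `=`.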
975,744,463
2,822
Add url prefix convention for many compression formats
## Intro

When doing dataset streaming, the decompression of compressed files is done on the fly using `fsspec`. In particular, the download manager method `download_and_extract` doesn't return a path to the locally downloaded and extracted file, but instead a chained URL so that the decompression can be done when the...
closed
https://github.com/huggingface/datasets/pull/2822
2021-08-20T16:11:23
2021-08-23T15:59:16
2021-08-23T15:59:14
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
975,556,032
2,821
Cannot load linnaeus dataset
## Describe the bug

The [linnaeus](https://huggingface.co/datasets/linnaeus) dataset cannot be loaded. To reproduce:

```
from datasets import load_dataset
datasets = load_dataset("linnaeus")
```

This results in:

```
Downloading and preparing dataset linnaeus/linnaeus (download: 17.36 MiB, generated: 8.74 MiB,...
```
closed
https://github.com/huggingface/datasets/issues/2821
2021-08-20T12:15:15
2021-08-31T13:13:02
2021-08-31T13:12:09
{ "login": "NielsRogge", "id": 48327001, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
975,210,712
2,820
Downloading “reddit” dataset keeps timing out.
## Describe the bug

A clear and concise description of what the bug is. Every time I try to download the reddit dataset, it times out before finishing and I have to try again. There is some timeout error that I will post once it happens again.

## Steps to reproduce the bug

```python
from datasets import load_d...
```
closed
https://github.com/huggingface/datasets/issues/2820
2021-08-20T02:52:36
2021-09-08T14:52:02
2021-09-08T14:52:02
{ "login": "smeyerhot", "id": 43877130, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
974,683,155
2,819
Added XL-Sum dataset
Added XL-Sum dataset published in ACL-IJCNLP 2021 (https://aclanthology.org/2021.findings-acl.413/). The default timeout values in `src/datasets/utils/file_utils.py` were increased to enable downloading from the original Google Drive links.
closed
https://github.com/huggingface/datasets/pull/2819
2021-08-19T13:47:45
2021-09-29T08:13:44
2021-09-23T17:49:05
{ "login": "abhik1505040", "id": 49608995, "type": "User" }
[]
true
[]
974,552,009
2,818
cannot load data from my local path
## Describe the bug

I just want to load data directly from my local path, but I found a bug. I compared it with pandas to verify that my local path is real. Here is my code:

```python3
# print my local path
print(config.train_path)
# read data and print data length
tarin=pd.read_csv(config.train_path)
print(len(tari...
```
closed
https://github.com/huggingface/datasets/issues/2818
2021-08-19T11:13:30
2023-07-25T17:42:15
2023-07-25T17:42:15
{ "login": "yang-collect", "id": 46920280, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
974,486,051
2,817
Rename The Pile subsets
After discussing with @yjernite, we think it's better to have the subsets of The Pile explicitly have "the_pile" in their names. I'm doing the changes for the subsets that @richarddwang added:

- [x] books3 -> the_pile_books3 https://github.com/huggingface/datasets/pull/2801
- [x] stack_exchange -> the_pile_stack_ex...
closed
https://github.com/huggingface/datasets/pull/2817
2021-08-19T09:56:22
2021-08-23T16:24:10
2021-08-23T16:24:09
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
974,031,404
2,816
Add Mostly Basic Python Problems Dataset
## Adding a Dataset

- **Name:** Mostly Basic Python Problems Dataset
- **Description:** The benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry-level programmers, covering programming fundamentals, standard library functionality, and so on. Each problem consi...
open
https://github.com/huggingface/datasets/issues/2816
2021-08-18T20:28:39
2021-09-10T08:04:20
null
{ "login": "osanseviero", "id": 7246357, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
973,862,024
2,815
Tiny typo fixes of "fo" -> "of"
Noticed a few of these when reading docs- feel free to ignore the PR and just fix on some main contributor branch if more helpful. Thanks for the great library! :)
closed
https://github.com/huggingface/datasets/pull/2815
2021-08-18T16:36:11
2021-08-19T08:03:02
2021-08-19T08:03:02
{ "login": "aronszanto", "id": 9934829, "type": "User" }
[]
true
[]
973,632,645
2,814
Bump tqdm version
The recently released tqdm 4.62.1 includes a fix for PermissionError on Windows (submitted by me in https://github.com/tqdm/tqdm/pull/1207), which means we can remove expensive `gc.collect` calls by bumping tqdm to that version. This PR does exactly that and, additionally, fixes a `disable_tqdm` definition that would p...
closed
https://github.com/huggingface/datasets/pull/2814
2021-08-18T12:51:29
2021-08-18T13:44:11
2021-08-18T13:39:50
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
973,470,580
2,813
Remove compression from xopen
We implemented support for streaming with 2 requirements:

- transparent use for the end user: just needs to pass the parameter `streaming=True`
- no additional work for the contributors: previous loading scripts should also work in streaming mode with no (or minor) changes; and new loading scripts should not involve ...
closed
https://github.com/huggingface/datasets/issues/2813
2021-08-18T09:35:59
2021-08-23T15:59:14
2021-08-23T15:59:14
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "generic discussion", "color": "c5def5" } ]
false
[]
972,936,889
2,812
arXiv Dataset verification problem
## Describe the bug `dataset_infos.json` for `arxiv_dataset` contains a fixed number of training examples, however the data (downloaded from an external source) is updated every week with additional examples. Therefore, loading the dataset without `ignore_verifications=True` results in a verification error.
open
https://github.com/huggingface/datasets/issues/2812
2021-08-17T18:01:48
2022-01-19T14:15:35
null
{ "login": "eladsegal", "id": 13485709, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "dataset bug", "color": "2edb81" } ]
false
[]
972,522,480
2,811
Fix stream oscar
Previously, an additional `open` was added to oscar to make it stream-compatible: 587bbb94e891b22863b312b99696e32708c379f4. It was argued that this might be problematic: https://github.com/huggingface/datasets/pull/2786#discussion_r690045921

This PR:
- removes that additional `open`
- patches `gzip.open` with `xop...
closed
https://github.com/huggingface/datasets/pull/2811
2021-08-17T10:10:59
2021-08-26T10:26:15
2021-08-26T10:26:14
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
972,040,022
2,810
Add WIT Dataset
Adds Google's [WIT](https://github.com/google-research-datasets/wit) dataset.
closed
https://github.com/huggingface/datasets/pull/2810
2021-08-16T19:34:09
2022-05-06T12:27:29
2022-05-06T12:26:16
{ "login": "hassiahk", "id": 13920778, "type": "User" }
[]
true
[]
971,902,613
2,809
Add Beans Dataset
Adds the [beans](https://github.com/AI-Lab-Makerere/ibean/) image classification dataset.
closed
https://github.com/huggingface/datasets/pull/2809
2021-08-16T16:22:33
2021-08-26T11:42:27
2021-08-26T11:42:27
{ "login": "nateraw", "id": 32437151, "type": "User" }
[]
true
[]
971,882,320
2,808
Enable streaming for Wikipedia corpora
**Is your feature request related to a problem? Please describe.**

Several of the [Wikipedia corpora](https://huggingface.co/datasets?search=wiki) on the Hub involve quite large files that would be good candidates for streaming. Currently it is not possible to stream these corpora:

```python
from datasets import ...
```
closed
https://github.com/huggingface/datasets/issues/2808
2021-08-16T15:59:12
2023-07-20T13:45:30
2023-07-20T13:45:30
{ "login": "lewtun", "id": 26859204, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
971,849,863
2,807
Add cats_vs_dogs dataset
Adds Microsoft's [Cats vs. Dogs](https://www.microsoft.com/en-us/download/details.aspx?id=54765) dataset.
closed
https://github.com/huggingface/datasets/pull/2807
2021-08-16T15:21:11
2021-08-30T16:35:25
2021-08-30T16:35:24
{ "login": "nateraw", "id": 32437151, "type": "User" }
[]
true
[]
971,625,449
2,806
Fix streaming tar files from canonical datasets
Previous PR #2800 implemented support to stream remote tar files when passing the parameter `data_files`: they required a glob string `"*"`. However, this glob string creates an error when streaming canonical datasets (with a `join` after the `open`). This PR fixes this issue and allows streaming tar files both f...
closed
https://github.com/huggingface/datasets/pull/2806
2021-08-16T11:10:28
2021-10-13T09:04:03
2021-10-13T09:04:02
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
971,436,456
2,805
Fix streaming zip files from canonical datasets
Previous PR #2798 fixed streaming remote zip files when passing the parameter `data_files`. However, that broke streaming zip files used in canonical `datasets` scripts, which normally have a subsequent `join()` (patched with `xjoin()`) after the `StreamingDownloadManager.download_and_extract()` is called. This P...
closed
https://github.com/huggingface/datasets/pull/2805
2021-08-16T07:11:40
2021-08-16T10:34:00
2021-08-16T10:34:00
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
971,353,437
2,804
Add Food-101
Adds image classification dataset [Food-101](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/).
closed
https://github.com/huggingface/datasets/pull/2804
2021-08-16T04:26:15
2021-08-20T14:31:33
2021-08-19T12:48:06
{ "login": "nateraw", "id": 32437151, "type": "User" }
[]
true
[]
970,858,928
2,803
add stack exchange
Stack Exchange is part of EleutherAI/The Pile, but AFAIK, The Pile dataset blends all sub-datasets together, thus we are not able to use just one of its sub-datasets from The Pile data. So I created an independent dataset using The Pile preliminary components. I also changed the default `timeout` to 100 seconds instead of 10...
closed
https://github.com/huggingface/datasets/pull/2803
2021-08-14T08:11:02
2021-08-19T10:07:33
2021-08-19T08:07:38
{ "login": "richarddwang", "id": 17963619, "type": "User" }
[]
true
[]
970,848,302
2,802
add openwebtext2
openwebtext2 is part of EleutherAI/The Pile, but AFAIK, The Pile dataset blends all sub-datasets together, thus we are not able to use just one of its sub-datasets from The Pile data. So I created an independent dataset using The Pile preliminary components. When I was creating the dataset card, I found there is room for cr...
closed
https://github.com/huggingface/datasets/pull/2802
2021-08-14T07:09:03
2021-08-23T14:06:14
2021-08-23T14:06:14
{ "login": "richarddwang", "id": 17963619, "type": "User" }
[]
true
[]
970,844,617
2,801
add books3
books3 is part of EleutherAI/The Pile, but AFAIK, The Pile dataset blends all sub-datasets together, thus we are not able to use just one of its sub-datasets from The Pile data. So I created an independent dataset using The Pile preliminary components. When I was creating the dataset card, I found there is room for creating...
closed
https://github.com/huggingface/datasets/pull/2801
2021-08-14T07:04:25
2021-08-19T16:43:09
2021-08-18T15:36:59
{ "login": "richarddwang", "id": 17963619, "type": "User" }
[]
true
[]
970,819,988
2,800
Support streaming tar files
This PR adds support to stream tar files by using the `fsspec` tar protocol. It also uses the custom `readline` implemented in PR #2786. The corresponding test is implemented in PR #2786.
closed
https://github.com/huggingface/datasets/pull/2800
2021-08-14T04:40:17
2021-08-26T10:02:30
2021-08-14T04:55:57
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
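PR #2800 above adds tar streaming via the `fsspec` tar protocol. As a rough illustration of what streaming a tar archive means (standard library only, not the `fsspec`-based implementation in `datasets`), the `tarfile` mode `"r|"` reads an archive as a non-seekable stream, member by member:

```python
import io
import tarfile

# Build a small tar archive in memory for demonstration
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"hello streaming\n"
    info = tarfile.TarInfo(name="sample.txt")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

buf.seek(0)
# Mode "r|" treats the archive as a pure stream: no seeking back,
# members are yielded one after another as they are read.
lines = []
with tarfile.open(fileobj=buf, mode="r|") as tar:
    for member in tar:
        f = tar.extractfile(member)
        if f is not None:
            lines.extend(f.read().decode("utf-8").splitlines())

print(lines)  # → ['hello streaming']
```

The same stream mode works over a network file object, which is what makes it usable for remote archives.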
970,507,351
2,799
Loading JSON throws ArrowNotImplementedError
## Describe the bug I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below). Curiously, there is no problem loading the dataset with `pandas` which...
closed
https://github.com/huggingface/datasets/issues/2799
2021-08-13T15:31:48
2022-01-10T18:59:32
2022-01-10T18:59:32
{ "login": "lewtun", "id": 26859204, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
970,493,126
2,798
Fix streaming zip files
Currently, streaming remote zip data files gives `FileNotFoundError` message: ```python data_files = f"https://huggingface.co/datasets/albertvillanova/datasets-tests-compression/resolve/main/sample.zip" ds = load_dataset("json", split="train", data_files=data_files, streaming=True) next(iter(ds)) ``` This PR fi...
closed
https://github.com/huggingface/datasets/pull/2798
2021-08-13T15:17:01
2021-08-16T14:16:50
2021-08-13T15:38:28
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
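PR #2798 above fixes streaming remote zip data files. A minimal standard-library sketch of the underlying idea, reading a zip member as a file-like stream instead of extracting it to disk (this is not the `fsspec`-based code path `datasets` uses):

```python
import io
import json
import zipfile

def iter_jsonl_from_zip(zip_bytes, member):
    """Yield JSON records line by line from a zip member without extracting it."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        # zf.open returns a binary file-like object over the member
        with zf.open(member) as f:
            for line in io.TextIOWrapper(f, encoding="utf-8"):
                if line.strip():
                    yield json.loads(line)

# Build a small in-memory zip for demonstration
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("sample.jsonl", '{"a": 1}\n{"a": 2}\n')

records = list(iter_jsonl_from_zip(buf.getvalue(), "sample.jsonl"))
print(records)  # → [{'a': 1}, {'a': 2}]
```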
970,331,634
2,797
Make creating/editing dataset cards easier, by editing on site and dumping info from test command.
**Is your feature request related to a problem? Please describe.** Creating and editing dataset cards should be easy, but it is not that easy - if someone else knows some information I don't (bias of dataset, dataset curation, supported dataset, ...), he/she should know the description on hf.co comes from README.md under git...
open
https://github.com/huggingface/datasets/issues/2797
2021-08-13T11:54:49
2021-08-14T08:42:09
null
{ "login": "richarddwang", "id": 17963619, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
970,235,846
2,796
add cedr dataset
null
closed
https://github.com/huggingface/datasets/pull/2796
2021-08-13T09:37:35
2021-08-27T16:01:36
2021-08-27T16:01:36
{ "login": "naumov-al", "id": 22640075, "type": "User" }
[]
true
[]
969,728,545
2,794
Warnings and documentation about pickling incorrect
## Describe the bug I have a docs bug and a closely related docs enhancement suggestion! ### Bug The warning and documentation say "either `dill` or `pickle`" for fingerprinting. But it seems that `dill`, which is installed by `datasets` by default, _must_ work, or else the fingerprinting fails. Warning: ...
open
https://github.com/huggingface/datasets/issues/2794
2021-08-12T23:09:13
2021-08-12T23:09:31
null
{ "login": "mbforbes", "id": 1170062, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
968,967,773
2,793
Fix type hint for data_files
Fix type hint for `data_files` in signatures and docstrings.
closed
https://github.com/huggingface/datasets/pull/2793
2021-08-12T14:42:37
2021-08-12T15:35:29
2021-08-12T15:35:29
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
968,650,274
2,792
Update: GooAQ - add train/val/test splits
[GooAQ](https://github.com/allenai/gooaq) dataset was recently updated after splits were added for the same. This PR contains new updated GooAQ with train/val/test splits and updated README as well.
closed
https://github.com/huggingface/datasets/pull/2792
2021-08-12T11:40:18
2021-08-27T15:58:45
2021-08-27T15:58:14
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
968,360,314
2,791
Fix typo in cnn_dailymail
null
closed
https://github.com/huggingface/datasets/pull/2791
2021-08-12T08:38:42
2021-08-12T11:17:59
2021-08-12T11:17:59
{ "login": "omaralsayed", "id": 42531544, "type": "User" }
[]
true
[]
967,772,181
2,790
Fix typo in test_dataset_common
null
closed
https://github.com/huggingface/datasets/pull/2790
2021-08-12T01:10:29
2021-08-12T11:31:29
2021-08-12T11:31:29
{ "login": "nateraw", "id": 32437151, "type": "User" }
[]
true
[]
967,361,934
2,789
Updated dataset description of DaNE
null
closed
https://github.com/huggingface/datasets/pull/2789
2021-08-11T19:58:48
2021-08-12T16:10:59
2021-08-12T16:06:01
{ "login": "KennethEnevoldsen", "id": 23721977, "type": "User" }
[]
true
[]
967,149,389
2,788
How to sample every file in a list of files making up a split in a dataset when loading?
I am loading a dataset with multiple train, test, and validation files like this: ``` data_files_dict = { "train": [train_file1, train_file2], "test": [test_file1, test_file2], "val": [val_file1, val_file2] } dataset = datasets.load_dataset( "csv", data_files=data_files_dict, split=[...
closed
https://github.com/huggingface/datasets/issues/2788
2021-08-11T17:43:21
2023-07-25T17:40:50
2023-07-25T17:40:50
{ "login": "brijow", "id": 11220949, "type": "User" }
[]
false
[]
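The question above asks how to sample every file making up a split; as far as the loader is concerned, the files listed for a split are concatenated, so per-file sampling has to happen separately. A standard-library sketch of the per-file idea (the `text` column and the file contents are made up for illustration):

```python
import csv
import io

def sample_rows_per_file(files, n):
    """Take the first n rows from each CSV file-like object, so that
    every file contributes rows to the sampled split."""
    rows = []
    for f in files:
        reader = csv.DictReader(f)
        for i, row in enumerate(reader):
            if i >= n:
                break
            rows.append(row)
    return rows

# Stand-ins for the train_file1 / train_file2 paths in the question
train_file1 = io.StringIO("text\na\nb\nc\n")
train_file2 = io.StringIO("text\nd\ne\n")

sampled = sample_rows_per_file([train_file1, train_file2], n=2)
print([r["text"] for r in sampled])  # → ['a', 'b', 'd', 'e']
```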
967,018,406
2,787
ConnectionError: Couldn't reach https://raw.githubusercontent.com
Hello, I am trying to run run_glue.py and it gives me this error - Traceback (most recent call last): File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module> main() File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/...
closed
https://github.com/huggingface/datasets/issues/2787
2021-08-11T16:19:01
2023-10-03T12:39:25
2021-08-18T15:09:18
{ "login": "jinec", "id": 39627475, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
966,282,934
2,786
Support streaming compressed files
Add support to stream compressed files (current options in fsspec): - bz2 - lz4 - xz - zstd cc: @lewtun
closed
https://github.com/huggingface/datasets/pull/2786
2021-08-11T09:02:06
2021-08-17T05:28:39
2021-08-16T06:36:19
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
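PR #2786 above adds streaming for bz2, lz4, xz and zstd. Two of those formats (bz2 and xz) can already be stream-decompressed with the standard library alone, which illustrates the line-by-line access the PR enables (lz4 and zstd need third-party packages):

```python
import bz2
import io
import lzma

text = "line 1\nline 2\n"

# Wrap an in-memory compressed payload in a decompressing file object
# and read it line by line, without decompressing everything up front.
bz2_lines = [
    line.rstrip("\n")
    for line in io.TextIOWrapper(
        bz2.BZ2File(io.BytesIO(bz2.compress(text.encode("utf-8")))), encoding="utf-8"
    )
]
xz_lines = [
    line.rstrip("\n")
    for line in io.TextIOWrapper(
        lzma.LZMAFile(io.BytesIO(lzma.compress(text.encode("utf-8")))), encoding="utf-8"
    )
]

print(bz2_lines, xz_lines)  # → ['line 1', 'line 2'] ['line 1', 'line 2']
```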
965,461,382
2,783
Add KS task to SUPERB
Add the KS (keyword spotting) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051). - [s3prl instructions](https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/README.md#ks-keyword-spotting) - [s3prl implementation](https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/speech_comma...
closed
https://github.com/huggingface/datasets/pull/2783
2021-08-10T22:14:07
2021-08-12T16:45:01
2021-08-11T20:19:17
{ "login": "anton-l", "id": 26864830, "type": "User" }
[]
true
[]
964,858,439
2,782
Fix renaming of corpus_bleu args
The last `sacrebleu` release (v2.0.0) has renamed the `sacrebleu.corpus_bleu` args from `(sys_stream, ref_streams)` to `(hypotheses, references)`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR passes the args without parameter names, s...
closed
https://github.com/huggingface/datasets/pull/2782
2021-08-10T11:02:34
2021-08-10T11:16:07
2021-08-10T11:16:07
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
964,805,351
2,781
Latest v2.0.0 release of sacrebleu has broken some metrics
## Describe the bug After `sacrebleu` v2.0.0 release (see changes here: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15), some of `datasets` metrics are broken: - Default tokenizer `sacrebleu.DEFAULT_TOKENIZER` no longer exists: - #273...
closed
https://github.com/huggingface/datasets/issues/2781
2021-08-10T09:59:41
2021-08-10T11:16:07
2021-08-10T11:16:07
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
964,794,764
2,780
VIVOS dataset for Vietnamese ASR
null
closed
https://github.com/huggingface/datasets/pull/2780
2021-08-10T09:47:36
2021-08-12T11:09:30
2021-08-12T11:09:30
{ "login": "binh234", "id": 57580923, "type": "User" }
[]
true
[]
964,775,085
2,779
Fix sacrebleu tokenizers
Last `sacrebleu` release (v2.0.0) has removed `sacrebleu.TOKENIZERS`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR makes a hot fix of the bug by using a private function in `sacrebleu`: `sacrebleu.metrics.bleu._get_tokenizer()...
closed
https://github.com/huggingface/datasets/pull/2779
2021-08-10T09:24:27
2021-08-10T11:03:08
2021-08-10T10:57:54
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
964,737,422
2,778
Do not pass tokenize to sacrebleu
Last `sacrebleu` release (v2.0.0) has removed `sacrebleu.DEFAULT_TOKENIZER`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR does not pass `tokenize` to `sacrebleu` (note that the user cannot pass it anyway) and `sacrebleu` will ...
closed
https://github.com/huggingface/datasets/pull/2778
2021-08-10T08:40:37
2021-08-10T10:03:37
2021-08-10T10:03:37
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
964,696,380
2,777
Use packaging to handle versions
Use packaging module to handle/validate/check versions of Python packages. Related to #2769.
closed
https://github.com/huggingface/datasets/pull/2777
2021-08-10T07:51:39
2021-08-18T13:56:27
2021-08-18T13:56:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
964,400,596
2,776
document `config.HF_DATASETS_OFFLINE` and precedence
https://github.com/huggingface/datasets/pull/1976 implemented `HF_DATASETS_OFFLINE`, but: 1. `config.HF_DATASETS_OFFLINE` is not documented 2. the precedence is not documented (env, config) I'm thinking it probably should be similar to what it says https://huggingface.co/docs/datasets/loading_datasets.html#from-th...
open
https://github.com/huggingface/datasets/issues/2776
2021-08-09T21:23:17
2021-08-09T21:23:17
null
{ "login": "stas00", "id": 10676103, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
964,303,626
2,775
`generate_random_fingerprint()` deterministic with 🤗Transformers' `set_seed()`
## Describe the bug **Update:** I dug into this to try to reproduce the underlying issue, and I believe it's that `set_seed()` from the `transformers` library makes the "random" fingerprint identical each time. I believe this is still a bug, because `datasets` is used exactly this way in `transformers` after `set_se...
closed
https://github.com/huggingface/datasets/issues/2775
2021-08-09T19:28:51
2024-01-26T15:05:36
2024-01-26T15:05:35
{ "login": "mbforbes", "id": 1170062, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
963,932,199
2,774
Prevent .map from using multiprocessing when loading from cache
## Context In our setup, we use a different setup for training vs preprocessing datasets. Usually we are able to obtain a high number of CPUs for preprocessing, which allows us to use `num_proc`; however, we can't use as many during the training phase. Currently, if we use `num_proc={whatever the preprocessing value was}` we load fr...
closed
https://github.com/huggingface/datasets/pull/2774
2021-08-09T12:11:38
2021-09-09T10:20:28
2021-09-09T10:20:28
{ "login": "thomasw21", "id": 24695242, "type": "User" }
[]
true
[]
963,730,497
2,773
Remove dataset_infos.json
**Is your feature request related to a problem? Please describe.** As discussed, there are infos in the `dataset_infos.json` which are redundant and we could have them only in the README file. Others could be migrated to the README, like: "dataset_size", "size_in_bytes", "download_size", "splits.split_name.[num_byt...
closed
https://github.com/huggingface/datasets/issues/2773
2021-08-09T07:43:19
2024-05-04T14:52:10
2024-05-04T14:52:10
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "generic discussion", "color": "c5def5" } ]
false
[]
963,348,834
2,772
Remove returned feature constrain
In the current version, the returned value of the map function has to be a list or ndarray. However, this makes it unsuitable for many tasks. In NLP, many features are sparse, like verb words and noun chunks; if we want to assign different values to different words, this will result in a large sparse matrix if we only score...
open
https://github.com/huggingface/datasets/issues/2772
2021-08-08T04:01:30
2021-08-08T08:48:01
null
{ "login": "PosoSAgapo", "id": 33200481, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
963,257,036
2,771
[WIP][Common Voice 7] Add common voice 7.0
This PR allows loading the new Common Voice dataset manually, as explained when doing: ```python from datasets import load_dataset ds = load_dataset("./datasets/datasets/common_voice_7", "ab") ``` => ``` Please follow the manual download instructions: You need t...
closed
https://github.com/huggingface/datasets/pull/2771
2021-08-07T16:01:10
2021-12-06T23:24:02
2021-12-06T23:24:02
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
963,246,512
2,770
Add support for fast tokenizer in BertScore
This PR adds support for a fast tokenizer in BertScore, which has been added recently to the lib. Fixes #2765
closed
https://github.com/huggingface/datasets/pull/2770
2021-08-07T15:00:03
2021-08-09T12:34:43
2021-08-09T11:16:25
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
963,240,802
2,769
Allow PyArrow from source
When installing pyarrow from source the version is: ```python >>> import pyarrow; pyarrow.__version__ '2.1.0.dev612' ``` -> however this breaks the install check at init of `datasets`. This PR makes sure that everything coming after the last `'.'` is removed.
closed
https://github.com/huggingface/datasets/pull/2769
2021-08-07T14:26:44
2021-08-09T15:38:39
2021-08-09T15:38:39
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
963,229,173
2,768
`ArrowInvalid: Added column's length must match table's length.` after using `select`
## Describe the bug I would like to add a column to a downsampled dataset. However I get an error message saying the length don't match with the length of the unsampled dataset indicated. I suspect that the dataset size is not updated when calling `select`. ## Steps to reproduce the bug ```python from datasets im...
closed
https://github.com/huggingface/datasets/issues/2768
2021-08-07T13:17:29
2021-08-09T11:26:43
2021-08-09T11:26:43
{ "login": "lvwerra", "id": 8264887, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
963,002,120
2,767
equal operation to perform unbatch for huggingface datasets
Hi, I need to use an "unbatch" operation in TensorFlow on a huggingface dataset. I could not find this operation; could you kindly direct me how I can do it? Here is the problem I am trying to solve: I am considering the "record" dataset in SuperGlue, and I need to replicate each entry of the dataset for each answer, to ma...
closed
https://github.com/huggingface/datasets/issues/2767
2021-08-06T19:45:52
2022-03-07T13:58:00
2022-03-07T13:58:00
{ "login": "dorooddorood606", "id": 79288051, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
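The issue above asks for an "unbatch" operation that replicates each entry once per answer. A pure-Python sketch of that expansion (the field names `query`/`answers` are made up for illustration and are not the actual SuperGlue ReCoRD schema):

```python
def unbatch(examples):
    """Expand each example with multiple answers into one example per answer."""
    for example in examples:
        for answer in example["answers"]:
            yield {"query": example["query"], "answer": answer}

batch = [
    {"query": "q1", "answers": ["a", "b"]},
    {"query": "q2", "answers": ["c"]},
]
flat = list(unbatch(batch))
print(flat)
# → [{'query': 'q1', 'answer': 'a'}, {'query': 'q1', 'answer': 'b'},
#    {'query': 'q2', 'answer': 'c'}]
```

The same effect can be achieved on a `datasets.Dataset` with a batched `map` that returns more rows than it receives, since batched mapping does not require input and output batch sizes to match.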