| id | number | title | body | state | html_url | created_at | updated_at | closed_at | user | labels | is_pull_request | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
932,143,634 | 2,560 | fix Dataset.map when num_procs > num rows | closes #2470
## Testing notes
To run updated tests:
```sh
pytest tests/test_arrow_dataset.py -k "BaseDatasetTest and test_map_multiprocessing" -s
```
With Python code (to view warning):
```python
from datasets import Dataset
dataset = Dataset.from_dict({"x": ["sample"]})
print(len(dataset))
dataset.map... | closed | https://github.com/huggingface/datasets/pull/2560 | 2021-06-29T02:24:11 | 2021-06-29T15:00:18 | 2021-06-29T14:53:31 | {
"login": "connor-mccarthy",
"id": 55268212,
"type": "User"
} | [] | true | [] |
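The fix in the row above can be sketched as clamping the number of worker processes to the dataset size. A minimal illustration, assuming a hypothetical helper name (this is not the actual library code):

```python
import warnings

def clamped_num_proc(num_rows: int, num_proc: int) -> int:
    # Hypothetical helper: spawning more workers than there are rows is
    # wasteful, so cap num_proc at the row count and warn the caller.
    if num_proc > num_rows:
        warnings.warn(
            f"num_proc ({num_proc}) is greater than the number of rows "
            f"({num_rows}); reducing num_proc to {num_rows}."
        )
        return num_rows
    return num_proc

print(clamped_num_proc(1, 8))    # → 1 (dataset from the snippet has one row)
print(clamped_num_proc(100, 8))  # → 8 (enough rows, num_proc unchanged)
```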
931,849,724 | 2,559 | Memory usage consistently increases when processing a dataset with `.map` | ## Describe the bug
I have a HF dataset with image paths stored in it and I am trying to load those image paths using `.map` with `num_proc=80`. I am noticing that the memory usage consistently keeps on increasing with time. I tried using `DEFAULT_WRITER_BATCH_SIZE=10` in the builder to decrease arrow writer's batch... | closed | https://github.com/huggingface/datasets/issues/2559 | 2021-06-28T18:31:58 | 2023-07-20T13:34:10 | 2023-07-20T13:34:10 | {
"login": "apsdehal",
"id": 3616806,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
931,736,647 | 2,558 | Update: WebNLG - update checksums | The master branch changed, so I computed the new checksums.
I also pinned a specific revision so that it doesn't happen again in the future.
Fix https://github.com/huggingface/datasets/issues/2553 | closed | https://github.com/huggingface/datasets/pull/2558 | 2021-06-28T16:16:37 | 2021-06-28T17:23:17 | 2021-06-28T17:23:16 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
931,633,823 | 2,557 | Fix `fever` keys | The keys had duplicates since they were reset to 0 after each file.
I fixed it by taking into account the file index as well. | closed | https://github.com/huggingface/datasets/pull/2557 | 2021-06-28T14:27:02 | 2021-06-28T16:11:30 | 2021-06-28T16:11:29 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
931,595,872 | 2,556 | Better DuplicateKeysError error to help the user debug the issue | As mentioned in https://github.com/huggingface/datasets/issues/2552 it would be nice to improve the error message when a dataset fails to build because there are duplicate example keys.
The current one is
```python
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 48
Keys s... | closed | https://github.com/huggingface/datasets/issues/2556 | 2021-06-28T13:50:57 | 2022-06-28T09:26:04 | 2022-06-28T09:26:04 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "good first issue",
"color": "7057ff"
}
] | false | [] |
931,585,485 | 2,555 | Fix code_search_net keys | There were duplicate keys in the `code_search_net` dataset, as reported in https://github.com/huggingface/datasets/issues/2552
I fixed the keys (it was an addition of the file and row indices, which was causing collisions)
Fix #2552. | closed | https://github.com/huggingface/datasets/pull/2555 | 2021-06-28T13:40:23 | 2021-09-02T08:24:43 | 2021-06-28T14:10:35 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
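The collision described in this fix is easy to reproduce in plain Python: summing a file index and a row index maps distinct (file, row) pairs to the same key, while combining them into one string keeps them unique. Illustrative sketch only, not the dataset script itself:

```python
def bad_key(file_idx: int, row_idx: int) -> int:
    # Adding the indices collides: (0, 1) and (1, 0) both give 1,
    # which is what triggered DuplicatedKeysError.
    return file_idx + row_idx

def good_key(file_idx: int, row_idx: int) -> str:
    # Combining them keeps every (file, row) pair distinct.
    return f"{file_idx}_{row_idx}"

assert bad_key(0, 1) == bad_key(1, 0)    # duplicate key
assert good_key(0, 1) != good_key(1, 0)  # unique keys
```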
931,453,855 | 2,554 | Multilabel metrics not supported | When I try to use a metric like F1 macro I get the following error:
```
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
```
There is an explicit casting here:
https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17e768a075/src/datasets/features.py#L... | closed | https://github.com/huggingface/datasets/issues/2554 | 2021-06-28T11:09:46 | 2021-10-13T12:29:13 | 2021-07-08T08:40:15 | {
"login": "GuillemGSubies",
"id": 37592763,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
931,365,926 | 2,553 | load_dataset("web_nlg") NonMatchingChecksumError | Hi! It seems the WebNLG dataset gives a NonMatchingChecksumError.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('web_nlg', name="release_v3.0_en", split="dev")
```
Gives
```
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['h... | closed | https://github.com/huggingface/datasets/issues/2553 | 2021-06-28T09:26:46 | 2021-06-28T17:23:39 | 2021-06-28T17:23:16 | {
"login": "alxthm",
"id": 33730312,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
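The NonMatchingChecksumError above is the library's integrity check firing: the checksum recorded at dataset-creation time no longer matches the downloaded bytes because the upstream repository changed. Conceptually the check reduces to something like this simplified sketch:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_checksum(data: bytes, expected: str) -> None:
    # Raise if the downloaded bytes no longer match the recorded checksum,
    # which is what happens when the source files change upstream.
    actual = sha256_of(data)
    if actual != expected:
        raise ValueError(f"Checksums didn't match: expected {expected}, got {actual}")

recorded = sha256_of(b"release_v3.0_en")        # computed when the dataset was added
verify_checksum(b"release_v3.0_en", recorded)   # passes: content unchanged
try:
    verify_checksum(b"release_v3.1_en", recorded)  # upstream changed
except ValueError as err:
    print("caught:", err)
```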
931,354,687 | 2,552 | Keys should be unique error on code_search_net | ## Describe the bug
Loading `code_search_net` seems not possible at the moment.
## Steps to reproduce the bug
```python
>>> load_dataset('code_search_net')
Downloading: 8.50kB [00:00, 3.09MB/s] ... | closed | https://github.com/huggingface/datasets/issues/2552 | 2021-06-28T09:15:20 | 2021-09-06T14:08:30 | 2021-09-02T08:25:29 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
930,967,978 | 2,551 | Fix FileSystems documentation | ### What this fixes:
This PR resolves several issues I discovered in the documentation on the `datasets.filesystems` module ([this page](https://huggingface.co/docs/datasets/filesystems.html)).
### What were the issues?
When I originally tried implementing the code examples I faced several bugs attributed to:
-... | closed | https://github.com/huggingface/datasets/pull/2551 | 2021-06-27T16:18:42 | 2021-06-28T13:09:55 | 2021-06-28T13:09:54 | {
"login": "connor-mccarthy",
"id": 55268212,
"type": "User"
} | [] | true | [] |
930,951,287 | 2,550 | Allow for incremental cumulative metric updates in a distributed setup | Currently, using a metric allows for one of the following:
- Per example/batch metrics
- Cumulative metrics over the whole data
What I'd like is to have an efficient way to get cumulative metrics over the examples/batches added so far, in order to display it as part of the progress bar during training/evaluation.
... | closed | https://github.com/huggingface/datasets/issues/2550 | 2021-06-27T15:00:58 | 2021-09-26T13:42:39 | 2021-09-26T13:42:39 | {
"login": "eladsegal",
"id": 13485709,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
929,819,093 | 2,549 | Handling unlabeled datasets | Hi!
Is there a way for datasets to produce unlabeled instances (e.g., can the `ClassLabel` be nullable)?
For example, I want to use the MNLI dataset reader ( https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi_nli.py ) on a file that doesn't have the `gold_label` field. I tried setting `"... | closed | https://github.com/huggingface/datasets/issues/2549 | 2021-06-25T04:32:23 | 2021-06-25T21:07:57 | 2021-06-25T21:07:56 | {
"login": "nelson-liu",
"id": 7272031,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
929,232,831 | 2,548 | Field order issue in loading json | ## Describe the bug
The `load_dataset` function expects columns in alphabetical order when loading json files.
Similar bug was previously reported for csv in #623 and fixed in #684.
## Steps to reproduce the bug
For a json file `j.json`,
```
{"c":321, "a": 1, "b": 2}
```
Running the following,
```
f= data... | closed | https://github.com/huggingface/datasets/issues/2548 | 2021-06-24T13:29:53 | 2021-06-24T14:36:43 | 2021-06-24T14:34:05 | {
"login": "luyug",
"id": 55288513,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
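The ordering mismatch reported above can be seen with the standard library alone: the parsed JSON keeps insertion order `c, a, b`, while alphabetical order gives `a, b, c`. The fix, as for CSV in #684, is to rearrange parsed fields to match the declared schema; a sketch of that idea (the helper name is hypothetical):

```python
import json

record = json.loads('{"c": 321, "a": 1, "b": 2}')
print(list(record))    # insertion order: ['c', 'a', 'b']
print(sorted(record))  # alphabetical order: ['a', 'b', 'c']

def reorder_to_schema(record: dict, schema_fields: list) -> dict:
    # Rearrange parsed fields to match the declared feature schema,
    # so values are not silently paired with the wrong column.
    return {name: record[name] for name in schema_fields}

print(reorder_to_schema(record, ["a", "b", "c"]))  # → {'a': 1, 'b': 2, 'c': 321}
```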
929,192,329 | 2,547 | Dataset load_from_disk is too slow | @lhoestq
## Describe the bug
It's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk: there are no preprocessing steps; it's only loading with load_from_disk. I have 96 CPUs, but only 1 is used for this, which is inefficient. Moreover, its usage is at 1%... This is happening in t... | open | https://github.com/huggingface/datasets/issues/2547 | 2021-06-24T12:45:44 | 2021-06-25T14:56:38 | null | {
"login": "avacaondata",
"id": 35173563,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
929,091,689 | 2,546 | Add license to the Cambridge English Write & Improve + LOCNESS dataset card | As noticed in https://github.com/huggingface/datasets/pull/2539, the licensing information was missing for this dataset.
I added it and I also filled a few other empty sections. | closed | https://github.com/huggingface/datasets/pull/2546 | 2021-06-24T10:39:29 | 2021-06-24T10:52:01 | 2021-06-24T10:52:01 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
929,016,580 | 2,545 | Fix DuplicatedKeysError in drop dataset | Close #2542.
cc: @VictorSanh. | closed | https://github.com/huggingface/datasets/pull/2545 | 2021-06-24T09:10:39 | 2021-06-24T14:57:08 | 2021-06-24T14:57:08 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
928,900,827 | 2,544 | Fix logging levels | Sometimes default `datasets` logging can be too verbose. One approach could be reducing some logging levels, from info to debug, or from warning to info.
Close #2543.
cc: @stas00 | closed | https://github.com/huggingface/datasets/pull/2544 | 2021-06-24T06:41:36 | 2021-06-25T13:40:19 | 2021-06-25T13:40:19 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
928,571,915 | 2,543 | switching some low-level log.info's to log.debug? | In https://github.com/huggingface/transformers/pull/12276 we are now changing the examples to have `datasets` on the same log level as `transformers`, so that one setting can apply consistent logging across all involved components.
The trouble is that now we get a ton of these:
```
06/23/2021 12:15:31 - INFO - da... | closed | https://github.com/huggingface/datasets/issues/2543 | 2021-06-23T19:26:55 | 2021-06-25T13:40:19 | 2021-06-25T13:40:19 | {
"login": "stas00",
"id": 10676103,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
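The requested demotion follows the standard library convention: messages below the configured level are dropped, so moving routine messages from INFO to DEBUG silences them at the default INFO setting, while raising verbosity opts back in. A stdlib-only sketch (the real `datasets` logger wraps this machinery):

```python
import logging

logger = logging.getLogger("datasets_example")
logger.setLevel(logging.INFO)

records = []
handler = logging.Handler()
handler.emit = lambda record: records.append(record.getMessage())
logger.addHandler(handler)

logger.info("caching is enabled")  # emitted: INFO >= INFO
logger.debug("lock acquired")      # suppressed: DEBUG < INFO

logger.setLevel(logging.DEBUG)     # opt back in to verbose output
logger.debug("lock acquired")      # now emitted

print(records)  # → ['caching is enabled', 'lock acquired']
```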
928,540,382 | 2,542 | `datasets.keyhash.DuplicatedKeysError` for `drop` and `adversarial_qa/adversarialQA` | ## Describe the bug
Failure to generate the datasets (`drop` and subset `adversarialQA` from `adversarial_qa`) because of duplicate keys.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("drop")
load_dataset("adversarial_qa", "adversarialQA")
```
## Expected results
Th... | closed | https://github.com/huggingface/datasets/issues/2542 | 2021-06-23T18:41:16 | 2021-06-25T21:50:05 | 2021-06-24T14:57:08 | {
"login": "VictorSanh",
"id": 16107619,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
928,529,078 | 2,541 | update discofuse link cc @ekQ | Updating the discofuse link: https://github.com/google-research-datasets/discofuse/commit/fd4b120cb3dd19a417e7f3b5432010b574b5eeee | closed | https://github.com/huggingface/datasets/pull/2541 | 2021-06-23T18:24:58 | 2021-06-28T14:34:51 | 2021-06-28T14:34:50 | {
"login": "VictorSanh",
"id": 16107619,
"type": "User"
} | [] | true | [] |
928,433,892 | 2,540 | Remove task templates if required features are removed during `Dataset.map` | This PR fixes a bug reported by @craffel where removing a dataset's columns during `Dataset.map` triggered a `KeyError` because the `TextClassification` template tried to access the removed columns during `DatasetInfo.__post_init__`:
```python
from datasets import load_dataset
# `yelp_polarity` comes with a `Tex... | closed | https://github.com/huggingface/datasets/pull/2540 | 2021-06-23T16:20:25 | 2021-06-24T14:41:15 | 2021-06-24T13:34:03 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | true | [] |
927,952,429 | 2,539 | remove wi_locness dataset due to licensing issues | It was brought to my attention that this dataset's license is not only missing, but also prohibits redistribution. I contacted the original author to apologize for this oversight and asked if we could still use it, but unfortunately we can't and the author kindly asked to take down this dataset. | closed | https://github.com/huggingface/datasets/pull/2539 | 2021-06-23T07:35:32 | 2021-06-25T14:52:42 | 2021-06-25T14:52:42 | {
"login": "aseifert",
"id": 4944799,
"type": "User"
} | [] | true | [] |
927,940,691 | 2,538 | Loading partial dataset when debugging | I am using PyTorch Lightning along with datasets (thanks for so many datasets already prepared and the great splits).
Every time I execute load_dataset for the imdb dataset it takes some time even if I specify a split involving very few samples. I guess this is due to hashing, as per the other issues.
Is there a wa... | open | https://github.com/huggingface/datasets/issues/2538 | 2021-06-23T07:19:52 | 2023-04-19T11:05:38 | null | {
"login": "reachtarunhere",
"id": 9061913,
"type": "User"
} | [] | false | [] |
927,472,659 | 2,537 | Add Parquet loader + from_parquet and to_parquet | Continuation of #2247
I added a "parquet" dataset builder, as well as the methods `Dataset.from_parquet` and `Dataset.to_parquet`.
As usual, the data are converted to arrow in a batched way to avoid loading everything in memory. | closed | https://github.com/huggingface/datasets/pull/2537 | 2021-06-22T17:28:23 | 2021-06-30T16:31:03 | 2021-06-30T16:30:58 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
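"Converted to arrow in a batched way" means rows are streamed through a fixed-size buffer instead of being materialized all at once. The idea reduces to a chunking generator like this conceptual sketch (not the library code):

```python
from itertools import islice

def iter_batches(rows, batch_size):
    # Yield fixed-size chunks so only one batch lives in memory at a time,
    # mirroring how rows are written to Arrow without loading everything.
    it = iter(rows)
    while batch := list(islice(it, batch_size)):
        yield batch

batches = list(iter_batches(range(10), batch_size=4))
print(batches)  # → [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```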
927,338,639 | 2,536 | Use `Audio` features for `AutomaticSpeechRecognition` task template | In #2533 we added a task template for speech recognition that relies on the file paths to the audio files. As pointed out by @SBrandeis this is brittle as it doesn't port easily across different OS'.
The solution is to use dedicated `Audio` features when casting the dataset. These features are not yet available in ... | closed | https://github.com/huggingface/datasets/issues/2536 | 2021-06-22T15:07:21 | 2022-06-01T17:18:16 | 2022-06-01T17:18:16 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
927,334,349 | 2,535 | Improve Features docs | - Fix rendering and cross-references in Features docs
- Add docstrings to Features methods | closed | https://github.com/huggingface/datasets/pull/2535 | 2021-06-22T15:03:27 | 2021-06-23T13:40:43 | 2021-06-23T13:40:43 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
927,201,435 | 2,534 | Sync with transformers disabling NOTSET | Close #2528. | closed | https://github.com/huggingface/datasets/pull/2534 | 2021-06-22T12:54:21 | 2021-06-24T14:42:47 | 2021-06-24T14:42:47 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
927,193,264 | 2,533 | Add task template for automatic speech recognition | This PR adds a task template for automatic speech recognition. In this task, the input is a path to an audio file which the model consumes to produce a transcription.
Usage:
```python
from datasets import load_dataset
from datasets.tasks import AutomaticSpeechRecognition
ds = load_dataset("timit_asr", split=... | closed | https://github.com/huggingface/datasets/pull/2533 | 2021-06-22T12:45:02 | 2021-06-23T16:14:46 | 2021-06-23T15:56:57 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | true | [] |
927,063,196 | 2,532 | Tokenizer's normalization preprocessor causes misalignment in return_offsets_mapping for token classification task | [This colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) implements a token classification input pipeline extending the logic from [this Hugging Face example](https://huggingface.co/transformers/custom_datasets.html#tok-ner).
The pipeline works fine with most instance i... | closed | https://github.com/huggingface/datasets/issues/2532 | 2021-06-22T10:08:18 | 2021-06-23T05:17:25 | 2021-06-23T05:17:25 | {
"login": "cosmeowpawlitan",
"id": 50871412,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
927,017,924 | 2,531 | Fix dev version | The dev version that ends in `.dev0` should be greater than the current version.
However, it happens that `1.8.0 > 1.8.0.dev0`, for example.
Therefore we need to use `1.8.1.dev0` for example in this case.
I updated the dev version to use `1.8.1.dev0`, and I also added a comment in the setup.py in the release steps a... | closed | https://github.com/huggingface/datasets/pull/2531 | 2021-06-22T09:17:10 | 2021-06-22T09:47:10 | 2021-06-22T09:47:09 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
927,013,773 | 2,530 | Fixed label parsing in the ProductReviews dataset | Fixed issue with parsing dataset labels. | closed | https://github.com/huggingface/datasets/pull/2530 | 2021-06-22T09:12:45 | 2021-06-22T12:55:20 | 2021-06-22T12:52:40 | {
"login": "yavuzKomecoglu",
"id": 5150963,
"type": "User"
} | [] | true | [] |
926,378,812 | 2,529 | Add summarization template | This PR adds a task template for text summarization. As far as I can tell, we do not need to distinguish between "extractive" or "abstractive" summarization - both can be handled with this template.
Usage:
```python
from datasets import load_dataset
from datasets.tasks import Summarization
ds = load_dataset(... | closed | https://github.com/huggingface/datasets/pull/2529 | 2021-06-21T16:08:31 | 2021-06-23T14:22:11 | 2021-06-23T13:30:10 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | true | [] |
926,314,656 | 2,528 | Logging cannot be set to NOTSET similar to transformers | ## Describe the bug
In the transformers library you can set the verbosity level to logging.NOTSET to work around the usage of tqdm and IPywidgets; however, in Datasets this is no longer possible. This is because transformers sets the verbosity level of tqdm with [this](https://github.com/huggingface/transformers/blob/b5...
"login": "joshzwiebel",
"id": 34662010,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
926,031,525 | 2,527 | Replace bad `n>1M` size tag | Some datasets were still using the old `n>1M` tag which has been replaced with tags `1M<n<10M`, etc.
This resulted in unexpected results when searching for datasets bigger than 1M on the hub, since it was only showing the ones with the tag `n>1M`. | closed | https://github.com/huggingface/datasets/pull/2527 | 2021-06-21T09:42:35 | 2021-06-21T15:06:50 | 2021-06-21T15:06:49 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
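The replacement tags partition dataset sizes into decade buckets. A helper that derives the tag from the number of examples might look like the following; the function name is hypothetical and the bucket labels follow the tag scheme mentioned above:

```python
def size_tag(n: int) -> str:
    # Map a number of examples to its decade bucket, e.g. 2_000_000 -> "1M<n<10M",
    # replacing the retired open-ended "n>1M" tag.
    bounds = [
        (1_000, "n<1K"),
        (10_000, "1K<n<10K"),
        (100_000, "10K<n<100K"),
        (1_000_000, "100K<n<1M"),
        (10_000_000, "1M<n<10M"),
        (100_000_000, "10M<n<100M"),
    ]
    for upper, tag in bounds:
        if n < upper:
            return tag
    return "n>100M"

print(size_tag(2_000_000))  # → 1M<n<10M
print(size_tag(500))        # → n<1K
```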
925,929,228 | 2,526 | Add COCO datasets | ## Adding a Dataset
- **Name:** COCO
- **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset.
- **Paper + website:** https://cocodataset.org/#home
- **Data:** https://cocodataset.org/#download
- **Motivation:** It would be great to have COCO available in HuggingFace datasets... | open | https://github.com/huggingface/datasets/issues/2526 | 2021-06-21T07:48:32 | 2023-06-22T14:12:18 | null | {
"login": "NielsRogge",
"id": 48327001,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "vision",
"color": "bfdadc"
}
] | false | [] |
925,896,358 | 2,525 | Use scikit-learn package rather than sklearn in setup.py | The sklearn package is an historical thing and should probably not be used by anyone, see https://github.com/scikit-learn/scikit-learn/issues/8215#issuecomment-344679114 for some caveats.
Note: this affects only TESTS_REQUIRE so I guess only developers not end users. | closed | https://github.com/huggingface/datasets/pull/2525 | 2021-06-21T07:04:25 | 2021-06-21T10:01:13 | 2021-06-21T08:57:33 | {
"login": "lesteve",
"id": 1680079,
"type": "User"
} | [] | true | [] |
925,610,934 | 2,524 | Raise FileNotFoundError in WindowsFileLock | Closes #2443 | closed | https://github.com/huggingface/datasets/pull/2524 | 2021-06-20T14:25:11 | 2021-06-28T09:56:22 | 2021-06-28T08:47:39 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
925,421,008 | 2,523 | Fr | __Originally posted by @lewtun in https://github.com/huggingface/datasets/pull/2469__ | closed | https://github.com/huggingface/datasets/issues/2523 | 2021-06-19T15:56:32 | 2021-06-19T18:48:23 | 2021-06-19T18:48:23 | {
"login": "aDrIaNo34500",
"id": 71971234,
"type": "User"
} | [] | false | [] |
925,334,379 | 2,522 | Documentation Mistakes in Dataset: emotion | As per documentation,
Dataset: emotion
Homepage: https://github.com/dair-ai/emotion_dataset
Dataset: https://github.com/huggingface/datasets/blob/master/datasets/emotion/emotion.py
Permalink: https://huggingface.co/datasets/viewer/?dataset=emotion
Emotion is a dataset of English Twitter messages with eight b... | closed | https://github.com/huggingface/datasets/issues/2522 | 2021-06-19T07:08:57 | 2023-01-02T12:04:58 | 2023-01-02T12:04:58 | {
"login": "GDGauravDutta",
"id": 62606251,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
925,030,685 | 2,521 | Insert text classification template for Emotion dataset | This PR includes a template and updated `dataset_infos.json` for the `emotion` dataset. | closed | https://github.com/huggingface/datasets/pull/2521 | 2021-06-18T15:56:19 | 2021-06-21T09:22:31 | 2021-06-21T09:22:31 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | true | [] |
925,015,004 | 2,520 | Datasets with tricky task templates | I'm collecting a list of datasets here that don't follow the "standard" taxonomy and require further investigation to implement task templates for.
## Text classification
* [hatexplain](https://huggingface.co/datasets/hatexplain): ostensibly a form of text classification, but not in the standard `(text, target)` ... | closed | https://github.com/huggingface/datasets/issues/2520 | 2021-06-18T15:33:57 | 2023-07-20T13:20:32 | 2023-07-20T13:20:32 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "Dataset discussion",
"color": "72f99f"
}
] | false | [] |
924,903,240 | 2,519 | Improve performance of pandas arrow extractor | While reviewing PR #2505, I noticed that pandas arrow extractor could be refactored to be faster. | closed | https://github.com/huggingface/datasets/pull/2519 | 2021-06-18T13:24:41 | 2021-06-21T09:06:06 | 2021-06-21T09:06:06 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
924,654,100 | 2,518 | Add task templates for tydiqa and xquad | This PR adds question-answering templates to the remaining datasets that are linked to a model on the Hub.
Notes:
* I could not test the tydiqa implementation since I don't have enough disk space 😢 . But I am confident the template works :)
* there exist other datasets like `fquad` and `mlqa` which are candida... | closed | https://github.com/huggingface/datasets/pull/2518 | 2021-06-18T08:06:34 | 2021-06-18T15:01:17 | 2021-06-18T14:50:33 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | true | [] |
924,643,345 | 2,517 | Fix typo in MatthewsCorrelation class name | Close #2513. | closed | https://github.com/huggingface/datasets/pull/2517 | 2021-06-18T07:53:06 | 2021-06-18T08:43:55 | 2021-06-18T08:43:55 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
924,597,470 | 2,516 | datasets.map pickle issue resulting in invalid mapping function | I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it's correct but it fails when I use it inside a function which is m... | open | https://github.com/huggingface/datasets/issues/2516 | 2021-06-18T06:47:26 | 2021-06-23T13:47:49 | null | {
"login": "david-waterworth",
"id": 5028974,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
924,435,447 | 2,515 | CRD3 dataset card | This PR adds additional information to the CRD3 dataset card. | closed | https://github.com/huggingface/datasets/pull/2515 | 2021-06-18T00:24:07 | 2021-06-21T10:18:44 | 2021-06-21T10:18:44 | {
"login": "wilsonyhlee",
"id": 1937386,
"type": "User"
} | [] | true | [] |
924,417,172 | 2,514 | Can datasets remove duplicated rows? | **Is your feature request related to a problem? Please describe.**
I find myself more and more relying on datasets just to do all the preprocessing. One thing, however: for removing duplicated rows, I couldn't find out how, and I am always converting datasets to pandas to do that.
**Describe the solution you'd like*... | open | https://github.com/huggingface/datasets/issues/2514 | 2021-06-17T23:35:38 | 2024-07-19T13:23:01 | null | {
"login": "liuxinglan",
"id": 16516583,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
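Until such a feature lands, duplicated rows can be dropped without a pandas round trip by hashing a canonical serialization of each row and keeping the first occurrence, which is the same predicate one could apply with `Dataset.filter`. A sketch over plain dicts:

```python
import json

def unique_rows(rows):
    # Keep the first occurrence of each row, using a canonical JSON
    # serialization (sorted keys) as the deduplication key.
    seen = set()
    out = []
    for row in rows:
        key = json.dumps(row, sort_keys=True)
        if key not in seen:
            seen.add(key)
            out.append(row)
    return out

rows = [{"text": "a", "label": 0}, {"text": "a", "label": 0}, {"text": "b", "label": 1}]
print(unique_rows(rows))  # → [{'text': 'a', 'label': 0}, {'text': 'b', 'label': 1}]
```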
924,174,413 | 2,513 | Corelation should be Correlation | https://github.com/huggingface/datasets/blob/0e87e1d053220e8ecddfa679bcd89a4c7bc5af62/metrics/matthews_correlation/matthews_correlation.py#L66 | closed | https://github.com/huggingface/datasets/issues/2513 | 2021-06-17T17:28:48 | 2021-06-18T08:43:55 | 2021-06-18T08:43:55 | {
"login": "colbym-MM",
"id": 71514164,
"type": "User"
} | [] | false | [] |
924,069,353 | 2,512 | seqeval metric does not work with a recent version of sklearn: classification_report() got an unexpected keyword argument 'output_dict' | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
seqeval = load_metric("seqeval")
seqeval.compute(predictions=[['A']], references=[['A']])
```
## Expected results
The function computes a dict with ... | closed | https://github.com/huggingface/datasets/issues/2512 | 2021-06-17T15:36:02 | 2021-06-17T15:46:07 | 2021-06-17T15:46:07 | {
"login": "avidale",
"id": 8642136,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
923,762,133 | 2,511 | Add C4 | ## Adding a Dataset
- **Name:** *C4*
- **Description:** *https://github.com/allenai/allennlp/discussions/5056*
- **Paper:** *https://arxiv.org/abs/1910.10683*
- **Data:** *https://huggingface.co/datasets/allenai/c4*
- **Motivation:** *Used a lot for pretraining*
Instructions to add a new dataset can be found [h... | closed | https://github.com/huggingface/datasets/issues/2511 | 2021-06-17T10:31:04 | 2021-07-05T12:36:58 | 2021-07-05T12:36:57 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
923,735,485 | 2,510 | Add align_labels_with_mapping to DatasetDict | https://github.com/huggingface/datasets/pull/2457 added the `Dataset.align_labels_with_mapping` method.
In this PR I also added `DatasetDict.align_labels_with_mapping` | closed | https://github.com/huggingface/datasets/pull/2510 | 2021-06-17T10:03:35 | 2021-06-17T10:45:25 | 2021-06-17T10:45:24 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
922,846,035 | 2,509 | Fix fingerprint when moving cache dir | The fingerprint of a dataset changes if the cache directory is moved.
I fixed that by setting the fingerprint to be the hash of:
- the relative cache dir (dataset_name/version/config_id)
- the requested split
Close #2496
I had to fix an issue with the filelock filename that was too long (>255). It prevented t... | closed | https://github.com/huggingface/datasets/pull/2509 | 2021-06-16T16:45:09 | 2021-06-21T15:05:04 | 2021-06-21T15:05:03 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
921,863,173 | 2,508 | Load Image Classification Dataset from Local | **Is your feature request related to a problem? Please describe.**
Yes - we would like to load an image classification dataset with datasets without having to write a custom data loader.
**Describe the solution you'd like**
Given a folder structure with images of each class in each folder, the ability to load th... | closed | https://github.com/huggingface/datasets/issues/2508 | 2021-06-15T22:43:33 | 2022-03-01T16:29:44 | 2022-03-01T16:29:44 | {
"login": "Jacobsolawetz",
"id": 8428198,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
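The requested layout, one sub-folder per class, can be turned into (path, label) examples with a few lines of standard library code, which is roughly what a generic image-folder loader does. A sketch under that assumption (actual loader behavior may differ):

```python
import os
import tempfile
from pathlib import Path

def folder_to_examples(root: str):
    # Each immediate sub-directory name is a class label; every file
    # inside it becomes one {"image_path", "label"} example.
    root_path = Path(root)
    classes = sorted(d.name for d in root_path.iterdir() if d.is_dir())
    label_ids = {name: i for i, name in enumerate(classes)}
    return [
        {"image_path": str(f), "label": label_ids[cls]}
        for cls in classes
        for f in sorted((root_path / cls).iterdir())
        if f.is_file()
    ]

# Build a tiny class-per-folder layout and load it.
tmp = tempfile.mkdtemp()
for cls, fname in [("cat", "1.jpg"), ("dog", "2.jpg")]:
    os.makedirs(os.path.join(tmp, cls), exist_ok=True)
    open(os.path.join(tmp, cls, fname), "w").close()

examples = folder_to_examples(tmp)
print([e["label"] for e in examples])  # → [0, 1]
```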
921,441,962 | 2,507 | Rearrange JSON field names to match passed features schema field names | This PR depends on PR #2453 (which must be merged first).
Close #2366. | closed | https://github.com/huggingface/datasets/pull/2507 | 2021-06-15T14:10:02 | 2021-06-16T10:47:49 | 2021-06-16T10:47:49 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
921,435,598 | 2,506 | Add course banner | This PR adds a course banner similar to the one you can now see in the [Transformers repo](https://github.com/huggingface/transformers) that links to the course. Let me know if placement seems right to you or not, I can move it just below the badges too. | closed | https://github.com/huggingface/datasets/pull/2506 | 2021-06-15T14:03:54 | 2021-06-15T16:25:36 | 2021-06-15T16:25:35 | {
"login": "sgugger",
"id": 35901082,
"type": "User"
} | [] | true | [] |
921,234,797 | 2,505 | Make numpy arrow extractor faster | I changed the NumpyArrowExtractor to call directly to_numpy and see if it can lead to speed-ups as discussed in https://github.com/huggingface/datasets/issues/2498
This could make the numpy/torch/tf/jax formatting faster | closed | https://github.com/huggingface/datasets/pull/2505 | 2021-06-15T10:11:32 | 2021-06-28T09:53:39 | 2021-06-28T09:53:38 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
920,636,186 | 2,503 | SubjQA wrong boolean values in entries | ## Describe the bug
SubjQA seems to have a boolean that's consistently wrong.
It defines:
- question_subj_level: The subjectivity level of the question (on a 1 to 5 scale with 1 being the most subjective).
- is_ques_subjective: A boolean subjectivity label derived from question_subj_level (i.e., scores below 4 are... | open | https://github.com/huggingface/datasets/issues/2503 | 2021-06-14T17:42:46 | 2021-08-25T03:52:06 | null | {
"login": "arnaudstiegler",
"id": 26485052,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
920,623,572 | 2,502 | JAX integration | Hi !
I just added the "jax" formatting, as we already have for pytorch, tensorflow, numpy (and also pandas and arrow).
It does pretty much the same thing as the pytorch formatter except it creates jax.numpy.ndarray objects.
```python
from datasets import Dataset
d = Dataset.from_dict({"foo": [[0., 1., 2.]]})... | closed | https://github.com/huggingface/datasets/pull/2502 | 2021-06-14T17:24:23 | 2021-06-21T16:15:50 | 2021-06-21T16:15:49 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
920,579,634 | 2,501 | Add Zenodo metadata file with license | This Zenodo metadata file fixes the name of the `Datasets` license appearing in the DOI as `"Apache-2.0"`, which otherwise by default is `"other-open"`.
Close #2472. | closed | https://github.com/huggingface/datasets/pull/2501 | 2021-06-14T16:28:12 | 2021-06-14T16:49:42 | 2021-06-14T16:49:42 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
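A `.zenodo.json` that pins the license typically looks like the fragment below; the exact field values here are assumptions for illustration, not necessarily the file added in this PR:

```json
{
  "license": "Apache-2.0",
  "title": "huggingface/datasets",
  "upload_type": "software"
}
```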
920,471,411 | 2,500 | Add load_dataset_builder | Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself.
TODOs:
- [x] Add docstring and entry in the docs
- [x] Add tests
Closes #2484
| closed | https://github.com/huggingface/datasets/pull/2500 | 2021-06-14T14:27:45 | 2025-06-20T18:07:24 | 2021-07-05T10:45:58 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
920,413,021 | 2,499 | Python Programming Puzzles | ## Adding a Dataset
- **Name:** Python Programming Puzzles
- **Description:** A programming challenge dataset called programming puzzles, intended as an objective and comprehensive evaluation of program synthesis
- **Paper:** https://arxiv.org/pdf/2106.05784.pdf
- **Data:** https://github.com/microsoft/PythonProgrammingPuzzles ([Scro... | open | https://github.com/huggingface/datasets/issues/2499 | 2021-06-14T13:27:18 | 2021-06-15T18:14:14 | null | {
"login": "VictorSanh",
"id": 16107619,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
920,411,285 | 2,498 | Improve torch formatting performance | **Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia an... | open | https://github.com/huggingface/datasets/issues/2498 | 2021-06-14T13:25:24 | 2022-07-15T17:12:04 | null | {
"login": "vblagoje",
"id": 458335,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
920,250,382 | 2,497 | Use default cast for sliced list arrays if pyarrow >= 4 | Since pyarrow version 4, casting sliced lists is supported.
This PR uses the default pyarrow cast in Datasets to cast sliced list arrays if the pyarrow version is >= 4.
In relation with PR #2461 and #2490.
cc: @lhoestq, @abhi1thakur, @SBrandeis | closed | https://github.com/huggingface/datasets/pull/2497 | 2021-06-14T10:02:47 | 2021-06-15T18:06:18 | 2021-06-14T14:24:37 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
920,216,314 | 2,496 | Dataset fingerprint changes after moving the cache directory, which prevent cache reload when using `map` | `Dataset.map` uses the dataset fingerprint (a hash) for caching.
However the fingerprint seems to change when someone moves the cache directory of the dataset.
This is because it uses the default fingerprint generation:
1. the dataset path is used to get the fingerprint
2. the modification times of the arrow file... | closed | https://github.com/huggingface/datasets/issues/2496 | 2021-06-14T09:20:26 | 2021-06-21T15:05:03 | 2021-06-21T15:05:03 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | false | [] |
920,170,030 | 2,495 | JAX formatting | We already support pytorch, tensorflow, numpy, pandas and arrow dataset formatting. Let's add jax as well | closed | https://github.com/huggingface/datasets/issues/2495 | 2021-06-14T08:32:07 | 2021-06-21T16:15:49 | 2021-06-21T16:15:49 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | false | [] |
920,149,183 | 2,494 | Improve docs on Enhancing performance | In the ["Enhancing performance"](https://huggingface.co/docs/datasets/loading_datasets.html#enhancing-performance) section of docs, add specific use cases:
- How to make datasets as fast as possible
- How to make datasets use the least RAM
- How to make datasets use the least disk space
cc: @thomwolf
| open | https://github.com/huggingface/datasets/issues/2494 | 2021-06-14T08:11:48 | 2025-06-28T18:55:38 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | false | [] |
919,833,281 | 2,493 | add tensorflow-macos support | ref - https://github.com/huggingface/datasets/issues/2068 | closed | https://github.com/huggingface/datasets/pull/2493 | 2021-06-13T16:20:08 | 2021-06-15T08:53:06 | 2021-06-15T08:53:06 | {
"login": "slayerjain",
"id": 12831254,
"type": "User"
} | [] | true | [] |
919,718,102 | 2,492 | Eduge | Hi, awesome folks behind the huggingface!
Here is my PR for the text classification dataset in Mongolian.
Please do let me know in case you have anything to clarify.
Thanks & Regards,
Enod | closed | https://github.com/huggingface/datasets/pull/2492 | 2021-06-13T05:10:59 | 2021-06-22T09:49:04 | 2021-06-16T10:41:46 | {
"login": "enod",
"id": 6023883,
"type": "User"
} | [] | true | [] |
919,714,506 | 2,491 | add eduge classification dataset | closed | https://github.com/huggingface/datasets/pull/2491 | 2021-06-13T04:37:01 | 2021-06-13T05:06:48 | 2021-06-13T05:06:38 | {
"login": "enod",
"id": 6023883,
"type": "User"
} | [] | true | [] | |
919,571,385 | 2,490 | Allow latest pyarrow version | Allow the latest pyarrow version, now that version 4.0.1 fixes the segfault bug introduced in version 4.0.0.
Close #2489. | closed | https://github.com/huggingface/datasets/pull/2490 | 2021-06-12T14:17:34 | 2021-07-06T16:54:52 | 2021-06-14T07:53:23 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
919,569,749 | 2,489 | Allow latest pyarrow version once segfault bug is fixed | As pointed out by @symeneses (see https://github.com/huggingface/datasets/pull/2268#issuecomment-860048613), pyarrow has fixed the segfault bug present in version 4.0.0 (see https://issues.apache.org/jira/browse/ARROW-12568):
- it was fixed on 3 May 2021
- version 4.0.1 was released on 19 May 2021 with the bug fix | closed | https://github.com/huggingface/datasets/issues/2489 | 2021-06-12T14:09:52 | 2021-06-14T07:53:23 | 2021-06-14T07:53:23 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
919,500,756 | 2,488 | Set configurable downloaded datasets path | Part of #2480. | closed | https://github.com/huggingface/datasets/pull/2488 | 2021-06-12T09:09:03 | 2021-06-14T09:13:27 | 2021-06-14T08:29:07 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
919,452,407 | 2,487 | Set configurable extracted datasets path | Part of #2480. | closed | https://github.com/huggingface/datasets/pull/2487 | 2021-06-12T05:47:29 | 2021-06-14T09:30:17 | 2021-06-14T09:02:56 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
919,174,898 | 2,486 | Add Rico Dataset | Hi there!
I want to add the Rico datasets for software engineering type data to y'alls awesome library. However, as I have started coding, I've run into a few hiccups, so I thought it best to open the PR early to get a bit of discussion on how the Rico datasets should be added to the `datasets` lib.
1) There ... | closed | https://github.com/huggingface/datasets/pull/2486 | 2021-06-11T20:17:41 | 2022-10-03T09:38:18 | 2022-10-03T09:38:18 | {
"login": "ncoop57",
"id": 7613470,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
919,099,218 | 2,485 | Implement layered building | As discussed with @stas00 and @lhoestq (see also here https://github.com/huggingface/datasets/issues/2481#issuecomment-859712190):
> My suggestion for this would be to have this enabled by default.
>
> Plus I don't know if there should be a dedicated issue to that is another functionality. But I propose layered b... | open | https://github.com/huggingface/datasets/issues/2485 | 2021-06-11T18:54:25 | 2021-06-11T18:54:25 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
919,092,635 | 2,484 | Implement loading a dataset builder | As discussed with @stas00 and @lhoestq, this would allow things like:
```python
from datasets import load_dataset_builder
dataset_name = "openwebtext"
builder = load_dataset_builder(dataset_name)
print(builder.cache_dir)
``` | closed | https://github.com/huggingface/datasets/issues/2484 | 2021-06-11T18:47:22 | 2021-07-05T10:45:57 | 2021-07-05T10:45:57 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
918,871,712 | 2,483 | Use gc.collect only when needed to avoid slow downs | In https://github.com/huggingface/datasets/commit/42320a110d9d072703814e1f630a0d90d626a1e6 we added a call to gc.collect to resolve some issues on windows (see https://github.com/huggingface/datasets/pull/2482)
However, calling gc.collect too often causes significant slowdowns (the CI run time doubled).
So I just m... | closed | https://github.com/huggingface/datasets/pull/2483 | 2021-06-11T15:09:30 | 2021-06-18T19:25:06 | 2021-06-11T15:31:36 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
918,846,027 | 2,482 | Allow to use tqdm>=4.50.0 | We used to have permission errors on Windows with the latest versions of tqdm (see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/6365/workflows/24f7c960-3176-43a5-9652-7830a23a981e/jobs/39232))
They were due to open arrow files not properly closed by pyarrow.
Since https://github.com/huggin... | closed | https://github.com/huggingface/datasets/pull/2482 | 2021-06-11T14:49:21 | 2021-06-11T15:11:51 | 2021-06-11T15:11:50 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
918,680,168 | 2,481 | Delete extracted files to save disk space | As discussed with @stas00 and @lhoestq, allowing the deletion of extracted files would save the typical user a great amount of disk space. | closed | https://github.com/huggingface/datasets/issues/2481 | 2021-06-11T12:21:52 | 2021-07-19T09:08:18 | 2021-07-19T09:08:18 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
918,678,578 | 2,480 | Set download/extracted paths configurable | As discussed with @stas00 and @lhoestq, making these paths configurable may allow users to overcome disk space limitations on different partitions/drives.
TODO:
- [x] Set configurable extracted datasets path: #2487
- [x] Set configurable downloaded datasets path: #2488
- [ ] Set configurable "incomplete" datasets path? | open | https://github.com/huggingface/datasets/issues/2480 | 2021-06-11T12:20:24 | 2021-06-15T14:23:49 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
918,672,431 | 2,479 | ❌ load_datasets ❌ | closed | https://github.com/huggingface/datasets/pull/2479 | 2021-06-11T12:14:36 | 2021-06-11T14:46:25 | 2021-06-11T14:46:25 | {
"login": "julien-c",
"id": 326577,
"type": "User"
} | [] | true | [] | |
918,507,510 | 2,478 | Create release script | Create a script so that releases can be done automatically (as done in `transformers`). | open | https://github.com/huggingface/datasets/issues/2478 | 2021-06-11T09:38:02 | 2023-07-20T13:22:23 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
918,334,431 | 2,477 | Fix docs custom stable version | Currently docs default version is 1.5.0. This PR fixes this and sets the latest version instead. | closed | https://github.com/huggingface/datasets/pull/2477 | 2021-06-11T07:26:03 | 2021-06-14T09:14:20 | 2021-06-14T08:20:18 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
917,686,662 | 2,476 | Add TimeDial | Dataset: https://github.com/google-research-datasets/TimeDial
To-Do: Update README.md and add YAML tags | closed | https://github.com/huggingface/datasets/pull/2476 | 2021-06-10T18:33:07 | 2021-07-30T12:57:54 | 2021-07-30T12:57:54 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [] | true | [] |
917,650,882 | 2,475 | Issue in timit_asr database | ## Describe the bug
I am trying to load the timit_asr dataset; however, only the first record is shown (duplicated across all the rows).
I am using the following line of code:
dataset = load_dataset("timit_asr", split="test").shuffle().select(range(10))
The above code results in the same sentence duplicated ten times.
It al... | closed | https://github.com/huggingface/datasets/issues/2475 | 2021-06-10T18:05:29 | 2021-06-13T08:13:50 | 2021-06-13T08:13:13 | {
"login": "hrahamim",
"id": 85702107,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
917,622,055 | 2,474 | cache_dir parameter for load_from_disk ? | **Is your feature request related to a problem? Please describe.**
When using Google Colab, big datasets can be an issue, as they won't fit on the VM's disk. Therefore, mounting Google Drive could be a possible solution. Unfortunately, when loading my own dataset by using the _load_from_disk_ function, the data gets cache... | closed | https://github.com/huggingface/datasets/issues/2474 | 2021-06-10T17:39:36 | 2022-02-16T14:55:01 | 2022-02-16T14:55:00 | {
"login": "chbensch",
"id": 7063207,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
917,538,629 | 2,473 | Add Disfl-QA | Dataset: https://github.com/google-research-datasets/disfl-qa
To-Do: Update README.md and add YAML tags | closed | https://github.com/huggingface/datasets/pull/2473 | 2021-06-10T16:18:00 | 2021-07-29T11:56:19 | 2021-07-29T11:56:18 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [] | true | [] |
917,463,821 | 2,472 | Fix automatic generation of Zenodo DOI | After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as "Received", instead of in green as "Published".
I have contacted Zenodo support to fix this issue.
TODO:
- [x] Check with Zenodo to fix the issue
- [x] Check BibTeX entry is right | closed | https://github.com/huggingface/datasets/issues/2472 | 2021-06-10T15:15:46 | 2021-06-14T16:49:42 | 2021-06-14T16:49:42 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
917,067,165 | 2,471 | Fix PermissionError on Windows when using tqdm >=4.50.0 | See: https://app.circleci.com/pipelines/github/huggingface/datasets/235/workflows/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369/jobs/1111
```
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process
``` | closed | https://github.com/huggingface/datasets/issues/2471 | 2021-06-10T08:31:49 | 2021-06-11T15:11:50 | 2021-06-11T15:11:50 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
916,724,260 | 2,470 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`. | ## Describe the bug
Crash when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 worked before, but now it seems either inconsistent or dependent on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any ti...
"login": "mbforbes",
"id": 1170062,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
916,440,418 | 2,469 | Bump tqdm version | closed | https://github.com/huggingface/datasets/pull/2469 | 2021-06-09T17:24:40 | 2021-06-11T15:03:42 | 2021-06-11T15:03:36 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | true | [] | |
916,427,320 | 2,468 | Implement ClassLabel encoding in JSON loader | Close #2365. | closed | https://github.com/huggingface/datasets/pull/2468 | 2021-06-09T17:08:54 | 2021-06-28T15:39:54 | 2021-06-28T15:05:35 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
915,914,098 | 2,466 | change udpos features structure | The structure is changed so that each example is a sentence.
The change is done for issues:
#2061
#2444
Close #2061 , close #2444. | closed | https://github.com/huggingface/datasets/pull/2466 | 2021-06-09T08:03:31 | 2021-06-18T11:55:09 | 2021-06-16T10:41:37 | {
"login": "cosmeowpawlitan",
"id": 50871412,
"type": "User"
} | [] | true | [] |
915,525,071 | 2,465 | adding masahaner dataset | Adding Masakhane dataset https://github.com/masakhane-io/masakhane-ner
@lhoestq , can you please review | closed | https://github.com/huggingface/datasets/pull/2465 | 2021-06-08T21:20:25 | 2021-06-14T14:59:05 | 2021-06-14T14:59:05 | {
"login": "dadelani",
"id": 23586676,
"type": "User"
} | [] | true | [] |
915,485,601 | 2,464 | fix: adjusting indexing for the labels. | The label indices were mismatching the actual ones used in the dataset. Specifically, `0` is used for `SUPPORTS` and `1` for `REFUTES`.
After this change, the `README.md` now reflects the content of `dataset_infos.json`.
Signed-off-by: Matteo Manica <drugilsberg@gmail.com> | closed | https://github.com/huggingface/datasets/pull/2464 | 2021-06-08T20:47:25 | 2021-06-09T10:15:46 | 2021-06-09T09:10:28 | {
"login": "drugilsberg",
"id": 5406908,
"type": "User"
} | [] | true | [] |
915,454,788 | 2,463 | Fix proto_qa download link | Fixes #2459
Instead of updating the path, this PR pins a commit hash as suggested by @lhoestq.
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
915,384,613 | 2,462 | Merge DatasetDict and Dataset | As discussed in #2424 and #2437 (please see there for detailed conversation):
- It would be desirable to improve UX with respect to the confusion between DatasetDict and Dataset.
- The difference between Dataset and DatasetDict is an additional abstraction complexity that confuses "typical" end users.
- A user expects... | open | https://github.com/huggingface/datasets/issues/2462 | 2021-06-08T19:22:04 | 2023-08-16T09:34:34 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "generic discussion",
"color": "c5def5"
}
] | false | [] |
915,286,150 | 2,461 | Support sliced list arrays in cast | There is this issue in pyarrow:
```python
import pyarrow as pa
arr = pa.array([[i * 10] for i in range(4)])
arr.cast(pa.list_(pa.int32())) # works
arr = arr.slice(1)
arr.cast(pa.list_(pa.int32())) # fails
# ArrowNotImplementedError("Casting sliced lists (non-zero offset) not yet implemented")
```
Howev... | closed | https://github.com/huggingface/datasets/pull/2461 | 2021-06-08T17:38:47 | 2021-06-08T17:56:24 | 2021-06-08T17:56:23 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
915,268,536 | 2,460 | Revert default in-memory for small datasets | Close #2458 | closed | https://github.com/huggingface/datasets/pull/2460 | 2021-06-08T17:14:23 | 2021-06-08T18:04:14 | 2021-06-08T17:55:43 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | true | [] |
915,222,015 | 2,459 | `Proto_qa` hosting seems to be broken | ## Describe the bug
The hosting (on GitHub) of the `proto_qa` dataset seems broken. I haven't investigated further yet, just flagging it for now.
@zaidalyafeai if you want to dive into it, I think it's just a matter of changing the links in `proto_qa.py`
## Steps to reproduce the bug
```python
from datasets impo... | closed | https://github.com/huggingface/datasets/issues/2459 | 2021-06-08T16:16:32 | 2021-06-10T08:31:09 | 2021-06-10T08:31:09 | {
"login": "VictorSanh",
"id": 16107619,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |