| id (int64) | number (int64) | title (string) | body (string) | state (string) | html_url (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | user (dict) | labels (list) | is_pull_request (bool) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
915,199,693 | 2,458 | Revert default in-memory for small datasets | Users are reporting issues and confusion about setting default in-memory to True for small datasets.
We see 2 clear use cases of Datasets:
- the "canonical" way, where you can work with very large datasets, as they are memory-mapped and cached (after every transformation)
- some edge cases (speed benchmarks, inter... | closed | https://github.com/huggingface/datasets/issues/2458 | 2021-06-08T15:51:41 | 2021-06-08T18:57:11 | 2021-06-08T17:55:43 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
915,079,441 | 2,457 | Add align_labels_with_mapping function | This PR adds a helper function to align the `label2id` mapping between a `datasets.Dataset` and a classifier (e.g. a transformer with a `PretrainedConfig.label2id` dict), with the alignment performed on the dataset itself.
This will help us with the Hub evaluation, where we won't know in advance whether a model that... | closed | https://github.com/huggingface/datasets/pull/2457 | 2021-06-08T13:54:00 | 2022-01-12T08:57:41 | 2021-06-17T09:56:52 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | true | [] |
914,709,293 | 2,456 | Fix cross-reference typos in documentation | Fix some minor typos in docs that prevent the creation of cross-reference links. | closed | https://github.com/huggingface/datasets/pull/2456 | 2021-06-08T09:45:14 | 2021-06-08T17:41:37 | 2021-06-08T17:41:36 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
914,177,468 | 2,455 | Update version in xor_tydi_qa.py | Fix #2449
@lhoestq Should I revert to the old `dummy/1.0.0` or delete it and keep only `dummy/1.1.0`? | closed | https://github.com/huggingface/datasets/pull/2455 | 2021-06-08T02:23:45 | 2021-06-14T15:35:25 | 2021-06-14T15:35:25 | {
"login": "changjonathanc",
"id": 31893406,
"type": "User"
} | [] | true | [] |
913,883,631 | 2,454 | Rename config and environment variable for in memory max size | As discussed in #2409, both config and environment variable have been renamed.
cc: @stas00, huggingface/transformers#12056 | closed | https://github.com/huggingface/datasets/pull/2454 | 2021-06-07T19:21:08 | 2021-06-07T20:43:46 | 2021-06-07T20:43:46 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
913,729,258 | 2,453 | Keep original features order | When loading a Dataset from a JSON file whose column names are not sorted alphabetically, we should get the same column name order, whether we pass features (in the same order as in the file) or not.
I found this issue while working on #2366. | closed | https://github.com/huggingface/datasets/pull/2453 | 2021-06-07T16:26:38 | 2021-06-15T18:05:36 | 2021-06-15T15:43:48 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
913,603,877 | 2,452 | MRPC test set differences between torch and tensorflow datasets | ## Describe the bug
When using `load_dataset("glue", "mrpc")` to load the MRPC dataset, the test set includes the labels. When using `tensorflow_datasets.load('glue/{}'.format('mrpc'))` to load the dataset the test set does not contain the labels. There should be consistency between torch and tensorflow ways of import... | closed | https://github.com/huggingface/datasets/issues/2452 | 2021-06-07T14:20:26 | 2021-06-07T14:34:32 | 2021-06-07T14:34:32 | {
"login": "FredericOdermatt",
"id": 50372080,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
913,263,340 | 2,451 | Mention that there are no answers in adversarial_qa test set | As mentioned in issue https://github.com/huggingface/datasets/issues/2447, there are no answers in the test set | closed | https://github.com/huggingface/datasets/pull/2451 | 2021-06-07T08:13:57 | 2021-06-07T08:34:14 | 2021-06-07T08:34:13 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
912,890,291 | 2,450 | BLUE file not found | Hi, I'm having the following issue when I try to load the `blue` metric.
```shell
import datasets
metric = datasets.load_metric('blue')
Traceback (most recent call last):
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 320, in prepare_module
local... | closed | https://github.com/huggingface/datasets/issues/2450 | 2021-06-06T17:01:54 | 2021-06-07T10:46:15 | 2021-06-07T10:46:15 | {
"login": "mirfan899",
"id": 3822565,
"type": "User"
} | [] | false | [] |
912,751,752 | 2,449 | Update `xor_tydi_qa` url to v1.1 | The dataset has been updated and the old URL no longer works, so I updated it.
I faced a bug while trying to fix this. Documenting the solution here. Maybe we can add it to the doc (`CONTRIBUTING.md` and `ADD_NEW_DATASET.md`).
> And to make the command work without the ExpectedMoreDownloadedFiles error, you just need to ... | closed | https://github.com/huggingface/datasets/pull/2449 | 2021-06-06T09:44:58 | 2021-06-07T15:16:21 | 2021-06-07T08:31:04 | {
"login": "changjonathanc",
"id": 31893406,
"type": "User"
} | [] | true | [] |
912,360,109 | 2,448 | Fix flores download link | closed | https://github.com/huggingface/datasets/pull/2448 | 2021-06-05T17:30:24 | 2021-06-08T20:02:58 | 2021-06-07T08:18:25 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] | |
912,299,527 | 2,447 | dataset adversarial_qa has no answers in the "test" set | ## Describe the bug
When loading the adversarial_qa dataset the 'test' portion has no answers. Only the 'train' and 'validation' portions do. This occurs with all four of the configs ('adversarialQA', 'dbidaf', 'dbert', 'droberta')
## Steps to reproduce the bug
```
from datasets import load_dataset
examples ... | closed | https://github.com/huggingface/datasets/issues/2447 | 2021-06-05T14:57:38 | 2021-06-07T11:13:07 | 2021-06-07T11:13:07 | {
"login": "bjascob",
"id": 22728060,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
911,635,399 | 2,446 | `yelp_polarity` is broken | 
| closed | https://github.com/huggingface/datasets/issues/2446 | 2021-06-04T15:44:29 | 2021-06-04T18:56:47 | 2021-06-04T18:56:47 | {
"login": "JetRunner",
"id": 22514219,
"type": "User"
} | [] | false | [] |
911,577,578 | 2,445 | Fix broken URLs for bn_hate_speech and covid_tweets_japanese | Closes #2388 | closed | https://github.com/huggingface/datasets/pull/2445 | 2021-06-04T14:53:35 | 2021-06-04T17:39:46 | 2021-06-04T17:39:45 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | true | [] |
911,297,139 | 2,444 | Sentence Boundaries missing in Dataset: xtreme / udpos | I was browsing through annotation guidelines, as suggested by the datasets introduction.
The guidelines say "There must be exactly one blank line after every sentence, including the last sentence in the file. Empty sentences are not allowed." in the [Sentence Boundaries and Comments section](https://universaldepend... | closed | https://github.com/huggingface/datasets/issues/2444 | 2021-06-04T09:10:26 | 2021-06-18T11:53:43 | 2021-06-18T11:53:43 | {
"login": "cosmeowpawlitan",
"id": 50871412,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
909,983,574 | 2,443 | Some tests hang on Windows | Currently, several tests hang on Windows if the max path limit of 260 characters is not disabled. This happens due to the changes introduced by #2223 that cause an infinite loop in `WindowsFileLock` described in #2220. This can be very tricky to debug, so I think now is a good time to address these issues/PRs. IMO thr... | closed | https://github.com/huggingface/datasets/issues/2443 | 2021-06-03T00:27:30 | 2021-06-28T08:47:39 | 2021-06-28T08:47:39 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
909,677,029 | 2,442 | add english language tags for ~100 datasets | As discussed on Slack, I have manually checked for ~100 datasets that they have at least one subset in English. This information was missing, so I'm adding it to the READMEs.
Note that I didn't check all the subsets so it's possible that some of the datasets have subsets in other languages than English... | closed | https://github.com/huggingface/datasets/pull/2442 | 2021-06-02T16:24:56 | 2021-06-04T09:51:40 | 2021-06-04T09:51:39 | {
"login": "VictorSanh",
"id": 16107619,
"type": "User"
} | [] | true | [] |
908,554,713 | 2,441 | DuplicatedKeysError on personal dataset | ## Describe the bug
Ever since today, I have been getting a DuplicatedKeysError while trying to load my dataset from my own script.
Error returned when running this line: `dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_trimmed_and_nered/goodreadsnered.py')`
Note ... | closed | https://github.com/huggingface/datasets/issues/2441 | 2021-06-01T17:59:41 | 2021-06-04T23:50:03 | 2021-06-04T23:50:03 | {
"login": "lucaguarro",
"id": 22605313,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
908,521,954 | 2,440 | Remove `extended` field from dataset tagger | ## Describe the bug
While working on #2435 I used the [dataset tagger](https://huggingface.co/datasets/tagging/) to generate the missing tags for the YAML metadata of each README.md file. However, it seems that our CI raises an error when the `extended` field is included:
```
dataset_name = 'arcd'
@pytest.m... | closed | https://github.com/huggingface/datasets/issues/2440 | 2021-06-01T17:18:42 | 2021-06-09T09:06:31 | 2021-06-09T09:06:30 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
908,511,983 | 2,439 | Better error message when trying to access elements of a DatasetDict without specifying the split | As mentioned in #2437 it'd be nice to have an indication for users when they try to access an element of a DatasetDict without specifying the split name.
cc @thomwolf | closed | https://github.com/huggingface/datasets/pull/2439 | 2021-06-01T17:04:32 | 2021-06-15T16:03:23 | 2021-06-07T08:54:35 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
908,461,914 | 2,438 | Fix NQ features loading: reorder fields of features to match nested fields order in arrow data | As mentioned in #2401, there is an issue when loading the features of `natural_questions` since the order of the nested fields in the features doesn't match. The order is important since it matters for the underlying arrow schema.
To fix that I re-order the features based on the arrow schema:
```python
inferred_fe... | closed | https://github.com/huggingface/datasets/pull/2438 | 2021-06-01T16:09:30 | 2021-06-04T09:02:31 | 2021-06-04T09:02:31 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
908,108,882 | 2,437 | Better error message when using the wrong load_from_disk | As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one. | closed | https://github.com/huggingface/datasets/pull/2437 | 2021-06-01T09:43:22 | 2021-06-08T18:03:50 | 2021-06-08T18:03:50 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
908,100,211 | 2,436 | Update DatasetMetadata and ReadMe | This PR contains the changes discussed in #2395.
**Edit**:
In addition to those changes, I'll be updating the `ReadMe` as follows:
Currently, `Section` has separate parsing and validation error lists. In `.validate()`, we add these lists to the final lists and throw errors.
One way to make `ReadMe` consistent... | closed | https://github.com/huggingface/datasets/pull/2436 | 2021-06-01T09:32:37 | 2021-06-14T13:23:27 | 2021-06-14T13:23:26 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | true | [] |
907,505,531 | 2,435 | Insert Extractive QA templates for SQuAD-like datasets | This PR adds task templates for 9 SQuAD-like datasets with the following properties:
* 1 config
* A schema that matches the `squad` one (i.e. same column names, especially for the nested `answers` column because the current implementation does not support casting with mismatched columns. see #2434)
* Less than 20... | closed | https://github.com/huggingface/datasets/pull/2435 | 2021-05-31T14:09:11 | 2021-06-03T14:34:30 | 2021-06-03T14:32:27 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | true | [] |
907,503,557 | 2,434 | Extend QuestionAnsweringExtractive template to handle nested columns | Currently the `QuestionAnsweringExtractive` task template and `prepare_for_task` only support "flat" features. We should extend the functionality to cover QA datasets like:
* `iapp_wiki_qa_squad`
* `parsinlu_reading_comprehension`
where the nested features differ with those from `squad` and trigger an `ArrowNot... | closed | https://github.com/huggingface/datasets/issues/2434 | 2021-05-31T14:06:51 | 2022-10-05T17:06:28 | 2022-10-05T17:06:28 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
907,488,711 | 2,433 | Fix DuplicatedKeysError in adversarial_qa | Fixes #2431 | closed | https://github.com/huggingface/datasets/pull/2433 | 2021-05-31T13:48:47 | 2021-06-01T08:52:11 | 2021-06-01T08:52:11 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
907,462,881 | 2,432 | Fix CI six installation on linux | For some reason we end up with this error in the Linux CI when running `pip install .[tests]`
```
pip._vendor.resolvelib.resolvers.InconsistentCandidate: Provided candidate AlreadyInstalledCandidate(six 1.16.0 (/usr/local/lib/python3.6/site-packages)) does not satisfy SpecifierRequirement('six>1.9'), SpecifierRequireme... | closed | https://github.com/huggingface/datasets/pull/2432 | 2021-05-31T13:15:36 | 2021-05-31T13:17:07 | 2021-05-31T13:17:06 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
907,413,691 | 2,431 | DuplicatedKeysError when trying to load adversarial_qa | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
dataset = load_dataset('adversarial_qa', 'adversarialQA')
```
## Expected results
The dataset should be loaded into memory
## Actual results
>DuplicatedKeysError: FAILURE TO GENERATE DATASET ... | closed | https://github.com/huggingface/datasets/issues/2431 | 2021-05-31T12:11:19 | 2021-06-01T08:54:03 | 2021-06-01T08:52:11 | {
"login": "hanss0n",
"id": 21348833,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
907,322,595 | 2,430 | Add version-specific BibTeX | As pointed out by @lhoestq in #2411, after the creation of the Zenodo DOI for Datasets, a new BibTeX entry is created with each release.
This PR adds a version-specific BibTeX entry, besides the existing one which is generic for the project.
See version-specific BibTeX entry here: https://zenodo.org/record/481776... | closed | https://github.com/huggingface/datasets/pull/2430 | 2021-05-31T10:05:42 | 2021-06-08T07:53:22 | 2021-06-08T07:53:22 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
907,321,665 | 2,429 | Rename QuestionAnswering template to QuestionAnsweringExtractive | Following the discussion with @thomwolf in #2255, this PR renames the QA template to distinguish extractive vs abstractive QA. The abstractive template will be added in a future PR. | closed | https://github.com/huggingface/datasets/pull/2429 | 2021-05-31T10:04:42 | 2021-05-31T15:57:26 | 2021-05-31T15:57:24 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | true | [] |
907,169,746 | 2,428 | Add copyright info for wiki_lingua dataset | closed | https://github.com/huggingface/datasets/pull/2428 | 2021-05-31T07:22:52 | 2021-06-04T10:22:33 | 2021-06-04T10:22:33 | {
"login": "PhilipMay",
"id": 229382,
"type": "User"
} | [] | true | [] | |
907,162,923 | 2,427 | Add copyright info to MLSUM dataset | closed | https://github.com/huggingface/datasets/pull/2427 | 2021-05-31T07:15:57 | 2021-06-04T09:53:50 | 2021-06-04T09:53:50 | {
"login": "PhilipMay",
"id": 229382,
"type": "User"
} | [] | true | [] | |
906,473,546 | 2,426 | Saving Graph/Structured Data in Datasets | Thanks for this amazing library! My question: I have structured data that is organized as a graph. For example, a dataset with users' friendship relations and users' articles. When I try to save a Python dict in the dataset, an error occurs: ``did not recognize Python value type when inferring an Arrow data ty... | closed | https://github.com/huggingface/datasets/issues/2426 | 2021-05-29T13:35:21 | 2021-06-02T01:21:03 | 2021-06-02T01:21:03 | {
"login": "gsh199449",
"id": 3295342,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
906,385,457 | 2,425 | Fix Docstring Mistake: dataset vs. metric | PR to fix #2412 | closed | https://github.com/huggingface/datasets/pull/2425 | 2021-05-29T06:09:53 | 2021-06-01T08:18:04 | 2021-06-01T08:18:04 | {
"login": "PhilipMay",
"id": 229382,
"type": "User"
} | [] | true | [] |
906,193,679 | 2,424 | load_from_disk and save_to_disk are not compatible with each other | ## Describe the bug
load_from_disk and save_to_disk are not compatible. When I use save_to_disk to save a dataset to disk it works perfectly, but given the same directory, load_from_disk throws an error that it can't find state.json. It looks like load_from_disk only works on one split
## Steps to reproduce the bug
... | closed | https://github.com/huggingface/datasets/issues/2424 | 2021-05-28T23:07:10 | 2021-06-08T19:22:32 | 2021-06-08T19:22:32 | {
"login": "roholazandie",
"id": 7584674,
"type": "User"
} | [] | false | [] |
905,935,753 | 2,423 | add `desc` in `map` for `DatasetDict` object | `desc` in `map` currently only works with `Dataset` objects. This PR adds support for `DatasetDict` objects as well | closed | https://github.com/huggingface/datasets/pull/2423 | 2021-05-28T19:28:44 | 2021-05-31T14:51:23 | 2021-05-31T13:08:04 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [] | true | [] |
905,568,548 | 2,422 | Fix save_to_disk nested features order in dataset_info.json | Fix issue https://github.com/huggingface/datasets/issues/2267
The order of the nested features matters (pyarrow limitation), but the save_to_disk method was saving the features types as JSON with `sort_keys=True`, which was breaking the order of the nested features. | closed | https://github.com/huggingface/datasets/pull/2422 | 2021-05-28T15:03:28 | 2021-05-28T15:26:57 | 2021-05-28T15:26:56 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
905,549,756 | 2,421 | doc: fix typo HF_MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | MAX_MEMORY_DATASET_SIZE_IN_BYTES should be HF_MAX_MEMORY_DATASET_SIZE_IN_BYTES | closed | https://github.com/huggingface/datasets/pull/2421 | 2021-05-28T14:52:10 | 2021-06-04T09:52:45 | 2021-06-04T09:52:45 | {
"login": "borisdayma",
"id": 715491,
"type": "User"
} | [] | true | [] |
904,821,772 | 2,420 | Updated Dataset Description | Added Point of contact information and several other details about the dataset. | closed | https://github.com/huggingface/datasets/pull/2420 | 2021-05-28T07:10:51 | 2021-06-10T12:11:35 | 2021-06-10T12:11:35 | {
"login": "binny-mathew",
"id": 10741860,
"type": "User"
} | [] | true | [] |
904,347,339 | 2,419 | adds license information for DailyDialog. | closed | https://github.com/huggingface/datasets/pull/2419 | 2021-05-27T23:03:42 | 2021-05-31T13:16:52 | 2021-05-31T13:16:52 | {
"login": "aditya2211",
"id": 11574558,
"type": "User"
} | [] | true | [] | |
904,051,497 | 2,418 | add utf-8 while reading README | It was causing tests to fail in Windows (see #2416). In Windows, the default encoding is CP1252 which is unable to decode the character byte 0x9d | closed | https://github.com/huggingface/datasets/pull/2418 | 2021-05-27T18:12:28 | 2021-06-04T09:55:01 | 2021-06-04T09:55:00 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [] | true | [] |
903,956,071 | 2,417 | Make datasets PEP-561 compliant | Allows to type-check datasets with `mypy` when imported as a third-party library
PEP-561: https://www.python.org/dev/peps/pep-0561
MyPy doc on the subject: https://mypy.readthedocs.io/en/stable/installed_packages.html
| closed | https://github.com/huggingface/datasets/pull/2417 | 2021-05-27T16:16:17 | 2021-05-28T13:10:10 | 2021-05-28T13:09:16 | {
"login": "SBrandeis",
"id": 33657802,
"type": "User"
} | [] | true | [] |
903,932,299 | 2,416 | Add KLUE dataset | Add `KLUE (Korean Language Understanding Evaluation)` dataset released recently from [paper](https://arxiv.org/abs/2105.09680), [github](https://github.com/KLUE-benchmark/KLUE) and [webpage](https://klue-benchmark.com/tasks).
Please let me know if there's anything missing in the code or README.
Thanks!
| closed | https://github.com/huggingface/datasets/pull/2416 | 2021-05-27T15:49:51 | 2021-06-09T15:00:02 | 2021-06-04T17:45:15 | {
"login": "jungwhank",
"id": 53588015,
"type": "User"
} | [] | true | [] |
903,923,097 | 2,415 | Cached dataset not loaded | ## Describe the bug
I have a large dataset (common_voice, english) where I use several map and filter functions.
Sometimes my cached datasets after specific functions are not loaded.
I always use the same arguments, same functions, no seed…
## Steps to reproduce the bug
```python
def filter_by_duration(batch):
... | closed | https://github.com/huggingface/datasets/issues/2415 | 2021-05-27T15:40:06 | 2021-06-02T13:15:47 | 2021-06-02T13:15:47 | {
"login": "borisdayma",
"id": 715491,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
903,877,096 | 2,414 | Update README.md | Provides description of data instances and dataset features
| closed | https://github.com/huggingface/datasets/pull/2414 | 2021-05-27T14:53:19 | 2021-06-28T13:46:14 | 2021-06-28T13:04:56 | {
"login": "cryoff",
"id": 15029054,
"type": "User"
} | [] | true | [] |
903,777,557 | 2,413 | AttributeError: 'DatasetInfo' object has no attribute 'task_templates' | ## Describe the bug
Hello,
I'm trying to add a dataset and contribute, but the test keeps failing with the CLI below.
` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<my_dataset>`
## Steps to reproduce the bug
It seems like a bug when I see an error with the existing dataset,... | closed | https://github.com/huggingface/datasets/issues/2413 | 2021-05-27T13:44:28 | 2021-06-01T01:05:47 | 2021-06-01T01:05:47 | {
"login": "jungwhank",
"id": 53588015,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
903,769,151 | 2,412 | Docstring mistake: dataset vs. metric | This:
https://github.com/huggingface/datasets/blob/d95b95f8cf3cb0cff5f77a675139b584dcfcf719/src/datasets/load.py#L582
It should instead be something like:
`a metric identifier on HuggingFace AWS bucket (list all available metrics and ids with ``datasets.list_metrics()``)`
I can provide a PR l8er... | closed | https://github.com/huggingface/datasets/issues/2412 | 2021-05-27T13:39:11 | 2021-06-01T08:18:04 | 2021-06-01T08:18:04 | {
"login": "PhilipMay",
"id": 229382,
"type": "User"
} | [] | false | [] |
903,671,778 | 2,411 | Add DOI badge to README | Once published the latest release, the DOI badge has been automatically generated by Zenodo. | closed | https://github.com/huggingface/datasets/pull/2411 | 2021-05-27T12:36:47 | 2021-05-27T13:42:54 | 2021-05-27T13:42:54 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
903,613,676 | 2,410 | fix #2391 add original answers in kilt-TriviaQA | cc @yjernite is it ok like this? | closed | https://github.com/huggingface/datasets/pull/2410 | 2021-05-27T11:54:29 | 2021-06-15T12:35:57 | 2021-06-14T17:29:10 | {
"login": "PaulLerner",
"id": 25532159,
"type": "User"
} | [] | true | [] |
903,441,398 | 2,409 | Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_ | closed | https://github.com/huggingface/datasets/pull/2409 | 2021-05-27T09:07:00 | 2021-06-08T16:00:55 | 2021-05-27T09:33:41 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
903,422,648 | 2,408 | Fix head_qa keys | There were duplicate in the keys, as mentioned in #2382 | closed | https://github.com/huggingface/datasets/pull/2408 | 2021-05-27T08:50:19 | 2021-05-27T09:05:37 | 2021-05-27T09:05:36 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
903,111,755 | 2,407 | .map() function got an unexpected keyword argument 'cache_file_name' | ## Describe the bug
I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map() function, it throws an error that ".map() function got an unexpected... | closed | https://github.com/huggingface/datasets/issues/2407 | 2021-05-27T01:54:26 | 2021-05-27T13:46:40 | 2021-05-27T13:46:40 | {
"login": "cindyxinyiwang",
"id": 7390482,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
902,643,844 | 2,406 | Add guide on using task templates to documentation | Once we have a stable API on the text classification and question answering task templates, add a guide on how to use them in the documentation.
| closed | https://github.com/huggingface/datasets/issues/2406 | 2021-05-26T16:28:26 | 2022-10-05T17:07:00 | 2022-10-05T17:07:00 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
901,227,658 | 2,405 | Add dataset tags | The dataset tags were provided by Peter Clark following the guide. | closed | https://github.com/huggingface/datasets/pull/2405 | 2021-05-25T18:57:29 | 2021-05-26T16:54:16 | 2021-05-26T16:40:07 | {
"login": "OyvindTafjord",
"id": 6453366,
"type": "User"
} | [] | true | [] |
901,179,832 | 2,404 | Paperswithcode dataset mapping | This is a continuation of https://github.com/huggingface/huggingface_hub/pull/43, encoded directly inside dataset cards.
As discussed:
- `paperswithcode_id: null` when the dataset doesn't exist on paperswithcode's side.
- I've added this new key at the end of the yaml instead of ordering all keys alphabetically as... | closed | https://github.com/huggingface/datasets/pull/2404 | 2021-05-25T18:14:26 | 2021-05-26T11:21:25 | 2021-05-26T11:17:18 | {
"login": "julien-c",
"id": 326577,
"type": "User"
} | [] | true | [] |
900,059,014 | 2,403 | Free datasets with cache file in temp dir on exit | This PR properly cleans up the memory-mapped tables that reference the cache files inside the temp dir.
Since the built-in `_finalizer` of `TemporaryDirectory` can't be modified, this PR defines its own `TemporaryDirectory` class that accepts a custom clean-up function.
Fixes #2402 | closed | https://github.com/huggingface/datasets/pull/2403 | 2021-05-24T22:15:11 | 2021-05-26T17:25:19 | 2021-05-26T16:39:29 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
900,025,329 | 2,402 | PermissionError on Windows when using temp dir for caching | Currently, the following code raises a PermissionError on master if working on Windows:
```python
# run as a script or call exit() in REPL to initiate the temp dir cleanup
from datasets import *
d = load_dataset("sst", split="train", keep_in_memory=False)
set_caching_enabled(False)
d.map(lambda ex: ex)
```
... | closed | https://github.com/huggingface/datasets/issues/2402 | 2021-05-24T21:22:59 | 2021-05-26T16:39:29 | 2021-05-26T16:39:29 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
899,910,521 | 2,401 | load_dataset('natural_questions') fails with "ValueError: External features info don't match the dataset" | ## Describe the bug
load_dataset('natural_questions') throws ValueError
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset('natural_questions', split='validation[:10]')
```
## Expected results
Call to load_dataset returns data.
## Actual results
```
Using ... | closed | https://github.com/huggingface/datasets/issues/2401 | 2021-05-24T18:38:53 | 2021-06-09T09:07:25 | 2021-06-09T09:07:25 | {
"login": "jonrbates",
"id": 15602718,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
899,867,212 | 2,400 | Concatenate several datasets with removed columns is not working. | ## Describe the bug
You can't concatenate datasets when you removed columns before.
## Steps to reproduce the bug
```python
from datasets import load_dataset, concatenate_datasets
wikiann= load_dataset("wikiann","en")
wikiann["train"] = wikiann["train"].remove_columns(["langs","spans"])
wikiann["test"] =... | closed | https://github.com/huggingface/datasets/issues/2400 | 2021-05-24T17:40:15 | 2021-05-25T05:52:01 | 2021-05-25T05:51:59 | {
"login": "philschmid",
"id": 32632186,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
899,853,610 | 2,399 | Add env variable for MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | Add env variable for `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES`.
This will allow to turn off default behavior: loading in memory (and not caching) small datasets.
Fix #2387. | closed | https://github.com/huggingface/datasets/pull/2399 | 2021-05-24T17:19:15 | 2021-05-27T09:07:15 | 2021-05-26T16:07:54 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
899,511,837 | 2,398 | News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs | I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.
```
train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]')
# filtering out examples that a... | closed | https://github.com/huggingface/datasets/issues/2398 | 2021-05-24T10:03:34 | 2022-10-05T17:13:49 | 2022-10-05T17:13:49 | {
"login": "anassalamah",
"id": 8571003,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
899,427,378 | 2,397 | Fix number of classes in indic_glue sna.bn dataset | As read in the [paper](https://www.aclweb.org/anthology/2020.findings-emnlp.445.pdf), Table 11. | closed | https://github.com/huggingface/datasets/pull/2397 | 2021-05-24T08:18:55 | 2021-05-25T16:32:16 | 2021-05-25T16:32:16 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
899,016,308 | 2,396 | strange datasets from OSCAR corpus | 

From the [official site ](https://oscar-corpus.com/), the Yue Chinese dataset should have 2.2K... | open | https://github.com/huggingface/datasets/issues/2396 | 2021-05-23T13:06:02 | 2021-06-17T13:54:37 | null | {
"login": "cosmeowpawlitan",
"id": 50871412,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
898,762,730 | 2,395 | `pretty_name` for dataset in YAML tags | I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.
If dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in t... | closed | https://github.com/huggingface/datasets/pull/2395 | 2021-05-22T09:24:45 | 2022-09-23T13:29:14 | 2022-09-23T13:29:13 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [
{
"name": "dataset contribution",
"color": "0e8a16"
}
] | true | [] |
898,156,795 | 2,392 | Update text classification template labels in DatasetInfo __post_init__ | This PR implements the idea discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`. The main reason for doing so is to avoid duplicating the label definitions in both `DatasetInfo.features` and `DatasetInfo.task_templates`.
To avoid storing state in `Dataset... | closed | https://github.com/huggingface/datasets/pull/2392 | 2021-05-21T15:29:41 | 2021-05-28T11:37:35 | 2021-05-28T11:37:32 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | true | [] |
898,128,099 | 2,391 | Missing original answers in kilt-TriviaQA | I previously opened an issue at https://github.com/facebookresearch/KILT/issues/42 but from the answer of @fabiopetroni it seems that the problem comes from HF-datasets
## Describe the bug
The `answer` field in kilt-TriviaQA, e.g. `kilt_tasks['train_triviaqa'][0]['output']['answer']` contains a list of alternative ... | closed | https://github.com/huggingface/datasets/issues/2391 | 2021-05-21T14:57:07 | 2021-06-14T17:29:11 | 2021-06-14T17:29:11 | {
"login": "PaulLerner",
"id": 25532159,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
897,903,642 | 2,390 | Add check for task templates on dataset load | This PR adds a check that the features of a dataset match the schema of each compatible task template. | closed | https://github.com/huggingface/datasets/pull/2390 | 2021-05-21T10:16:57 | 2021-05-21T15:49:09 | 2021-05-21T15:49:06 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | true | [] |
897,822,270 | 2,389 | Insert task templates for text classification | This PR inserts text-classification templates for datasets with the following properties:
* Only one config
* At most two features of `(Value, ClassLabel)` type
Note that this misses datasets like `sentiment140` which only has `Value` type features - these will be handled in a separate PR | closed | https://github.com/huggingface/datasets/pull/2389 | 2021-05-21T08:36:26 | 2021-05-28T15:28:58 | 2021-05-28T15:26:28 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | true | [] |
897,767,470 | 2,388 | Incorrect URLs for some datasets | ## Describe the bug
It seems that the URLs for the following datasets are invalid:
- [ ] `bn_hate_speech` has been renamed: https://github.com/rezacsedu/Bengali-Hate-Speech-Dataset/commit/c67ecfc4184911e12814f6b36901f9828df8a63a
- [ ] `covid_tweets_japanese` has been renamed: http://www.db.info.gifu-u.ac.jp/covi... | closed | https://github.com/huggingface/datasets/issues/2388 | 2021-05-21T07:22:35 | 2021-06-04T17:39:45 | 2021-06-04T17:39:45 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
897,566,666 | 2,387 | datasets 1.6 ignores cache | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/... | closed | https://github.com/huggingface/datasets/issues/2387 | 2021-05-21T00:12:58 | 2021-05-26T16:07:54 | 2021-05-26T16:07:54 | {
"login": "stas00",
"id": 10676103,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
897,560,049 | 2,386 | Accessing Arrow dataset cache_files | ## Describe the bug
In datasets 1.5.0 the following code snippet would have printed the cache_files:
```
train_data = load_dataset('conll2003', split='train', cache_dir='data')
print(train_data.cache_files[0]['filename'])
```
However, in the newest release (1.6.1), it prints an empty list.
I also tried l... | closed | https://github.com/huggingface/datasets/issues/2386 | 2021-05-20T23:57:43 | 2021-05-21T19:18:03 | 2021-05-21T19:18:03 | {
"login": "Mehrad0711",
"id": 28717374,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
897,206,823 | 2,385 | update citations | To update citations for [Offenseval_dravidiain](https://huggingface.co/datasets/offenseval_dravidian)
| closed | https://github.com/huggingface/datasets/pull/2385 | 2021-05-20T17:54:08 | 2021-05-21T12:38:18 | 2021-05-21T12:38:18 | {
"login": "adeepH",
"id": 46108405,
"type": "User"
} | [] | true | [] |
896,866,461 | 2,384 | Add args description to DatasetInfo | Closes #2354
I am not sure what `post_processed` and `post_processing_size` correspond to, so have left them empty for now. I also took a guess at some of the other fields like `dataset_size` vs `size_in_bytes`, so might have misunderstood their meaning. | closed | https://github.com/huggingface/datasets/pull/2384 | 2021-05-20T13:53:10 | 2021-05-22T09:26:16 | 2021-05-22T09:26:14 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | true | [] |
895,779,723 | 2,383 | Improve example in rounding docs | Improves the example in the rounding subsection of the Split API docs. With this change, it should be clearer what the difference is between the `closest` and the `pct1_dropremainder` rounding. | closed | https://github.com/huggingface/datasets/pull/2383 | 2021-05-19T18:59:23 | 2021-05-21T12:53:22 | 2021-05-21T12:36:29 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
895,610,216 | 2,382 | DuplicatedKeysError: FAILURE TO GENERATE DATASET ! load_dataset('head_qa', 'en') | Hello everyone,
I'm trying to use the head_qa dataset from [https://huggingface.co/datasets/viewer/?dataset=head_qa&config=en](url)
```
!pip install datasets
from datasets import load_dataset
dataset = load_dataset(
'head_qa', 'en')
```
When I run the load_dataset(...) call above, it throws the following:
```
Duplicated... | closed | https://github.com/huggingface/datasets/issues/2382 | 2021-05-19T15:49:48 | 2021-05-30T13:26:16 | 2021-05-30T13:26:16 | {
"login": "helloworld123-lab",
"id": 75953751,
"type": "User"
} | [] | false | [] |
895,588,844 | 2,381 | add dataset card title | A few of these were missed by me earlier; I've added them now | closed | https://github.com/huggingface/datasets/pull/2381 | 2021-05-19T15:30:03 | 2021-05-20T18:51:40 | 2021-05-20T18:51:40 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [] | true | [] |
895,367,201 | 2,380 | maintain YAML structure reading from README | How YAML used to be loaded earlier as a string (the YAML structure was affected because of this, and YAML for datasets with multiple configs was not being loaded correctly):
```
annotations_creators:
labeled_final:
- expert-generated
labeled_swap:
- expert-generated
unlabeled_final:
- machine-generated
language_c... | closed | https://github.com/huggingface/datasets/pull/2380 | 2021-05-19T12:12:07 | 2021-05-19T13:08:38 | 2021-05-19T13:08:38 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [] | true | [] |
895,252,597 | 2,379 | Disallow duplicate keys in yaml tags | Make sure that there are no duplicate keys in yaml tags.
I added the check in the yaml tree constructor's method, so that the verification is done at every level in the yaml structure.
cc @julien-c | closed | https://github.com/huggingface/datasets/pull/2379 | 2021-05-19T10:10:07 | 2021-05-19T10:45:32 | 2021-05-19T10:45:31 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
895,131,774 | 2,378 | Add missing dataset_infos.json files | Some of the datasets in `datasets` are missing a `dataset_infos.json` file, e.g.
```
[PosixPath('datasets/chr_en/chr_en.py'), PosixPath('datasets/chr_en/README.md')]
[PosixPath('datasets/telugu_books/README.md'), PosixPath('datasets/telugu_books/telugu_books.py')]
[PosixPath('datasets/reclor/README.md'), PosixPat... | open | https://github.com/huggingface/datasets/issues/2378 | 2021-05-19T08:11:12 | 2021-05-19T08:11:12 | null | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
894,918,927 | 2,377 | ArrowDataset.save_to_disk produces files that cannot be read using pyarrow.feather | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from pyarrow import feather
dataset = load_dataset('imdb', split='train')
dataset.save_to_disk('dataset_dir')
table = feather.read_table('dataset_dir/dataset.arro... | open | https://github.com/huggingface/datasets/issues/2377 | 2021-05-19T02:04:37 | 2024-01-18T08:06:15 | null | {
"login": "Ark-kun",
"id": 1829149,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
894,852,264 | 2,376 | Improve task api code quality | Improves the code quality of the `TaskTemplate` dataclasses.
Changes:
* replaces `return NotImplemented` with raise `NotImplementedError`
* replaces `sorted` with `len` in the uniqueness check
* defines `label2id` and `id2label` in the `TextClassification` template as properties
* replaces the `object.__setatt... | closed | https://github.com/huggingface/datasets/pull/2376 | 2021-05-18T23:13:40 | 2021-06-02T20:39:57 | 2021-05-25T15:30:54 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
894,655,157 | 2,375 | Dataset Streaming | # Dataset Streaming
## API
Current API is
```python
from datasets import load_dataset
# Load an IterableDataset without downloading data
snli = load_dataset("snli", streaming=True)
# Access examples by streaming data
print(next(iter(snli["train"])))
# {'premise': 'A person on a horse jumps over a br... | closed | https://github.com/huggingface/datasets/pull/2375 | 2021-05-18T18:20:00 | 2021-06-23T16:35:02 | 2021-06-23T16:35:01 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
894,579,364 | 2,374 | add `desc` to `tqdm` in `Dataset.map()` | Fixes #2330. Please let me know if anything is also required in this | closed | https://github.com/huggingface/datasets/pull/2374 | 2021-05-18T16:44:29 | 2021-05-27T15:44:04 | 2021-05-26T14:59:21 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [] | true | [] |
894,499,909 | 2,373 | Loading dataset from local path | I'm trying to load a local dataset with the code below
```
ds = datasets.load_dataset('my_script.py',
data_files='corpus.txt',
data_dir='/data/dir',
cache_dir='.')
```
But internally a BuilderConfig is created, which tries to u... | closed | https://github.com/huggingface/datasets/issues/2373 | 2021-05-18T15:20:50 | 2021-05-18T15:36:36 | 2021-05-18T15:36:35 | {
"login": "kolakows",
"id": 34172905,
"type": "User"
} | [] | false | [] |
894,496,064 | 2,372 | ConvQuestions benchmark added | Hello,
I would like to integrate our dataset on conversational QA. The answers are grounded in the KG.
The work was published in CIKM 2019 (https://dl.acm.org/doi/10.1145/3357384.3358016).
We hope for further research on how to deal with the challenges of factoid conversational QA.
Thanks! :) | closed | https://github.com/huggingface/datasets/pull/2372 | 2021-05-18T15:16:50 | 2021-05-26T10:31:45 | 2021-05-26T10:31:45 | {
"login": "PhilippChr",
"id": 24608689,
"type": "User"
} | [] | true | [] |
894,193,403 | 2,371 | Align question answering tasks with sub-domains | As pointed out by @thomwolf in #2255 we should consider breaking with the pipeline taxonomy of `transformers` to account for the various types of question-answering domains:
> `question-answering` exists in two forms: abstractive and extractive question answering.
>
> we can keep a generic `question-answering` bu... | closed | https://github.com/huggingface/datasets/issues/2371 | 2021-05-18T09:47:59 | 2023-07-25T16:52:05 | 2023-07-25T16:52:04 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
893,606,432 | 2,370 | Adding HendrycksTest dataset | Adding Hendrycks test from https://arxiv.org/abs/2009.03300.
I'm having a bit of trouble with dummy data creation because some lines in the csv files aren't being loaded properly (only the first entry loaded in a row of length 6). The dataset is loading just fine. Hope you can kindly help!
Thank you! | closed | https://github.com/huggingface/datasets/pull/2370 | 2021-05-17T18:53:05 | 2023-05-11T05:42:57 | 2021-05-31T16:37:13 | {
"login": "andyzoujm",
"id": 43451571,
"type": "User"
} | [] | true | [] |
893,554,153 | 2,369 | correct labels of conll2003 | # What does this PR do
It fixes/extends the `ner_tags` for conll2003 to include all.
Paper reference https://arxiv.org/pdf/cs/0306050v1.pdf
Model reference https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english/blob/main/config.json
| closed | https://github.com/huggingface/datasets/pull/2369 | 2021-05-17T17:37:54 | 2021-05-18T08:27:42 | 2021-05-18T08:27:42 | {
"login": "philschmid",
"id": 32632186,
"type": "User"
} | [] | true | [] |
893,411,076 | 2,368 | Allow "other-X" in licenses | This PR allows "other-X" licenses during metadata validation.
@lhoestq | closed | https://github.com/huggingface/datasets/pull/2368 | 2021-05-17T14:47:54 | 2021-05-17T16:36:27 | 2021-05-17T16:36:27 | {
"login": "gchhablani",
"id": 29076344,
"type": "User"
} | [] | true | [] |
893,317,427 | 2,367 | Remove getchildren from hyperpartisan news detection | `Element.getchildren()` is now deprecated in the ElementTree library (I think in python 3.9, so it still passes the automated tests which are using 3.6. But for those of us on bleeding-edge distros it now fails).
https://bugs.python.org/issue29209 | closed | https://github.com/huggingface/datasets/pull/2367 | 2021-05-17T13:10:37 | 2021-05-17T14:07:13 | 2021-05-17T14:07:13 | {
"login": "ghomasHudson",
"id": 13795113,
"type": "User"
} | [] | true | [] |
893,185,266 | 2,366 | Json loader fails if user-specified features don't match the json data fields order | If you do
```python
dataset = load_dataset("json", data_files=data_files, features=features)
```
Then depending on the order of the features in the json data field it fails:
```python
[...]
~/Desktop/hf/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files)
94 if s... | closed | https://github.com/huggingface/datasets/issues/2366 | 2021-05-17T10:26:08 | 2021-06-16T10:47:49 | 2021-06-16T10:47:49 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
893,179,697 | 2,365 | Missing ClassLabel encoding in Json loader | Currently if you want to load a json dataset this way
```python
dataset = load_dataset("json", data_files=data_files, features=features)
```
Then if your features has ClassLabel types and if your json data needs class label encoding (i.e. if the labels in the json files are strings and not integers), then it would ... | closed | https://github.com/huggingface/datasets/issues/2365 | 2021-05-17T10:19:10 | 2021-06-28T15:05:34 | 2021-06-28T15:05:34 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
892,420,500 | 2,364 | README updated for SNLI, MNLI | Closes #2275. Mentioned the -1 labels in MNLI and SNLI and how they should be removed before training. @lhoestq the `check_code_quality` test might fail for MNLI as the license name `other-Open Portion of the American National Corpus` is not a registered tag for 'licenses' | closed | https://github.com/huggingface/datasets/pull/2364 | 2021-05-15T11:37:59 | 2021-05-17T14:14:27 | 2021-05-17T13:34:19 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [] | true | [] |
892,100,749 | 2,362 | Fix web_nlg metadata | Our metadata storage system does not support `.` inside keys. cc @Pierrci
| closed | https://github.com/huggingface/datasets/pull/2362 | 2021-05-14T17:15:07 | 2021-05-17T13:44:17 | 2021-05-17T13:42:28 | {
"login": "julien-c",
"id": 326577,
"type": "User"
} | [] | true | [] |
891,982,808 | 2,361 | Preserve dtype for numpy/torch/tf/jax arrays | Fixes #625. This lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array. | closed | https://github.com/huggingface/datasets/pull/2361 | 2021-05-14T14:45:23 | 2021-08-17T08:30:04 | 2021-08-17T08:30:04 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [] | true | [] |
891,965,964 | 2,360 | Automatically detect datasets with compatible task schemas | See description of #2255 for details.
| open | https://github.com/huggingface/datasets/issues/2360 | 2021-05-14T14:23:40 | 2021-05-14T14:23:40 | null | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
891,946,017 | 2,359 | Allow model labels to be passed during task preparation | Models have a config with label2id. And we have the same for datasets with the ClassLabel feature type. At one point either the model or the dataset must sync with the other. It would be great to do that on the dataset side.
For example, for sentiment classification on Amazon reviews you could have these labels:... | closed | https://github.com/huggingface/datasets/issues/2359 | 2021-05-14T13:58:28 | 2022-10-05T17:37:22 | 2022-10-05T17:37:22 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | false | [] |
891,269,577 | 2,358 | Roman Urdu Stopwords List | A list of most frequently used Roman Urdu words with different spellings and usages.
This is a very basic effort to collect some basic stopwords for Roman Urdu to support efforts to analyze text data in Roman Urdu, which makes up a huge part of the daily internet interactions of Roman Urdu users. | closed | https://github.com/huggingface/datasets/pull/2358 | 2021-05-13T18:29:27 | 2021-05-19T08:50:43 | 2021-05-17T14:05:10 | {
"login": "devzohaib",
"id": 58664161,
"type": "User"
} | [] | true | [] |
890,595,693 | 2,357 | Adding Microsoft CodeXGlue Datasets | Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR wi... | closed | https://github.com/huggingface/datasets/pull/2357 | 2021-05-13T00:43:01 | 2021-06-08T09:29:57 | 2021-06-08T09:29:57 | {
"login": "ncoop57",
"id": 7613470,
"type": "User"
} | [] | true | [] |
890,484,408 | 2,355 | normalized TOCs and titles in data cards | I started fixing some of the READMEs that were failing the tests introduced by @gchhablani but then realized that there were some consistent differences between earlier and newer versions of some of the titles (e.g. Data Splits vs Data Splits Sample Size, Supported Tasks vs Supported Tasks and Leaderboards). We also ha... | closed | https://github.com/huggingface/datasets/pull/2355 | 2021-05-12T20:59:59 | 2021-05-14T13:23:12 | 2021-05-14T13:23:12 | {
"login": "yjernite",
"id": 10469459,
"type": "User"
} | [] | true | [] |