id (int64) | number (int64) | title (string) | body (string) | state (string, 2 classes) | html_url (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | user (dict) | labels (list) | is_pull_request (bool) | comments (list)
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,164,406,008 | 3,880 | Change the framework switches to the new syntax | This PR updates the syntax of the framework-specific code samples. With this new syntax, you'll be able to:
- have paragraphs of text be framework-specific instead of just code samples
- have support for Flax code samples if you want.
This should be merged after https://github.com/huggingface/doc-builder/pull/63... | closed | https://github.com/huggingface/datasets/pull/3880 | 2022-03-09T20:29:10 | 2022-03-15T14:13:28 | 2022-03-15T14:13:27 | {
"login": "sgugger",
"id": 35901082,
"type": "User"
} | [] | true | [] |
1,164,311,612 | 3,879 | SQuAD v2 metric: create README.md | Proposing SQuAD v2 metric card | closed | https://github.com/huggingface/datasets/pull/3879 | 2022-03-09T18:47:56 | 2022-03-10T16:48:59 | 2022-03-10T16:48:59 | {
"login": "sashavor",
"id": 14205986,
"type": "User"
} | [] | true | [] |
1,164,305,335 | 3,878 | Update cats_vs_dogs size | It seems like 12 new examples have been added to the `cats_vs_dogs`. This PR updates the size in the card and the info file to avoid a verification error (reported by @stevhliu). | closed | https://github.com/huggingface/datasets/pull/3878 | 2022-03-09T18:40:56 | 2022-09-30T08:47:43 | 2022-03-10T14:21:23 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,164,146,311 | 3,877 | Align metadata to DCAT/DCAT-AP | **Is your feature request related to a problem? Please describe.**
Align to DCAT metadata to describe datasets
**Describe the solution you'd like**
Reuse terms and structure from DCAT in the metadata file, ideally generate a json-ld file dcat compliant
**Describe alternatives you've considered**
**Addition... | open | https://github.com/huggingface/datasets/issues/3877 | 2022-03-09T16:12:25 | 2022-03-09T16:33:42 | null | {
"login": "EmidioStani",
"id": 278367,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,164,045,075 | 3,876 | Fix download_mode in dataset_module_factory | Fix `download_mode` value set in `dataset_module_factory`.
Before the fix, it was set to a `bool` (defaulting to `False`).
Also set properly its default value in all public functions. | closed | https://github.com/huggingface/datasets/pull/3876 | 2022-03-09T14:54:33 | 2022-03-10T08:47:00 | 2022-03-10T08:46:59 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,164,029,673 | 3,875 | Module namespace cleanup for v2.0 | This is an attempt to make the user-facing `datasets`' submodule namespace cleaner:
In particular, this PR does the following:
* removes the unused `zip_nested` and `flatten_nest_dict` and their accompanying tests
* removes `pyarrow` from the top-level namespace
* properly uses `__all__` and the `from <module> i... | closed | https://github.com/huggingface/datasets/pull/3875 | 2022-03-09T14:43:07 | 2022-03-11T15:42:06 | 2022-03-11T15:42:05 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,164,013,511 | 3,874 | add MSE and MAE metrics - V2 | Created a new pull request to resolve unrelated changes in the PR caused by rebasing.
Ref Older PR : [#3845](https://github.com/huggingface/datasets/pull/3845)
Feature request here : Add support for continuous metrics (RMSE, MAE) [#3608](https://github.com/huggingface/datasets/issues/3608) | closed | https://github.com/huggingface/datasets/pull/3874 | 2022-03-09T14:30:16 | 2022-03-09T17:20:42 | 2022-03-09T17:18:20 | {
"login": "dnaveenr",
"id": 17746528,
"type": "User"
} | [] | true | [] |
1,163,961,578 | 3,873 | Create SQuAD metric README.md | Proposal for a metrics card structure (with an example based on the SQuAD metric).
@thomwolf @lhoestq @douwekiela @lewtun -- feel free to comment on structure or content (it's an initial draft, so I realize there's stuff missing!). | closed | https://github.com/huggingface/datasets/pull/3873 | 2022-03-09T13:47:08 | 2022-03-10T16:45:57 | 2022-03-10T16:45:57 | {
"login": "sashavor",
"id": 14205986,
"type": "User"
} | [] | true | [] |
1,163,853,026 | 3,872 | HTTP error 504 Server Error: Gateway Time-out | I am trying to push a large dataset (450,000+ records) with the help of `push_to_hub()`
While pushing, it gives some error like this.
```
Traceback (most recent call last):
File "data_split_speech.py", line 159, in <module>
data_new_2.push_to_hub("user-name/dataset-name",private=True)
File "/opt/conda/lib... | closed | https://github.com/huggingface/datasets/issues/3872 | 2022-03-09T12:03:37 | 2022-03-15T16:19:50 | 2022-03-15T16:19:50 | {
"login": "illiyas-sha",
"id": 83509215,
"type": "User"
} | [] | false | [] |
1,163,714,113 | 3,871 | add pandas to env command | Pandas is a required package and is used quite a bit. I don't see any downside with adding its version to the `datasets-cli env` command. | closed | https://github.com/huggingface/datasets/pull/3871 | 2022-03-09T09:48:51 | 2022-03-09T11:21:38 | 2022-03-09T11:21:37 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
1,163,633,239 | 3,870 | Add wikitablequestions dataset | null | closed | https://github.com/huggingface/datasets/pull/3870 | 2022-03-09T08:27:43 | 2022-03-14T11:19:24 | 2022-03-14T11:16:19 | {
"login": "SivilTaram",
"id": 10275209,
"type": "User"
} | [] | true | [] |
1,163,434,800 | 3,869 | Making the Hub the place for datasets in Portuguese | Let's make Hugging Face Datasets the central hub for datasets in Portuguese :)
**Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the Portuguese speaking community.
What are some datasets in Portuguese worth ... | open | https://github.com/huggingface/datasets/issues/3869 | 2022-03-09T03:06:18 | 2022-03-09T09:04:09 | null | {
"login": "omarespejel",
"id": 4755430,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,162,914,114 | 3,868 | Ignore duplicate keys if `ignore_verifications=True` | Currently, it's impossible to generate a dataset if some keys from `_generate_examples` are duplicated. This PR allows skipping the check for duplicate keys if `ignore_verifications` is set to `True`. | closed | https://github.com/huggingface/datasets/pull/3868 | 2022-03-08T17:14:56 | 2022-03-09T13:50:45 | 2022-03-09T13:50:44 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
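For context on what that skipped verification does, here is a minimal, hypothetical sketch of a duplicate-key check over `(key, example)` pairs as yielded by a `_generate_examples`-style generator — not the actual `datasets` implementation:

```python
def iter_with_key_check(examples, ignore_verifications=False):
    """Yield (key, example) pairs, raising on duplicate keys unless
    verification is explicitly skipped (hypothetical helper)."""
    seen = set()
    for key, example in examples:
        if not ignore_verifications:
            if key in seen:
                raise ValueError(f"Found duplicate key: {key}")
            seen.add(key)
        yield key, example
```

With `ignore_verifications=True`, duplicated keys pass through instead of aborting generation.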
1,162,896,605 | 3,867 | Update for the rename doc-builder -> hf-doc-utils | This PR adapts the job to the upcoming change of name of `doc-builder`. | closed | https://github.com/huggingface/datasets/pull/3867 | 2022-03-08T16:58:25 | 2023-09-24T09:54:44 | 2022-03-08T17:30:45 | {
"login": "sgugger",
"id": 35901082,
"type": "User"
} | [] | true | [] |
1,162,833,848 | 3,866 | Bring back imgs so that forks don't get broken | null | closed | https://github.com/huggingface/datasets/pull/3866 | 2022-03-08T16:01:31 | 2022-03-08T17:37:02 | 2022-03-08T17:37:01 | {
"login": "mishig25",
"id": 11827707,
"type": "User"
} | [] | true | [] |
1,162,821,908 | 3,865 | Add logo img | null | closed | https://github.com/huggingface/datasets/pull/3865 | 2022-03-08T15:50:59 | 2023-09-24T09:54:31 | 2022-03-08T16:01:59 | {
"login": "mishig25",
"id": 11827707,
"type": "User"
} | [] | true | [] |
1,162,804,942 | 3,864 | Update image dataset tags | Align the existing image datasets' tags with new tags introduced in #3800. | closed | https://github.com/huggingface/datasets/pull/3864 | 2022-03-08T15:36:32 | 2022-03-08T17:04:47 | 2022-03-08T17:04:46 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,162,802,857 | 3,863 | Update code blocks | Following https://github.com/huggingface/datasets/pull/3860#issuecomment-1061756712 and https://github.com/huggingface/datasets/pull/3690 we need to update the code blocks to use markdown instead of sphinx | closed | https://github.com/huggingface/datasets/pull/3863 | 2022-03-08T15:34:43 | 2022-03-09T16:45:30 | 2022-03-09T16:45:29 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,162,753,733 | 3,862 | Manipulate columns on IterableDataset (rename columns, cast, etc.) | I added:
- add_column
- cast
- rename_column
- rename_columns
related to https://github.com/huggingface/datasets/issues/3444
TODO:
- [x] docs
- [x] tests | closed | https://github.com/huggingface/datasets/pull/3862 | 2022-03-08T14:53:57 | 2022-03-10T16:40:22 | 2022-03-10T16:40:21 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
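These streaming operations apply lazily, per example. A rough, hypothetical sketch of what a lazy `rename_columns` does on a stream of dict examples (not the actual `IterableDataset` code):

```python
def rename_columns(examples, column_mapping):
    # Lazily rename keys in each dict example; columns not named in
    # column_mapping are passed through unchanged.
    for example in examples:
        yield {column_mapping.get(name, name): value
               for name, value in example.items()}
```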
1,162,702,044 | 3,861 | big_patent cased version | Hi! I am interested in working with the big_patent dataset.
In TensorFlow, there are a number of versions of the dataset:
- 1.0.0 : lower cased tokenized words
- 2.0.0 : Update to use cased raw strings
- 2.1.2 (default): Fix update to cased raw strings.
The version in the huggingface `datasets` library is th... | closed | https://github.com/huggingface/datasets/issues/3861 | 2022-03-08T14:08:55 | 2023-04-21T14:32:03 | 2023-04-21T14:32:03 | {
"login": "slvcsl",
"id": 25265140,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
1,162,623,329 | 3,860 | Small doc fixes | null | closed | https://github.com/huggingface/datasets/pull/3860 | 2022-03-08T12:55:39 | 2022-03-08T17:37:13 | 2022-03-08T17:37:13 | {
"login": "mishig25",
"id": 11827707,
"type": "User"
} | [] | true | [] |
1,162,559,333 | 3,859 | Unable to dowload big_patent (FileNotFoundError) | ## Describe the bug
I am trying to download some splits of the big_patent dataset, using the following code:
`ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload")
`
However, this leads to a FileNotFoundError.
FileNotFoundError Traceback (most recent... | closed | https://github.com/huggingface/datasets/issues/3859 | 2022-03-08T11:47:12 | 2022-03-08T13:04:09 | 2022-03-08T13:04:04 | {
"login": "slvcsl",
"id": 25265140,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
},
{
"name": "duplicate",
"color": "cfd3d7"
}
] | false | [] |
1,162,526,688 | 3,858 | Update index.mdx margins | null | closed | https://github.com/huggingface/datasets/pull/3858 | 2022-03-08T11:11:52 | 2022-03-08T12:57:57 | 2022-03-08T12:57:56 | {
"login": "gary149",
"id": 3841370,
"type": "User"
} | [] | true | [] |
1,162,525,353 | 3,857 | Order of dataset changes due to glob.glob. | ## Describe the bug
After discussion with @lhoestq, just want to mention here that `glob.glob(...)` should always be used in combination with `sorted(...)` to make sure the list of files returned by `glob.glob(...)` doesn't change depending on the operating system.
There are currently multiple datasets that use `glob.g... | open | https://github.com/huggingface/datasets/issues/3857 | 2022-03-08T11:10:30 | 2022-03-14T11:08:22 | null | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [
{
"name": "generic discussion",
"color": "c5def5"
}
] | false | [] |
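The recommended pattern is simply to wrap every `glob.glob` call in `sorted`; for example (illustrative helper, not from the codebase):

```python
import glob

def list_data_files(pattern):
    # glob.glob makes no ordering guarantee, so the raw result can differ
    # across operating systems and filesystems; sorting makes the split
    # contents deterministic.
    return sorted(glob.glob(pattern))
```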
1,162,522,034 | 3,856 | Fix push_to_hub with null images | This code currently raises an error because of the null image:
```python
import datasets
dataset_dict = { 'name': ['image001.jpg', 'image002.jpg'], 'image': ['cat.jpg', None] }
features = datasets.Features({
'name': datasets.Value('string'),
'image': datasets.Image(),
})
dataset = datasets.Dataset.fro... | closed | https://github.com/huggingface/datasets/pull/3856 | 2022-03-08T11:07:09 | 2022-03-08T15:22:17 | 2022-03-08T15:22:16 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,162,448,589 | 3,855 | Bad error message when loading private dataset | ## Describe the bug
A pretty common behavior of an interaction between the Hub and datasets is the following.
An organization adds a dataset in private mode and wants to load it afterward.
```python
from transformers import load_dataset
ds = load_dataset("NewT5/dummy_data", "dummy")
```
This command th... | closed | https://github.com/huggingface/datasets/issues/3855 | 2022-03-08T09:55:17 | 2022-07-11T15:06:40 | 2022-07-11T15:06:40 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,162,434,199 | 3,854 | load only England English dataset from common voice english dataset | training_data = load_dataset("common_voice", "en",split='train[:250]+validation[:250]')
testing_data = load_dataset("common_voice", "en", split="test[:200]")
I'm trying to load only 8% of the English common voice data with accent == "England English." Can somebody assist me with this?
**Typical Voice Accent Prop... | closed | https://github.com/huggingface/datasets/issues/3854 | 2022-03-08T09:40:52 | 2024-03-23T12:40:58 | 2022-03-09T08:13:33 | {
"login": "amanjaiswal777",
"id": 36677001,
"type": "User"
} | [
{
"name": "question",
"color": "d876e3"
}
] | false | [] |
1,162,386,592 | 3,853 | add ontonotes_conll dataset | # Introduction of the dataset
OntoNotes v5.0 is the final version of the OntoNotes corpus, and is a large-scale, multi-genre,
multilingual corpus manually annotated with syntactic, semantic and discourse information.
This dataset is the version of OntoNotes v5.0 extended and used in the CoNLL-2012 shared task
, inclu... | closed | https://github.com/huggingface/datasets/pull/3853 | 2022-03-08T08:53:42 | 2022-03-15T10:48:02 | 2022-03-15T10:48:02 | {
"login": "richarddwang",
"id": 17963619,
"type": "User"
} | [] | true | [] |
1,162,252,337 | 3,852 | Redundant add dataset information and dead link. | > Alternatively, you can follow the steps to [add a dataset](https://huggingface.co/docs/datasets/add_dataset.html) and [share a dataset](https://huggingface.co/docs/datasets/share_dataset.html) in the documentation.
The "add a dataset link" gives 404 Error, and the share_dataset link has changed. I feel this inform... | closed | https://github.com/huggingface/datasets/pull/3852 | 2022-03-08T05:57:05 | 2022-03-08T16:54:36 | 2022-03-08T16:54:36 | {
"login": "dnaveenr",
"id": 17746528,
"type": "User"
} | [] | true | [] |
1,162,137,998 | 3,851 | Load audio dataset error | ## Load audio dataset error
Hi, when I load audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb,
```
from datasets import load_dataset, load_metric, Audio
raw_datasets = load_dataset("superb", "ks", split="train")
prin... | closed | https://github.com/huggingface/datasets/issues/3851 | 2022-03-08T02:16:04 | 2022-09-27T12:13:55 | 2022-03-08T11:20:06 | {
"login": "lemoner20",
"id": 31890987,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,162,126,030 | 3,850 | [feat] Add tqdm arguments | In this PR, tqdm arguments can be passed to the map() function and such, in order to be more flexible. | closed | https://github.com/huggingface/datasets/pull/3850 | 2022-03-08T01:53:25 | 2022-12-16T05:34:07 | 2022-12-16T05:34:07 | {
"login": "penguinwang96825",
"id": 28087825,
"type": "User"
} | [] | true | [] |
1,162,091,075 | 3,849 | Add "Adversarial GLUE" dataset to datasets library | Adds the Adversarial GLUE dataset: https://adversarialglue.github.io/
```python
>>> import datasets
>>> datasets.load_dataset('adv_glue')
Using the latest cached version of the module from /home/jxm3/.cache/huggingface/modules/datasets_modules/datasets/adv_glue/26709a83facad2830d72d4419dd179c0be092f4ad3303ad0... | closed | https://github.com/huggingface/datasets/pull/3849 | 2022-03-08T00:47:11 | 2022-03-28T11:17:14 | 2022-03-28T11:12:04 | {
"login": "jxmorris12",
"id": 13238952,
"type": "User"
} | [] | true | [] |
1,162,076,902 | 3,848 | NonMatchingChecksumError when checksum is None | I ran into the following error when adding a new dataset:
```bash
expected_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': None, 'num_bytes': 40662}}
recorded_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': 'efb4cbd3aa4a87bfaffc310ae951981cc0a36c6c71c64... | closed | https://github.com/huggingface/datasets/issues/3848 | 2022-03-08T00:24:12 | 2022-03-15T14:37:26 | 2022-03-15T12:28:23 | {
"login": "jxmorris12",
"id": 13238952,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
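The behavior the issue asks for — skip the comparison when no expected checksum was recorded — can be sketched as follows (hypothetical helper using SHA-256, not the `datasets` verification code):

```python
import hashlib

def verify_checksum(data, expected_sha256=None):
    # When expected_sha256 is None (no checksum recorded), skip the
    # comparison instead of raising a NonMatchingChecksumError-style error.
    recorded = hashlib.sha256(data).hexdigest()
    if expected_sha256 is not None and recorded != expected_sha256:
        raise ValueError("Checksum mismatch")
    return recorded
```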
1,161,856,417 | 3,847 | Datasets' cache not re-used | ## Describe the bug
For most tokenizers I have tested (e.g. the RoBERTa tokenizer), the data preprocessing caches are not fully reused in the first few runs, although their `.arrow` cache files are in the cache directory.
## Steps to reproduce the bug
Here is a reproducer. The GPT2 tokenizer works perfectly with ca... | open | https://github.com/huggingface/datasets/issues/3847 | 2022-03-07T19:55:15 | 2025-05-19T11:58:55 | null | {
"login": "gejinchen",
"id": 15106980,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,161,810,226 | 3,846 | Update faiss device docstring | Following https://github.com/huggingface/datasets/pull/3721 I updated the docstring of the `device` argument of the FAISS related methods of `Dataset` | closed | https://github.com/huggingface/datasets/pull/3846 | 2022-03-07T19:06:59 | 2022-03-07T19:21:23 | 2022-03-07T19:21:22 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,161,739,483 | 3,845 | add RMSE and MAE metrics. | This PR adds RMSE - Root Mean Squared Error and MAE - Mean Absolute Error to the metrics API.
Both implementations are based on scikit-learn.
Feature request here : Add support for continuous metrics (RMSE, MAE) [#3608](https://github.com/huggingface/datasets/issues/3608)
Please suggest any changes if r... | closed | https://github.com/huggingface/datasets/pull/3845 | 2022-03-07T17:53:24 | 2022-03-09T16:50:03 | 2022-03-09T16:50:03 | {
"login": "dnaveenr",
"id": 17746528,
"type": "User"
} | [] | true | [] |
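The PR wraps scikit-learn under the hood; as a reference for the underlying formulas, here is a dependency-free sketch (not the PR's code):

```python
import math

def mae(predictions, references):
    # Mean Absolute Error: average of |prediction - reference|.
    return sum(abs(p - r) for p, r in zip(predictions, references)) / len(references)

def rmse(predictions, references):
    # Root Mean Squared Error: square root of the average squared error.
    return math.sqrt(
        sum((p - r) ** 2 for p, r in zip(predictions, references)) / len(references)
    )
```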
1,161,686,754 | 3,844 | Add rmse and mae metrics. | This PR adds RMSE - Root Mean Squared Error and MAE - Mean Absolute Error to the metrics API.
Both implementations are based on scikit-learn.
Feature request here : Add support for continuous metrics (RMSE, MAE) [#3608](https://github.com/huggingface/datasets/issues/3608)
Any suggestions and changes req... | closed | https://github.com/huggingface/datasets/pull/3844 | 2022-03-07T17:06:38 | 2022-03-07T17:24:32 | 2022-03-07T17:15:06 | {
"login": "dnaveenr",
"id": 17746528,
"type": "User"
} | [] | true | [] |
1,161,397,812 | 3,843 | Fix Google Drive URL to avoid Virus scan warning in streaming mode | The streaming version of https://github.com/huggingface/datasets/pull/3787.
Fix #3835
CC: @albertvillanova | closed | https://github.com/huggingface/datasets/pull/3843 | 2022-03-07T13:09:19 | 2022-03-15T12:30:25 | 2022-03-15T12:30:23 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,161,336,483 | 3,842 | Align IterableDataset.shuffle with Dataset.shuffle | From #3444 , Dataset.shuffle can have the same API than IterableDataset.shuffle (i.e. in streaming mode).
Currently you can pass an optional seed to both if you want, BUT currently IterableDataset.shuffle always requires a buffer_size, used for approximate shuffling. I propose using a reasonable default value (maybe... | closed | https://github.com/huggingface/datasets/pull/3842 | 2022-03-07T12:10:46 | 2022-03-07T19:03:43 | 2022-03-07T19:03:42 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,161,203,842 | 3,841 | Pyright reportPrivateImportUsage when `from datasets import load_dataset` | ## Describe the bug
Pyright complains about module not exported.
## Steps to reproduce the bug
Use an editor/IDE with Pyright Language server with default configuration:
```python
from datasets import load_dataset
```
## Expected results
No complain from Pyright
## Actual results
Pyright complain below... | closed | https://github.com/huggingface/datasets/issues/3841 | 2022-03-07T10:24:04 | 2023-02-18T19:14:03 | 2023-02-13T13:48:41 | {
"login": "lkhphuc",
"id": 12573521,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,161,183,773 | 3,840 | Pin responses to fix CI for Windows | Temporarily fix CI for Windows by pinning `responses`.
See: https://app.circleci.com/pipelines/github/huggingface/datasets/10292/workflows/83de4a55-bff7-43ec-96f7-0c335af5c050/jobs/63355
Fix: #3839 | closed | https://github.com/huggingface/datasets/pull/3840 | 2022-03-07T10:06:53 | 2022-03-07T10:12:36 | 2022-03-07T10:07:24 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,161,183,482 | 3,839 | CI is broken for Windows | ## Describe the bug
See: https://app.circleci.com/pipelines/github/huggingface/datasets/10292/workflows/83de4a55-bff7-43ec-96f7-0c335af5c050/jobs/63355
```
___________________ test_datasetdict_from_text_split[test] ____________________
[gw0] win32 -- Python 3.7.11 C:\tools\miniconda3\envs\py37\python.exe
split... | closed | https://github.com/huggingface/datasets/issues/3839 | 2022-03-07T10:06:42 | 2022-05-20T14:13:43 | 2022-03-07T10:07:24 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,161,137,406 | 3,838 | Add a data type for labeled images (image segmentation) | It might be a mix of Image and ClassLabel, and the color palette might be generated automatically.
---
### Example
every pixel in the images of the annotation column (in https://huggingface.co/datasets/scene_parse_150) has a value that gives its class, and the dataset itself is associated with a color palette ... | open | https://github.com/huggingface/datasets/issues/3838 | 2022-03-07T09:38:15 | 2024-05-29T16:50:55 | null | {
"login": "severo",
"id": 1676121,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,161,109,031 | 3,837 | Release: 1.18.4 | null | closed | https://github.com/huggingface/datasets/pull/3837 | 2022-03-07T09:13:29 | 2022-03-07T11:07:35 | 2022-03-07T11:07:02 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,161,072,531 | 3,836 | Logo float left | <img width="1000" alt="Screenshot 2022-03-07 at 09 35 29" src="https://user-images.githubusercontent.com/11827707/156996422-339ba43e-932b-4849-babf-9321cb99c922.png">
| closed | https://github.com/huggingface/datasets/pull/3836 | 2022-03-07T08:38:34 | 2022-03-07T20:21:11 | 2022-03-07T09:14:11 | {
"login": "mishig25",
"id": 11827707,
"type": "User"
} | [] | true | [] |
1,161,029,205 | 3,835 | The link given on the gigaword does not work | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| closed | https://github.com/huggingface/datasets/issues/3835 | 2022-03-07T07:56:42 | 2022-03-15T12:30:23 | 2022-03-15T12:30:23 | {
"login": "martin6336",
"id": 26357784,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,160,657,937 | 3,834 | Fix dead dataset scripts creation link. | Previous link gives 404 error. Updated with a new dataset scripts creation link. | closed | https://github.com/huggingface/datasets/pull/3834 | 2022-03-06T16:45:48 | 2022-03-07T12:12:07 | 2022-03-07T12:12:07 | {
"login": "dnaveenr",
"id": 17746528,
"type": "User"
} | [] | true | [] |
1,160,543,713 | 3,833 | Small typos in How-to-train tutorial. | null | closed | https://github.com/huggingface/datasets/pull/3833 | 2022-03-06T07:49:49 | 2022-03-07T12:35:33 | 2022-03-07T12:13:17 | {
"login": "lkhphuc",
"id": 12573521,
"type": "User"
} | [] | true | [] |
1,160,503,446 | 3,832 | Making Hugging Face the place to go for Graph NNs datasets | Let's make Hugging Face Datasets the central hub for GNN datasets :)
**Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the GNN field.
What are some datasets worth integrating into the Hugging Face hub?
In... | open | https://github.com/huggingface/datasets/issues/3832 | 2022-03-06T03:02:58 | 2022-03-14T07:45:38 | null | {
"login": "omarespejel",
"id": 4755430,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "graph",
"color": "7AFCAA"
}
] | false | [] |
1,160,501,000 | 3,831 | when using to_tf_dataset with shuffle is true, not all completed batches are made | ## Describe the bug
when converting a dataset to tf_dataset by using to_tf_dataset with shuffle true, the remainder is not converted to one batch
## Steps to reproduce the bug
this is the sample code below
https://colab.research.google.com/drive/1_oRXWsR38ElO1EYF9ayFoCU7Ou1AAej4?usp=sharing
## Expected resul... | closed | https://github.com/huggingface/datasets/issues/3831 | 2022-03-06T02:43:50 | 2022-03-08T15:18:56 | 2022-03-08T15:18:56 | {
"login": "greenned",
"id": 42107709,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
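What the report describes matches TensorFlow's `drop_remainder` batching semantics; a plain-Python sketch of the effect (not the actual `to_tf_dataset` code):

```python
def make_batches(examples, batch_size, drop_remainder=False):
    # Group examples into fixed-size batches; with drop_remainder=True the
    # final incomplete batch is discarded, which is the behavior reported
    # above when shuffle=True.
    batches = [examples[i:i + batch_size]
               for i in range(0, len(examples), batch_size)]
    if batches and drop_remainder and len(batches[-1]) < batch_size:
        batches.pop()
    return batches
```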
1,160,181,404 | 3,830 | Got error when load cnn_dailymail dataset | When using datasets.load_dataset method to load cnn_dailymail dataset, got error as below:
- Windows OS: FileNotFoundError: [WinError 3] 系统找不到指定的路径。 (The system cannot find the path specified.): 'D:\\SourceCode\\DataScience\\HuggingFace\\Data\\downloads\\1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b\\cnn\\stories'
- google colab: NotADirec... | closed | https://github.com/huggingface/datasets/issues/3830 | 2022-03-05T01:43:12 | 2022-03-07T06:53:41 | 2022-03-07T06:53:41 | {
"login": "wgong0510",
"id": 78331051,
"type": "User"
} | [
{
"name": "duplicate",
"color": "cfd3d7"
}
] | false | [] |
1,160,154,352 | 3,829 | [📄 Docs] Create a `datasets` performance guide. | ## Brief Overview
Downloading, saving, and preprocessing large datasets from the `datasets` library can often result in [performance bottlenecks](https://github.com/huggingface/datasets/issues/3735). These performance snags can be challenging to identify and to debug, especially for users who are less experienced with... | open | https://github.com/huggingface/datasets/issues/3829 | 2022-03-05T00:28:06 | 2022-03-10T16:24:27 | null | {
"login": "dynamicwebpaige",
"id": 3712347,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,160,064,029 | 3,828 | The Pile's _FEATURE spec seems to be incorrect | ## Describe the bug
If you look at https://huggingface.co/datasets/the_pile/blob/main/the_pile.py:
For "all"
* the pile_set_name is never set for data
* there's actually an id field inside of "meta"
For subcorpora pubmed_central and hacker_news:
* the meta is specified to be a string, but it's actually a di... | closed | https://github.com/huggingface/datasets/issues/3828 | 2022-03-04T21:25:32 | 2022-03-08T09:30:49 | 2022-03-08T09:30:48 | {
"login": "dlwh",
"id": 9633,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,159,878,436 | 3,827 | Remove deprecated `remove_columns` param in `filter` | A leftover from #3803. | closed | https://github.com/huggingface/datasets/pull/3827 | 2022-03-04T17:23:26 | 2022-03-07T12:37:52 | 2022-03-07T12:37:51 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,159,851,110 | 3,826 | Add IterableDataset.filter | _Needs https://github.com/huggingface/datasets/pull/3801 to be merged first_
I added `IterableDataset.filter` with an API that is a subset of `Dataset.filter`:
```python
def filter(self, function, batched=False, batch_size=1000, with_indices=false, input_columns=None):
```
TODO:
- [x] tests
- [x] docs
rel... | closed | https://github.com/huggingface/datasets/pull/3826 | 2022-03-04T16:57:23 | 2022-03-09T17:23:13 | 2022-03-09T17:23:11 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
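A lazy filter with that subset of the `Dataset.filter` signature can be sketched roughly as follows (illustrative only, not the PR's implementation):

```python
def filter_stream(examples, function, with_indices=False):
    # Lazily yield only the examples for which `function` returns True;
    # with_indices additionally passes each example's position.
    for idx, example in enumerate(examples):
        keep = function(example, idx) if with_indices else function(example)
        if keep:
            yield example
```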
1,159,802,345 | 3,825 | Update version and date in Wikipedia dataset | CC: @geohci | closed | https://github.com/huggingface/datasets/pull/3825 | 2022-03-04T16:05:27 | 2022-03-04T17:24:37 | 2022-03-04T17:24:36 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,159,574,186 | 3,824 | Allow not specifying feature cols other than `predictions`/`references` in `Metric.compute` | Fix #3818 | closed | https://github.com/huggingface/datasets/pull/3824 | 2022-03-04T12:04:40 | 2022-03-04T18:04:22 | 2022-03-04T18:04:21 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,159,497,844 | 3,823 | 500 internal server error when trying to open a dataset composed of Zarr stores | ## Describe the bug
The dataset [openclimatefix/mrms](https://huggingface.co/datasets/openclimatefix/mrms) gives a 500 server error when trying to open it on the website, or through code.
The dataset doesn't have a loading script yet, and I did push two [xarray](https://docs.xarray.dev/en/stable/) Zarr stores of da... | closed | https://github.com/huggingface/datasets/issues/3823 | 2022-03-04T10:37:14 | 2022-03-08T09:47:39 | 2022-03-08T09:47:39 | {
"login": "jacobbieker",
"id": 7170359,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,159,395,728 | 3,822 | Add Biwi Kinect Head Pose Database | ## Adding a Dataset
- **Name:** Biwi Kinect Head Pose Database
- **Description:** Over 15K images of 20 people recorded with a Kinect while turning their heads around freely. For each frame, depth and rgb images are provided, together with ground in the form of the 3D location of the head and its rotation angles.
- ... | closed | https://github.com/huggingface/datasets/issues/3822 | 2022-03-04T08:48:39 | 2025-04-07T13:04:25 | 2022-06-01T13:00:47 | {
"login": "osanseviero",
"id": 7246357,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "vision",
"color": "bfdadc"
}
] | false | [] |
1,159,371,927 | 3,821 | Update Wikipedia dataset | This PR combines all updates to Wikipedia dataset.
Once approved, this will be used to generate the pre-processed Wikipedia datasets.
Finally, this PR will be able to be merged into master:
- NOT using squash
- BUT a regular MERGE (or REBASE+MERGE), so that all commits are preserved
TODO:
- [x] #3435
- [x]... | closed | https://github.com/huggingface/datasets/pull/3821 | 2022-03-04T08:19:21 | 2022-03-21T12:35:23 | 2022-03-21T12:31:00 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,159,106,603 | 3,820 | `pubmed_qa` checksum mismatch | ## Describe the bug
Loading [`pubmed_qa`](https://huggingface.co/datasets/pubmed_qa) results in a mismatched checksum error.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import datasets
try:
datasets.load_dataset("pubmed_qa", "pqa_labeled")
except Exception as e:
print(e... | closed | https://github.com/huggingface/datasets/issues/3820 | 2022-03-04T00:28:08 | 2022-03-04T09:42:32 | 2022-03-04T09:42:32 | {
"login": "jon-tow",
"id": 41410219,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
},
{
"name": "duplicate",
"color": "cfd3d7"
}
] | false | [] |
1,158,848,288 | 3,819 | Fix typo in doc build yml | cc: @lhoestq | closed | https://github.com/huggingface/datasets/pull/3819 | 2022-03-03T20:08:44 | 2022-03-04T13:07:41 | 2022-03-04T13:07:41 | {
"login": "mishig25",
"id": 11827707,
"type": "User"
} | [] | true | [] |
1,158,788,545 | 3,818 | Support for "sources" parameter in the add() and add_batch() methods in datasets.metric - SARI | **Is your feature request related to a problem? Please describe.**
The methods `add_batch` and `add` from the `Metric` [class](https://github.com/huggingface/datasets/blob/1675ad6a958435b675a849eafa8a7f10fe0f43bc/src/datasets/metric.py) do not work with [SARI](https://github.com/huggingface/datasets/blob/master/metr... | closed | https://github.com/huggingface/datasets/issues/3818 | 2022-03-03T18:57:54 | 2022-03-04T18:04:21 | 2022-03-04T18:04:21 | {
"login": "lmvasque",
"id": 6901031,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
1,158,592,335 | 3,817 | Simplify Common Voice code | In #3736 we introduced one method to generate examples when streaming, that is different from the one when not streaming.
In this PR I propose a new implementation which is simpler: it only has one function, based on `iter_archive`. And you still have access to local audio files when loading the dataset in non-strea... | closed | https://github.com/huggingface/datasets/pull/3817 | 2022-03-03T16:01:21 | 2022-03-04T14:51:48 | 2022-03-04T12:39:23 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,158,589,913 | 3,816 | Doc new UI test workflows2 | null | closed | https://github.com/huggingface/datasets/pull/3816 | 2022-03-03T15:59:14 | 2022-10-04T09:35:53 | 2022-03-03T16:42:15 | {
"login": "mishig25",
"id": 11827707,
"type": "User"
} | [] | true | [] |
1,158,589,512 | 3,815 | Fix iter_archive getting reset | The `DownloadManager.iter_archive` method currently returns an iterator - which is **empty** once you have iterated over it once. This means you can't pass the same archive iterator to several splits.
To fix that, I changed the output of `DownloadManager.iter_archive` to be an iterable that you can iterate over several times... | closed | https://github.com/huggingface/datasets/pull/3815 | 2022-03-03T15:58:52 | 2022-03-03T18:06:37 | 2022-03-03T18:06:13 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
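The fix described in the PR above - returning a re-usable iterable instead of a one-shot iterator - can be sketched roughly as follows. The `ArchiveIterable` name and the toy generator are illustrative, not the actual `datasets` implementation:

```python
class ArchiveIterable:
    """Wraps a generator factory so the archive contents can be iterated
    several times (e.g. once per split), unlike a plain generator, which
    is exhausted after the first pass."""

    def __init__(self, generator_fn, *args):
        self.generator_fn = generator_fn
        self.args = args

    def __iter__(self):
        # A fresh generator is created on every iteration.
        yield from self.generator_fn(*self.args)


def iter_members(members):
    # Toy stand-in for yielding (path, file) pairs out of a TAR archive.
    for member in members:
        yield member


archive = ArchiveIterable(iter_members, ["a.txt", "b.txt"])
first_pass = list(archive)
second_pass = list(archive)  # not empty: iteration starts over
```

With a plain generator, `second_pass` would be `[]`; the wrapper makes it safe to hand the same archive object to several split generators.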
1,158,518,995 | 3,814 | Handle Nones in PyArrow struct | This PR fixes an issue introduced by #3575 where `None` values stored in PyArrow arrays/structs would get ignored by `cast_storage` or by the `pa.array(cast_to_python_objects(..))` pattern. To fix the former, it also bumps the minimal PyArrow version to v5.0.0 to use the `mask` param in `pa.StructArray`.
| closed | https://github.com/huggingface/datasets/pull/3814 | 2022-03-03T15:03:35 | 2022-03-03T16:37:44 | 2022-03-03T16:37:43 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,158,474,859 | 3,813 | Add MetaShift dataset | ## Adding a Dataset
- **Name:** MetaShift
- **Description:** collection of 12,868 sets of natural images across 410 classes-
- **Paper:** https://arxiv.org/abs/2202.06523v1
- **Data:** https://github.com/weixin-liang/metashift
Instructions to add a new dataset can be found [here](https://github.com/huggingface/... | closed | https://github.com/huggingface/datasets/issues/3813 | 2022-03-03T14:26:45 | 2022-04-10T13:39:59 | 2022-04-10T13:39:59 | {
"login": "osanseviero",
"id": 7246357,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
},
{
"name": "vision",
"color": "bfdadc"
}
] | false | [] |
1,158,369,995 | 3,812 | benchmark streaming speed with tar vs zip archives | # do not merge
## Hypothesis
packing data into a single zip archive could free us from having to split data into several tar archives for efficient streaming, which is annoying (since data creators usually host the data in a single tar)
## Data
I host it [here](https://huggingface.co/datasets/polinaeter... | closed | https://github.com/huggingface/datasets/pull/3812 | 2022-03-03T12:48:41 | 2022-03-03T14:55:34 | 2022-03-03T14:55:33 | {
"login": "polinaeterna",
"id": 16348744,
"type": "User"
} | [] | true | [] |
1,158,234,407 | 3,811 | Update dev doc gh workflows | Reflect changes from https://github.com/huggingface/transformers/pull/15891 | closed | https://github.com/huggingface/datasets/pull/3811 | 2022-03-03T10:29:01 | 2022-10-04T09:35:54 | 2022-03-03T10:45:54 | {
"login": "mishig25",
"id": 11827707,
"type": "User"
} | [] | true | [] |
1,158,202,093 | 3,810 | Update version of xcopa dataset | Note that there was a version update of the `xcopa` dataset: https://github.com/cambridgeltl/xcopa/releases
We updated our loading script, but we did not bump a new version number:
- #3254
This PR updates our loading script version from `1.0.0` to `1.1.0`. | closed | https://github.com/huggingface/datasets/pull/3810 | 2022-03-03T09:58:25 | 2022-03-03T10:44:30 | 2022-03-03T10:44:29 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,158,143,480 | 3,809 | Checksums didn't match for datasets on Google Drive | ## Describe the bug
Datasets hosted on Google Drive do not seem to work right now.
Loading them fails with a checksum error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
for dataset in ["head_qa", "yelp_review_full"]:
try:
load_dataset(dataset)
except Exception as excep... | closed | https://github.com/huggingface/datasets/issues/3809 | 2022-03-03T09:01:10 | 2022-03-03T09:24:58 | 2022-03-03T09:24:05 | {
"login": "muelletm",
"id": 11507045,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
},
{
"name": "duplicate",
"color": "cfd3d7"
}
] | false | [] |
1,157,650,043 | 3,808 | Pre-Processing Cache Fails when using a Factory pattern | ## Describe the bug
If you utilize a pre-processing function which is created using a factory pattern, the function hash changes on each run (even if the function is identical) and therefore the data will be re-processed each time instead of being served from the cache.
## Steps to reproduce the bug
```python
def preprocess_function_factory(augmenta... | closed | https://github.com/huggingface/datasets/issues/3808 | 2022-03-02T20:18:43 | 2022-03-10T23:01:47 | 2022-03-10T23:01:47 | {
"login": "Helw150",
"id": 9847335,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
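A minimal sketch of why the factory pattern reported above can defeat a pickle-based fingerprint: each call to the factory returns a brand-new closure object, even though the underlying code is identical (the names here are illustrative):

```python
def preprocess_function_factory(augmentation):
    # Returns a closure capturing `augmentation`.
    def preprocess_function(batch):
        return {"text": [augmentation + text for text in batch["text"]]}

    return preprocess_function


f1 = preprocess_function_factory("aug: ")
f2 = preprocess_function_factory("aug: ")

# Two distinct function objects are created on each run, so a hash
# computed over the serialized function can differ between runs,
# invalidating the cache even though the logic is unchanged.
distinct_objects = f1 is not f2
identical_logic = f1.__code__.co_code == f2.__code__.co_code
```

A common workaround is to define the preprocessing function at module level and pass the varying value through `map(..., fn_kwargs=...)`, avoiding the closure entirely.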
1,157,531,812 | 3,807 | NonMatchingChecksumError in xcopa dataset | ## Describe the bug
Loading the xcopa dataset doesn't work, it fails due to a mismatch in the checksum.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("xcopa", "it")
```
## Expected results
The dataset should be loaded correctly.
## Actual results
Fails ... | closed | https://github.com/huggingface/datasets/issues/3807 | 2022-03-02T18:10:19 | 2022-05-20T06:00:42 | 2022-03-03T17:40:31 | {
"login": "afcruzs-ms",
"id": 93286455,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,157,505,826 | 3,806 | Fix Spanish data file URL in wiki_lingua dataset | This PR fixes the URL for Spanish data file.
Previously, Spanish had the same URL as Vietnamese data file. | closed | https://github.com/huggingface/datasets/pull/3806 | 2022-03-02T17:43:42 | 2022-03-03T08:38:17 | 2022-03-03T08:38:16 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,157,454,884 | 3,805 | Remove decode: true for image feature in head_qa | This was erroneously added in https://github.com/huggingface/datasets/commit/701f128de2594e8dc06c0b0427c0ba1e08be3054. This PR removes it. | closed | https://github.com/huggingface/datasets/pull/3805 | 2022-03-02T16:58:34 | 2022-03-07T12:13:36 | 2022-03-07T12:13:35 | {
"login": "craffel",
"id": 417568,
"type": "User"
} | [] | true | [] |
1,157,297,278 | 3,804 | Text builder with custom separator line boundaries | **Is your feature request related to a problem? Please describe.**
The current [Text](https://github.com/huggingface/datasets/blob/207be676bffe9d164740a41a883af6125edef135/src/datasets/packaged_modules/text/text.py#L23) builder implementation splits texts with `splitlines()` which splits the text on several line bound... | open | https://github.com/huggingface/datasets/issues/3804 | 2022-03-02T14:50:16 | 2022-03-16T15:53:59 | null | {
"login": "cronoik",
"id": 18630848,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
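To illustrate the feature request above: `splitlines()` also breaks on control characters such as the vertical tab, whereas splitting only on `"\n"` (the behavior the request asks to make configurable) keeps such samples intact:

```python
def split_on_newline_only(text):
    # Split only on "\n", unlike str.splitlines(), which also splits on
    # \v, \f, \x1c-\x1e, \x85, \u2028 and \u2029.
    return text.split("\n")


doc = "sample with a \x0b vertical tab\nsecond sample"
by_splitlines = doc.splitlines()          # 3 pieces: splits at \x0b too
by_newline = split_on_newline_only(doc)   # 2 pieces: \x0b is preserved
```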
1,157,271,679 | 3,803 | Remove deprecated methods/params (preparation for v2.0) | This PR removes the following deprecated methos/params:
* `Dataset.cast_`/`DatasetDict.cast_`
* `Dataset.dictionary_encode_column_`/`DatasetDict.dictionary_encode_column_`
* `Dataset.remove_columns_`/`DatasetDict.remove_columns_`
* `Dataset.rename_columns_`/`DatasetDict.rename_columns_`
* `prepare_module`
* param... | closed | https://github.com/huggingface/datasets/pull/3803 | 2022-03-02T14:29:12 | 2022-03-02T14:53:21 | 2022-03-02T14:53:21 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,157,009,964 | 3,802 | Release of FairLex dataset |
**FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing**
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Cou... | closed | https://github.com/huggingface/datasets/pull/3802 | 2022-03-02T10:40:18 | 2022-03-02T15:21:10 | 2022-03-02T15:18:54 | {
"login": "iliaschalkidis",
"id": 1626984,
"type": "User"
} | [] | true | [] |
1,155,649,279 | 3,801 | [Breaking] Align `map` when streaming: update instead of overwrite + add missing parameters | Currently the datasets in streaming mode and in non-streaming mode have two distinct API for `map` processing.
In this PR I'm aligning the two by changing `map` in streaming mode. This includes a **major breaking change** and will require a major release of the library: **Datasets 2.0**
In particular, `Dataset.ma... | closed | https://github.com/huggingface/datasets/pull/3801 | 2022-03-01T18:06:43 | 2022-03-07T16:30:30 | 2022-03-07T16:30:29 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,155,620,761 | 3,800 | Added computer vision tasks | Previous PR was in my fork so thought it'd be easier if I do it from a branch. Added computer vision task datasets according to HF tasks. | closed | https://github.com/huggingface/datasets/pull/3800 | 2022-03-01T17:37:46 | 2022-03-04T07:15:55 | 2022-03-04T07:15:55 | {
"login": "merveenoyan",
"id": 53175384,
"type": "User"
} | [] | true | [] |
1,155,356,102 | 3,799 | Xtreme-S Metrics | **Added datasets (TODO)**:
- [x] MLS
- [x] Covost2
- [x] Minds-14
- [x] Voxpopuli
- [x] FLoRes (need data)
**Metrics**: Done | closed | https://github.com/huggingface/datasets/pull/3799 | 2022-03-01T13:42:28 | 2022-03-16T14:40:29 | 2022-03-16T14:40:26 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
1,154,411,066 | 3,798 | Fix error message in CSV loader for newer Pandas versions | Fix the error message in the CSV loader for `Pandas >= 1.4`. To fix this, I directly print the current file name in the for-loop. An alternative would be to use a check similar to this:
```python
csv_file_reader.handle.handle if datasets.config.PANDAS_VERSION >= version.parse("1.4") else csv_file_reader.f
```
CC: @... | closed | https://github.com/huggingface/datasets/pull/3798 | 2022-02-28T18:24:10 | 2022-02-28T18:51:39 | 2022-02-28T18:51:38 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,154,383,063 | 3,797 | Reddit dataset card contribution | Description tags for webis-tldr-17 added. | closed | https://github.com/huggingface/datasets/pull/3797 | 2022-02-28T17:53:18 | 2023-03-09T22:08:58 | 2022-03-01T12:58:57 | {
"login": "anna-kay",
"id": 56791604,
"type": "User"
} | [] | true | [] |
1,154,298,629 | 3,796 | Skip checksum computation if `ignore_verifications` is `True` | This will speed up the loading of the datasets where the number of data files is large (can easily happen with `imagefoler`, for instance) | closed | https://github.com/huggingface/datasets/pull/3796 | 2022-02-28T16:28:45 | 2022-02-28T17:03:46 | 2022-02-28T17:03:46 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,153,261,281 | 3,795 | can not flatten natural_questions dataset | ## Describe the bug
After downloading the natural_questions dataset, the dataset cannot be flattened, given that there are `long answer` and `short answer` fields in `annotations`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('natural_questions',cache_dir = 'data/datase... | closed | https://github.com/huggingface/datasets/issues/3795 | 2022-02-27T13:57:40 | 2022-03-21T14:36:12 | 2022-03-21T14:36:12 | {
"login": "Hannibal046",
"id": 38466901,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,153,185,343 | 3,794 | Add Mahalanobis distance metric | Mahalanobis distance is a very useful metric to measure the distance from one datapoint X to a distribution P.
In this PR I implement the metric in a simple way with the help of numpy only.
Similar to the [MAUVE implementation](https://github.com/huggingface/datasets/blob/master/metrics/mauve/mauve.py), we can mak... | closed | https://github.com/huggingface/datasets/pull/3794 | 2022-02-27T10:56:31 | 2022-03-02T14:46:15 | 2022-03-02T14:46:15 | {
"login": "JoaoLages",
"id": 17574157,
"type": "User"
} | [] | true | [] |
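A numpy-only implementation along the lines the PR above describes could look like this (a sketch, not necessarily the merged metric code):

```python
import numpy as np


def mahalanobis_distance(X, P):
    """Squared Mahalanobis distance of each row of X to the
    distribution estimated from the reference samples P."""
    mu = P.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(P, rowvar=False))
    delta = X - mu
    # delta_i @ cov_inv @ delta_i for every row i
    return np.einsum("ij,jk,ik->i", delta, cov_inv, delta)


# Reference samples with mean (0.5, 0.5) and covariance (1/3) * I
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X = np.array([[0.5, 0.5], [1.5, 0.5]])
distances = mahalanobis_distance(X, P)  # the mean itself is at distance 0
```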
1,150,974,950 | 3,793 | Docs new UI actions no self hosted | Removes the need to have a self-hosted runner for the dev documentation | closed | https://github.com/huggingface/datasets/pull/3793 | 2022-02-25T23:48:55 | 2022-03-01T15:55:29 | 2022-03-01T15:55:28 | {
"login": "LysandreJik",
"id": 30755778,
"type": "User"
} | [] | true | [] |
1,150,812,404 | 3,792 | Checksums didn't match for dataset source | ## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.co... | closed | https://github.com/huggingface/datasets/issues/3792 | 2022-02-25T19:55:09 | 2024-03-13T12:25:08 | 2022-02-28T08:44:18 | {
"login": "rafikg",
"id": 13174842,
"type": "User"
} | [
{
"name": "dataset-viewer",
"color": "E5583E"
}
] | false | [] |
1,150,733,475 | 3,791 | Add `data_dir` to `data_files` resolution and misc improvements to HfFileSystem | As discussed in https://github.com/huggingface/datasets/pull/2830#issuecomment-1048989764, this PR adds a QOL improvement to easily reference the files inside a directory in `load_dataset` using the `data_dir` param (very handy for ImageFolder because it avoids globbing, but also useful for the other loaders). Addition... | closed | https://github.com/huggingface/datasets/pull/3791 | 2022-02-25T18:26:35 | 2022-03-01T13:10:43 | 2022-03-01T13:10:42 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
1,150,646,899 | 3,790 | Add doc builder scripts | I added the three scripts:
- build_dev_documentation.yml
- build_documentation.yml
- delete_dev_documentation.yml
I got them from `transformers` and did a few changes:
- I removed the `transformers`-specific dependencies
- I changed all the paths to be "datasets" instead of "transformers"
- I passed the `--lib... | closed | https://github.com/huggingface/datasets/pull/3790 | 2022-02-25T16:38:47 | 2022-03-01T15:55:42 | 2022-03-01T15:55:41 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
1,150,587,404 | 3,789 | Add URL and ID fields to Wikipedia dataset | This PR adds the URL field, so that we conform to proper attribution, required by their license: provide credit to the authors by including a hyperlink (where possible) or URL to the page or pages you are re-using.
About the conversion from title to URL, I found that apart from replacing blanks with underscores, som... | closed | https://github.com/huggingface/datasets/pull/3789 | 2022-02-25T15:34:37 | 2022-03-04T08:24:24 | 2022-03-04T08:24:23 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
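The title-to-URL conversion the PR above mentions could be sketched like this; blanks become underscores and the rest is percent-encoded. The helper name, and the exact set of characters Wikipedia leaves unescaped, are assumptions here:

```python
from urllib.parse import quote


def title_to_url(title, lang="en"):
    # Blanks become underscores, as in Wikipedia page URLs; quote()
    # percent-encodes the remaining reserved characters.
    return f"https://{lang}.wikipedia.org/wiki/" + quote(title.replace(" ", "_"))


url = title_to_url("Albert Einstein")
```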
1,150,375,720 | 3,788 | Only-data dataset loaded unexpectedly as validation split | ## Describe the bug
As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as the VALIDATION split, even if this is not the desired behavior, e.g. a file named `datosdevision.jsonl.gz`. | open | https://github.com/huggingface/datasets/issues/3788 | 2022-02-25T12:11:39 | 2022-02-28T11:22:22 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,150,235,569 | 3,787 | Fix Google Drive URL to avoid Virus scan warning | This PR fixes, in the datasets library instead of in every specific dataset, the issue of downloading the Virus scan warning page instead of the actual data file for Google Drive URLs.
Fix #3786, fix #3784. | closed | https://github.com/huggingface/datasets/pull/3787 | 2022-02-25T09:35:12 | 2022-03-04T20:43:32 | 2022-02-25T11:56:35 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,150,233,067 | 3,786 | Bug downloading Virus scan warning page from Google Drive URLs | ## Describe the bug
Recently, some issues were reported with URLs from Google Drive, where we were downloading the Virus scan warning page instead of the data file itself.
See:
- #3758
- #3773
- #3784
| closed | https://github.com/huggingface/datasets/issues/3786 | 2022-02-25T09:32:23 | 2022-03-03T09:25:59 | 2022-02-25T11:56:35 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,150,069,801 | 3,785 | Fix: Bypass Virus Checks in Google Drive Links (CNN-DM dataset) | This commit fixes the issue described in #3784. By adding an extra parameter to the end of Google Drive links, we are able to bypass the virus check and download the datasets.
So, if the original link looked like https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ
The new link now looks li... | closed | https://github.com/huggingface/datasets/pull/3785 | 2022-02-25T05:48:57 | 2022-03-03T16:43:47 | 2022-03-03T14:03:37 | {
"login": "AngadSethi",
"id": 58678541,
"type": "User"
} | [] | true | [] |
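The workaround the PR above describes - appending an extra query parameter so Google Drive skips the virus-scan interstitial - can be sketched like this (`confirm=t` is the commonly used value; the helper name is illustrative):

```python
def add_gdrive_confirm_param(url):
    # Append confirm=t so Google Drive serves the file directly instead
    # of the "can't scan this file for viruses" warning page.
    separator = "&" if "?" in url else "?"
    return url + separator + "confirm=t"


original = "https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ"
patched = add_gdrive_confirm_param(original)
```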
1,150,057,955 | 3,784 | Unable to Download CNN-Dailymail Dataset | ## Describe the bug
I am unable to download the CNN-Dailymail dataset. Upon closer investigation, I realised why this was happening:
- The dataset sits in Google Drive, and both the CNN and DM datasets are large.
- Google is unable to scan the folder for viruses, **so the link which would originally download the dat... | closed | https://github.com/huggingface/datasets/issues/3784 | 2022-02-25T05:24:47 | 2022-03-03T14:05:17 | 2022-03-03T14:05:17 | {
"login": "AngadSethi",
"id": 58678541,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
1,149,256,744 | 3,783 | Support passing str to iter_files | null | closed | https://github.com/huggingface/datasets/pull/3783 | 2022-02-24T12:58:15 | 2022-02-24T16:01:40 | 2022-02-24T16:01:40 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
1,148,994,022 | 3,782 | Error of writing with different schema, due to nonpreservation of nullability | ## 1. Case
```
dataset.map(
batched=True,
disable_nullable=True,
)
```
will get the following error here: https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_writer.py#L516
`pyarrow.lib.ArrowInvalid: Tried to write record batch with different schema`
... | closed | https://github.com/huggingface/datasets/pull/3782 | 2022-02-24T08:23:07 | 2022-03-03T14:54:39 | 2022-03-03T14:54:39 | {
"login": "richarddwang",
"id": 17963619,
"type": "User"
} | [] | true | [] |
1,148,599,680 | 3,781 | Reddit dataset card additions | The changes proposed are based on the "TL;DR: Mining Reddit to Learn Automatic Summarization" paper & https://zenodo.org/record/1043504#.YhaKHpbQC38
It is a Reddit dataset indeed, but the name given to the dataset by the authors is Webis-TLDR-17 (corpus), so perhaps it should be modified as well.
The task at which t... | closed | https://github.com/huggingface/datasets/pull/3781 | 2022-02-23T21:29:16 | 2022-02-28T18:00:40 | 2022-02-28T11:21:14 | {
"login": "anna-kay",
"id": 56791604,
"type": "User"
} | [] | true | [] |