| id (int64) | number (int64) | title (string) | body (string, nullable) | state (string) | html_url (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s], nullable) | user (dict) | labels (list) | is_pull_request (bool) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
864,220,031 | 2,246 | Faster map w/ input_columns & faster slicing w/ Iterable keys | @lhoestq Fixes #2193
- `map` now uses `with_format` to only load needed columns in memory when `input_columns` is set
- Slicing datasets with Iterables of indices now uses a new `Table.fast_gather` method, implemented with `np.searchsorted`, to find the appropriate batch indices all at once. `pa.concat_tables` is ... | closed | https://github.com/huggingface/datasets/pull/2246 | 2021-04-21T19:49:07 | 2021-04-26T16:13:59 | 2021-04-26T16:13:59 | {
"login": "norabelrose",
"id": 39116809,
"type": "User"
} | [] | true | [] |
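A minimal sketch of the `np.searchsorted` idea behind the slicing change above, under the assumption that a table's record batches are described by their cumulative row offsets; the variable names are hypothetical, not the PR's actual `Table.fast_gather` internals:

```python
import numpy as np

# Cumulative row offsets: batch i covers rows [offsets[i], offsets[i + 1]).
batch_offsets = np.array([0, 1000, 2048, 3000])
indices = np.array([3, 999, 1500, 2999])  # sorted row indices to gather

# One vectorized call finds the batch containing each index,
# instead of scanning the batches one by one per index.
batch_ids = np.searchsorted(batch_offsets, indices, side="right") - 1
rows_in_batch = indices - batch_offsets[batch_ids]
print(batch_ids)      # [0 0 1 2]
print(rows_in_batch)  # [  3 999 500 951]
```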
863,191,655 | 2,245 | Add `key` type and duplicates verification with hashing | Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hashes
- [x]... | closed | https://github.com/huggingface/datasets/pull/2245 | 2021-04-20T20:03:19 | 2021-05-10T18:04:37 | 2021-05-10T17:31:22 | {
"login": "NikhilBartwal",
"id": 42388668,
"type": "User"
} | [] | true | [] |
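A minimal sketch of the salted-hashing idea described above, assuming keys are hashed together with a per-split salt so identical keys in different splits do not collide; the helper below is a hypothetical stand-alone illustration, not the PR's actual `ArrowWriter` code:

```python
import hashlib

def hash_key(key, hash_salt):
    # Only str and int keys are acceptable; the salt separates splits.
    if not isinstance(key, (str, int)):
        raise TypeError(f"key must be str or int, got {type(key)}")
    return hashlib.md5(f"{hash_salt}-{key}".encode("utf-8")).hexdigest()

seen = set()
for key in ["ex-0", "ex-1", "ex-0"]:  # the last key is a duplicate
    digest = hash_key(key, hash_salt="train")
    if digest in seen:
        print(f"duplicate key detected: {key!r}")
    seen.add(digest)
```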
863,029,946 | 2,244 | Set specific cache directories per test function call | Implement specific cache directories (datasets, metrics and modules) per test function call.
Currently, the cache directories are set within the temporary test directory, but they are shared across all test function calls.
This PR implements specific cache directories for each test function call, so that tests ar... | open | https://github.com/huggingface/datasets/pull/2244 | 2021-04-20T17:06:22 | 2022-07-06T15:19:48 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
862,909,389 | 2,243 | Map is slow and processes batches one after another | ## Describe the bug
I have a bug that is somewhat unclear to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 million samples) this problem occurs. That's why I can't giv... | closed | https://github.com/huggingface/datasets/issues/2243 | 2021-04-20T14:58:20 | 2021-05-03T17:54:33 | 2021-05-03T17:54:32 | {
"login": "villmow",
"id": 2743060,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
862,870,205 | 2,242 | Link to datasets viewer on Quick Tour page returns "502 Bad Gateway" | Link to datasets viewer (https://huggingface.co/datasets/viewer/) on Quick Tour page (https://huggingface.co/docs/datasets/quicktour.html) returns "502 Bad Gateway"
The same error with https://huggingface.co/datasets/viewer/?dataset=glue&config=mrpc | closed | https://github.com/huggingface/datasets/issues/2242 | 2021-04-20T14:19:51 | 2021-04-20T15:02:45 | 2021-04-20T15:02:45 | {
"login": "martavillegas",
"id": 6735707,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
862,696,460 | 2,241 | Add SLR32 to OpenSLR | I would like to add SLR32 to OpenSLR. It contains four South African languages: Afrikaans, Sesotho, Setswana and isiXhosa | closed | https://github.com/huggingface/datasets/pull/2241 | 2021-04-20T11:02:45 | 2021-04-23T16:21:24 | 2021-04-23T15:36:15 | {
"login": "cahya-wirawan",
"id": 7669893,
"type": "User"
} | [] | true | [] |
862,537,856 | 2,240 | Clarify how to load wikihow | Explain more clearly how to load the dataset in the manual download instructions.
Related to #2239. | closed | https://github.com/huggingface/datasets/pull/2240 | 2021-04-20T08:02:58 | 2021-04-21T09:54:57 | 2021-04-21T09:54:57 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
861,904,306 | 2,239 | Error loading wikihow dataset | ## Describe the bug
When attempting to load wikihow into a dataset with
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
I get the message:
```
AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
at the end of a [full stack trace](htt... | closed | https://github.com/huggingface/datasets/issues/2239 | 2021-04-19T21:02:31 | 2021-04-20T16:33:11 | 2021-04-20T16:33:11 | {
"login": "odellus",
"id": 4686956,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
861,518,291 | 2,238 | NLU evaluation data | New intent classification dataset from https://github.com/xliuhw/NLU-Evaluation-Data | closed | https://github.com/huggingface/datasets/pull/2238 | 2021-04-19T16:47:20 | 2021-04-23T15:32:05 | 2021-04-23T15:32:05 | {
"login": "dkajtoch",
"id": 32985207,
"type": "User"
} | [] | true | [] |
861,427,439 | 2,237 | Update Dataset.dataset_size after transforming with map | After loading a dataset, if we transform it by using `.map`, its `dataset_size` attribute is not updated. | open | https://github.com/huggingface/datasets/issues/2237 | 2021-04-19T15:19:38 | 2021-04-20T14:22:05 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
861,388,145 | 2,236 | Request to add StrategyQA dataset | ## Request to add StrategyQA dataset
- **Name:** StrategyQA
- **Description:** open-domain QA [(project page)](https://allenai.org/data/strategyqa)
- **Paper:** [url](https://arxiv.org/pdf/2101.02235.pdf)
- **Data:** [here](https://allenai.org/data/strategyqa)
- **Motivation:** uniquely-formulated dataset that als... | open | https://github.com/huggingface/datasets/issues/2236 | 2021-04-19T14:46:26 | 2021-04-19T14:46:26 | null | {
"login": "sarahwie",
"id": 8027676,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
861,040,716 | 2,235 | Update README.md | Adding relevant citations (paper accepted at AAAI 2020 & EMNLP 2020) to the benchmark | closed | https://github.com/huggingface/datasets/pull/2235 | 2021-04-19T08:21:02 | 2021-04-19T12:49:19 | 2021-04-19T12:49:19 | {
"login": "PierreColombo",
"id": 22492839,
"type": "User"
} | [] | true | [] |
860,442,246 | 2,234 | Fix bash snippet formatting in ADD_NEW_DATASET.md | This PR indents the paragraphs around the bash snippets in ADD_NEW_DATASET.md to fix formatting. | closed | https://github.com/huggingface/datasets/pull/2234 | 2021-04-17T16:01:08 | 2021-04-19T10:57:31 | 2021-04-19T07:51:36 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
860,097,084 | 2,233 | Fix `xnli` dataset tuple key | Closes #2229
The `xnli` dataset yields a tuple key in the case of `ar`, which is inconsistent with the acceptable key types (str/int).
The key was thus ported to `str` keeping the original information intact. | closed | https://github.com/huggingface/datasets/pull/2233 | 2021-04-16T19:12:42 | 2021-04-19T08:56:42 | 2021-04-19T08:56:42 | {
"login": "NikhilBartwal",
"id": 42388668,
"type": "User"
} | [] | true | [] |
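A minimal sketch of the porting mentioned above, assuming the tuple key is simply flattened into a string so the original information stays intact:

```python
# Hypothetical illustration: a tuple key such as (language, row_id) becomes
# a single str key, which is an acceptable key type.
key = ("ar", 196)
str_key = "_".join(str(part) for part in key)
print(str_key)  # ar_196
```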
860,075,931 | 2,232 | Start filling GLUE dataset card | The dataset card was pretty much empty.
I added the descriptions (mainly from TFDS since the script is the same), and I also added the tasks tags as well as examples for a subset of the tasks.
cc @sgugger | closed | https://github.com/huggingface/datasets/pull/2232 | 2021-04-16T18:37:37 | 2021-04-21T09:33:09 | 2021-04-21T09:33:08 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
859,850,488 | 2,231 | Fix map when removing columns on a formatted dataset | This should fix issue #2226
The `remove_columns` argument was ignored on formatted datasets | closed | https://github.com/huggingface/datasets/pull/2231 | 2021-04-16T14:08:55 | 2021-04-16T15:10:05 | 2021-04-16T15:10:04 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
859,817,159 | 2,230 | Keys yielded while generating dataset are not being checked | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e. either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from the `xnli` dataset generation:
https... | closed | https://github.com/huggingface/datasets/issues/2230 | 2021-04-16T13:29:47 | 2021-05-10T17:31:21 | 2021-05-10T17:31:21 | {
"login": "NikhilBartwal",
"id": 42388668,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
859,810,602 | 2,229 | `xnli` dataset creating a tuple key while yielding instead of `str` or `int` | When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code when yielding examples, which yields a tuple key instead of the specified `str` or `int` key:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196
... | closed | https://github.com/huggingface/datasets/issues/2229 | 2021-04-16T13:21:53 | 2021-04-19T08:56:42 | 2021-04-19T08:56:42 | {
"login": "NikhilBartwal",
"id": 42388668,
"type": "User"
} | [] | false | [] |
859,795,563 | 2,228 | [WIP] Add ArrayXD support for fixed size list. | Add support for fixed-size lists for ArrayXD when the shape is known. See https://github.com/huggingface/datasets/issues/2146
Since offsets are not stored anymore, the file size is now roughly equal to the actual data size. | open | https://github.com/huggingface/datasets/pull/2228 | 2021-04-16T13:04:08 | 2022-07-06T15:19:48 | null | {
"login": "jblemoine",
"id": 22685854,
"type": "User"
} | [] | true | [] |
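A minimal sketch of the fixed-size list idea in pyarrow, which is presumably what the PR builds on: a fixed-size list type stores no per-row offsets, so the file size stays close to the raw data size:

```python
import pyarrow as pa

# Every list in this column has exactly 3 items, so no offsets buffer
# is needed, unlike a variable-length list<float>.
fixed_type = pa.list_(pa.float32(), 3)
arr = pa.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], type=fixed_type)
print(arr.type)  # fixed_size_list<item: float>[3]
```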
859,771,526 | 2,227 | Use update_metadata_with_features decorator in class_encode_column method | Following @mariosasko 's comment | closed | https://github.com/huggingface/datasets/pull/2227 | 2021-04-16T12:31:41 | 2021-04-16T13:49:40 | 2021-04-16T13:49:39 | {
"login": "SBrandeis",
"id": 33657802,
"type": "User"
} | [] | true | [] |
859,720,302 | 2,226 | Batched map fails when removing all columns | Hi @lhoestq ,
I'm hijacking this issue, because I'm currently trying to do the approach you recommend:
> Currently the optimal setup for single-column computations is probably to do something like
>
> ```python
> result = dataset.map(f, input_columns="my_col", remove_columns=dataset.column_names)
> ```
He... | closed | https://github.com/huggingface/datasets/issues/2226 | 2021-04-16T11:17:01 | 2022-10-05T17:32:15 | 2022-10-05T17:32:15 | {
"login": "villmow",
"id": 2743060,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
858,469,561 | 2,225 | fixed one instance of 'train' to 'test' | I believe this should be 'test' instead of 'train' | closed | https://github.com/huggingface/datasets/pull/2225 | 2021-04-15T04:26:40 | 2021-04-15T22:09:50 | 2021-04-15T21:19:09 | {
"login": "alexwdong",
"id": 46733535,
"type": "User"
} | [] | true | [] |
857,983,361 | 2,224 | Raise error if Windows max path length is not disabled | On startup, raise an error if Windows max path length is not disabled; ask the user to disable it.
Linked to discussion in #2220. | open | https://github.com/huggingface/datasets/issues/2224 | 2021-04-14T14:57:20 | 2021-04-14T14:59:13 | null | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | false | [] |
857,870,800 | 2,223 | Set test cache config | Currently, running the tests populates the default cache directory `"~/.cache"`.
This PR monkey-patches the config to set the cache directory within the temporary test directory, avoiding side effects. | closed | https://github.com/huggingface/datasets/pull/2223 | 2021-04-14T12:55:24 | 2021-04-15T19:11:25 | 2021-04-15T19:11:25 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
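A minimal sketch of the monkey-patching approach, assuming a pytest fixture and assuming `datasets.config` exposes these cache-path attributes (their exact names here are an assumption, not necessarily the PR's code):

```python
import pytest
import datasets.config

@pytest.fixture(autouse=True)
def set_test_cache_config(tmp_path, monkeypatch):
    # Redirect the caches into the per-test temporary directory so that
    # running the test suite never touches "~/.cache".
    monkeypatch.setattr(datasets.config, "HF_DATASETS_CACHE", str(tmp_path / "datasets"))
    monkeypatch.setattr(datasets.config, "HF_METRICS_CACHE", str(tmp_path / "metrics"))
    monkeypatch.setattr(datasets.config, "HF_MODULES_CACHE", str(tmp_path / "modules"))
```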
857,847,231 | 2,222 | Fix too long WindowsFileLock name | Fix WindowsFileLock name longer than allowed MAX_PATH by shortening the basename. | closed | https://github.com/huggingface/datasets/pull/2222 | 2021-04-14T12:26:52 | 2021-04-14T15:00:25 | 2021-04-14T14:46:19 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "wontfix",
"color": "ffffff"
}
] | true | [] |
857,833,770 | 2,221 | Add SLR70 - SLR80 and SLR86 to OpenSLR dataset | I would like to add SLR70, SLR71, SLR72, SLR73, SLR74, SLR75, SLR76, SLR77, SLR78, SLR79, SLR80 and SLR86 to OpenSLR dataset. The languages are:
Nigerian English, Chilean Spanish, Colombian Spanish, Peruvian Spanish, Puerto Rico Spanish, Venezuelan Spanish, Basque, Galician, Gujarati and Kannada. | closed | https://github.com/huggingface/datasets/pull/2221 | 2021-04-14T12:09:18 | 2021-04-14T13:50:19 | 2021-04-14T13:50:19 | {
"login": "cahya-wirawan",
"id": 7669893,
"type": "User"
} | [] | true | [] |
857,774,626 | 2,220 | Fix infinite loop in WindowsFileLock | Raise exception to avoid infinite loop. | closed | https://github.com/huggingface/datasets/pull/2220 | 2021-04-14T10:49:58 | 2021-04-14T14:59:50 | 2021-04-14T14:59:34 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "wontfix",
"color": "ffffff"
}
] | true | [] |
857,321,242 | 2,219 | Added CUAD dataset | Dataset link : https://github.com/TheAtticusProject/cuad/
Working on README.md currently.
Closes #2084 and [#1](https://github.com/TheAtticusProject/cuad/issues/1). | closed | https://github.com/huggingface/datasets/pull/2219 | 2021-04-13T21:05:03 | 2021-04-24T14:25:51 | 2021-04-16T08:50:44 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [] | true | [] |
857,238,435 | 2,218 | Duplicates in the LAMA dataset | I observed duplicates in the LAMA probing dataset; see the minimal example below.
```
>>> import datasets
>>> dataset = datasets.load_dataset('lama')
No config specified, defaulting to: lama/trex
Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13eca0a18e77dfb3960bb31741e973586f5c... | open | https://github.com/huggingface/datasets/issues/2218 | 2021-04-13T18:59:49 | 2021-04-14T21:42:27 | null | {
"login": "amarasovic",
"id": 7276193,
"type": "User"
} | [] | false | [] |
857,011,314 | 2,217 | Revert breaking change in cache_files property | #2025 changed the format of `Dataset.cache_files`.
Before it was formatted like
```python
[{"filename": "path/to/file.arrow", "start": 0, "end": 1337}]
```
and it was changed to
```python
["path/to/file.arrow"]
```
since there's no start/end offsets available anymore.
To make this less breaking, I'm setting... | closed | https://github.com/huggingface/datasets/pull/2217 | 2021-04-13T14:20:04 | 2021-04-14T14:24:24 | 2021-04-14T14:24:23 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
856,955,534 | 2,216 | added real label for glue/mrpc to test set | Added real label to `glue.py` `mrpc` task for test split. | closed | https://github.com/huggingface/datasets/pull/2216 | 2021-04-13T13:20:20 | 2021-04-13T13:53:20 | 2021-04-13T13:53:19 | {
"login": "philschmid",
"id": 32632186,
"type": "User"
} | [] | true | [] |
856,716,791 | 2,215 | Add datasets SLR35 and SLR36 to OpenSLR | I would like to add [SLR35](https://openslr.org/35/) (18GB) and [SLR36](https://openslr.org/36/) (22GB) which are Large Javanese and Sundanese ASR training data set collected by Google in collaboration with Reykjavik University and Universitas Gadjah Mada in Indonesia. | closed | https://github.com/huggingface/datasets/pull/2215 | 2021-04-13T08:24:07 | 2021-04-13T14:05:14 | 2021-04-13T14:05:14 | {
"login": "cahya-wirawan",
"id": 7669893,
"type": "User"
} | [] | true | [] |
856,333,657 | 2,214 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' | I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_metric
>>> metric = load_metric("glue", "sst2")
Traceback (most recent call last):
File "<std... | closed | https://github.com/huggingface/datasets/issues/2214 | 2021-04-12T20:26:01 | 2021-04-23T15:20:02 | 2021-04-23T15:20:02 | {
"login": "nsaphra",
"id": 414788,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
856,025,320 | 2,213 | Fix lc_quad download checksum | Fixes #2211 | closed | https://github.com/huggingface/datasets/pull/2213 | 2021-04-12T14:16:59 | 2021-04-14T22:04:54 | 2021-04-14T13:42:25 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
855,999,133 | 2,212 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset | I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data configuration default
Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, ... | closed | https://github.com/huggingface/datasets/issues/2212 | 2021-04-12T13:49:56 | 2023-10-03T16:09:19 | 2023-10-03T16:09:18 | {
"login": "hanss0n",
"id": 21348833,
"type": "User"
} | [] | false | [] |
855,988,410 | 2,211 | Getting checksum error when trying to load lc_quad dataset | I'm having issues loading the [lc_quad](https://huggingface.co/datasets/lc_quad) dataset by running:
```Python
lc_quad = load_dataset("lc_quad")
```
which is giving me the following error:
```
Using custom data configuration default
Downloading and preparing dataset lc_quad/default (download: 3.69 MiB, ge... | closed | https://github.com/huggingface/datasets/issues/2211 | 2021-04-12T13:38:58 | 2021-04-14T13:42:25 | 2021-04-14T13:42:25 | {
"login": "hanss0n",
"id": 21348833,
"type": "User"
} | [] | false | [] |
855,709,400 | 2,210 | dataloading slow when using HUGE dataset | Hi,
When I use datasets with 600GB of data, the dataloading time increases significantly.
I am experimenting with two datasets, and one is about 60GB and the other 600GB.
Simply speaking, my code uses the `datasets.set_format("torch")` function and lets pytorch-lightning handle ddp training.
When looking at the pytorch... | closed | https://github.com/huggingface/datasets/issues/2210 | 2021-04-12T08:33:02 | 2021-04-13T02:03:05 | 2021-04-13T02:03:05 | {
"login": "hwijeen",
"id": 29157715,
"type": "User"
} | [] | false | [] |
855,638,232 | 2,209 | Add code of conduct to the project | Add code of conduct to the project and link it from README and CONTRIBUTING.
This was already done in `transformers`. | closed | https://github.com/huggingface/datasets/pull/2209 | 2021-04-12T07:16:14 | 2021-04-12T17:55:52 | 2021-04-12T17:55:52 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |
855,343,835 | 2,208 | Remove Python2 leftovers | This PR removes Python2 leftovers since this project aims for Python3.6+ (and as of 2020 Python2 is no longer officially supported) | closed | https://github.com/huggingface/datasets/pull/2208 | 2021-04-11T16:08:03 | 2021-04-14T22:05:36 | 2021-04-14T13:40:51 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
855,267,383 | 2,207 | making labels consistent across the datasets | Hi
For accessing the labels one can type
```
>>> a.features['label']
ClassLabel(num_classes=3, names=['entailment', 'neutral', 'contradiction'], names_file=None, id=None)
```
The labels, however, are sometimes not consistent with the actual labels; for instance, in the case of XNLI, the actual labels are 0, 1, 2, but if ... | closed | https://github.com/huggingface/datasets/issues/2207 | 2021-04-11T10:03:56 | 2022-06-01T16:23:08 | 2022-06-01T16:21:10 | {
"login": "dorost1234",
"id": 79165106,
"type": "User"
} | [] | false | [] |
855,252,415 | 2,206 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer | I added five more special tokens to the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I get the error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_sin... | closed | https://github.com/huggingface/datasets/issues/2206 | 2021-04-11T08:40:09 | 2021-11-10T12:18:30 | 2021-11-10T12:04:28 | {
"login": "yana-xuyan",
"id": 38536635,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
855,207,605 | 2,205 | Updating citation information on LinCE readme | Hi!
I just updated the citation information in this PR. It had an additional bibtex from one of the datasets used in LinCE and then the LinCE bibtex. I removed the former and added a link that shows the full list of citations for each dataset.
Thanks! | closed | https://github.com/huggingface/datasets/pull/2205 | 2021-04-11T03:18:05 | 2021-04-12T17:53:34 | 2021-04-12T17:53:34 | {
"login": "gaguilar",
"id": 5833357,
"type": "User"
} | [] | true | [] |
855,144,431 | 2,204 | Add configurable options to `seqeval` metric | Fixes #2148
Adds options to use strict mode, different evaluation schemes, and sample weights, and to adjust zero_division behavior, if encountered.
`seqeval` provides schemes as objects, hence dynamic import from string, to avoid making the user do the import (thanks to @albertvillanova for the `importlib` idea). | closed | https://github.com/huggingface/datasets/pull/2204 | 2021-04-10T19:58:19 | 2021-04-15T13:49:46 | 2021-04-15T13:49:46 | {
"login": "marrodion",
"id": 44571847,
"type": "User"
} | [] | true | [] |
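A minimal sketch of the `importlib` idea credited above, assuming the scheme name arrives as a plain string and is resolved to the corresponding `seqeval` class:

```python
import importlib
from seqeval.metrics import classification_report

y_true = [["B-PER", "I-PER", "O"]]
y_pred = [["B-PER", "I-PER", "O"]]

# Resolve "IOB2" to seqeval.scheme.IOB2 so callers never import seqeval.
scheme = getattr(importlib.import_module("seqeval.scheme"), "IOB2")
print(classification_report(y_true, y_pred, mode="strict", scheme=scheme))
```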
855,053,595 | 2,203 | updated banking77 train and test data | closed | https://github.com/huggingface/datasets/pull/2203 | 2021-04-10T12:10:10 | 2021-04-23T14:33:39 | 2021-04-23T14:33:39 | {
"login": "hsali",
"id": 6765330,
"type": "User"
} | [] | true | [] | |
854,501,109 | 2,202 | Add classes GenerateMode, DownloadConfig and Version to the documentation | Add documentation for classes `GenerateMode`, `DownloadConfig` and `Version`.
Update the docstring of `load_dataset` to create cross-reference links to the classes.
Related to #2187. | closed | https://github.com/huggingface/datasets/pull/2202 | 2021-04-09T12:58:19 | 2021-04-12T17:58:00 | 2021-04-12T17:57:59 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
854,499,563 | 2,201 | Fix ArrowWriter overwriting features in ArrowBasedBuilder | This should fix the issues with CSV loading experienced in #2153 and #2200.
The CSV builder is an ArrowBasedBuilder that had an issue with its ArrowWriter used to write the arrow file from the csv data.
The writer wasn't initialized with the features passed by the user. Therefore the writer was inferring the featur... | closed | https://github.com/huggingface/datasets/pull/2201 | 2021-04-09T12:56:19 | 2021-04-12T13:32:17 | 2021-04-12T13:32:16 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
854,449,656 | 2,200 | _prepare_split will overwrite DatasetBuilder.info.features | Hi, here is my issue:
I initialized a CSV dataset builder with specific features:
```
def get_dataset_features(data_args):
features = {}
if data_args.text_features:
features.update({text_feature: hf_features.Value("string") for text_feature in data_args.text_features.strip().split(",")})
if da... | closed | https://github.com/huggingface/datasets/issues/2200 | 2021-04-09T11:47:13 | 2021-06-04T10:37:35 | 2021-06-04T10:37:35 | {
"login": "Gforky",
"id": 4157614,
"type": "User"
} | [] | false | [] |
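A minimal sketch of the scenario that the fix in #2201 addresses: the user passes explicit features to the CSV builder and expects them to be kept rather than re-inferred (`train.csv` is a hypothetical file whose label column holds class ids):

```python
from datasets import ClassLabel, Features, Value, load_dataset

features = Features(
    {"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])}
)
dataset = load_dataset("csv", data_files="train.csv", features=features)
```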
854,417,318 | 2,199 | Fix backward compatibility in Dataset.load_from_disk | Fix backward compatibility when loading from disk an old dataset saved to disk with indices using key "_indices_data_files".
Related to #2195. | closed | https://github.com/huggingface/datasets/pull/2199 | 2021-04-09T11:01:10 | 2021-04-09T15:57:05 | 2021-04-09T15:57:05 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
854,357,481 | 2,198 | added file_permission in load_dataset | As discussed in #2065, I've added a `file_permission` argument to `load_dataset`.
Added mainly 2 things here:
1) Permission of downloaded datasets when converted to .arrow files can be changed with argument `file_permission` argument in `load_dataset` (default is 0o644 only)
2) In case the user uses `map` later on t... | closed | https://github.com/huggingface/datasets/pull/2198 | 2021-04-09T09:39:06 | 2021-04-16T14:11:46 | 2021-04-16T14:11:46 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [] | true | [] |
854,356,559 | 2,197 | fix missing indices_files in load_form_disk | This should fix #2195
`load_from_disk` was failing if there was no "_indices_files" field in state.json. This can happen if the dataset has no indices mapping | closed | https://github.com/huggingface/datasets/pull/2197 | 2021-04-09T09:37:57 | 2021-04-09T09:54:40 | 2021-04-09T09:54:39 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
854,126,114 | 2,196 | `load_dataset` caches two arrow files? | Hi,
I am using datasets to load a large json file of 587G.
I checked the cache folder and found that two arrow files were created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
Why is the first file created?
If I delete it, would I still be able to `load_from_disk`? | closed | https://github.com/huggingface/datasets/issues/2196 | 2021-04-09T03:49:19 | 2021-04-12T05:25:29 | 2021-04-12T05:25:29 | {
"login": "hwijeen",
"id": 29157715,
"type": "User"
} | [
{
"name": "question",
"color": "d876e3"
}
] | false | [] |
854,070,194 | 2,195 | KeyError: '_indices_files' in `arrow_dataset.py` | After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset.
Trace:
```
Traceback (most recent call last):
File "load_data.py", line 11, in <module>
dataset = load_from_disk(SRC)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/load.py", line ... | closed | https://github.com/huggingface/datasets/issues/2195 | 2021-04-09T01:37:12 | 2021-04-09T09:55:09 | 2021-04-09T09:54:39 | {
"login": "samsontmr",
"id": 15007950,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
853,909,452 | 2,194 | py3.7: TypeError: can't pickle _LazyModule objects | While this works fine with py3.8, under py3.7, with a totally new conda env and transformers install:
```
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e .[testing]
export BS=1; rm -rf /tmp/test-clm; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python \
examples/language... | closed | https://github.com/huggingface/datasets/issues/2194 | 2021-04-08T21:02:48 | 2021-04-09T16:56:50 | 2021-04-09T01:52:57 | {
"login": "stas00",
"id": 10676103,
"type": "User"
} | [] | false | [] |
853,725,707 | 2,193 | Filtering/mapping on one column is very slow | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | closed | https://github.com/huggingface/datasets/issues/2193 | 2021-04-08T18:16:14 | 2021-04-26T16:13:59 | 2021-04-26T16:13:59 | {
"login": "norabelrose",
"id": 39116809,
"type": "User"
} | [
{
"name": "question",
"color": "d876e3"
}
] | false | [] |
853,547,910 | 2,192 | Fix typo in huggingface hub | pip knows how to resolve to `huggingface_hub`, but conda doesn't!
The `packaging` dependency is also required for the build to complete. | closed | https://github.com/huggingface/datasets/pull/2192 | 2021-04-08T14:42:24 | 2021-04-08T15:47:41 | 2021-04-08T15:47:40 | {
"login": "LysandreJik",
"id": 30755778,
"type": "User"
} | [] | true | [] |
853,364,204 | 2,191 | Refactorize tests to use Dataset as context manager | Refactorize Dataset tests to use Dataset as context manager. | closed | https://github.com/huggingface/datasets/pull/2191 | 2021-04-08T11:21:04 | 2021-04-19T07:53:11 | 2021-04-19T07:53:10 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "refactoring",
"color": "B67A40"
}
] | true | [] |
853,181,564 | 2,190 | News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs | I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.
```
train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]')
# filtering out examples that a... | closed | https://github.com/huggingface/datasets/issues/2190 | 2021-04-08T07:53:43 | 2021-05-24T10:03:55 | 2021-05-24T10:03:55 | {
"login": "anassalamah",
"id": 8571003,
"type": "User"
} | [] | false | [] |
853,052,891 | 2,189 | save_to_disk doesn't work when we use concatenate_datasets function before creating the final dataset_object. | As you can see, it saves the entire dataset.
@lhoestq
You can check by going through the following example,
```
from datasets import load_from_disk,concatenate_datasets
loaded_data=load_from_disk('/home/gsir059/HNSW-ori/my_knowledge_dataset')
n=20
kb_list=[loaded_data.shard(n, i, contiguous=True) for i... | closed | https://github.com/huggingface/datasets/issues/2189 | 2021-04-08T04:42:53 | 2022-06-01T16:32:15 | 2022-06-01T16:32:15 | {
"login": "shamanez",
"id": 16892570,
"type": "User"
} | [] | false | [] |
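A hedged workaround sketch for the behaviour described above: each shard keeps an indices mapping over the full table, so materializing the selection with `flatten_indices()` before saving should avoid writing the entire underlying table for every shard (paths are hypothetical):

```python
from datasets import concatenate_datasets, load_from_disk

loaded_data = load_from_disk("/path/to/my_knowledge_dataset")
shards = [loaded_data.shard(20, i, contiguous=True) for i in range(20)]
dataset = concatenate_datasets(shards)

# Drop the indices mapping by rewriting the selected rows, then save.
dataset = dataset.flatten_indices()
dataset.save_to_disk("/path/to/output")
```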
853,044,166 | 2,188 | Duplicate data in Timit dataset | I ran a simple script to list all the texts in the Timit dataset and the texts were all the same.
Is this dataset corrupted?
**Code:**
timit = load_dataset("timit_asr")
print(*timit['train']['text'], sep='\n')
**Result:**
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of... | closed | https://github.com/huggingface/datasets/issues/2188 | 2021-04-08T04:21:54 | 2021-04-08T12:13:19 | 2021-04-08T12:13:19 | {
"login": "thanh-p",
"id": 78190188,
"type": "User"
} | [] | false | [] |
852,939,736 | 2,187 | Question (potential issue?) related to datasets caching | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | open | https://github.com/huggingface/datasets/issues/2187 | 2021-04-08T00:16:28 | 2023-01-03T18:30:38 | null | {
"login": "ioana-blue",
"id": 17202292,
"type": "User"
} | [
{
"name": "question",
"color": "d876e3"
}
] | false | [] |
852,840,819 | 2,186 | GEM: new challenge sets | This PR updates the GEM dataset to:
- remove extraneous fields in WikiAuto after https://github.com/huggingface/datasets/pull/2171 fixed the source
- add context and services to Schema Guided Dialog
- Add new or update challenge sets for MLSUM ES and DE, XSUM, and SGD | closed | https://github.com/huggingface/datasets/pull/2186 | 2021-04-07T21:39:07 | 2021-04-07T21:56:35 | 2021-04-07T21:56:35 | {
"login": "yjernite",
"id": 10469459,
"type": "User"
} | [] | true | [] |
852,684,395 | 2,185 | .map() and distributed training | Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`dataset` is then tokenized:
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokeni... | closed | https://github.com/huggingface/datasets/issues/2185 | 2021-04-07T18:22:14 | 2021-10-23T07:11:15 | 2021-04-09T15:38:31 | {
"login": "VictorSanh",
"id": 16107619,
"type": "User"
} | [] | false | [] |
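A minimal sketch of one common pattern for this situation, assuming `torch.distributed` is already initialized: let rank 0 run the (cached) `.map` first, then let the other ranks reuse the cache instead of tokenizing once per process (the tokenize function and path are placeholders):

```python
import torch.distributed as dist
from datasets import load_from_disk

def tokenize_function(batch):
    # Placeholder: real code would call a transformers tokenizer here.
    return {"n_chars": [len(t) for t in batch["text"]]}

datasets = load_from_disk("my_custom_dataset")  # hypothetical path
if dist.get_rank() > 0:
    dist.barrier()  # non-main ranks wait for rank 0 to finish mapping
datasets = datasets.map(tokenize_function, batched=True)
if dist.get_rank() == 0:
    dist.barrier()  # rank 0 is done; the others now hit the cache
```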
852,597,258 | 2,184 | Implementation of class_encode_column | Addresses #2176
I'm happy to discuss the API and internals! | closed | https://github.com/huggingface/datasets/pull/2184 | 2021-04-07T16:47:43 | 2021-04-16T11:44:37 | 2021-04-16T11:26:59 | {
"login": "SBrandeis",
"id": 33657802,
"type": "User"
} | [] | true | [] |
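A minimal usage sketch of the API this PR proposes, assuming the final interface takes the column name (the outputs shown are what `class_encode_column` would plausibly produce by sorting the unique labels):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["good", "bad"], "label": ["pos", "neg"]})
ds = ds.class_encode_column("label")  # string column -> ClassLabel column
print(ds.features["label"])  # ClassLabel(names=['neg', 'pos'], ...)
print(ds["label"])           # [1, 0]
```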
852,518,411 | 2,183 | Fix s3fs tests for py36 and py37+ | Recently several changes happened:
1. latest versions of `fsspec` require python>3.7 for async features
2. `s3fs` added a dependency on `aiobotocore`, which is not compatible with the `moto` s3 mock context manager
This PR fixes both issues, by pinning `fsspec` and `s3fs` for python 3.6, and by using `moto` in ser... | closed | https://github.com/huggingface/datasets/pull/2183 | 2021-04-07T15:17:11 | 2021-04-08T08:54:45 | 2021-04-08T08:54:44 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
852,384,872 | 2,182 | Set default in-memory value depending on the dataset size | Set a default value for `in_memory` depending on the size of the dataset to be loaded.
Close #2179.
TODO:
- [x] Add a section in the docs about this.
- ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be c... | closed | https://github.com/huggingface/datasets/pull/2182 | 2021-04-07T13:00:18 | 2021-04-20T14:20:12 | 2021-04-20T10:04:04 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | true | [] |
852,261,607 | 2,181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | Hi, thanks for the great library. I have used this brilliant library for a couple of small projects, and am now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-pack... | closed | https://github.com/huggingface/datasets/issues/2181 | 2021-04-07T10:26:46 | 2021-04-12T07:15:55 | 2021-04-12T07:15:55 | {
"login": "hwijeen",
"id": 29157715,
"type": "User"
} | [] | false | [] |
852,258,635 | 2,180 | Add tel to xtreme tatoeba | This should fix issue #2149 | closed | https://github.com/huggingface/datasets/pull/2180 | 2021-04-07T10:23:15 | 2021-04-07T15:50:35 | 2021-04-07T15:50:34 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
852,237,957 | 2,179 | Load small datasets in-memory instead of using memory map | Currently all datasets are loaded using memory mapping by default in `load_dataset`.
However this might not be necessary for small datasets. If a dataset is small enough, then it can be loaded in-memory and:
- its memory footprint would be small so it's ok
- in-memory computations/queries would be faster
- the cach... | closed | https://github.com/huggingface/datasets/issues/2179 | 2021-04-07T09:58:16 | 2021-04-20T10:04:04 | 2021-04-20T10:04:03 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "generic discussion",
"color": "c5def5"
}
] | false | [] |
852,215,058 | 2,178 | Fix cast memory usage by using map on subtables | The `cast` operation on a pyarrow Table may create new arrays in memory.
This is an issue since users expect memory mapped datasets to not fill up the RAM.
To fix that I used `map` to write a new arrow file on disk when cast is used.
To make things more convenient I introduced the `arrow` formatting of a dataset, ... | closed | https://github.com/huggingface/datasets/pull/2178 | 2021-04-07T09:30:50 | 2021-04-20T14:20:44 | 2021-04-13T09:28:16 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | true | [] |
852,065,307 | 2,177 | add social thumbnial | # What does this PR do?
I added OpenGraph/Twitter Card support to the docs to create nice social thumbnails.

To be able to add these I needed to install `sphinxext-op... | closed | https://github.com/huggingface/datasets/pull/2177 | 2021-04-07T06:40:06 | 2021-04-07T08:16:01 | 2021-04-07T08:16:01 | {
"login": "philschmid",
"id": 32632186,
"type": "User"
} | [] | true | [] |
851,865,795 | 2,176 | Converting a Value to a ClassLabel | Hi!
In the docs for `cast`, it's noted that `For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the Dataset.`
Would it be possible to have an example that demonstrates such a string <-> ClassLabel conversion using `map`? Thanks! | closed | https://github.com/huggingface/datasets/issues/2176 | 2021-04-06T22:54:16 | 2022-06-01T16:31:49 | 2022-06-01T16:31:49 | {
"login": "nelson-liu",
"id": 7272031,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
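A minimal sketch of such a string <-> ClassLabel conversion with `map`, assuming a `label` column of strings; passing updated `features` to `map` makes the new column a real `ClassLabel`:

```python
from datasets import ClassLabel, Dataset

ds = Dataset.from_dict({"label": ["entailment", "neutral", "contradiction"]})
labels = ClassLabel(names=["entailment", "neutral", "contradiction"])

new_features = ds.features.copy()
new_features["label"] = labels
ds = ds.map(
    lambda ex: {"label": labels.str2int(ex["label"])}, features=new_features
)
print(ds.features["label"])  # ClassLabel(...)
print(ds["label"])           # [0, 1, 2]
```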
851,836,096 | 2,175 | dataset.search_batch() function outputs all -1 indices sometime. | I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_... | closed | https://github.com/huggingface/datasets/issues/2175 | 2021-04-06T21:50:49 | 2021-04-16T12:21:16 | 2021-04-16T12:21:15 | {
"login": "shamanez",
"id": 16892570,
"type": "User"
} | [] | false | [] |
851,383,675 | 2,174 | Pin docutils for better doc | The latest release of docutils makes the navbar in the documentation weird and the Markdown wrongly interpreted:

We had the same problem in Transformers and solved it by pinning docutils (a dep of sphinx... | closed | https://github.com/huggingface/datasets/pull/2174 | 2021-04-06T12:40:20 | 2021-04-06T12:55:53 | 2021-04-06T12:55:53 | {
"login": "sgugger",
"id": 35901082,
"type": "User"
} | [] | true | [] |
851,359,284 | 2,173 | Add OpenSLR dataset | OpenSLR (https://openslr.org/) is a site devoted to hosting speech and language resources, such as training corpora for speech recognition, and software related to speech recognition. There are around 80 speech datasets listed in OpenSLR, currently this PR includes only 9 speech datasets SLR41, SLR42, SLR43, SLR44, SLR... | closed | https://github.com/huggingface/datasets/pull/2173 | 2021-04-06T12:08:34 | 2021-04-12T16:54:46 | 2021-04-12T16:54:46 | {
"login": "cahya-wirawan",
"id": 7669893,
"type": "User"
} | [] | true | [] |
851,229,399 | 2,172 | Pin fsspec lower than 0.9.0 | Today's release of `fsspec` 0.9.0 implied a new release of `s3fs` 0.6.0 but this version breaks the CI (see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/5312/workflows/490f3240-cd1c-4dd1-bb60-b416771c5584/jobs/32734) for example)
I'm pinning `fsspec` until this has been resolved | closed | https://github.com/huggingface/datasets/pull/2172 | 2021-04-06T09:19:09 | 2021-04-06T09:49:27 | 2021-04-06T09:49:26 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
851,090,662 | 2,171 | Fixed the link to wikiauto training data. | closed | https://github.com/huggingface/datasets/pull/2171 | 2021-04-06T07:13:11 | 2021-04-06T16:05:42 | 2021-04-06T16:05:09 | {
"login": "mounicam",
"id": 11708999,
"type": "User"
} | [] | true | [] | |
850,913,228 | 2,170 | Wikipedia historic dumps are deleted but hf/datasets hardcodes dump date | Wikimedia does not keep all historical dumps. For example, as of today https://dumps.wikimedia.org/kowiki/ only provides
```
20201220/ 02-Feb-2021 01:36 -
20210101/ 21-Feb-2021 01:26 -
20210120/ ... | open | https://github.com/huggingface/datasets/issues/2170 | 2021-04-06T03:13:18 | 2021-06-16T01:10:50 | null | {
"login": "leezu",
"id": 946903,
"type": "User"
} | [] | false | [] |
850,456,180 | 2,169 | Updated WER metric implementation to avoid memory issues | This is in order to fix this issue:
https://github.com/huggingface/datasets/issues/2078
| closed | https://github.com/huggingface/datasets/pull/2169 | 2021-04-05T15:43:20 | 2021-04-06T15:02:58 | 2021-04-06T15:02:58 | {
"login": "diego-fustes",
"id": 5707233,
"type": "User"
} | [] | true | [] |
849,957,941 | 2,168 | Preserve split type when reloading dataset | Fixes #2167
Using `eval` is not ideal for security reasons (in web apps I assume), but without it the code would be much more complex IMO.
In terms of style, instead of explicitly importing a private member (`_RelativeInstruction`), we can add these imports at the top of the module:
```python
from . import arr... | closed | https://github.com/huggingface/datasets/pull/2168 | 2021-04-04T20:46:21 | 2021-04-19T10:57:05 | 2021-04-19T09:08:55 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
849,944,891 | 2,167 | Split type not preserved when reloading the dataset | A minimal reproducible example:
```python
>>> from datasets import load_dataset, Dataset
>>> dset = load_dataset("sst", split="train")
>>> dset.save_to_disk("sst")
>>> type(dset.split)
<class 'datasets.splits.NamedSplit'>
>>> dset = Dataset.load_from_disk("sst")
>>> type(dset.split) # NamedSplit expected
<cla... | closed | https://github.com/huggingface/datasets/issues/2167 | 2021-04-04T19:29:54 | 2021-04-19T09:08:55 | 2021-04-19T09:08:55 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | false | [] |
849,778,545 | 2,166 | Regarding Test Sets for the GEM datasets | @yjernite Hi, are the test sets for the GEM datasets scheduled to be [added soon](https://gem-benchmark.com/shared_task)?
e.g.
```
from datasets import load_dataset
DATASET_NAME="common_gen"
data = load_dataset("gem", DATASET_NAME)
```
The test set doesn't have the target or references.
```
data['test... | closed | https://github.com/huggingface/datasets/issues/2166 | 2021-04-04T02:02:45 | 2021-04-06T08:13:12 | 2021-04-06T08:13:12 | {
"login": "vyraun",
"id": 17217068,
"type": "User"
} | [
{
"name": "Dataset discussion",
"color": "72f99f"
}
] | false | [] |
849,771,665 | 2,165 | How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset | Hi,
I'm trying to pretrain a DeepSpeed model using the HF arxiv dataset like:
```
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
engine, _, _, _ = deepspeed.initialize(
... | closed | https://github.com/huggingface/datasets/issues/2165 | 2021-04-04T01:01:48 | 2021-08-24T15:55:35 | 2021-04-07T15:06:04 | {
"login": "y-rokutan",
"id": 24562381,
"type": "User"
} | [] | false | [] |
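A minimal sketch of why an explicit conversion is usually unnecessary: after `set_format(type="torch", ...)`, the dataset already implements `__len__`/`__getitem__` returning tensors, so it can be fed to a `DataLoader` directly:

```python
import torch
from datasets import Dataset

ds = Dataset.from_dict({"input_ids": [[1, 2], [3, 4]], "labels": [0, 1]})
ds.set_format(type="torch", columns=["input_ids", "labels"])

loader = torch.utils.data.DataLoader(ds, batch_size=2)
for batch in loader:
    print(batch["input_ids"].shape)  # torch.Size([2, 2])
```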
849,739,759 | 2,164 | Replace assertTrue(isinstance with assertIsInstance in tests | Replaces all the occurrences of the `assertTrue(isinstance(` pattern with `assertIsInstance`. | closed | https://github.com/huggingface/datasets/pull/2164 | 2021-04-03T21:07:02 | 2021-04-06T14:41:09 | 2021-04-06T14:41:08 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
849,669,366 | 2,163 | Concat only unique fields in DatasetInfo.from_merge | I thought someone from the community with less experience would be interested in fixing this issue, but that wasn't the case.
Fixes #2103 | closed | https://github.com/huggingface/datasets/pull/2163 | 2021-04-03T14:31:30 | 2021-04-06T14:40:00 | 2021-04-06T14:39:59 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [] | true | [] |
849,129,201 | 2,162 | visualization for cc100 is broken | Hi
visualization through dataset viewer for cc100 is broken
https://huggingface.co/datasets/viewer/
thanks a lot
| closed | https://github.com/huggingface/datasets/issues/2162 | 2021-04-02T10:11:13 | 2022-10-05T13:20:24 | 2022-10-05T13:20:24 | {
"login": "dorost1234",
"id": 79165106,
"type": "User"
} | [
{
"name": "nlp-viewer",
"color": "94203D"
}
] | false | [] |
849,127,041 | 2,161 | any possibility to download part of large datasets only? | Hi
Some of the datasets I need, like cc100, are very large, so I wonder if I can download the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling? thanks
"login": "dorost1234",
"id": 79165106,
"type": "User"
} | [] | false | [] |
849,052,921 | 2,160 | data_args.preprocessing_num_workers almost freezes | Hi @lhoestq
I am running this code from huggingface transformers https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py
To speed up tokenization, since I am running on multiple datasets, I am using data_args.preprocessing_num_workers = 4 with the opus100 corpus, but this moves ... | closed | https://github.com/huggingface/datasets/issues/2160 | 2021-04-02T07:56:13 | 2021-04-02T10:14:32 | 2021-04-02T10:14:31 | {
"login": "dorost1234",
"id": 79165106,
"type": "User"
} | [] | false | [] |
848,851,962 | 2,159 | adding ccnet dataset | ## Adding a Dataset
- **Name:** ccnet
- **Description:**
Common Crawl
- **Paper:**
https://arxiv.org/abs/1911.00359
- **Data:**
https://github.com/facebookresearch/cc_net
- **Motivation:**
this is one of the most comprehensive clean monolingual datasets across a variety of languages. Quite importan... | closed | https://github.com/huggingface/datasets/issues/2159 | 2021-04-01T23:28:36 | 2021-04-02T10:05:19 | 2021-04-02T10:05:19 | {
"login": "dorost1234",
"id": 79165106,
"type": "User"
} | [
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
848,506,746 | 2,158 | viewer "fake_news_english" error | When I visit the [Huggingface - viewer](https://huggingface.co/datasets/viewer/) web site, under the dataset "fake_news_english" I've got this error:
> ImportError: To be able to use this dataset, you need to install the following dependencies['openpyxl'] using 'pip install # noqa: requires this pandas optional depe... | closed | https://github.com/huggingface/datasets/issues/2158 | 2021-04-01T14:13:20 | 2022-10-05T13:22:02 | 2022-10-05T13:22:02 | {
"login": "emanuelevivoli",
"id": 9447991,
"type": "User"
} | [
{
"name": "nlp-viewer",
"color": "94203D"
}
] | false | [] |
847,205,239 | 2,157 | updated user permissions based on umask | Updated user permissions based on the running user's umask (#2065). Let me know if `0o666` looks good or whether I should change it to `~umask` only (to give execute permissions as well). | closed | https://github.com/huggingface/datasets/pull/2157 | 2021-03-31T19:38:29 | 2021-04-06T07:19:19 | 2021-04-06T07:19:19 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [] | true | [] |
847,198,295 | 2,156 | User permissions | Updated user permissions based on the running user's umask. Let me know if `0o666` looks good or whether I should change it to `~umask` only (to give execute permissions as well). | closed | https://github.com/huggingface/datasets/pull/2156 | 2021-03-31T19:33:48 | 2021-03-31T19:34:24 | 2021-03-31T19:34:24 | {
"login": "bhavitvyamalik",
"id": 19718818,
"type": "User"
} | [] | true | [] |
846,786,897 | 2,155 | Add table classes to the documentation | Following #2025 , I added the table classes to the documentation
cc @albertvillanova | closed | https://github.com/huggingface/datasets/pull/2155 | 2021-03-31T14:36:10 | 2021-04-01T16:46:30 | 2021-03-31T15:42:08 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
846,763,960 | 2,154 | Adding the NorNE dataset for Norwegian POS and NER | NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, or... | closed | https://github.com/huggingface/datasets/pull/2154 | 2021-03-31T14:22:50 | 2021-04-01T09:27:00 | 2021-04-01T09:16:08 | {
"login": "versae",
"id": 173537,
"type": "User"
} | [] | true | [] |
846,181,502 | 2,153 | load_dataset ignoring features | First of all, I'm sorry if this is a repeated issue or the changes are already in master; I searched and didn't find anything.
I'm using datasets 1.5.0

As you can see, when I load the dataset, the C... | closed | https://github.com/huggingface/datasets/issues/2153 | 2021-03-31T08:30:09 | 2022-10-05T13:29:12 | 2022-10-05T13:29:12 | {
"login": "GuillemGSubies",
"id": 37592763,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
845,751,273 | 2,152 | Update README.md | Updated some descriptions of Wino_Bias dataset. | closed | https://github.com/huggingface/datasets/pull/2152 | 2021-03-31T03:21:19 | 2021-04-01T10:20:37 | 2021-04-01T10:20:36 | {
"login": "JieyuZhao",
"id": 22306304,
"type": "User"
} | [] | true | [] |
844,886,081 | 2,151 | Add support for axis in concatenate datasets | Add support for `axis` (0 or 1) in `concatenate_datasets`.
Close #853. | closed | https://github.com/huggingface/datasets/pull/2151 | 2021-03-30T16:58:44 | 2021-06-23T17:41:02 | 2021-04-19T16:07:18 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | true | [] |
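A minimal usage sketch of the new `axis` argument: `axis=0` keeps the old row-wise behaviour, while `axis=1` joins the columns of datasets with the same number of rows:

```python
from datasets import Dataset, concatenate_datasets

ds1 = Dataset.from_dict({"a": [1, 2]})
ds2 = Dataset.from_dict({"b": ["x", "y"]})

wide = concatenate_datasets([ds1, ds2], axis=1)
print(wide.column_names)  # ['a', 'b']
```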
844,776,448 | 2,150 | Allow pickling of big in-memory tables | This should fix issue #2134
Pickling is limited to <4GiB objects, it's not possible to pickle a big arrow table (for multiprocessing for example).
For big tables, we have to write them on disk and only pickle the path to the table. | closed | https://github.com/huggingface/datasets/pull/2150 | 2021-03-30T15:51:56 | 2021-03-31T10:37:15 | 2021-03-31T10:37:14 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
844,734,076 | 2,149 | Telugu subset missing for xtreme tatoeba dataset | from nlp import load_dataset
train_dataset = load_dataset('xtreme', 'tatoeba.tel')['validation']
ValueError: BuilderConfig tatoeba.tel not found.
but language tel is actually included in xtreme:
https://github.com/google-research/xtreme/blob/master/utils_preprocess.py
def tatoeba_preprocess(args):
lang3_dict ... | closed | https://github.com/huggingface/datasets/issues/2149 | 2021-03-30T15:26:34 | 2022-10-05T13:28:30 | 2022-10-05T13:28:30 | {
"login": "cosmeowpawlitan",
"id": 50871412,
"type": "User"
} | [] | false | [] |
844,700,910 | 2,148 | Add configurable options to `seqeval` metric | Right now `load_metric("seqeval")` only works in the default mode of evaluation (equivalent to conll evaluation).
However, seqeval library [supports](https://github.com/chakki-works/seqeval#support-features) different evaluation schemes (IOB1, IOB2, etc.), which can be plugged in just by supporting additional kwargs... | closed | https://github.com/huggingface/datasets/issues/2148 | 2021-03-30T15:04:06 | 2021-04-15T13:49:46 | 2021-04-15T13:49:46 | {
"login": "marrodion",
"id": 44571847,
"type": "User"
} | [] | false | [] |
844,687,831 | 2,147 | Render docstring return type as inline | This documentation setting will avoid having the return type in a separate line under `Return type`.
See e.g. current docs for `Dataset.to_csv`. | closed | https://github.com/huggingface/datasets/pull/2147 | 2021-03-30T14:55:43 | 2021-03-31T13:11:05 | 2021-03-31T13:11:05 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [
{
"name": "documentation",
"color": "0075ca"
}
] | true | [] |