| id (int64) | number (int64) | title (string) | body (string) | state (string) | html_url (string) | created_at (timestamp) | updated_at (timestamp) | closed_at (timestamp) | user (dict) | labels (list) | is_pull_request (bool) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
664,412,137 | 429 | mlsum | Hello,
The tests for load_real_data fail: as there is no default language subset to download, it looks for a file that does not exist. This bug does not happen when using the load_dataset function, as it asks you to specify a language if you do not, so I submit this PR anyway. The dataset is available at: https... | closed | https://github.com/huggingface/datasets/pull/429 | 2020-07-23T11:52:39 | 2020-07-31T11:46:20 | 2020-07-31T11:46:20 | {
"login": "RachelKer",
"id": 36986299,
"type": "User"
} | [] | true | [] |
664,367,086 | 428 | fix concatenate_datasets | `concatenate_datasets` used to test that the different `nlp.Dataset.schema` match, but this attribute was removed in #423 | closed | https://github.com/huggingface/datasets/pull/428 | 2020-07-23T10:30:59 | 2020-07-23T10:35:00 | 2020-07-23T10:34:58 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
664,341,623 | 427 | Allow sequence features for beam + add processed Natural Questions | ## Allow Sequence features for Beam Datasets + add Natural Questions
### The issue
The steps of beam dataset processing are the following:
- download the source files and send them in a remote storage (gcs)
- process the files using a beam runner (dataflow)
- save output in remote storage (gcs)
- convert outp... | closed | https://github.com/huggingface/datasets/pull/427 | 2020-07-23T09:52:41 | 2020-07-23T13:09:30 | 2020-07-23T13:09:29 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
664,203,897 | 426 | [FEATURE REQUEST] Multiprocessing for dataset.map, dataset.filter | It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset, sending each shard to a process/thread/dask pool, and using the new `nlp.concatenate_datasets()` function to join them all together? | closed | https://github.com/huggingface/datasets/issues/426 | 2020-07-23T05:00:41 | 2021-03-12T09:34:12 | 2020-09-07T14:48:04 | {
"login": "timothyjlaurent",
"id": 2000204,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
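The shard-and-join approach proposed in this issue can be sketched in plain Python. This is a toy stand-in, not the actual `nlp` API: `contiguous_shard`, `process_shard`, and `parallel_map` are hypothetical helpers illustrating the pattern with a thread pool.

```python
from concurrent.futures import ThreadPoolExecutor

def contiguous_shard(data, num_shards, index):
    # Split data into contiguous chunks so that re-concatenating
    # the shards preserves the original order.
    size = (len(data) + num_shards - 1) // num_shards
    return data[index * size:(index + 1) * size]

def process_shard(shard):
    # Stand-in for dataset.map: transform every example in the shard.
    return [example.upper() for example in shard]

def parallel_map(data, num_shards=4):
    shards = [contiguous_shard(data, num_shards, i) for i in range(num_shards)]
    with ThreadPoolExecutor(max_workers=num_shards) as pool:
        processed = pool.map(process_shard, shards)
    # Analogous to concatenating the mapped shards back into one dataset.
    return [example for shard in processed for example in shard]

print(parallel_map(["a", "b", "c", "d", "e"]))  # ['A', 'B', 'C', 'D', 'E']
```

A process pool would follow the same shape; the join step is what `concatenate_datasets` would provide on real `nlp.Dataset` shards.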
664,029,848 | 425 | Correct data structure for PAN-X task in XTREME dataset? | Hi 🤗 team!
## Description of the problem
Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows:
```python
from nlp import load_dataset
# AmazonPhotos.zip is located in data/
dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
dataset_train = dataset['tr... | closed | https://github.com/huggingface/datasets/issues/425 | 2020-07-22T20:29:20 | 2020-08-02T13:30:34 | 2020-08-02T13:30:34 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | false | [] |
663,858,552 | 424 | Web of science | this PR adds the WebofScience dataset
#353 | closed | https://github.com/huggingface/datasets/pull/424 | 2020-07-22T15:38:31 | 2020-07-23T14:27:58 | 2020-07-23T14:27:56 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
663,079,359 | 423 | Change features vs schema logic | ## New logic for `nlp.Features` in datasets
Previously, it was confusing to have `features` and pyarrow's `schema` in `nlp.Dataset`.
However `features` is supposed to be the front-facing object to define the different fields of a dataset, while `schema` is only used to write arrow files.
Changes:
- Remove `sche... | closed | https://github.com/huggingface/datasets/pull/423 | 2020-07-21T14:52:47 | 2020-07-25T09:08:34 | 2020-07-23T10:15:17 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
663,028,497 | 422 | - Corrected encoding for IMDB. | The preparation phase (after the download phase) crashed on windows because of charmap encoding not being able to decode certain characters. This change suggested in Issue #347 fixes it for the IMDB dataset. | closed | https://github.com/huggingface/datasets/pull/422 | 2020-07-21T13:46:59 | 2020-07-22T16:02:53 | 2020-07-22T16:02:53 | {
"login": "ghazi-f",
"id": 25091538,
"type": "User"
} | [] | true | [] |
662,213,864 | 421 | Style change | Ran `make quality` and `make style` on the scripts | closed | https://github.com/huggingface/datasets/pull/421 | 2020-07-20T20:08:29 | 2020-07-22T16:08:40 | 2020-07-22T16:08:39 | {
"login": "lordtt13",
"id": 35500534,
"type": "User"
} | [] | true | [] |
662,029,782 | 420 | Better handle nested features | Changes:
- added arrow schema to features conversion (it's going to be useful to fix #342 )
- make flatten handle deep features (useful for tfrecords conversion in #339 )
- add tests for flatten and features conversions
- the reader now returns the kwargs to instantiate a Dataset (fix circular dependencies) | closed | https://github.com/huggingface/datasets/pull/420 | 2020-07-20T16:44:13 | 2020-07-21T08:20:49 | 2020-07-21T08:09:52 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
661,974,747 | 419 | EmoContext dataset add | EmoContext Dataset add
Signed-off-by: lordtt13 <thakurtanmay72@yahoo.com> | closed | https://github.com/huggingface/datasets/pull/419 | 2020-07-20T15:48:45 | 2020-07-24T08:22:01 | 2020-07-24T08:22:00 | {
"login": "lordtt13",
"id": 35500534,
"type": "User"
} | [] | true | [] |
661,914,873 | 418 | Addition of google drive links to dl_manager | Hello there, I followed the template to create a download script of my own, which works fine for me, although I had to shun the dl_manager because it was downloading nothing from the drive links and instead use gdown.
This is the script for me:
```python
class EmoConfig(nlp.BuilderConfig):
"""BuilderConfig ... | closed | https://github.com/huggingface/datasets/issues/418 | 2020-07-20T14:52:02 | 2020-07-20T15:39:32 | 2020-07-20T15:39:32 | {
"login": "lordtt13",
"id": 35500534,
"type": "User"
} | [] | false | [] |
661,804,054 | 417 | Fix docstrins multiple metrics instances | We change the docstrings of `nlp.Metric.compute`, `nlp.Metric.add` and `nlp.Metric.add_batch` depending on which metric is instantiated. However we had issues when instantiating multiple metrics (docstrings were duplicated).
This should fix #304 | closed | https://github.com/huggingface/datasets/pull/417 | 2020-07-20T13:08:59 | 2020-07-22T09:51:00 | 2020-07-22T09:50:59 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
661,635,393 | 416 | Fix xtreme panx directory | Fix #412 | closed | https://github.com/huggingface/datasets/pull/416 | 2020-07-20T10:09:17 | 2020-07-21T08:15:46 | 2020-07-21T08:15:44 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
660,687,076 | 415 | Something is wrong with WMT 19 kk-en dataset | The translation in the `train` set does not look right:
```
>>>import nlp
>>>from nlp import load_dataset
>>>dataset = load_dataset('wmt19', 'kk-en')
>>>dataset["train"]["translation"][0]
{'kk': 'Trumpian Uncertainty', 'en': 'Трамптық белгісіздік'}
>>>dataset["validation"]["translation"][0]
{'kk': 'Ақша-несие... | open | https://github.com/huggingface/datasets/issues/415 | 2020-07-19T08:18:51 | 2020-07-20T09:54:26 | null | {
"login": "ChenghaoMou",
"id": 32014649,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
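Until the dataset itself is fixed, a possible work-around is to swap the two fields back with a map-style function. This is a sketch on a plain dict; `fix_swapped_translation` is a hypothetical helper, not part of the library.

```python
def fix_swapped_translation(example):
    # Swap the mislabeled 'kk' and 'en' fields back into place.
    t = example["translation"]
    return {"translation": {"kk": t["en"], "en": t["kk"]}}

sample = {"translation": {"kk": "Trumpian Uncertainty",
                          "en": "Трамптық белгісіздік"}}
fixed = fix_swapped_translation(sample)
print(fixed["translation"]["en"])  # Trumpian Uncertainty
```

On a real `nlp.Dataset`, the same function could be applied with `dataset.map`.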
660,654,013 | 414 | from_dict delete? | AttributeError: type object 'Dataset' has no attribute 'from_dict' | closed | https://github.com/huggingface/datasets/issues/414 | 2020-07-19T07:08:36 | 2020-07-21T02:21:17 | 2020-07-21T02:21:17 | {
"login": "hackerxiaobai",
"id": 22817243,
"type": "User"
} | [] | false | [] |
660,063,655 | 413 | Is there a way to download only NQ dev? | Maybe I missed that in the docs, but is there a way to only download the dev set of natural questions (~1 GB)?
As we want to benchmark QA models on different datasets, I would like to avoid downloading the 41GB of training data.
I tried
```
dataset = nlp.load_dataset('natural_questions', split="validation", bea... | closed | https://github.com/huggingface/datasets/issues/413 | 2020-07-18T10:28:23 | 2022-02-11T09:50:21 | 2022-02-11T09:50:21 | {
"login": "tholor",
"id": 1563902,
"type": "User"
} | [] | false | [] |
660,047,139 | 412 | Unable to load XTREME dataset from disk | Hi 🤗 team!
## Description of the problem
Following the [docs](https://huggingface.co/nlp/loading_datasets.html?highlight=xtreme#manually-downloading-files) I'm trying to load the `PAN-X.fr` dataset from the [XTREME](https://github.com/google-research/xtreme) benchmark.
I have manually downloaded the `AmazonPho... | closed | https://github.com/huggingface/datasets/issues/412 | 2020-07-18T09:55:00 | 2020-07-21T08:15:44 | 2020-07-21T08:15:44 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | false | [] |
659,393,398 | 411 | Sbf | This PR adds the Social Bias Frames Dataset (ACL 2020) .
dataset homepage: https://homes.cs.washington.edu/~msap/social-bias-frames/ | closed | https://github.com/huggingface/datasets/pull/411 | 2020-07-17T16:19:45 | 2020-07-21T09:13:46 | 2020-07-21T09:13:45 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
659,242,871 | 410 | 20newsgroup | Add 20Newsgroup dataset.
#353 | closed | https://github.com/huggingface/datasets/pull/410 | 2020-07-17T13:07:57 | 2020-07-20T07:05:29 | 2020-07-20T07:05:28 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
659,128,611 | 409 | train_test_split error: 'dict' object has no attribute 'deepcopy' | `train_test_split` is giving me an error when I try and call it:
`'dict' object has no attribute 'deepcopy'`
## To reproduce
```
dataset = load_dataset('glue', 'mrpc', split='train')
dataset = dataset.train_test_split(test_size=0.2)
```
## Full Stacktrace
```
-------------------------------------------... | closed | https://github.com/huggingface/datasets/issues/409 | 2020-07-17T10:36:28 | 2020-07-21T14:34:52 | 2020-07-21T14:34:52 | {
"login": "morganmcg1",
"id": 20516801,
"type": "User"
} | [] | false | [] |
659,064,144 | 408 | Add tests datasets gcp | Some datasets are available on our google cloud storage in arrow format, so that the users don't need to process the data.
These tests make sure that they're always available. It also makes sure that their scripts are in sync between S3 and the repo.
This should avoid future issues like #407 | closed | https://github.com/huggingface/datasets/pull/408 | 2020-07-17T09:23:27 | 2020-07-17T09:26:57 | 2020-07-17T09:26:56 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
658,672,736 | 407 | MissingBeamOptions for Wikipedia 20200501.en | There may or may not be a regression for the pre-processed Wikipedia dataset. This was working fine 10 commits ago (without having Apache Beam available):
```
nlp.load_dataset('wikipedia', "20200501.en", split='train')
```
And now, having pulled master, I get:
```
Downloading and preparing dataset wikipedia... | closed | https://github.com/huggingface/datasets/issues/407 | 2020-07-16T23:48:03 | 2021-01-12T11:41:16 | 2020-07-17T14:24:28 | {
"login": "mitchellgordon95",
"id": 7490438,
"type": "User"
} | [] | false | [] |
658,581,764 | 406 | Faster Shuffling? | Consider shuffling bookcorpus:
```
dataset = nlp.load_dataset('bookcorpus', split='train')
dataset.shuffle()
```
According to tqdm, this will take around 2.5 hours on my machine to complete (even with the faster version of select from #405). I've also tried with `keep_in_memory=True` and `writer_batch_size=1000`... | closed | https://github.com/huggingface/datasets/issues/406 | 2020-07-16T21:21:53 | 2023-08-16T09:52:39 | 2020-09-07T14:45:25 | {
"login": "mitchellgordon95",
"id": 7490438,
"type": "User"
} | [] | false | [] |
658,580,192 | 405 | Make select() faster by batching reads | Here's a benchmark:
```
dataset = nlp.load_dataset('bookcorpus', split='train')
start = time.time()
dataset.select(np.arange(1000), reader_batch_size=1, load_from_cache_file=False)
end = time.time()
print(f'{end - start}')
start = time.time()
dataset.select(np.arange(1000), reader_batch_size=1000, load_fr... | closed | https://github.com/huggingface/datasets/pull/405 | 2020-07-16T21:19:45 | 2020-07-17T17:05:44 | 2020-07-17T16:51:26 | {
"login": "mitchellgordon95",
"id": 7490438,
"type": "User"
} | [] | true | [] |
658,400,987 | 404 | Add seed in metrics | With #361 we noticed that some metrics were not deterministic.
In this PR I allow the user to specify numpy's seed when instantiating a metric with `load_metric`.
The seed is set only when `compute` is called, and reset afterwards.
Moreover when calling `compute` with the same metric instance (i.e. same experiment... | closed | https://github.com/huggingface/datasets/pull/404 | 2020-07-16T17:27:05 | 2020-07-20T10:12:35 | 2020-07-20T10:12:34 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
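The seed-scoping behaviour described in this PR (set numpy's seed only for the duration of `compute`, then restore the previous state) can be sketched as follows. `MetricSketch` is a hypothetical illustration, not the actual `nlp.Metric` implementation, and its exact-match score is a stand-in for a real metric.

```python
import numpy as np

class MetricSketch:
    def __init__(self, seed=None):
        self.seed = seed

    def compute(self, predictions, references):
        # Save the global RNG state, seed for this call only, restore after.
        state = np.random.get_state()
        if self.seed is not None:
            np.random.seed(self.seed)
        try:
            return float(np.mean(
                [p == r for p, r in zip(predictions, references)]))
        finally:
            np.random.set_state(state)

metric = MetricSketch(seed=42)
print(metric.compute([1, 2, 3, 4], [1, 2, 0, 4]))  # 0.75
```

Restoring the state in a `finally` block keeps the seeding invisible to the rest of the program, even if `compute` raises.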
658,325,756 | 403 | return python objects instead of arrays by default | We were using to_pandas() to convert from arrow types, however it returns numpy arrays instead of python lists.
I fixed it by using to_pydict/to_pylist instead.
Fix #387
It was mentioned in https://github.com/huggingface/transformers/issues/5729
| closed | https://github.com/huggingface/datasets/pull/403 | 2020-07-16T15:51:52 | 2020-07-17T11:37:01 | 2020-07-17T11:37:00 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
658,001,288 | 402 | Search qa | add SearchQA dataset
#336 | closed | https://github.com/huggingface/datasets/pull/402 | 2020-07-16T09:00:10 | 2020-07-16T14:27:00 | 2020-07-16T14:26:59 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
657,996,252 | 401 | add web_questions | add Web Question dataset
#336
Maybe @patrickvonplaten can help with the dummy_data structure? It's still broken. | closed | https://github.com/huggingface/datasets/pull/401 | 2020-07-16T08:54:59 | 2020-08-06T06:16:20 | 2020-08-06T06:16:19 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
657,975,600 | 400 | Web questions | add the WebQuestion dataset
#336 | closed | https://github.com/huggingface/datasets/pull/400 | 2020-07-16T08:28:29 | 2020-07-16T08:50:51 | 2020-07-16T08:42:54 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
657,841,433 | 399 | Spelling mistake | In "Formatting the dataset" part, "The two toehr modifications..." should be "The two other modifications..." ,the word "other" wrong spelled as "toehr". | closed | https://github.com/huggingface/datasets/pull/399 | 2020-07-16T04:37:58 | 2020-07-16T06:49:48 | 2020-07-16T06:49:37 | {
"login": "BlancRay",
"id": 9410067,
"type": "User"
} | [] | true | [] |
657,511,962 | 398 | Add inline links | Add inline links to `Contributing.md` | closed | https://github.com/huggingface/datasets/pull/398 | 2020-07-15T17:04:04 | 2020-07-22T10:14:22 | 2020-07-22T10:14:22 | {
"login": "bharatr21",
"id": 13381361,
"type": "User"
} | [] | true | [] |
657,510,856 | 397 | Add contiguous sharding | This makes dset.shard() play nice with nlp.concatenate_datasets(). When I originally wrote the shard() method, I was thinking about a distributed training scenario, but https://github.com/huggingface/nlp/pull/389 also uses it for splitting the dataset for distributed preprocessing.
Usage:
```
nlp.concatenate_datas... | closed | https://github.com/huggingface/datasets/pull/397 | 2020-07-15T17:02:58 | 2020-07-17T16:59:31 | 2020-07-17T16:59:31 | {
"login": "jarednielsen",
"id": 4564897,
"type": "User"
} | [] | true | [] |
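The difference between strided and contiguous sharding can be sketched in plain Python; `strided_shard` and `contiguous_shard` are hypothetical helpers mirroring the idea, not the actual `shard()` implementation. Contiguous shards concatenate back into the original order, which is what makes sharding compose with `nlp.concatenate_datasets()`.

```python
def strided_shard(data, num_shards, index):
    # Original behaviour: every num_shards-th element, starting at index.
    return data[index::num_shards]

def contiguous_shard(data, num_shards, index):
    # Contiguous behaviour: one consecutive block per shard.
    div, mod = divmod(len(data), num_shards)
    start = index * div + min(index, mod)
    return data[start:start + div + (1 if index < mod else 0)]

data = list(range(10))
strided = [strided_shard(data, 3, i) for i in range(3)]
contiguous = [contiguous_shard(data, 3, i) for i in range(3)]
print(sum(contiguous, []) == data)  # True: order is preserved
print(sum(strided, []) == data)     # False: order is permuted
```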
657,477,952 | 396 | Fix memory issue when doing select | We were passing the `nlp.Dataset` object to get the hash for the new dataset's file name.
Fix #395 | closed | https://github.com/huggingface/datasets/pull/396 | 2020-07-15T16:15:04 | 2020-07-16T08:07:32 | 2020-07-16T08:07:31 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
657,454,983 | 395 | Memory issue when doing select | As noticed in #389, the following code loads the entire wikipedia in memory.
```python
import nlp
w = nlp.load_dataset("wikipedia", "20200501.en", split="train")
w.select([0])
```
This is caused by [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/arrow_dataset.py#L626) for some reason, that ... | closed | https://github.com/huggingface/datasets/issues/395 | 2020-07-15T15:43:38 | 2020-07-16T08:07:31 | 2020-07-16T08:07:31 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | false | [] |
657,425,548 | 394 | Remove remaining nested dict | This PR deletes the remaining unnecessary nested dict
#378 | closed | https://github.com/huggingface/datasets/pull/394 | 2020-07-15T15:05:52 | 2020-07-16T07:39:52 | 2020-07-16T07:39:51 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
657,330,911 | 393 | Fix extracted files directory for the DownloadManager | The cache dir was often cluttered by extracted files because of the download manager.
For downloaded files, we are using the `downloads` directory to make things easier to navigate, but extracted files were still placed at the root of the cache directory. To fix that I changed the directory for extracted files to ca... | closed | https://github.com/huggingface/datasets/pull/393 | 2020-07-15T12:59:55 | 2020-07-17T17:02:16 | 2020-07-17T17:02:14 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
657,313,738 | 392 | Style change detection | Another [PAN task](https://pan.webis.de/clef20/pan20-web/style-change-detection.html). This time about identifying when the style/author changes in documents.
- There's the possibility of adding the [PAN19](https://zenodo.org/record/3577602) and PAN18 style change detection tasks too (these are datasets whose labels... | closed | https://github.com/huggingface/datasets/pull/392 | 2020-07-15T12:32:14 | 2020-07-21T13:18:36 | 2020-07-17T17:13:23 | {
"login": "ghomasHudson",
"id": 13795113,
"type": "User"
} | [] | true | [] |
656,956,384 | 390 | Concatenate datasets | I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema.
This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in... | closed | https://github.com/huggingface/datasets/pull/390 | 2020-07-14T23:24:37 | 2020-07-22T09:49:58 | 2020-07-22T09:49:58 | {
"login": "jarednielsen",
"id": 4564897,
"type": "User"
} | [] | true | [] |
656,921,768 | 389 | Fix pickling of SplitDict | It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, '... | closed | https://github.com/huggingface/datasets/pull/389 | 2020-07-14T21:53:39 | 2020-08-04T14:38:10 | 2020-08-04T14:38:10 | {
"login": "mitchellgordon95",
"id": 7490438,
"type": "User"
} | [] | true | [] |
656,707,497 | 388 | 🐛 [Dataset] Cannot download wmt14, wmt15 and wmt17 | 1. I try downloading `wmt14`, `wmt15`, `wmt17`, `wmt19` with the following code:
```
nlp.load_dataset('wmt14','de-en')
nlp.load_dataset('wmt15','de-en')
nlp.load_dataset('wmt17','de-en')
nlp.load_dataset('wmt19','de-en')
```
The code runs but the download speed is **extremely slow**, the same behaviour is not ob... | closed | https://github.com/huggingface/datasets/issues/388 | 2020-07-14T15:36:41 | 2022-10-04T18:01:28 | 2022-10-04T18:01:28 | {
"login": "SamuelCahyawijaya",
"id": 2826602,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
656,361,357 | 387 | Conversion through to_pandas outputs numpy arrays for lists instead of python objects | In a related question, the conversion through to_pandas outputs numpy arrays for the lists instead of python objects.
Here is an example:
```python
>>> dataset._data.slice(key, 1).to_pandas().to_dict("list")
{'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting hi... | closed | https://github.com/huggingface/datasets/issues/387 | 2020-07-14T06:24:01 | 2020-07-17T11:37:00 | 2020-07-17T11:37:00 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [] | false | [] |
655,839,067 | 386 | Update dataset loading and features - Add TREC dataset | This PR:
- add a template for a new dataset script
- update the caching structure so that the path to the cached data files is also a function of the dataset loading script hash. This way when you update a loading script the data will be automatically updated instead of falling back to the previous version (which is ... | closed | https://github.com/huggingface/datasets/pull/386 | 2020-07-13T13:10:18 | 2020-07-16T08:17:58 | 2020-07-16T08:17:58 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [] | true | [] |
655,663,997 | 385 | Remove unnecessary nested dict | This PR removes the unnecessary nested dictionaries used in some datasets. For now, the following datasets are updated:
- MLQA
- RACE
Will be adding more if necessary.
#378 | closed | https://github.com/huggingface/datasets/pull/385 | 2020-07-13T08:46:23 | 2020-07-15T11:27:38 | 2020-07-15T10:03:53 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
655,291,201 | 383 | Adding the Linguistic Code-switching Evaluation (LinCE) benchmark | Hi,
First of all, this library is really cool! Thanks for putting all of this together!
This PR contains the [Linguistic Code-switching Evaluation (LinCE) benchmark](https://ritual.uh.edu/lince). As described in the official website (FAQ):
> 1. Why do we need LinCE?
>LinCE brings 10 code-switching datasets t... | closed | https://github.com/huggingface/datasets/pull/383 | 2020-07-11T22:35:20 | 2020-07-16T16:19:46 | 2020-07-16T16:19:46 | {
"login": "gaguilar",
"id": 5833357,
"type": "User"
} | [] | true | [] |
655,290,482 | 382 | 1080 | closed | https://github.com/huggingface/datasets/issues/382 | 2020-07-11T22:29:07 | 2020-07-11T22:49:38 | 2020-07-11T22:49:38 | {
"login": "saq194",
"id": 60942503,
"type": "User"
} | [] | false | [] | |
655,277,119 | 381 | NLp | closed | https://github.com/huggingface/datasets/issues/381 | 2020-07-11T20:50:14 | 2020-07-11T20:50:39 | 2020-07-11T20:50:39 | {
"login": "Spartanthor",
"id": 68147610,
"type": "User"
} | [] | false | [] | |
655,226,316 | 378 | [dataset] Structure of MLQA seems unnecessarily nested | The features of the MLQA dataset comprise several nested dictionaries with a single element inside (for `questions` and `ids`): https://github.com/huggingface/nlp/blob/master/datasets/mlqa/mlqa.py#L90-L97
Should we keep this @mariamabarham @patrickvonplaten? Was this added for compatibility with tfds?
```python
... | closed | https://github.com/huggingface/datasets/issues/378 | 2020-07-11T15:16:08 | 2020-07-15T16:17:20 | 2020-07-15T16:17:20 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [] | false | [] |
655,215,790 | 377 | Iyy!!! | closed | https://github.com/huggingface/datasets/issues/377 | 2020-07-11T14:11:07 | 2020-07-11T14:30:51 | 2020-07-11T14:30:51 | {
"login": "ajinomoh",
"id": 68154535,
"type": "User"
} | [] | false | [] | |
655,047,826 | 376 | to_pandas conversion doesn't always work | For some complex nested types, the conversion from Arrow to python dict through pandas doesn't seem to be possible.
Here is an example using the official SQUAD v2 JSON file.
This example was found while investigating #373.
```python
>>> squad = load_dataset('json', data_files={nlp.Split.TRAIN: ["./train-v2.0.... | closed | https://github.com/huggingface/datasets/issues/376 | 2020-07-10T21:33:31 | 2022-10-04T18:05:39 | 2022-10-04T18:05:39 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [] | false | [] |
655,023,307 | 375 | TypeError when computing bertscore | Hi,
I installed nlp 0.3.0 via pip, and my python version is 3.7.
When I tried to compute bertscore with the code:
```
import nlp
bertscore = nlp.load_metric('bertscore')
# load hyps and refs
...
print (bertscore.compute(hyps, refs, lang='en'))
```
I got the following error.
```
Traceback (most rece... | closed | https://github.com/huggingface/datasets/issues/375 | 2020-07-10T20:37:44 | 2022-06-01T15:15:59 | 2022-06-01T15:15:59 | {
"login": "willywsm1013",
"id": 13269577,
"type": "User"
} | [] | false | [] |
654,895,066 | 374 | Add dataset post processing for faiss indexes | # Post processing of datasets for faiss indexes
Now that we can have datasets with embeddings (see `wiki_pr` for example), we can allow users to load the dataset + get the Faiss index that comes with it to do nearest neighbors queries.
## Implementation proposition
- Faiss indexes have to be added to the `nlp.... | closed | https://github.com/huggingface/datasets/pull/374 | 2020-07-10T16:25:59 | 2020-07-13T13:44:03 | 2020-07-13T13:44:01 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
654,845,133 | 373 | Segmentation fault when loading local JSON dataset as of #372 | The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault.
```
dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, f... | closed | https://github.com/huggingface/datasets/issues/373 | 2020-07-10T15:04:25 | 2022-10-04T18:05:47 | 2022-10-04T18:05:47 | {
"login": "vegarab",
"id": 24683907,
"type": "User"
} | [] | false | [] |
654,774,420 | 372 | Make the json script more flexible | Fix https://github.com/huggingface/nlp/issues/359
Fix https://github.com/huggingface/nlp/issues/369
JSON script now can accept JSON files containing a single dict with the records as a list in one attribute to the dict (previously it only accepted JSON files containing records as rows of dicts in the file).
In t... | closed | https://github.com/huggingface/datasets/pull/372 | 2020-07-10T13:15:15 | 2020-07-10T14:52:07 | 2020-07-10T14:52:06 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [] | true | [] |
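The new behaviour can be sketched in plain Python: with SQuAD-style files, the records live as a list under one attribute of a top-level dict, so the loader needs a way to point at that attribute. `load_records` here is a toy stand-in; in the actual json script this roughly corresponds to a field-selection argument, per this PR's description.

```python
import io
import json

def load_records(fp, field=None):
    # Load records either as a top-level JSON list, or from one
    # attribute of a top-level dict (SQuAD-style files).
    obj = json.load(fp)
    return obj[field] if field is not None else obj

squad_style = {"version": "v2.0",
               "data": [{"title": "Doc1"}, {"title": "Doc2"}]}
records = load_records(io.StringIO(json.dumps(squad_style)), field="data")
print(records)  # [{'title': 'Doc1'}, {'title': 'Doc2'}]
```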
654,668,242 | 371 | Fix cached file path for metrics with different config names | The config name was not taken into account to build the cached file path.
It should fix #368 | closed | https://github.com/huggingface/datasets/pull/371 | 2020-07-10T10:02:24 | 2020-07-10T13:45:22 | 2020-07-10T13:45:20 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
654,304,193 | 370 | Allow indexing Dataset via np.ndarray | closed | https://github.com/huggingface/datasets/pull/370 | 2020-07-09T19:43:15 | 2020-07-10T14:05:44 | 2020-07-10T14:05:43 | {
"login": "jarednielsen",
"id": 4564897,
"type": "User"
} | [] | true | [] | |
654,186,890 | 369 | can't load local dataset: pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries | Trying to load a local SQuAD-formatted dataset (from a JSON file, about 60MB):
```
dataset = nlp.load_dataset(path='json', data_files={nlp.Split.TRAIN: ["./path/to/file.json"]})
```
causes
```
Traceback (most recent call last):
File "dataloader.py", line 9, in <module>
["./path/to/file.json"]})
File "/... | closed | https://github.com/huggingface/datasets/issues/369 | 2020-07-09T16:16:53 | 2020-12-15T23:07:22 | 2020-07-10T14:52:06 | {
"login": "vegarab",
"id": 24683907,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
654,087,251 | 368 | load_metric can't acquire lock anymore | I can't load metric (glue) anymore after an error in a previous run. I even removed the whole cache folder `/home/XXX/.cache/huggingface/`, and the issue persisted. What are the steps to fix this?
Traceback (most recent call last):
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/n... | closed | https://github.com/huggingface/datasets/issues/368 | 2020-07-09T14:04:09 | 2020-07-10T13:45:20 | 2020-07-10T13:45:20 | {
"login": "ydshieh",
"id": 2521628,
"type": "User"
} | [] | false | [] |
654,012,984 | 367 | Update Xtreme to add PAWS-X es | This PR adds the `PAWS-X.es` in the Xtreme dataset #362 | closed | https://github.com/huggingface/datasets/pull/367 | 2020-07-09T12:14:37 | 2020-07-09T12:37:11 | 2020-07-09T12:37:10 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
653,954,896 | 366 | Add quora dataset | Added the [Quora question pairs dataset](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs).
Implementation Notes:
- I used the original version provided on the quora website. There's also a [Kaggle competition](https://www.kaggle.com/c/quora-question-pairs) which has a nice train/test sp... | closed | https://github.com/huggingface/datasets/pull/366 | 2020-07-09T10:34:22 | 2020-07-13T17:35:21 | 2020-07-13T17:35:21 | {
"login": "ghomasHudson",
"id": 13795113,
"type": "User"
} | [] | true | [] |
653,845,964 | 365 | How to augment data? | Is there any clean way to augment data?
For now my work-around is to use batched map, like this :
```python
def aug(samples):
# Simply copy the existing data to have x2 amount of data
for k, v in samples.items():
samples[k].extend(v)
return samples
dataset = dataset.map(aug, batched=T... | closed | https://github.com/huggingface/datasets/issues/365 | 2020-07-09T07:52:37 | 2020-07-10T09:12:07 | 2020-07-10T08:22:15 | {
"login": "astariul",
"id": 43774355,
"type": "User"
} | [] | false | [] |
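The work-around above relies on the fact that a batched map function may return more rows than it receives, as long as all columns stay the same length. A minimal sketch of that contract on a plain column-dict batch (`duplicate` is a hypothetical augmentation function):

```python
def duplicate(samples):
    # Return every row twice: all columns grow together,
    # so column lengths stay equal.
    return {k: v + v for k, v in samples.items()}

batch = {"text": ["a", "b"], "label": [0, 1]}
augmented = duplicate(batch)
print(augmented)  # {'text': ['a', 'b', 'a', 'b'], 'label': [0, 1, 0, 1]}
```

Returning a new dict, rather than extending the input in place as in the snippet above, keeps the original batch untouched.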
653,821,597 | 364 | add MS MARCO dataset | This PR adds the MS MARCO dataset as requested in issue #336. MS MARCO has multiple tasks, including:
- Passage and Document Retrieval
- Keyphrase Extraction
- QA and NLG
This PR only adds the 2 versions of the QA and NLG task dataset, which were released with the original paper here: https://arxiv.org/pd... | closed | https://github.com/huggingface/datasets/pull/364 | 2020-07-09T07:11:19 | 2020-08-06T06:15:49 | 2020-08-06T06:15:48 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
653,821,172 | 363 | Adding support for generic multi-dimensional tensors and auxiliary image data for multimodal datasets | nlp/features.py:
The main factory class is MultiArray, every single time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples on working with this in datas... | closed | https://github.com/huggingface/datasets/pull/363 | 2020-07-09T07:10:30 | 2020-08-24T09:59:35 | 2020-08-24T09:59:35 | {
"login": "eltoto1219",
"id": 14030663,
"type": "User"
} | [] | true | [] |
653,766,245 | 362 | [dataset subset missing] xtreme paws-x | I tried nlp.load_dataset('xtreme', 'PAWS-X.es') but got a ValueError.
It turns out that the subset for Spanish is missing:
https://github.com/google-research-datasets/paws/tree/master/pawsx | closed | https://github.com/huggingface/datasets/issues/362 | 2020-07-09T05:04:54 | 2020-07-09T12:38:42 | 2020-07-09T12:38:42 | {
"login": "cosmeowpawlitan",
"id": 50871412,
"type": "User"
} | [] | false | [] |
653,757,376 | 361 | 🐛 [Metrics] ROUGE is non-deterministic | If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different.
Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem.
Example of F-score for ROUGE-1, ROUGE-2, ROUGE-L in 2 differe... | closed | https://github.com/huggingface/datasets/issues/361 | 2020-07-09T04:39:37 | 2022-09-09T15:20:55 | 2020-07-20T23:48:37 | {
"login": "astariul",
"id": 43774355,
"type": "User"
} | [] | false | [] |
653,687,176 | 360 | [Feature request] Add dataset.ragged_map() function for many-to-many transformations | `dataset.map()` enables one-to-one transformations. Input one example and output one example. This is helpful for tokenizing and cleaning individual lines.
`dataset.filter()` enables one-to-(one-or-none) transformations. Input one example and output either zero/one example. This is helpful for removing portions from t... | closed | https://github.com/huggingface/datasets/issues/360 | 2020-07-09T01:04:43 | 2020-07-09T19:31:51 | 2020-07-09T19:31:51 | {
"login": "jarednielsen",
"id": 4564897,
"type": "User"
} | [] | false | [] |
653,656,279 | 359 | ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures | I tried using the JSON dataloader to load some JSON Lines files, but got an exception in the parse_schema function.
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-23-9aecfbee53bd> in <mo... | closed | https://github.com/huggingface/datasets/issues/359 | 2020-07-08T23:24:05 | 2020-07-10T14:52:06 | 2020-07-10T14:52:06 | {
"login": "timothyjlaurent",
"id": 2000204,
"type": "User"
} | [] | false | [] |
653,645,121 | 358 | Starting to add some real doc | Adding a lot of documentation for:
- load a dataset
- explore the dataset object
- process data with the dataset
- add a new dataset script
- share a dataset script
- full package reference
This version of the doc can be explored here: https://2219-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.htm... | closed | https://github.com/huggingface/datasets/pull/358 | 2020-07-08T22:53:03 | 2020-07-14T09:58:17 | 2020-07-14T09:58:15 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [] | true | [] |
653,642,292 | 357 | Add hashes to cnn_dailymail | The URL hashes are helpful for comparing results from other sources. | closed | https://github.com/huggingface/datasets/pull/357 | 2020-07-08T22:45:21 | 2020-07-13T14:16:38 | 2020-07-13T14:16:38 | {
"login": "jbragg",
"id": 2238344,
"type": "User"
} | [] | true | [] |
653,537,388 | 356 | Add text dataset | Usage:
```python
from nlp import load_dataset
dset = load_dataset("text", data_files="/path/to/file.txt")["train"]
```
I created a dummy_data.zip which contains three files: `train.txt`, `test.txt`, `dev.txt`. Each of these contains two lines. It passes
```bash
RUN_SLOW=1 pytest tests/test_dataset_common... | closed | https://github.com/huggingface/datasets/pull/356 | 2020-07-08T19:21:53 | 2020-07-10T14:19:03 | 2020-07-10T14:19:03 | {
"login": "jarednielsen",
"id": 4564897,
"type": "User"
} | [] | true | [] |
653,451,013 | 355 | can't load SNLI dataset | `nlp` seems to load `snli` from some URL based on nlp.stanford.edu. This subdomain is frequently down -- including right now, when I'd like to load `snli` in a Colab notebook, but can't.
Is there a plan to move these datasets to huggingface servers for a more stable solution?
Btw, here's the stack trace:
```
... | closed | https://github.com/huggingface/datasets/issues/355 | 2020-07-08T16:54:14 | 2020-07-18T05:15:57 | 2020-07-15T07:59:01 | {
"login": "jxmorris12",
"id": 13238952,
"type": "User"
} | [] | false | [] |
653,357,617 | 354 | More faiss control | Allow users to specify a faiss index they created themselves, as sometimes indexes can be composite for examples | closed | https://github.com/huggingface/datasets/pull/354 | 2020-07-08T14:45:20 | 2020-07-09T09:54:54 | 2020-07-09T09:54:51 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
653,250,611 | 353 | [Dataset requests] New datasets for Text Classification | We are missing a few datasets for Text Classification which is an important field.
Namely, it would be really nice to add:
- [x] TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.trec_dataset) **[done]**
- #386
- [x] Yelp-5
- #... | open | https://github.com/huggingface/datasets/issues/353 | 2020-07-08T12:17:58 | 2025-04-05T09:28:15 | null | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [
{
"name": "help wanted",
"color": "008672"
},
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
653,128,883 | 352 | 🐛[BugFix]fix seqeval | Fix seqeval processing of labels such as 'B', 'B-ARGM-LOC'
"login": "AlongWY",
"id": 20281571,
"type": "User"
} | [] | true | [] |
652,424,048 | 351 | add pandas dataset | Create a dataset from serialized pandas dataframes.
Usage:
```python
from nlp import load_dataset
dset = load_dataset("pandas", data_files="df.pkl")["train"]
``` | closed | https://github.com/huggingface/datasets/pull/351 | 2020-07-07T15:38:07 | 2020-07-08T14:15:16 | 2020-07-08T14:15:15 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
652,398,691 | 350 | add from_pandas and from_dict | I added two new methods to the `Dataset` class:
- `from_pandas()` to create a dataset from a pandas dataframe
- `from_dict()` to create a dataset from a dictionary (keys = columns)
It uses the `pa.Table.from_pandas` and `pa.Table.from_pydict` functions to do so.
It is also possible to specify the feature types v... | closed | https://github.com/huggingface/datasets/pull/350 | 2020-07-07T15:03:53 | 2020-07-08T14:14:33 | 2020-07-08T14:14:32 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
652,231,571 | 349 | Hyperpartisan news detection | Adding the hyperpartisan news detection dataset from PAN. This contains news article text, labelled with whether they're hyper-partisan and what kinds of biases they display.
Implementation notes:
- As with many PAN tasks, the data is hosted on [Zenodo](https://zenodo.org/record/1489920) and must be requested before... | closed | https://github.com/huggingface/datasets/pull/349 | 2020-07-07T11:06:37 | 2020-07-07T20:47:27 | 2020-07-07T14:57:11 | {
"login": "ghomasHudson",
"id": 13795113,
"type": "User"
} | [] | true | [] |
652,158,308 | 348 | Add OSCAR dataset | I don't know if the tests pass; when I run them, they try to download the whole corpus, which is around 3.5 TB compressed, and I don't have that kind of space. I'll really need some help with it 😅
Thanks! | closed | https://github.com/huggingface/datasets/pull/348 | 2020-07-07T09:22:07 | 2021-05-03T22:07:08 | 2021-02-09T10:19:19 | {
"login": "pjox",
"id": 635220,
"type": "User"
} | [] | true | [] |
652,106,567 | 347 | 'cp950' codec error from load_dataset('xtreme', 'tydiqa') | 
I guess the error is related to a Python source-encoding issue: my PC is trying to decode the source code with the wrong encoding tools, perhaps:
https://www.python.org/dev/peps/pep-0263/
I gues... | closed | https://github.com/huggingface/datasets/issues/347 | 2020-07-07T08:14:23 | 2020-09-07T14:51:45 | 2020-09-07T14:51:45 | {
"login": "cosmeowpawlitan",
"id": 50871412,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
652,044,151 | 346 | Add emotion dataset | Hello 🤗 team!
I am trying to add an emotion classification dataset ([link](https://github.com/dair-ai/emotion_dataset)) to `nlp` but I am a bit stuck about what I should do when the URL for the dataset is not a ZIP file, but just a pickled `pandas.DataFrame` (see [here](https://www.dropbox.com/s/607ptdakxuh5i4s/me... | closed | https://github.com/huggingface/datasets/pull/346 | 2020-07-07T06:35:41 | 2022-05-30T15:16:44 | 2020-07-13T14:39:38 | {
"login": "lewtun",
"id": 26859204,
"type": "User"
} | [] | true | [] |
651,761,201 | 345 | Supporting documents in ELI5 | I was attempting to use the ELI5 dataset, when I realized that huggingface does not provide the supporting documents (the source documents from the common crawl). Without the supporting documents, this makes the dataset about as useful for my project as a block of cheese, or some other more apt metaphor. According to ... | closed | https://github.com/huggingface/datasets/issues/345 | 2020-07-06T19:14:13 | 2020-10-27T15:38:45 | 2020-10-27T15:38:45 | {
"login": "saverymax",
"id": 29262273,
"type": "User"
} | [] | false | [] |
651,495,246 | 344 | Search qa | This PR adds the Search QA dataset used in **SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine**. The dataset has the following config names:
- raw_jeopardy: raw data
- train_test_val: the split version
#336 | closed | https://github.com/huggingface/datasets/pull/344 | 2020-07-06T12:23:16 | 2020-07-16T08:58:16 | 2020-07-16T08:58:16 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
651,419,630 | 343 | Fix nested tensorflow format | In #339 and #337 we are thinking about adding a way to export datasets to tfrecords.
However, I noticed that it was not possible to do `dset.set_format("tensorflow")` on datasets with nested features like `squad`. I fixed that using nested map operations to convert features to `tf.ragged.constant`.
I also added ... | closed | https://github.com/huggingface/datasets/pull/343 | 2020-07-06T10:13:45 | 2020-07-06T13:11:52 | 2020-07-06T13:11:51 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
651,333,194 | 342 | Features should be updated when `map()` changes schema | `dataset.map()` can change the schema and column names.
We should update the features in this case (with what is possible to infer). | closed | https://github.com/huggingface/datasets/issues/342 | 2020-07-06T08:03:23 | 2020-07-23T10:15:16 | 2020-07-23T10:15:16 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [] | false | [] |
650,611,969 | 341 | add fever dataset | This PR adds the FEVER dataset https://fever.ai/ used in the paper: FEVER: a large-scale dataset for Fact Extraction and VERification (https://arxiv.org/pdf/1803.05355.pdf).
#336 | closed | https://github.com/huggingface/datasets/pull/341 | 2020-07-03T13:53:07 | 2020-07-06T13:03:48 | 2020-07-06T13:03:47 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
650,533,920 | 340 | Update cfq.py | Make the dataset name consistent with the paper: Compositional Freebase Question => Compositional Freebase Questions.
"login": "brainshawn",
"id": 4437290,
"type": "User"
} | [] | true | [] |
650,156,468 | 339 | Add dataset.export() to TFRecords | Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitt... | closed | https://github.com/huggingface/datasets/pull/339 | 2020-07-02T19:26:27 | 2020-07-22T09:16:12 | 2020-07-22T09:16:12 | {
"login": "jarednielsen",
"id": 4564897,
"type": "User"
} | [] | true | [] |
650,057,253 | 338 | Run `make style` | These files get changed when I run `make style` on an unrelated PR. Upstreaming these changes so development on a different branch can be easier. | closed | https://github.com/huggingface/datasets/pull/338 | 2020-07-02T16:19:47 | 2020-07-02T18:03:10 | 2020-07-02T18:03:10 | {
"login": "jarednielsen",
"id": 4564897,
"type": "User"
} | [] | true | [] |
650,035,887 | 337 | [Feature request] Export Arrow dataset to TFRecords | The TFRecord generation process is error-prone and requires complex separate Python scripts to download and preprocess the data. I propose to combine the user-friendly features of `nlp` with the speed and efficiency of TFRecords. Sample API:
```python
# use these existing methods
ds = load_dataset("wikitext", "wik... | closed | https://github.com/huggingface/datasets/issues/337 | 2020-07-02T15:47:12 | 2020-07-22T09:16:12 | 2020-07-22T09:16:12 | {
"login": "jarednielsen",
"id": 4564897,
"type": "User"
} | [] | false | [] |
649,914,203 | 336 | [Dataset requests] New datasets for Open Question Answering | We are still missing a few datasets for Open Question Answering, which is currently a field in strong development.
Namely, it would be really nice to add:
- WebQuestions (Berant et al., 2013) [done]
- CuratedTrec (Baudis et al. 2015) [not open-source]
- MS-MARCO (Nguyen et al. 2016) [done]
- SearchQA (Dunn et al.... | closed | https://github.com/huggingface/datasets/issues/336 | 2020-07-02T13:03:03 | 2020-07-16T09:04:22 | 2020-07-16T09:04:22 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [
{
"name": "help wanted",
"color": "008672"
},
{
"name": "dataset request",
"color": "e99695"
}
] | false | [] |
649,765,179 | 335 | BioMRC Dataset presented in BioNLP 2020 ACL Workshop | closed | https://github.com/huggingface/datasets/pull/335 | 2020-07-02T09:03:41 | 2020-07-15T08:02:07 | 2020-07-15T08:02:07 | {
"login": "PetrosStav",
"id": 15162021,
"type": "User"
} | [] | true | [] | |
649,661,791 | 334 | Add dataset.shard() method | Fixes https://github.com/huggingface/nlp/issues/312 | closed | https://github.com/huggingface/datasets/pull/334 | 2020-07-02T06:05:19 | 2020-07-06T12:35:36 | 2020-07-06T12:35:36 | {
"login": "jarednielsen",
"id": 4564897,
"type": "User"
} | [] | true | [] |
649,236,516 | 333 | fix variable name typo | closed | https://github.com/huggingface/datasets/pull/333 | 2020-07-01T19:13:50 | 2020-07-24T15:43:31 | 2020-07-24T08:32:16 | {
"login": "stas00",
"id": 10676103,
"type": "User"
} | [] | true | [] | |
649,140,135 | 332 | Add wiki_dpr | Presented in the [Dense Passage Retrieval paper](https://arxiv.org/pdf/2004.04906.pdf), this dataset consists of 21M passages from the English Wikipedia along with their 768-dim embeddings computed using DPR's context encoder.
Note on the implementation:
- There are two configs: with and without the embeddings (73G... | closed | https://github.com/huggingface/datasets/pull/332 | 2020-07-01T17:12:00 | 2020-07-06T12:21:17 | 2020-07-06T12:21:16 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
648,533,199 | 331 | Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError` | ```
>>> import nlp
>>> nlp.load_dataset('cnn_dailymail', '3.0.0')
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...
Traceback (most recent call last):
File "<stdin>", line 1, in... | closed | https://github.com/huggingface/datasets/issues/331 | 2020-06-30T22:21:33 | 2020-07-09T13:03:40 | 2020-07-09T13:03:40 | {
"login": "jxmorris12",
"id": 13238952,
"type": "User"
} | [
{
"name": "dataset bug",
"color": "2edb81"
}
] | false | [] |
648,525,720 | 330 | Doc red | Adding [DocRED](https://github.com/thunlp/DocRED) - a relation extraction dataset which tests document-level RE. A few implementation notes:
- There are 2 separate versions of the training set - *annotated* and *distant*. Instead of `nlp.Split.Train` I've used the splits `"train_annotated"` and `"train_distant"` to ... | closed | https://github.com/huggingface/datasets/pull/330 | 2020-06-30T22:05:31 | 2020-07-06T12:10:39 | 2020-07-05T12:27:29 | {
"login": "ghomasHudson",
"id": 13795113,
"type": "User"
} | [] | true | [] |
648,446,979 | 329 | [Bug] FileLock dependency incompatible with filesystem | I'm downloading a dataset successfully with
`load_dataset("wikitext", "wikitext-2-raw-v1")`
But when I attempt to cache it on an external volume, it hangs indefinitely:
`load_dataset("wikitext", "wikitext-2-raw-v1", cache_dir="/fsx") # /fsx is an external volume mount`
The filesystem when hanging looks like thi... | closed | https://github.com/huggingface/datasets/issues/329 | 2020-06-30T19:45:31 | 2024-12-26T15:13:39 | 2020-06-30T21:33:06 | {
"login": "jarednielsen",
"id": 4564897,
"type": "User"
} | [] | false | [] |
648,326,841 | 328 | Fork dataset | We have a multi-task learning model training setup that I'm trying to convert to use the Arrow-based nlp dataset.
We're currently training a custom TensorFlow model but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers.
Our preprocessing flow parses raw text and... | closed | https://github.com/huggingface/datasets/issues/328 | 2020-06-30T16:42:53 | 2020-07-06T21:43:59 | 2020-07-06T21:43:59 | {
"login": "timothyjlaurent",
"id": 2000204,
"type": "User"
} | [] | false | [] |
648,312,858 | 327 | set seed for shuffling tests | Some tests were randomly failing because of a missing seed in a test for `train_test_split(shuffle=True)` | closed | https://github.com/huggingface/datasets/pull/327 | 2020-06-30T16:21:34 | 2020-07-02T08:34:05 | 2020-07-02T08:34:04 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
648,126,103 | 326 | Large dataset in Squad2-format | At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community.
Due to the computing power required, we split it into multiple tiles, but they are all in the same format.
Right now the most important facts about it are these:
- Contexts: 1.047.671
- questions: 1.677... | closed | https://github.com/huggingface/datasets/issues/326 | 2020-06-30T12:18:59 | 2020-07-09T09:01:50 | 2020-07-09T09:01:50 | {
"login": "flozi00",
"id": 47894090,
"type": "User"
} | [] | false | [] |
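Each record above is one flattened, pipe-delimited row of the underlying issues table. A minimal stdlib sketch of reassembling such a row into a keyed record — the column order is an assumption taken from the table header, and the parser assumes no field value itself contains the ` | ` delimiter (long issue bodies in this dump may violate that, so treat it as illustrative only):

```python
# Sketch: parse one flattened row of this dump back into a dict keyed by
# the table's column names. Column order is assumed from the table header;
# splitting on " | " only works when no field value contains that delimiter.
FIELDS = [
    "id", "number", "title", "body", "state", "html_url",
    "created_at", "updated_at", "closed_at", "user", "labels",
    "is_pull_request", "comments",
]

def parse_row(row: str, fields=FIELDS) -> dict:
    parts = [p.strip() for p in row.split(" | ")]
    if len(parts) != len(fields):
        raise ValueError(f"expected {len(fields)} fields, got {len(parts)}")
    return dict(zip(fields, parts))

# One of the shorter rows above, joined back onto a single line:
row = ("653642292 | 357 | Add hashes to cnn_dailymail | "
       "The URL hashes are helpful for comparing results from other sources. | "
       "closed | https://github.com/huggingface/datasets/pull/357 | "
       "2020-07-08T22:45:21 | 2020-07-13T14:16:38 | 2020-07-13T14:16:38 | "
       "{'login': 'jbragg', 'id': 2238344, 'type': 'User'} | [] | True | []")
record = parse_row(row)
print(record["number"], record["state"])  # 357 closed
```

Real bodies in this dump frequently contain pipes, code fences, and multi-line text, so a robust loader should read the original dataset directly rather than re-parse the rendered rows.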