Columns: dataset_name (string, length 2–128), description (string, length 1–9.7k), prompt (string, length 59–185)
MIND
MIcrosoft News Dataset (MIND) is a large-scale dataset for news recommendation research. It was collected from anonymized behavior logs of the Microsoft News website. The mission of MIND is to serve as a benchmark dataset for news recommendation and facilitate the research in news recommendation and recommender systems are...
Provide a detailed description of the following dataset: MIND
MLQA
MLQA (MultiLingual Question Answering) is a benchmark dataset for evaluating cross-lingual question answering performance. MLQA consists of over 5K extractive QA instances (12K in English) in SQuAD format in seven languages - English, Arabic, German, Spanish, Hindi, Vietnamese and Simplified Chinese. MLQA is highly par...
Provide a detailed description of the following dataset: MLQA
KdConv
KdConv is a Chinese multi-domain Knowledge-driven Conversation dataset, grounding the topics in multi-turn conversations to knowledge graphs. KdConv contains 4.5K conversations from three domains (film, music, and travel), and 86K utterances with an average turn number of 19.0. These conversations contain in-depth disc...
Provide a detailed description of the following dataset: KdConv
STACKEX
STACKEX is a keyphrase generation dataset that expands beyond the only previously existing genre for this task (i.e., academic writing).
Provide a detailed description of the following dataset: STACKEX
SciREX
SCIREX is a document level IE dataset that encompasses multiple IE tasks, including salient entity identification and document level N-ary relation identification from scientific articles. The dataset is annotated by integrating automatic and human annotations, leveraging existing scientific knowledge resources.
Provide a detailed description of the following dataset: SciREX
CH-SIMS
CH-SIMS is a Chinese single- and multimodal sentiment analysis dataset which contains 2,281 refined video segments in the wild with both multimodal and independent unimodal annotations. It allows researchers to study the interaction between modalities or use independent unimodal annotations for unimodal sentiment analy...
Provide a detailed description of the following dataset: CH-SIMS
WCEP
The WCEP dataset for multi-document summarization (MDS) consists of short, human-written summaries about news events, obtained from the Wikipedia Current Events Portal (WCEP), each paired with a cluster of news articles associated with an event. These articles consist of sources cited by editors on WCEP, and are extend...
Provide a detailed description of the following dataset: WCEP
MATINF
Maternal and Infant (MATINF) Dataset is a large-scale dataset jointly labeled for classification, question answering and summarization in the domain of maternity and baby caring in Chinese. An entry in the dataset includes four fields: question (Q), description (D), class (C) and answer (A). Nearly two million quest...
Provide a detailed description of the following dataset: MATINF
FOBIE
The Focused Open Biology Information Extraction (FOBIE) dataset aims to support IE from Computer-Aided Biomimetics. The dataset contains ~1,500 sentences from scientific biological texts. These sentences are annotated with TRADE-OFFS and syntactically similar relations between unbounded arguments, as well as argument-m...
Provide a detailed description of the following dataset: FOBIE
CODA-19
CODA-19 is a human-annotated dataset that labels the Background, Purpose, Method, Finding/Contribution, and Other segments of 10,966 English abstracts in the COVID-19 Open Research Dataset. CODA-19 was created by 248 crowd workers from Amazon Mechanical Turk collectively within ten days. Each abstract was annotated by nine...
Provide a detailed description of the following dataset: CODA-19
RWWD
Real World Worry Dataset (RWWD) captures the emotional responses of UK residents to COVID-19 at a point in time when the impact of the COVID-19 situation affected the lives of all individuals in the UK. The data were collected on the 6th and 7th of April 2020, a time at which the UK was under lockdown (news, 2020), and...
Provide a detailed description of the following dataset: RWWD
COVID-Q
COVID-Q consists of COVID-19 questions which have been annotated into a broad category (e.g. Transmission, Prevention) and a more specific class such that questions in the same class are all asking the same thing.
Provide a detailed description of the following dataset: COVID-Q
WT-WT
Will-They-Won't-They (WT-WT) is a large dataset of English tweets targeted at stance detection for the rumor verification task. The dataset is constructed based on tweets that discuss five recent merger and acquisition (M&A) operations of US companies, mainly from the healthcare sector. All the annotations are carri...
Provide a detailed description of the following dataset: WT-WT
iSarcasm
iSarcasm is a dataset of tweets, each labelled as either sarcastic or non_sarcastic. Each sarcastic tweet is further labelled for one of the following types of ironic speech:
- sarcasm: tweets that contradict the state of affairs and are critical towards an addressee;
- irony: tweets that contradict the state of af...
Provide a detailed description of the following dataset: iSarcasm
KLEJ
The KLEJ benchmark (Kompleksowa Lista Ewaluacji Językowych) is a set of nine evaluation tasks for Polish language understanding. Key benchmark features:
- It contains a diverse set of tasks from different domains and with different objectives.
- Most tasks are created from existing datasets but the auth...
Provide a detailed description of the following dataset: KLEJ
XQuAD
XQuAD (Cross-lingual Question Answering Dataset) is a benchmark dataset for evaluating cross-lingual question answering performance. The dataset consists of a subset of 240 paragraphs and 1190 question-answer pairs from the development set of SQuAD v1.1 (Rajpurkar et al., 2016) together with their professional translat...
Provide a detailed description of the following dataset: XQuAD
Microsoft Research Multimodal Aligned Recipe Corpus
To construct the MICROSOFT RESEARCH MULTIMODAL ALIGNED RECIPE CORPUS the authors first extract a large number of text and video recipes from the web. The goal is to find joint alignments between multiple text recipes and multiple video recipes for the same dish. The task is challenging, as different recipes vary in the...
Provide a detailed description of the following dataset: Microsoft Research Multimodal Aligned Recipe Corpus
ClarQ
ClarQ consists of ∼2M examples distributed across 173 domains of StackExchange. The dataset is meant for training and evaluating clarification question generation systems.
Provide a detailed description of the following dataset: ClarQ
TechQA
TECHQA is a domain-adaptation question answering dataset for the technical support domain. The TECHQA corpus highlights two real-world issues from the automated customer support domain. First, it contains actual questions posed by users on a technical forum, rather than questions generated specifically for a competitio...
Provide a detailed description of the following dataset: TechQA
Refer360°
Refer360° is a novel large-scale referring expression recognition dataset consisting of 17,137 instruction sequences and ground-truth actions for completing these instructions in 360° scenes.
Provide a detailed description of the following dataset: Refer360°
MUStARD
We release the MUStARD dataset which is a multimodal video corpus for research in automated sarcasm discovery. The dataset is compiled from popular TV shows including Friends, The Golden Girls, The Big Bang Theory, and Sarcasmaholics Anonymous. MUStARD consists of audiovisual utterances annotated with sarcasm labels. E...
Provide a detailed description of the following dataset: MUStARD
ChID
ChID is a large-scale Chinese IDiom dataset for cloze test. ChID contains 581K passages and 729K blanks, and covers multiple domains. In ChID, the idioms in a passage were replaced with blank symbols. For each blank, a list of candidate idioms, including the golden idiom, is provided as choices.
Provide a detailed description of the following dataset: ChID
XQA
XQA is a dataset consisting of a total of 90K question-answer pairs in nine languages for cross-lingual open-domain question answering.
Provide a detailed description of the following dataset: XQA
TalkSumm
The **TalkSumm** dataset contains 1705 automatically-generated summaries of scientific papers from ACL, NAACL, EMNLP, SIGDIAL (2015-2018), and ICML (2017-2018). The dataset is provided as a list of titles and URLs and the corresponding summaries.
Provide a detailed description of the following dataset: TalkSumm
CONAN
COunter NArratives through Nichesourcing (CONAN) is a dataset that consists of 4,078 pairs over the 3 languages. Additionally, 3 types of metadata are provided: expert demographics, hate speech sub-topic and counter-narrative type. The dataset is augmented through translation (from Italian/French to English) and paraph...
Provide a detailed description of the following dataset: CONAN
VIST-Edit
The VIST-Edit dataset includes 14,905 human-edited versions of 2,981 machine-generated visual stories. The stories were generated by two state-of-the-art visual storytelling models, and each machine-generated story is aligned to 5 human-edited versions.
Provide a detailed description of the following dataset: VIST-Edit
OQGenD
Dataset OQRanD and OQGenD for the paper "Asking the Crowd: Question Analysis, Evaluation and Generation for Open Discussion on Online Forums" by Zi Chai, Xinyu Xing, Xiaojun Wan and Bo Huang, accepted at ACL 2019. The OQGenD dataset can be viewed at "OQGenD.xml". Each data (NQ-pairs) contai...
Provide a detailed description of the following dataset: OQGenD
OQRanD
Dataset OQRanD and OQGenD for the paper "Asking the Crowd: Question Analysis, Evaluation and Generation for Open Discussion on Online Forums" by Zi Chai, Xinyu Xing, Xiaojun Wan and Bo Huang, accepted at ACL 2019. The OQGenD dataset can be viewed at "OQGenD.xml". Each data (NQ-pairs) contai...
Provide a detailed description of the following dataset: OQRanD
PAWS
Paraphrase Adversaries from Word Scrambling (PAWS) is a dataset containing 108,463 human-labeled and 656K noisily labeled pairs that highlight the importance of modeling structure, context, and word order information for the problem of paraphrase identification. The dataset has two subsets, one based on Wikipedia and the o...
Provide a detailed description of the following dataset: PAWS
LitBank
LitBank is an annotated dataset of 100 works of English-language fiction to support tasks in natural language processing and the computational humanities, described in more detail in the following publications: - David Bamman, Sejal Popat and Sheng Shen (2019), "An Annotated Dataset of Literary Entities," NAACL 2019...
Provide a detailed description of the following dataset: LitBank
Discovery Dataset
The *Discovery* dataset consists of adjacent sentence pairs (s1, s2) with a discourse marker (y) that occurred at the beginning of s2. The pairs were extracted from the depcc web corpus. Marker prediction can be used to train sentence encoders. Discourse markers can be considered as noisy labels for various s...
Provide a detailed description of the following dataset: Discovery Dataset
CLEVR-Dialog
CLEVR-Dialog is a large diagnostic dataset for studying multi-round reasoning in visual dialog. Specifically, the authors construct a dialog grammar that is grounded in the scene graphs of the images from the CLEVR dataset. This combination results in a dataset where all aspects of the visual dialog are fully annotate...
Provide a detailed description of the following dataset: CLEVR-Dialog
MultiSense
MultiSense is a dataset of 9,504 images annotated with an English verb and its translation in Spanish and German.
Provide a detailed description of the following dataset: MultiSense
SciQ
The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For the majority of the questions, an additional paragraph with supporting evidence for the correct answer is provided.
Provide a detailed description of the following dataset: SciQ
MedHop
With the same format as WikiHop, the MedHop dataset is based on research paper abstracts from PubMed, and the queries are about interactions between pairs of drugs. The correct answer has to be inferred by combining information from a chain of reactions of drugs and proteins.
Provide a detailed description of the following dataset: MedHop
NEWSROOM
CORNELL NEWSROOM is a large dataset for training and evaluating summarization systems. It contains 1.3 million articles and summaries written by authors and editors in the newsrooms of 38 major publications. The summaries are obtained from search and social metadata between 1998 and 2017 and use a variety of summarizat...
Provide a detailed description of the following dataset: NEWSROOM
ListOps
The ListOps examples are comprised of summary operations on lists of single digit integers, written in prefix notation. The full sequence has a corresponding solution which is also a single-digit integer, thus making it a ten-way balanced classification problem. For example, [MAX 2 9 [MIN 4 7 ] 0 ] has the solution 9....
Provide a detailed description of the following dataset: ListOps
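To make the ListOps prefix-notation format above concrete, here is a minimal sketch of an evaluator for such expressions. The bracketed format follows the example in the description; the particular operator set handled here (MAX, MIN, MED, SM) is an assumption about the full task rather than an exhaustive list.

```python
# Minimal sketch: evaluate a ListOps-style expression such as "[MAX 2 9 [MIN 4 7 ] 0 ]".
def eval_listops(expression: str) -> int:
    tokens = expression.split()

    def parse(pos: int):
        token = tokens[pos]
        if token.startswith("["):            # e.g. "[MAX" opens a sub-list
            op = token[1:]
            args = []
            pos += 1
            while tokens[pos] != "]":
                value, pos = parse(pos)
                args.append(value)
            pos += 1                         # skip the closing "]"
            if op == "MAX":
                return max(args), pos
            if op == "MIN":
                return min(args), pos
            if op == "MED":                  # median (middle element after sorting)
                return sorted(args)[len(args) // 2], pos
            if op == "SM":                   # sum modulo 10
                return sum(args) % 10, pos
            raise ValueError(f"unknown operator: {op}")
        return int(token), pos + 1           # a single digit

    value, _ = parse(0)
    return value


assert eval_listops("[MAX 2 9 [MIN 4 7 ] 0 ]") == 9
```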
DuoRC
DuoRC contains 186,089 unique question-answer pairs created from a collection of 7680 pairs of movie plots where each pair in the collection reflects two versions of the same movie. **Why another RC dataset?** DuoRC pushes the NLP community to address challenges on incorporating knowledge and reasoning in neural ...
Provide a detailed description of the following dataset: DuoRC
PAWS-X
PAWS-X contains 23,659 human translated PAWS evaluation pairs and 296,406 machine translated training pairs in six typologically distinct languages: French, Spanish, German, Chinese, Japanese, and Korean. All translated pairs are sourced from examples in PAWS-Wiki.
Provide a detailed description of the following dataset: PAWS-X
KnowledgeNet
KnowledgeNet is a benchmark dataset for the task of automatically populating a knowledge base (Wikidata) with facts expressed in natural language text on the web. KnowledgeNet provides text exhaustively annotated with facts, thus enabling the holistic end-to-end evaluation of knowledge base population systems as a whol...
Provide a detailed description of the following dataset: KnowledgeNet
CLINC150
This dataset is for evaluating the performance of intent classification systems in the presence of "out-of-scope" queries, i.e., queries that do not fall into any of the system-supported intent classes. The dataset includes both in-scope and out-of-scope data.
Provide a detailed description of the following dataset: CLINC150
WikiCREM
WikiCREM is an unsupervised dataset for coreference resolution, presented in: Kocijan et al., "WikiCREM: A Large Unsupervised Corpus for Coreference Resolution", EMNLP 2019.
Provide a detailed description of the following dataset: WikiCREM
BiPaR
**BiPaR** is a manually annotated bilingual parallel novel-style machine reading comprehension (MRC) dataset, developed to support monolingual, multilingual and cross-lingual reading comprehension on novels. The biggest difference between BiPaR and existing reading comprehension datasets is that each triple (Passage, Q...
Provide a detailed description of the following dataset: BiPaR
PASTEL
PASTEL is a parallelly annotated stylistic language dataset. The dataset consists of ~41K parallel sentences and 8.3K parallel stories annotated across different personas.
Provide a detailed description of the following dataset: PASTEL
PubMedQA
The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts. PubMedQA has 1k expert labeled, 61.2k unlabeled and 211.3k artificially generated QA instances.
Provide a detailed description of the following dataset: PubMedQA
JuICe
JuICe is a corpus of 1.5 million examples with a curated test set of 3.7K instances based on online programming assignments. Compared with existing contextual code generation datasets, JuICe provides refined human-curated data, open-domain code, and an order of magnitude more training data.
Provide a detailed description of the following dataset: JuICe
VisPro
VisPro dataset contains coreference annotation of 29,722 pronouns from 5,000 dialogues.
Provide a detailed description of the following dataset: VisPro
RUN
The RUN dataset is based on OpenStreetMap (OSM). The map contains rich layers and an abundance of entities of different types. Each entity is complex and can contain (at least) four labels: name, type, is building=y/n, and house number. An entity can spread over several tiles. As the maps do not overlap, only very few...
Provide a detailed description of the following dataset: RUN
CrossWOZ
**CrossWOZ** is the first large-scale Chinese Cross-Domain Wizard-of-Oz task-oriented dataset. It contains 6K dialogue sessions and 102K utterances for 5 domains, including hotel, restaurant, attraction, metro, and taxi. Moreover, the corpus contains rich annotation of dialogue states and dialogue acts at both user and...
Provide a detailed description of the following dataset: CrossWOZ
TyDi QA
TyDi QA is a question answering dataset covering 11 typologically diverse languages with 200K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology — the set of linguistic features that each language expresses — such that the authors expect models performing well on this set to gener...
Provide a detailed description of the following dataset: TyDi QA
BLiMP
BLiMP is a challenge set for evaluating what language models (LMs) know about major grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each containing 1000 minimal pairs isolating specific contrasts in syntax, morphology, or semantics. The data is automatically generated according to expert-crafted gr...
Provide a detailed description of the following dataset: BLiMP
BREAK
Break is a question understanding dataset, aimed at training models to reason over complex questions. It features 83,978 natural language questions, annotated with a new meaning representation, Question Decomposition Meaning Representation (QDMR). Each example has the natural question along with its QDMR representation...
Provide a detailed description of the following dataset: BREAK
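As a rough illustration of the QDMR representation mentioned above, here is a hypothetical decomposition (not taken from the BREAK data) showing the step-by-step format in which later steps refer back to earlier ones via #1, #2, and so on.

```python
# Hypothetical example of the QDMR format: a question paired with ordered
# decomposition steps, where "#k" refers to the result of step k. The question
# and steps are illustrative only, not an instance from the BREAK dataset.
example = {
    "question": "What is the population of the largest city in France?",
    "decomposition": [
        "return cities in France",      # step 1
        "return #1 that is largest",    # step 2, refers to step 1
        "return population of #2",      # step 3, refers to step 2
    ],
}
print(example["decomposition"])
```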
OLPBENCH
OLPBENCH is a large Open Link Prediction benchmark, which was derived from the state-of-the-art Open Information Extraction corpus OPIEC (Gashteovski et al., 2019). OLPBENCH contains 30M open triples, 1M distinct open relations and 2.5M distinct mentions of approximately 800K entities. Open Link Prediction is defin...
Provide a detailed description of the following dataset: OLPBENCH
LIAR
LIAR is a publicly available dataset for fake news detection. It contains 12.8K manually labeled short statements collected over a decade in various contexts from POLITIFACT.COM, which provides a detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well....
Provide a detailed description of the following dataset: LIAR
STAIR Captions
STAIR Captions is a large-scale dataset containing 820,310 Japanese captions. This dataset can be used for caption generation, multimodal retrieval, and image generation.
Provide a detailed description of the following dataset: STAIR Captions
BillSum
BillSum is the first dataset for summarization of US Congressional and California state bills. The BillSum dataset consists of three parts: US training bills, US test bills and California test bills. The US bills were collected from the Govinfo service provided by the United States Government Publishing Office (GPO)...
Provide a detailed description of the following dataset: BillSum
Business Scene Dialogue
The Japanese-English business conversation corpus, namely **Business Scene Dialogue** corpus, was constructed in 3 steps: 1. selecting business scenes, 2. writing monolingual conversation scenarios according to the selected scenes, and 3. translating the scenarios into the other language. Half of the monolingua...
Provide a detailed description of the following dataset: Business Scene Dialogue
X-WikiRE
X-WikiRE is a new, large-scale multilingual relation extraction dataset in which relation extraction is framed as a problem of reading comprehension to allow for generalization to unseen relations.
Provide a detailed description of the following dataset: X-WikiRE
ProofWriter
The ProofWriter dataset contains many small rulebases of facts and rules, expressed in English. Each rulebase also has a set of questions (English statements) which can either be proven true or false using proofs of various depths, or the answer is “Unknown” (in open-world setting, OWA) or assumed negative (in closed-w...
Provide a detailed description of the following dataset: ProofWriter
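A minimal sketch of the closed-world setting described above: facts and rules are combined by forward chaining, and any statement that cannot be proven is assumed false. The tiny rulebase, the triple encoding, and the single-variable rules are illustrative assumptions, not the English rulebase format of ProofWriter itself.

```python
# Toy closed-world (CWA) reasoning sketch: derive everything provable by forward
# chaining, then treat anything not derived as false (assumed negative).
facts = {("Erin", "is", "young")}
rules = [
    # "If someone is young then they are kind."
    ([("?x", "is", "young")], ("?x", "is", "kind")),
    # "If someone is kind then they are nice."
    ([("?x", "is", "kind")], ("?x", "is", "nice")),
]


def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Try to bind ?x to every entity seen so far.
            for entity in {f[0] for f in derived}:
                bound_premises = [
                    (entity if s == "?x" else s, p, o) for s, p, o in premises
                ]
                bound_conclusion = tuple(
                    entity if part == "?x" else part for part in conclusion
                )
                if all(p in derived for p in bound_premises) and bound_conclusion not in derived:
                    derived.add(bound_conclusion)
                    changed = True
    return derived


known = forward_chain(facts, rules)
print(("Erin", "is", "nice") in known)   # True: provable at depth 2
print(("Erin", "is", "quiet") in known)  # False: assumed negative under CWA
```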
Open PI
**Open PI** is the first dataset for tracking state changes in procedural text from arbitrary domains by using an unrestricted (open) vocabulary. The dataset comprises 29,928 state changes over 4,050 sentences from 810 procedural real-world paragraphs from WikiHow.com. The state tracking task assumes new formulation i...
Provide a detailed description of the following dataset: Open PI
hasPart KB
This dataset is a new knowledge-base (KB) of hasPart relationships, extracted from a large corpus of generic statements. Complementary to other resources available, it is the first which is all three of: accurate (90% precision), salient (covers relationships a person may mention), and has high coverage of common terms...
Provide a detailed description of the following dataset: hasPart KB
SciDocs
SciDocs is an evaluation framework consisting of a suite of document-level evaluation tasks.
Provide a detailed description of the following dataset: SciDocs
GenericsKB
The **GenericsKB** contains 3.4M+ generic sentences about the world, i.e., sentences expressing general truths such as "Dogs bark," and "Trees remove carbon dioxide from the atmosphere." Generics are potentially useful as a knowledge source for AI systems requiring general world knowledge. The GenericsKB is the first l...
Provide a detailed description of the following dataset: GenericsKB
CORD-19
CORD-19 is a free resource of tens of thousands of scholarly articles about COVID-19, SARS-CoV-2, and related coronaviruses for use by the global research community.
Provide a detailed description of the following dataset: CORD-19
Quoref
Quoref is a QA dataset which tests the coreferential reasoning capability of reading comprehension systems. In this span-selection benchmark containing 24K questions over 4.7K paragraphs from Wikipedia, a system must resolve hard coreferences before selecting the appropriate span(s) in the paragraphs for answering ques...
Provide a detailed description of the following dataset: Quoref
ROPES
ROPES is a QA dataset which tests a system's ability to apply knowledge from a passage of text to a new situation. A system is presented a background passage containing a causal or qualitative relation(s), a novel situation that uses this background, and questions that require reasoning about effects of the relationshi...
Provide a detailed description of the following dataset: ROPES
QASC
QASC is a question-answering dataset with a focus on sentence composition. It consists of 9,980 8-way multiple-choice questions about grade school science (8,134 train, 926 dev, 920 test), and comes with a corpus of 17M sentences.
Provide a detailed description of the following dataset: QASC
QuaRTz
QuaRTz is a crowdsourced dataset of 3864 multiple-choice questions about open domain qualitative relationships. Each question is paired with one of 405 different background sentences (sometimes short paragraphs). The QuaRTz dataset V1 contains 3864 questions about open domain qualitative relationships. Each question...
Provide a detailed description of the following dataset: QuaRTz
WIQA
The WIQA dataset V1 has 39705 questions containing a perturbation and a possible effect in the context of a paragraph. The dataset is split into 29808 train questions, 6894 dev questions and 3003 test questions.
Provide a detailed description of the following dataset: WIQA
QuaRel
QuaRel is a crowdsourced dataset of 2771 multiple-choice story questions, including their logical forms.
Provide a detailed description of the following dataset: QuaRel
ProPara
The **ProPara** dataset is designed to train and test comprehension of simple paragraphs describing processes (e.g., photosynthesis), designed for the task of predicting, tracking, and answering questions about how entities change during the process. ProPara aims to promote the research in natural language understan...
Provide a detailed description of the following dataset: ProPara
ComplexWebQuestions
ComplexWebQuestions is a dataset for answering complex questions that require reasoning over multiple web snippets. It contains a large set of complex questions in natural language, and can be used in multiple ways: 1. By interacting with a search engine; 2. As a reading comprehension task: the authors release 12,7...
Provide a detailed description of the following dataset: ComplexWebQuestions
ScienceExamCER
ScienceExamCER is a collection of resources for studying explanation-centered inference, including explanation graphs for 1,680 questions, with 4,950 tablestore rows, and other analyses of the knowledge required to answer elementary and middle-school science questions.
Provide a detailed description of the following dataset: ScienceExamCER
TupleInf Open IE Dataset
The TupleInf Open IE dataset contains Open IE tuples extracted from 263K sentences that were used by the solver in “Answering Complex Questions Using Open Information Extraction” (referred to as Tuple KB, T). These sentences were collected from a large Web corpus using training questions from 4th and 8th grade as queries....
Provide a detailed description of the following dataset: TupleInf Open IE Dataset
TQA
The Textbook Question Answering (TQA) dataset is drawn from middle school science curricula. It consists of 1,076 lessons from Life Science, Earth Science and Physical Science textbooks. This includes 26,260 questions, including 12,567 that have an accompanying diagram. The TQA dataset encourages work on the task of M...
Provide a detailed description of the following dataset: TQA
Countix
Countix is a real-world dataset of repetition videos collected in the wild (i.e., YouTube), covering a wide range of semantic settings with significant challenges such as camera and object motion, a diverse set of periods and counts, and changes in the speed of repeated actions. Countix includes repeated videos of workout ac...
Provide a detailed description of the following dataset: Countix
RL Unplugged
RL Unplugged is a suite of benchmarks for offline reinforcement learning. RL Unplugged is designed around the following considerations: to facilitate ease of use, the datasets are provided with a unified API which makes it easy for the practitioner to work with all data in the suite once a general pipeline has been e...
Provide a detailed description of the following dataset: RL Unplugged
MineRL
**MineRL** is an imitation learning dataset with over 60 million frames of recorded human player data. The dataset includes a set of tasks which highlights many of the hardest problems in modern-day Reinforcement Learning: sparse rewards and hierarchical policies.
Provide a detailed description of the following dataset: MineRL
Mathematics Dataset
This dataset code generates mathematical question and answer pairs, from a range of question types at roughly school-level difficulty. This is designed to test the mathematical learning and algebraic reasoning skills of learning models.
Provide a detailed description of the following dataset: Mathematics Dataset
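As a toy illustration of the generate-question-and-answer-pairs idea, here is a small sketch that produces arithmetic question/answer pairs. It is not the dataset's actual generation code, which covers far more question types (algebra, calculus, comparisons, and others) and difficulty levels.

```python
# Toy sketch: generate simple arithmetic question/answer pairs.
import random


def generate_arithmetic_pair(rng: random.Random):
    a, b = rng.randint(-99, 99), rng.randint(-99, 99)
    op = rng.choice(["+", "-", "*"])
    answer = {"+": a + b, "-": a - b, "*": a * b}[op]
    question = f"What is {a} {op} {b}?"
    return question, str(answer)


rng = random.Random(0)
for _ in range(3):
    print(generate_arithmetic_pair(rng))
```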
PGM
PGM dataset serves as a tool for studying both abstract reasoning and generalisation in models. Generalisation is a multi-faceted phenomenon; there is no single, objective way in which models can or should generalise beyond their experience. The PGM dataset provides a means to measure the generalization ability of mode...
Provide a detailed description of the following dataset: PGM
Slim
This dataset consists of virtual scenes rendered in MuJoCo with multiple views, each presented in multiple modalities: image, and synthetic or natural language descriptions. Each scene consists of two or three objects placed in a square walled room, and for each of the 10 camera viewpoints the authors rendered a 3D view ...
Provide a detailed description of the following dataset: Slim
TableBank
To address the need for a standard open domain table benchmark dataset, the authors propose a novel weak supervision approach to automatically create TableBank, which is orders of magnitude larger than existing human-labeled datasets for table analysis. Distinct from traditional weakly supervised training sets, our a...
Provide a detailed description of the following dataset: TableBank
GitHub Typo Corpus
Are you the kind of person who makes a lot of typos when writing code? Or are you the one who fixes them by making "fix typo" commits? Either way, thank you—you contributed to the state-of-the-art in the NLP field. GitHub Typo Corpus is a large-scale dataset of misspellings and grammatical errors along with their co...
Provide a detailed description of the following dataset: GitHub Typo Corpus
word2word
word2word contains easy-to-use word translations for 3,564 language pairs.
- A large collection of freely & publicly available bilingual lexicons for 3,564 language pairs across 62 unique languages.
- Easy-to-use Python interface for accessing top-k word translations and for building a new bilingual lexicon from a ...
Provide a detailed description of the following dataset: word2word
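The Python interface mentioned above is roughly used as follows; this sketch is based on the word2word package's documented usage (pip install word2word), and the exact call signature and return format should be treated as assumptions to verify against the project README.

```python
# Sketch of the word2word Python interface for top-k word translations.
from word2word import Word2word

en2fr = Word2word("en", "fr")   # loads/downloads the English-French lexicon
print(en2fr("apple"))           # top-k French translations of "apple"
```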
Dakshina
The Dakshina dataset is a collection of text in both Latin and native scripts for 12 South Asian languages. For each language, the dataset includes a large collection of native script Wikipedia text, a romanization lexicon which consists of words in the native script with attested romanizations, and some full sentence ...
Provide a detailed description of the following dataset: Dakshina
Dataset of Legal Documents
The Dataset of Legal Documents consists of court decisions from 2017 and 2018 that were published online by the Federal Ministry of Justice and Consumer Protection. The documents originate from seven federal courts: Federal Labour Court (BAG), Federal Fiscal Court (BFH), Federal Court of Justice (BGH),...
Provide a detailed description of the following dataset: Dataset of Legal Documents
ChrEn
Cherokee-English Parallel Dataset is a low-resource dataset of 14,151 pairs of sentences with around 313K English tokens and 206K Cherokee tokens. The parallel corpus is accompanied by a monolingual Cherokee dataset of 5,120 sentences. Both datasets are mostly derived from Cherokee monolingual books.
Provide a detailed description of the following dataset: ChrEn
C4
**C4** is a colossal, cleaned version of Common Crawl's web crawl corpus. It was based on Common Crawl dataset: https://commoncrawl.org. It was used to train the T5 text-to-text Transformer models. The dataset can be downloaded in a pre-processed form from [allennlp](https://github.com/allenai/allennlp/discussions/5...
Provide a detailed description of the following dataset: C4
Image网
**Image网** (pronounced Imagewang; 网 means "net" in Chinese) is an image classification dataset combined from [Imagenette](/dataset/imagenette) and [Imagewoof](/dataset/imagewoof) datasets in a way to make it into a semi-supervised unbalanced classification problem: * the validation set is the same as the validation ...
Provide a detailed description of the following dataset: Image网
CCAligned
**CCAligned** consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language identification on raw web-documents and ensuring that matching language codes appeared in the URLs of the web documents. This pattern matchin...
Provide a detailed description of the following dataset: CCAligned
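A toy sketch of the URL-based pairing idea described above: look for a language code embedded in a URL and derive a candidate English counterpart URL. The regex and the assumed URL layout are illustrative only, not the exact heuristics used to build CCAligned.

```python
# Toy sketch: detect a language code in a URL path and propose the English counterpart.
import re

LANG_CODE = re.compile(r"/([a-z]{2}(?:[-_][a-z]{2})?)/", re.IGNORECASE)


def english_counterpart(url: str):
    """Return (language_code, candidate English URL), or None if no code is found."""
    match = LANG_CODE.search(url)
    if match is None:
        return None
    code = match.group(1).lower()
    if code in ("en", "en-us", "en_us"):
        return None
    # Assume the English page lives at the same path without the language segment.
    return code, url[: match.start()] + "/" + url[match.end():]


print(english_counterpart("https://example.com/de/products/page1.html"))
# ('de', 'https://example.com/products/page1.html')
```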
WikiTableT
WikiTableT contains Wikipedia article sections and their corresponding tabular data and various metadata. WikiTableT contains millions of instances while covering a broad range of topics and a variety of kinds of generation tasks.
Provide a detailed description of the following dataset: WikiTableT
AutoWeakS
The dataset collects all the courses from XuetangX, one of the largest MOOC platforms in China, resulting in 1,951 courses. The collected courses involve seven areas: computer science, economics, engineering, foreign language, math, physics, and social science. Each course description contains 131 words on average. Contains ...
Provide a detailed description of the following dataset: AutoWeakS
MMDB
Multimodal Dyadic Behavior (MMDB) dataset is a unique collection of multimodal (video, audio, and physiological) recordings of the social and communicative behavior of toddlers. The MMDB contains 160 sessions of 3-5 minute semi-structured play interaction between a trained adult examiner and a child between the age of ...
Provide a detailed description of the following dataset: MMDB
GazeFollow
GazeFollow is a large-scale dataset annotated with the location of where people in images are looking. It uses several major datasets that contain people as a source of images: 1,548 images from SUN, 33,790 images from MS COCO, 9,135 images from Actions 40, 7,791 images from PASCAL, 508 images from the ImageNet det...
Provide a detailed description of the following dataset: GazeFollow
4DFAB
4DFAB is a large scale database of dynamic high-resolution 3D faces which consists of recordings of 180 subjects captured in four different sessions spanning over a five-year period (2012 - 2017), resulting in a total of over 1,800,000 3D meshes. It contains 4D videos of subjects displaying both spontaneous and posed f...
Provide a detailed description of the following dataset: 4DFAB
iQIYI-VID-2019
The iQIYI-VID-2019 dataset is the first video dataset for multi-modal person identification. This dataset aims to encourage research on multi-modal person identification. To get close to real applications, video clips are extracted from real online videos of extensive types. All the clips are labeled by human ann...
Provide a detailed description of the following dataset: iQIYI-VID-2019
iQIYI-VID
The iQIYI-VID dataset comprises video clips from iQIYI variety shows, films, and television dramas. The whole dataset contains 500,000 video clips of 5,000 celebrities. The length of each video is 1~30 seconds.
Provide a detailed description of the following dataset: iQIYI-VID
ELFW
Extended Labeled Faces in-the-Wild (ELFW) is a dataset that supplements the semantic labels originally released for the widely used Labeled Faces in-the-Wild (LFW) dataset with additional face-related categories, and also additional faces. Additionally, two object-based data augmentation techniques are deployed to synthet...
Provide a detailed description of the following dataset: ELFW
KANFace
KANFace consists of 40K still images and 44K sequences (14.5M video frames in total) captured in unconstrained, real-world conditions from 1,045 subjects. The dataset is manually annotated in terms of identity, exact age, gender and kinship.
Provide a detailed description of the following dataset: KANFace
BAVL
The Blind Audio-Visual Localization (BAVL) Dataset consists of 20 audio-visual recordings of sound sources, which can be talking faces or musical instruments. Most of the audio-visual recordings (19) are videos from YouTube, except V8, which is from [1]. In addition, video V7 was also used in [2][3], and V16 in [3]. All 20 vi...
Provide a detailed description of the following dataset: BAVL