Columns: dataset_name (string, 2–128 chars) · description (string, 1–9.7k chars) · prompt (string, 59–185 chars)
Enoch Oluwumi
Briefly describe the dataset. Provide: * a high-level explanation of the dataset characteristics * explain motivations and summary of its content * potential use cases of the dataset If the description or image is from a different paper, please refer to it as follows: Source: [title](url) Image Source: [title...
Provide a detailed description of the following dataset: Enoch Oluwumi
Visuomotor affordance learning (VAL) robot interaction dataset
This data contains about 2500 trajectories (with images and actions) of a Sawyer robot interacting with various objects. Examples from the dataset are shown in the adjacent video. We provide two versions of the VAL dataset - one with low-res images (1.4 GB) and one with high-res images (162 GB). The data quantity an...
Provide a detailed description of the following dataset: Visuomotor affordance learning (VAL) robot interaction dataset
XFUND
XFUND is a multilingual form understanding benchmark dataset that includes human-labeled forms with key-value pairs in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese).
Provide a detailed description of the following dataset: XFUND
NTIRE 2021 HDR
The **NTIRE 2021 HDR** dataset was built for the first challenge on high-dynamic-range (HDR) imaging, held as part of the New Trends in Image Restoration and Enhancement (NTIRE) workshop in conjunction with CVPR 2021. The challenge aims at estimating an HDR image from one or multiple corresponding low-dynamic range (LDR) obs...
Provide a detailed description of the following dataset: NTIRE 2021 HDR
BAAI-VANJEE
**BAAI-VANJEE** is a dataset for benchmarking and training various computer vision tasks such as 2D/3D object detection and multi-sensor fusion. The BAAI-VANJEE roadside dataset consists of LiDAR data and RGB images collected by VANJEE smart base station placed on the roadside about 4.5m high. This dataset contains 250...
Provide a detailed description of the following dataset: BAAI-VANJEE
PS5k
We introduce a new data set containing 5,000 scientific papers and their slides, crawled from conference proceeding websites such as aclweb and usenix.
Provide a detailed description of the following dataset: PS5k
SemEval-2013 Task 2
The **SemEval-2013 Task 2** dataset contains data for two subtasks: A, an expression-level subtask, and B, a message-level subtask. Crowdsourcing was used to label a large Twitter training dataset along with additional test sets of Twitter and SMS messages for both subtasks.
Provide a detailed description of the following dataset: SemEval-2013 Task 2
MLQuestions
**MLQuestions** is a domain-adaptation dataset for the machine learning domain containing 50K unaligned passages and 35K unaligned questions, and 3K aligned passage and question pairs.
Provide a detailed description of the following dataset: MLQuestions
iWildCam 2021
**iWildCam 2021** is a dataset for counting the number of animals of each species that appear in sequences of images captured with camera traps. The training data and test data are from different cameras spread across the globe. The set of species seen in each camera overlap but are not identical. The challenge is to c...
Provide a detailed description of the following dataset: iWildCam 2021
NeoRL
**NeoRL** is a collection of environments and datasets for offline reinforcement learning with a special focus on real-world applications. The design follows real-world properties such as the conservativeness of behavior policies, limited amounts of data, high-dimensional state and action spaces, and the highly stochastic n...
Provide a detailed description of the following dataset: NeoRL
ARC Ukiyo-e Faces
**ARC Ukiyo-e Faces** is a large-scale (>10k paintings, >20k faces) Ukiyo-e dataset with coherent semantic labels and geometric annotations through augmenting and organizing existing datasets with automatic detection.
Provide a detailed description of the following dataset: ARC Ukiyo-e Faces
IBims-1
iBims-1 (independent Benchmark images and matched scans - version 1) is a new high-quality RGB-D dataset, especially designed for testing single-image depth estimation (SIDE) methods. A customized acquisition setup, composed of a digital single-lens reflex (DSLR) camera and a high-precision laser scanner, was used to ac...
Provide a detailed description of the following dataset: IBims-1
UIT-ViSFD
UIT-ViSFD (Vietnamese Smartphone Feedback Dataset) is a new benchmark corpus built with strict annotation schemes for evaluating aspect-based sentiment analysis. It consists of 11,122 human-annotated comments on mobile e-commerce and is freely available for research purposes.
Provide a detailed description of the following dataset: UIT-ViSFD
ToyADMOS2
**ToyADMOS2** is a dataset of miniature-machine operating sounds for anomalous sound detection under domain shift conditions.
Provide a detailed description of the following dataset: ToyADMOS2
Colored MNIST
Colored MNIST is a synthetic binary classification task derived from [MNIST](/dataset/mnist).
Provide a detailed description of the following dataset: Colored MNIST
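The usual Colored MNIST construction (as popularized by the Invariant Risk Minimization experiments) can be sketched as follows. This is a minimal sketch, not the authoritative recipe: the digit arrays below are random placeholders, and the `label_noise`/`color_noise` parameters are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_colored_mnist(images, digits, label_noise=0.25, color_noise=0.1):
    """Sketch of a Colored MNIST-style recipe: a binary label from the
    digit (>= 5 -> 1), flipped with prob `label_noise`; then a red/green
    channel correlated with the noisy label, with the correlation broken
    with prob `color_noise`."""
    n = len(images)
    labels = (digits >= 5).astype(int)
    labels ^= (rng.random(n) < label_noise).astype(int)   # noisy labels
    colors = labels ^ (rng.random(n) < color_noise).astype(int)
    # Two color channels (red, green); each digit is painted into one.
    colored = np.zeros((n, 2, 28, 28), dtype=images.dtype)
    colored[np.arange(n), colors] = images
    return colored, labels

# Random stand-ins for real MNIST images and digit labels:
fake_images = rng.integers(0, 256, size=(100, 28, 28), dtype=np.uint8)
fake_digits = rng.integers(0, 10, size=100)
x, y = make_colored_mnist(fake_images, fake_digits)
```

Because the color is spuriously correlated with the label, a classifier that relies on color generalizes poorly when the correlation flips between environments, which is the point of the benchmark.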
Stanford Schema2QA Dataset
Schema2QA is the first large question answering dataset over real-world Schema.org data. It covers 6 common domains: restaurants, hotels, people, movies, books, and music, based on crawled Schema.org metadata from 6 different websites (Yelp, Hyatt, LinkedIn, IMDb, Goodreads, and last.fm.). In total, there are **over 2,...
Provide a detailed description of the following dataset: Stanford Schema2QA Dataset
CBC
The complete blood count (CBC) dataset contains 360 blood smear images along with their annotation files, split into training, testing, and validation sets. The training folder contains 300 images with annotations; the testing and validation folders each contain 60 images with annotations. We have done some modificat...
Provide a detailed description of the following dataset: CBC
Quo Vadis, Open Source?
This is a complete set of the data we collected and analyzed in our study "Quo Vadis, Open Source? The Limits of Open Source Growth". Please see our GitHub repository for details and the tool chain.
Provide a detailed description of the following dataset: Quo Vadis, Open Source?
X4K1000FPS
A dataset of high-resolution (4096×2160), high-frame-rate (1000 fps) video frames with extreme motion. X-TEST consists of 15 video clips, each containing 33 frames of 4K 1000-fps video. X-TRAIN consists of 4,408 clips covering 110 scenes of various types; each clip contains 65 frames at 1000 fps.
Provide a detailed description of the following dataset: X4K1000FPS
Webis-ConcluGen-21
**Webis-ConcluGen-21** is a large-scale corpus of 136,996 samples of argumentative texts and their conclusions used for the task of generating informative conclusions.
Provide a detailed description of the following dataset: Webis-ConcluGen-21
CTFW
**CTFW** is a large annotated procedural text dataset in the cybersecurity domain (3154 documents). It is used to generate flow graphs from procedural texts.
Provide a detailed description of the following dataset: CTFW
Herbarium 2021 Half–Earth
The **Herbarium Half-Earth** dataset is one of the largest and most diverse datasets of herbarium specimens to date for automatic taxon recognition. The Herbarium 2021: Half-Earth Challenge dataset includes more than 2.5M images representing nearly 65,000 species from the Americas and Oceania that have been aligned to a standardized pl...
Provide a detailed description of the following dataset: Herbarium 2021 Half–Earth
Dark Machines Anomaly Score
This dataset is the outcome of a data challenge conducted as part of the Dark Machines Initiative and the Les Houches 2019 workshop on Physics at TeV colliders. The challenge aims at detecting signals of new physics at the LHC using unsupervised machine learning algorithms. It consists of a large benchmark dataset,...
Provide a detailed description of the following dataset: Dark Machines Anomaly Score
PROST
The **PROST** (Physical Reasoning about Objects Through Space and Time) dataset contains 18,736 multiple-choice questions made from 14 manually curated templates, covering 10 physical reasoning concepts. All questions are designed to probe both causal and masked language models in a zero-shot setting.
Provide a detailed description of the following dataset: PROST
COVID-Fact
**COVID-Fact** is a FEVER-like dataset of claims concerning the COVID-19 pandemic. The dataset contains claims, evidence for the claims, and contradictory claims refuted by the evidence.
Provide a detailed description of the following dataset: COVID-Fact
AppleScabLDs
Dataset contains images of apple leaves infected by scab. The images are grouped in two folders: "Healthy" and "Scab". The collection of digital images was carried out in different locations of Latvia. Digital images with characteristic scab symptoms on leaves were collected by the Institute of Horticulture (LatHort...
Provide a detailed description of the following dataset: AppleScabLDs
AppleScabFDs
Dataset contains images of apples infected by scab. The images are grouped in two folders: "Healthy" and "Scab". The collection of digital images was carried out in different locations of Latvia. Digital images with characteristic scab symptoms on fruits were collected by the Institute of Horticulture (LatHort) unde...
Provide a detailed description of the following dataset: AppleScabFDs
LSEC
The **LSEC** (Live Stream E-Commerce) dataset has two subsets: LSEC-Small and LSEC-Large. It is a dataset for studying e-commerce transactions in the context of live streams, where streamers talk about products while interacting with their audience. The dataset consists of interaction information among stream...
Provide a detailed description of the following dataset: LSEC
BiToD
**BiToD** is a bilingual multi-domain dataset for end-to-end task-oriented dialogue modeling. BiToD contains over 7k multi-domain dialogues (144k utterances) with a large and realistic bilingual knowledge base. It serves as an effective benchmark for evaluating bilingual ToD systems and cross-lingual transfer learning ...
Provide a detailed description of the following dataset: BiToD
CoSQA
CoSQA (Code Search and Question Answering) includes 20,604 labels for pairs of natural language queries and code snippets, each annotated by at least 3 human annotators.
Provide a detailed description of the following dataset: CoSQA
TikTok Dataset
We learn high-fidelity human depths by leveraging a collection of social media dance videos scraped from the [TikTok mobile social networking application](https://www.tiktok.com/). It is by far one of the most popular video-sharing applications across generations, hosting short videos (10-15 seconds) of diverse ...
Provide a detailed description of the following dataset: TikTok Dataset
Unsplash2K
Unsplash2K is a high-resolution image dataset with 2K resolution, crawled from Unsplash. It contains 498 high-resolution images and corresponding low-resolution images produced by bicubic downsampling at x2, x4, and x8 scales. Unsplash2K contains diverse contents such as anim...
Provide a detailed description of the following dataset: Unsplash2K
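The bicubic degradation described above can be reproduced with Pillow. A minimal sketch, assuming the standard Pillow resize pipeline (the dataset authors' exact downsampling code is not specified here); the high-resolution image is synthetic for illustration:

```python
import numpy as np
from PIL import Image

# Synthetic stand-in for a 2K photograph (the real dataset uses
# photos crawled from Unsplash).
hr = Image.fromarray(
    np.random.default_rng(0).integers(0, 256, size=(1024, 2048, 3), dtype=np.uint8)
)

# Bicubic downsampling at the x2, x4, and x8 scales mentioned above.
lr = {
    s: hr.resize((hr.width // s, hr.height // s), Image.BICUBIC)
    for s in (2, 4, 8)
}
```

Note that Pillow's `resize` takes `(width, height)`, so a 2048x1024 image downsampled x2 yields a 1024x512 image.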
DIPS-Plus
How and where proteins interface with one another can ultimately impact the proteins' functions along with a range of other biological processes. As such, precise computational methods for protein interface prediction (PIP) are highly sought after, as they could yield significant advances in drug discovery and design a...
Provide a detailed description of the following dataset: DIPS-Plus
Disfl-QA
**Disfl-QA** is a targeted dataset for contextual disfluencies in an information seeking setting, namely question answering over Wikipedia passages. Disfl-QA builds upon the [SQuAD-v2](squad) dataset, where each question in the dev set is annotated to add a contextual disfluency using the paragraph as a source of distr...
Provide a detailed description of the following dataset: Disfl-QA
TimeDial
**TimeDial** is a crowdsourced English challenge set for temporal commonsense reasoning, formulated as a multiple-choice cloze task with around 1.5k carefully curated dialogs. The dataset is derived from [DailyDialog](dailydialog), a multi-turn dialog corpus. The TimeDial dataset consists of 1,104 di...
Provide a detailed description of the following dataset: TimeDial
CFD
The CrackForest Dataset (CFD) is an annotated road-crack image database that reflects urban road surface conditions in general. If you use this crack image dataset, we would appreciate it if you cite an appropriate subset of the following papers: @article{shi2016automatic, title={Automatic road crack detection using random...
Provide a detailed description of the following dataset: CFD
Rel3D
Understanding spatial relations (e.g., “laptop on table”) in visual input is important for both humans and robots. Existing datasets are insufficient as they lack large-scale, high-quality 3D ground-truth information, which is critical for learning spatial relations. In this paper, we fill this gap by constructing Rel...
Provide a detailed description of the following dataset: Rel3D
Topo-boundary
**Topo-boundary** is a new benchmark dataset for off-line topological road-boundary detection. The dataset contains 21,556 four-channel aerial images of size 1000×1000. Each image is provided with 8 training labels for different sub-tasks. Image source: [https://github.com/TonyXuQAQ/Topo-...
Provide a detailed description of the following dataset: Topo-boundary
Swords
**Swords** (Stanford Word Substitution) is a benchmark for lexical substitution, the task of finding appropriate substitutes for a target word in a context. Swords is composed of context, target word, and substitute triples (c, w, w'), each of which has a score that indicates the appropriateness of the substitute.
Provide a detailed description of the following dataset: Swords
Emol news articles and comments
The dataset provides news articles obtained from emol.cl, including their content, titles, and all the comments they received, in JSON format.
Provide a detailed description of the following dataset: Emol news articles and comments
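A record in such a JSON dump might be consumed as follows. The field names (`title`, `content`, `comments`, `author`, `text`) are hypothetical, chosen only to illustrate the article-plus-comments shape; the actual schema is defined by the dataset itself:

```python
import json

# Hypothetical record layout: one article with its content, title,
# and the comments it received.
record = json.loads("""
{
  "title": "Example headline",
  "content": "Body of the article ...",
  "comments": [
    {"author": "user1", "text": "First comment"},
    {"author": "user2", "text": "Second comment"}
  ]
}
""")

title_and_count = (record["title"], len(record["comments"]))
```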
JFT-3B
**JFT-3B** is an internal Google dataset and a larger version of the JFT-300M dataset. It consists of nearly 3 billion images, annotated with a class-hierarchy of around 30k labels via a semi-automatic pipeline. In other words, the data and associated labels are noisy.
Provide a detailed description of the following dataset: JFT-3B
VOID
The dataset was collected using the Intel RealSense D435i camera, which was configured to produce synchronized accelerometer and gyroscope measurements at 400 Hz, along with synchronized VGA-size (640 x 480) RGB and depth streams at 30 Hz. The depth frames are acquired using active stereo and are aligned to the RGB fram...
Provide a detailed description of the following dataset: VOID
FastZIP Data
# Structure of code/data folders and how to use them

#### fastzip-code
* Contains the codebase to generate the results in the *fastzip-results* folder
* Individual notebooks contain comments on their functionality/how to use them
* *FastZIP-Resample.ipynb* (optional)
  * Resamples the collected sensor data to desire...
Provide a detailed description of the following dataset: FastZIP Data
TESTIMAGES
A collection of photographic and synthetic images intended for analysis of image processing techniques and quality assessment of displays. Image source: [https://testimages.org/](https://testimages.org/)
Provide a detailed description of the following dataset: TESTIMAGES
SEDE
**SEDE** is a dataset comprised of 12,023 complex and diverse SQL queries and their natural language titles and descriptions, written by real users of the Stack Exchange Data Explorer out of a natural interaction. These pairs contain a variety of real-world challenges which were rarely reflected so far in any other sem...
Provide a detailed description of the following dataset: SEDE
CoNaLa
The **CMU CoNaLa, the Code/Natural Language Challenge** dataset is a joint project from the Carnegie Mellon University [NeuLab](http://www.cs.cmu.edu/~neulab/) and [Strudel](https://cmustrudel.github.io/) labs. Its purpose is for testing the generation of code snippets from natural language. The data comes from StackOv...
Provide a detailed description of the following dataset: CoNaLa
SIPaKMeD
* a high-level explanation of the dataset characteristics * explain motivations and summary of its content * potential use cases of the dataset
Provide a detailed description of the following dataset: SIPaKMeD
CoNaLa-Ext
The **CoNaLa Extended With Question Text** is an extension to the original [CoNaLa Dataset](https://conala-corpus.github.io/) ([Papers With Code Link](https://paperswithcode.com/dataset/conala)) proposed in the NLP4Prog workshop paper "[Reading StackOverflow Encourages Cheating: Adding Question Text Improves Extractiv...
Provide a detailed description of the following dataset: CoNaLa-Ext
VALUE
**VALUE** is a Video-And-Language Understanding Evaluation benchmark to test models that are generalizable to diverse tasks, domains, and datasets. It is an assemblage of 11 VidL (video-and-language) datasets over 3 popular tasks: (i) text-to-video retrieval; (ii) video question answering; and (iii) video captioning. V...
Provide a detailed description of the following dataset: VALUE
Itihasa
Itihasa is a large-scale corpus for Sanskrit to English translation containing 93,000 pairs of Sanskrit shlokas and their English translations. The shlokas are extracted from two Indian epics viz., The Ramayana and The Mahabharata.
Provide a detailed description of the following dataset: Itihasa
5k_presetation_slides
We crawled 5,000 paper-slide pairs from conference proceeding websites (e.g., acl.org and usenix.org).
Provide a detailed description of the following dataset: 5k_presetation_slides
Notre-Dame Cathedral Fire
**Number of images:** 1,657 images during or after the fire

If you use the dataset, please cite the following works:

> Padilha, Rafael and Andaló, Fernanda A. and Rocha, Anderson. “Improving the chronological sorting of images through occlusion: A study on the Notre-Dame cathedral fire,” in 45th International Con...
Provide a detailed description of the following dataset: Notre-Dame Cathedral Fire
CLINC-Single-Domain-OOS
A dataset with two separate domains, i.e., the "Banking" domain and the "Credit cards" domain, with both general Out-of-Scope (OOD-OOS) queries and In-Domain but Out-of-Scope (ID-OOS) queries, where ID-OOS queries are intents/queries semantically similar to in-scope intents. Each domain in CLINC150 originally inc...
Provide a detailed description of the following dataset: CLINC-Single-Domain-OOS
BANKING77-OOS
A dataset covering a single banking domain that includes both general Out-of-Scope (OOD-OOS) queries and In-Domain but Out-of-Scope (ID-OOS) queries, where ID-OOS queries are intents/queries semantically similar to in-scope intents. BANKING77 originally includes 77 intents. BANKING77-OOS includes 50 in-scope intents in this...
Provide a detailed description of the following dataset: BANKING77-OOS
ZeroWaste
**ZeroWaste** is a dataset for automatic waste detection and segmentation. This dataset contains over 1,800 fully segmented video frames collected from a real waste sorting plant along with waste material labels for training and evaluation of the segmentation methods, as well as over 6,000 unlabeled frames that can be ...
Provide a detailed description of the following dataset: ZeroWaste
ILDC
The **ILDC** dataset (Indian Legal Documents Corpus) is a large corpus of 35k Indian Supreme Court cases annotated with original court decisions. A portion of the corpus (a separate test set) is annotated with gold standard explanations by legal experts. The dataset is used for Court Judgment Prediction and Explanation...
Provide a detailed description of the following dataset: ILDC
CiteWorth
**CiteWorth** is a large, contextualized, rigorously cleaned labelled dataset for cite-worthiness detection, built from a massive corpus of extracted plain-text scientific documents.
Provide a detailed description of the following dataset: CiteWorth
IndiaPoliceEvents
**IndiaPoliceEvents** is a corpus of 21,391 sentences from 1,257 English-language Times of India articles about events in the state of Gujarat during March 2002. This dataset is used for automated event extraction.
Provide a detailed description of the following dataset: IndiaPoliceEvents
Multilingual TOP
**Multilingual TOP** is a dataset for multilingual semantic parsing with human-written sentences as opposed to machine translated ones. The dataset sentences are in English, Italian and Japanese and it is based on the Facebook Task Oriented Parsing (TOP) dataset.
Provide a detailed description of the following dataset: Multilingual TOP
MultiOpEd
**MultiOpEd** is a corpus of multi-perspective news editorials. It is an open-domain news editorial corpus that supports various tasks pertaining to the argumentation structure in news editorials, focusing on automatic perspective discovery. News editorial is a genre of persuasive text, where the argumentation structur...
Provide a detailed description of the following dataset: MultiOpEd
S_B_D
100,000 low-resolution (LR) synthetic barcode images along with their corresponding bounding-box ground-truth masks, and 100,000 ultra-high-resolution (UHR) synthetic barcode images along with their corresponding bounding-box ground-truth masks.
Provide a detailed description of the following dataset: S_B_D
EMOTyDA
EMOTyDA is a multimodal Emotion-aware Dialogue Act dataset collected from open-source dialogue datasets.
Provide a detailed description of the following dataset: EMOTyDA
Rent3D++
Rent3D++ is an extension of the Rent3D floorplans + photos dataset. The floorplans are annotated with room outline polygons, doors/windows as line segments, object-icons as axis-aligned bounding boxes, room-door-room connectivity graphs, and photo-room assignments. We have extracted rectified surface crops from archi...
Provide a detailed description of the following dataset: Rent3D++
Date Estimation in the Wild
~1M Flickr images from the 20th century, dated from the 1910s to the 1990s. The dataset was introduced by Müller et al. and can be found at https://www.radar-service.eu/radar/en/dataset/tJzxrsYUkvPklBOw
Provide a detailed description of the following dataset: Date Estimation in the Wild
Dirty-MNIST
DirtyMNIST is a concatenation of MNIST + AmbiguousMNIST, with 60k samples each in the training set. AmbiguousMNIST contains additional ambiguous digits with varying ambiguity. The AmbiguousMNIST test set contains 60k ambiguous samples as well.

## Additional Guidance

1. DirtyMNIST is a concatenation of MNIST + Amb...
Provide a detailed description of the following dataset: Dirty-MNIST
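The concatenation described above can be sketched with placeholder arrays (real loaders for MNIST and AmbiguousMNIST are omitted; the shapes are the only facts taken from the description):

```python
import numpy as np

# Placeholder arrays standing in for the two 60k-sample training sets.
mnist_x = np.zeros((60_000, 28, 28), dtype=np.float32)
mnist_y = np.zeros(60_000, dtype=np.int64)
ambiguous_x = np.zeros((60_000, 28, 28), dtype=np.float32)
ambiguous_y = np.zeros(60_000, dtype=np.int64)

# DirtyMNIST training set = MNIST followed by AmbiguousMNIST.
dirty_x = np.concatenate([mnist_x, ambiguous_x], axis=0)
dirty_y = np.concatenate([mnist_y, ambiguous_y], axis=0)
```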
Symmetric Solids
This is a pose estimation dataset, consisting of symmetric 3D shapes where multiple orientations are visually indistinguishable. The challenge is to predict all equivalent orientations when only one orientation is paired with each image during training (as is the scenario for most pose estimation datasets). In contrast...
Provide a detailed description of the following dataset: Symmetric Solids
Evidence-based Factual Error Correction
Intermediate annotations from the FEVER dataset that describe original facts extracted from Wikipedia and the mutations that were applied, yielding the claims in FEVER.
Provide a detailed description of the following dataset: Evidence-based Factual Error Correction
TI1K Dataset
Thumb Index 1000 (TI1K) is a dataset of 1000 hand images with the hand bounding box, and thumb and index fingertip positions. The dataset includes the natural movement of the thumb and index fingers making it suitable for mixed reality (MR) applications. The dataset contains images only with the thumb and index fing...
Provide a detailed description of the following dataset: TI1K Dataset
Bus Trajectory Dataset
This dataset contains bus trajectories collected by 6 volunteers who were asked to travel across the suburban city of Durgapur, India, on intra-city buses (route name: 54 Feet). During the travel, the volunteers captured sensor logs through an Android application installed on COTS smartphones.
Provide a detailed description of the following dataset: Bus Trajectory Dataset
MARS-DL
MARS dataset processed with our re-Detect and Link (DL) module. More information: [https://github.com/jackie840129/CF-AAN](https://github.com/jackie840129/CF-AAN)
Provide a detailed description of the following dataset: MARS-DL
DukeMTMC-VideoReID-DL
DukeMTMC-VideoReID-DL processed with our re-Detect and Link (DL) module.
Provide a detailed description of the following dataset: DukeMTMC-VideoReID-DL
PHASE
PHASE is a dataset of physically-grounded abstract social events, that resemble a wide range of real-life social interactions by including social concepts such as helping another agent. PHASE consists of 2D animations of pairs of agents moving in a continuous space generated procedurally using a physics engine and a hi...
Provide a detailed description of the following dataset: PHASE
AGENT
Inspired by cognitive development studies on intuitive psychology, we present a benchmark consisting of a large dataset of procedurally generated 3D animations, AGENT (Action, Goal, Efficiency, coNstraint, uTility), structured around four scenarios (goal preferences, action efficiency, unobserved constraints, and cost-...
Provide a detailed description of the following dataset: AGENT
TCIA Brain-Tumor-Progression
This collection includes datasets from 20 subjects with primary newly diagnosed glioblastoma who were treated with surgery and standard concomitant chemo-radiation therapy (CRT) followed by adjuvant chemotherapy. Two MRI exams are included for each patient: within 90 days following CRT completion and at progression (d...
Provide a detailed description of the following dataset: TCIA Brain-Tumor-Progression
Ruddit
Ruddit is a dataset of English-language Reddit comments with fine-grained, real-valued scores for offensive language detection, ranging from -1 (maximally supportive) to 1 (maximally offensive). The dataset was annotated using Best-Worst Scaling, a form of comparative annotation that has been shown to alleviate know...
Provide a detailed description of the following dataset: Ruddit
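Best-Worst Scaling derives a real-valued score per item from repeated best/worst judgments over small item tuples; the standard counting procedure scores each item as (#best − #worst) / #appearances, which lands in [-1, 1]. A minimal sketch (the tuple size and annotation layout here are illustrative, not Ruddit's exact setup):

```python
from collections import Counter

def bws_scores(annotations):
    """Best-Worst Scaling counting: for each item,
    score = (#times chosen best - #times chosen worst) / #appearances.
    `annotations` is a list of (tuple_of_items, best_item, worst_item)."""
    best, worst, seen = Counter(), Counter(), Counter()
    for items, b, w in annotations:
        seen.update(items)
        best[b] += 1
        worst[w] += 1
    return {i: (best[i] - worst[i]) / seen[i] for i in seen}

# Toy example: two annotated 4-tuples of comment IDs.
toy = [
    (("a", "b", "c", "d"), "a", "d"),
    (("a", "b", "c", "d"), "a", "c"),
]
scores = bws_scores(toy)  # "a" -> 1.0, "b" -> 0.0, "c" and "d" -> -0.5
```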
HuRDL
The **Human-Robot Dialogue Learning (HuRDL) Corpus** is a dataset about asking questions in situated task-based interactions. It is a dialogue corpus collected in an online interactive virtual environment in which human participants play the role of a robot performing a collaborative tool-organization task.
Provide a detailed description of the following dataset: HuRDL
HPO-B
HPO-B is a benchmark for assessing the performance of HPO (Hyperparameter optimization) algorithms.
Provide a detailed description of the following dataset: HPO-B
DUO
**DUO** is a dataset for underwater object detection for robot picking. The dataset contains a collection of diverse underwater images with more rational annotations.
Provide a detailed description of the following dataset: DUO
SciCo
**SciCo** is an expert-annotated dataset for hierarchical CDCR (cross-document coreference resolution) for concepts in scientific papers, with the goal of jointly inferring coreference clusters and hierarchy between them.
Provide a detailed description of the following dataset: SciCo
Python Programming Puzzles (P3)
Python Programming Puzzles (P3) is an open-source dataset where each puzzle is defined by a short Python program, and the goal is to find an input that makes the program output `True`. The puzzles are objective in that each one is specified entirely by the source code of its verifier, so evaluating is all that is needed to test...
Provide a detailed description of the following dataset: Python Programming Puzzles (P3)
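The puzzle format can be illustrated with a toy example in the P3 style (this particular puzzle is invented for illustration, not taken from the dataset): the puzzle is a verifier function, and a solution is any input that makes it return `True`.

```python
def f(x: int) -> bool:
    """Toy verifier: the answer is a positive integer
    whose square ends in the digits 444."""
    return x > 0 and x * x % 1000 == 444

# Because the puzzle is fully specified by f's source code, checking a
# candidate is just a function call -- brute-force search works here.
solution = next(x for x in range(1, 10_000) if f(x))  # -> 38 (38*38 = 1444)
```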
2021 Hotel-ID
**2021 Hotel-ID** is a dataset for hotel recognition to help raise awareness of human trafficking and generate novel approaches. The dataset consists of hotel room images that have been crowd-sourced and uploaded through the TraffickCam mobile application.
Provide a detailed description of the following dataset: 2021 Hotel-ID
FEVEROUS
FEVEROUS (Fact Extraction and VERification Over Unstructured and Structured information) is a fact verification dataset which consists of 87,026 verified claims. Each claim is annotated with evidence in the form of sentences and/or cells from tables in Wikipedia, as well as a label indicating whether this evidence supp...
Provide a detailed description of the following dataset: FEVEROUS
FetReg
Fetoscopic Placental Vessel Segmentation and Registration (**FetReg**) is a large-scale multi-centre dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms for the fetal environment with a focus on creating drift-free mosaics from long duration fetoscopy videos.
Provide a detailed description of the following dataset: FetReg
Replication Data for: Online Learning with Optimism and Delay
The model forecasts for the sub-seasonal forecasting application considered in the experiments of the paper "Online Learning with Optimism and Delay". This dataset consists of a single ZIP archive (919MB) that contains 1) a "models" folder that contains, for each model, the forecasts for the Precip. 3-4w, Precip. 5-6w, Temp. 3-...
Provide a detailed description of the following dataset: Replication Data for: Online Learning with Optimism and Delay
Dataset of Propaganda Techniques of the State-Sponsored Information Operation of the People's Republic of China
This data is for the paper under review at Mis2-KDD 2021: Dataset of Propaganda Techniques of the State-Sponsored Information Operation of the People’s Republic of China. We present our dataset, which focuses on propaganda techniques in Mandarin, based on a state-linked information operations dataset from the PRC released ...
Provide a detailed description of the following dataset: Dataset of Propaganda Techniques of the State-Sponsored Information Operation of the People's Republic of China
GitHub-Python
Repair AST parse (syntax) errors in Python code
Provide a detailed description of the following dataset: GitHub-Python
Artificial signal data for signal alignment testing
This is a set of signal pairs, univariate and multivariate, that can be used to test alignment algorithms. The signals are morphologically different. The signal data is synchronized, but the provided timestamps are shifted with small time-jumps.
Provide a detailed description of the following dataset: Artificial signal data for signal alignment testing
BBBC005
Since robust foreground/background separation and segmentation of cellular objects (i.e., identification of which pixels belong to which objects) strongly depend on image quality, focus artifacts are detrimental to data quality. This image set provides examples of in- and out-of-focus synthetic images, which can be used...
Provide a detailed description of the following dataset: BBBC005
BBBC039
This image set is part of a high-throughput chemical screen on U2OS cells, with examples of 200 bioactive compounds. The effect of the treatments was originally imaged using the Cell Painting assay (fluorescence microscopy). This data set only includes the DNA channel of a single field of view per compound. These image...
Provide a detailed description of the following dataset: BBBC039
TNBC
Involves annotation of a large number of cells, including normal epithelial and myoepithelial breast cells (localized in ducts and lobules), invasive carcinomatous cells, fibroblasts, endothelial cells, adipocytes, macrophages, and inflammatory cells (lymphocytes and plasmocytes). In total, our data set consists of 50 ima...
Provide a detailed description of the following dataset: TNBC
alpha-matte MFIF dataset
A large-scale training dataset suffering from the defocus spread effect (DSE) is synthesized by applying an $\alpha$-matte boundary defocus model to the VOC 2012 dataset. Motivation: Due to the lack of large-scale datasets of multi-focus images, several data generation methods based on public natural image datasets ha...
Provide a detailed description of the following dataset: alpha-matte MFIF dataset
Dataset of Context information for Zero Interaction Security
We release both the processed data and evaluation results from our own experiments, and the underlying raw data that can be used for future experiments and schemes in the domain of Zero-Interaction Security. Find more details in the dataset description on Zenodo.
Provide a detailed description of the following dataset: Dataset of Context information for Zero Interaction Security
GitTables
GitTables is a corpus of currently 1M relational tables extracted from CSV files in GitHub covering 96 topics. Table columns in GitTables have been annotated with more than 2K different semantic types from Schema.org and DBpedia. The column annotations consist of semantic types, hierarchical relations, range types, tab...
Provide a detailed description of the following dataset: GitTables
PartialSpoof_v1
All existing databases of spoofed speech contain attack data that is spoofed in its entirety. In practice, it is entirely plausible that successful attacks can be mounted with utterances that are only partially spoofed. By definition, partially-spoofed utterances contain a mix of both spoofed and bona fide segments, wh...
Provide a detailed description of the following dataset: PartialSpoof_v1
SurfaceGrid
The SurfaceGrid dataset contains nearly a million 512x512 images for use in training neural networks on the shape-from-surface-contours task.
Provide a detailed description of the following dataset: SurfaceGrid
WNUT 2020
The training and development data for our task were taken from previous work on the wet lab corpus (Kulkarni et al., 2018), which consists of 623 protocols. We excluded the eight duplicate protocols from this dataset and then re-annotated the 615 unique protocols in BRAT (Stenetorp et al., 2012).
Provide a detailed description of the following dataset: WNUT 2020
selfie2anime
The selfie dataset contains 46,836 selfie images annotated with 36 different attributes. We only use photos of females as training data and test data. The size of the training dataset is 3,400, and that of the test dataset is 100, with an image size of 256 x 256. For the anime dataset, we first retrieved 69,926 ...
Provide a detailed description of the following dataset: selfie2anime
WNUT-2020 Task 2
Briefly describe the dataset. Provide: * a high-level explanation of the dataset characteristics * explain motivations and summary of its content * potential use cases of the dataset If the description or image is from a different paper, please refer to it as follows: Source: [title](url) Image Source: [title...
Provide a detailed description of the following dataset: WNUT-2020 Task 2
DIR-LAB COPDgene
Inspiratory and expiratory breath-hold CT image pairs acquired from the National Heart Lung Blood Institute COPDgene study archive.
Provide a detailed description of the following dataset: DIR-LAB COPDgene
Children's Song Dataset
Children's Song Dataset is an open-source dataset for singing voice research. This dataset contains 50 Korean and 50 English songs sung by one Korean female professional pop singer. Each song is recorded in two separate keys, resulting in a total of 200 audio recordings. Each audio recording is paired with a MIDI transcrip...
Provide a detailed description of the following dataset: Children's Song Dataset