| dataset_name | description | prompt |
|---|---|---|
WMT 2016 IT | The IT Translation Task is a shared task introduced at the First Conference on Machine Translation. Compared to WMT 2016 News, this task brought several novelties to WMT:<br>* 4 out of the 7 languages of the IT task are new in WMT,<br>* adaptation to the IT domain with its specifics such as frequent named entities (most... | Provide a detailed description of the following dataset: WMT 2016 IT |
WMT 2016 Biomedical | The Biomedical Translation Shared Task was first introduced at the First Conference of Machine Translation. The task aims to evaluate systems for the translation of biomedical titles and abstracts from scientific publications. The data includes three language pairs (English ↔ Portuguese, English ↔ Spanish, English ↔ ... | Provide a detailed description of the following dataset: WMT 2016 Biomedical |
XSum | The Extreme Summarization (**XSum**) dataset is a dataset for the evaluation of abstractive single-document summarization systems. The goal is to create a short, one-sentence news summary answering the question “What is the article about?”. The dataset consists of 226,711 news articles accompanied by a one-sentence summar... | Provide a detailed description of the following dataset: XSum |
WMT 2014 | **WMT 2014** is a collection of datasets used in shared tasks of the Ninth Workshop on Statistical Machine Translation. The workshop featured four tasks:<br>* a news translation task,<br>* a quality estimation task,<br>* a metrics task,<br>* a medical text translation task. | Provide a detailed description of the following dataset: WMT 2014 |
WMT 2014 Medical | The Medical Translation Task of WMT 2014 addresses the problem of domain-specific and genre-specific machine translation. The task is split into two subtasks: summary translation, focused on translation of sentences from summaries of medical articles, and query translation, focused on translation of queries entered by ... | Provide a detailed description of the following dataset: WMT 2014 Medical |
WMT 2015 | **WMT 2015** is a collection of datasets used in shared tasks of the Tenth Workshop on Statistical Machine Translation. The workshop featured five tasks:<br>* a news translation task,<br>* a metrics task,<br>* a tuning task,<br>* a quality estimation task,<br>* an automatic post-editing task. | Provide a detailed description of the following dataset: WMT 2015 |
WMT 2015 News | News translation is a recurring WMT task. The test set is a collection of parallel corpora consisting of about 1500 English sentences translated into 5 languages (Czech, German, Finnish, French, Russian) and an additional 1500 sentences from each of the 5 languages translated into English. The sentences are taken from newsp... | Provide a detailed description of the following dataset: WMT 2015 News |
SHAPES | **SHAPES** is a dataset of synthetic images designed to benchmark systems for understanding of spatial and logical relations among multiple objects. The dataset consists of complex questions about arrangements of colored shapes. The questions are built around compositions of concepts and relations, e.g. Is there a red ... | Provide a detailed description of the following dataset: SHAPES |
AG’s Corpus | Antonio Gulli’s corpus of news articles is a collection of more than 1 million news articles. The articles were gathered from more than 2000 news sources by ComeToMyHead over more than 1 year of activity. ComeToMyHead is an academic news search engine which has been running since July 2004.<br>The dataset is provide... | Provide a detailed description of the following dataset: AG’s Corpus |
QUASAR-S | **QUASAR-S** is a large-scale dataset aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. It consists of 37,362 cloze-style (fill-in-the-gap) queries constructed from definitions of software entity tags on the popular website Stack Overflow. Th... | Provide a detailed description of the following dataset: QUASAR-S |
QUASAR-T | **QUASAR-T** is a large-scale dataset aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. It consists of 43,013 open-domain trivia questions and their answers obtained from various internet sources. ClueWeb09 serves as the background corpus for... | Provide a detailed description of the following dataset: QUASAR-T |
MLDoc | **Multilingual Document Classification Corpus** (**MLDoc**) is a cross-lingual document classification dataset covering English, German, French, Spanish, Italian, Russian, Japanese and Chinese. It is a subset of the Reuters Corpus Volume 2 selected according to the following design choices:<br>* uniform class coverage:... | Provide a detailed description of the following dataset: MLDoc |
WMT 2018 | **WMT 2018** is a collection of datasets used in shared tasks of the Third Conference on Machine Translation. The conference builds on a series of twelve previous annual workshops and conferences on Statistical Machine Translation.<br>The conference featured ten shared tasks:<br>- a news translation task,<br>- a biomedic... | Provide a detailed description of the following dataset: WMT 2018 |
WMT 2018 News | News translation is a recurring WMT task. The test set is a collection of parallel corpora consisting of about 1500 English sentences translated into 7 languages (Chinese, Czech, Estonian, German, Finnish, Russian, Turkish) and an additional 1500 sentences from each of the 7 languages translated into English. The sentences ... | Provide a detailed description of the following dataset: WMT 2018 News |
ArxivPapers | The **ArxivPapers** dataset is an unlabelled collection of over 104K papers related to machine learning and published on arXiv.org between 2007 and 2020. The dataset includes around 94K papers (for which LaTeX source code is available) in a structured form in which each paper is split into a title, abstract, sections, paragraph... | Provide a detailed description of the following dataset: ArxivPapers |
SegmentedTables | The **SegmentedTables** dataset is a collection of almost 2,000 tables extracted from 352 machine learning papers. Each table consists of rich text content, layout and caption. Tables are annotated with types (leaderboard, ablation, irrelevant) and cells of relevant tables are annotated with semantic roles (such as “pa... | Provide a detailed description of the following dataset: SegmentedTables |
LinkedResults | The **LinkedResults** dataset contains around 1,600 results capturing performance of machine learning models from tables of 239 papers. All tables come from a subset of the SegmentedTables dataset. Each result is a tuple of the form (task, dataset, metric name, metric value) and is linked to a particular table, row and cell it... | Provide a detailed description of the following dataset: LinkedResults |
PWC Leaderboards | The **Papers with Code Leaderboards** dataset is a collection of over 5,000 results capturing performance of machine learning models. Each result is a tuple of the form (task, dataset, metric name, metric value). The data was collected using the Papers with Code review interface.<br>Source: [AxCell: Automatic Extraction of R... | Provide a detailed description of the following dataset: PWC Leaderboards |
SKU110K | The SKU110K dataset provides 11,762 images with more than 1.7 million annotated bounding boxes captured in densely packed scenarios, including 8,233 images for training, 588 images for validation, and 2,941 images for testing. There are around 1,733,678 instances in total. The images are collected from thousands of sup... | Provide a detailed description of the following dataset: SKU110K |
UBIRIS.v2 | The **UBIRIS.v2** iris dataset contains 11,102 iris images from 261 subjects with 10 images per subject. The images were captured under unconstrained conditions (at-a-distance, on-the-move and on the visible wavelength), with realistic noise factors.<br>Source: [Constrained Design of Deep Iris Networks](https://arxiv.or... | Provide a detailed description of the following dataset: UBIRIS.v2 |
VIVA | The **VIVA** challenge’s dataset is a multimodal dynamic hand gesture dataset specifically designed with difficult settings of cluttered background, volatile illumination, and frequent occlusion for studying natural human activities in real-world driving settings. This dataset was captured using a Microsoft Kinect devi... | Provide a detailed description of the following dataset: VIVA |
ITOP | The **ITOP** dataset consists of 40K training and 10K testing depth images for each of the front-view and top-view tracks. This dataset contains depth images with 20 actors who perform 15 sequences each and is recorded by two Asus Xtion Pro cameras. The ground-truth of this dataset is the 3D coordinates of 15 body join... | Provide a detailed description of the following dataset: ITOP |
Dayton | The **Dayton** dataset is a dataset for ground-to-aerial (or aerial-to-ground) image translation, or cross-view image synthesis. It contains images of road views and aerial views of roads. There are 76,048 images in total and the train/test split is 55,000/21,048. The images in the original dataset have 354×354 resolut... | Provide a detailed description of the following dataset: Dayton |
AOLP | The application-oriented license plate (**AOLP**) benchmark database has 2049 images of Taiwan license plates. This database is categorized into three subsets: access control (AC) with 681 samples, traffic law enforcement (LE) with 757 samples, and road patrol (RP) with 611 samples. AC refers to the cases that a vehicl... | Provide a detailed description of the following dataset: AOLP |
Set11 | **Set11** is a dataset of 11 grayscale images. It is used for image reconstruction and image compression.<br>Source: [ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing](https://arxiv.org/abs/1706.07929)<br>Image Source: [https://arxiv.org/pdf/1706.07929.pdf](https://arxiv.org... | Provide a detailed description of the following dataset: Set11 |
SALICON | The SALIency in CONtext (**SALICON**) dataset contains 10,000 training images, 5,000 validation images and 5,000 test images for saliency prediction. This dataset has been created by annotating saliency in images from MS COCO.<br>The ground-truth saliency annotations include fixations generated from mouse trajectories. T... | Provide a detailed description of the following dataset: SALICON |
GRID Dataset | The QMUL underGround Re-IDentification (**GRID**) dataset contains 250 pedestrian image pairs. Each pair contains two images of the same individual seen from different camera views. All images are captured from 8 disjoint camera views installed in a busy underground station. The figures beside show a snapshot of each o... | Provide a detailed description of the following dataset: GRID Dataset |
Flickr30K Entities | The **Flickr30K Entities** dataset is an extension to the Flickr30K dataset. It augments the original 158k captions with 244k coreference chains, linking mentions of the same entities across different captions for the same image, and associating them with 276k manually annotated bounding boxes. This is used to define a... | Provide a detailed description of the following dataset: Flickr30K Entities |
FGVC-Aircraft | FGVC-Aircraft contains 10,200 images of aircraft, with 100 images for each of 102 different aircraft model variants, most of which are airplanes. The (main) aircraft in each image is annotated with a tight bounding box and a hierarchical airplane model label.<br>Aircraft models are organized in a four-level hierarchy. T... | Provide a detailed description of the following dataset: FGVC-Aircraft |
DUTS | **DUTS** is a saliency detection dataset containing 10,553 training images and 5,019 test images. All training images are collected from the ImageNet DET training/val sets, while test images are collected from the ImageNet DET test set and the SUN data set. Both the training and test set contain very challenging scenar... | Provide a detailed description of the following dataset: DUTS |
LIP | The **LIP** (**Look into Person**) dataset is a large-scale dataset focusing on semantic understanding of a person. It contains 50,000 images with elaborated pixel-wise annotations of 19 semantic human part labels and 2D human poses with 16 key points. The images are collected from real-world scenarios and the subjects... | Provide a detailed description of the following dataset: LIP |
ApolloScape | **ApolloScape** is a large dataset consisting of over 140,000 video frames (73 street scene videos) from various locations in China under varying weather conditions. Pixel-wise semantic annotation of the recorded data is provided in 2D, with point-wise semantic annotation in 3D for 28 classes. In addition, the dataset ... | Provide a detailed description of the following dataset: ApolloScape |
PoseTrack | The **PoseTrack** dataset is a large-scale benchmark for multi-person pose estimation and tracking in videos. It requires not only pose estimation in single frames, but also temporal tracking across frames. It contains 514 videos including 66,374 frames in total, split into 300, 50 and 208 videos for training, validati... | Provide a detailed description of the following dataset: PoseTrack |
ICVL Hand Posture | The ICVL dataset is a hand pose estimation dataset that consists of 330K training frames and two testing sequences of 800 frames each. The dataset is collected from 10 different subjects with 16 hand joint annotations for each frame.<br>Source: [AWR: Adaptive Weighting Regression for 3D Hand Pose Estimation](https://arxi... | Provide a detailed description of the following dataset: ICVL Hand Posture |
SegTrack-v2 | SegTrack v2 is a video segmentation dataset with full pixel-level annotations on multiple objects at each frame within each video. | Provide a detailed description of the following dataset: SegTrack-v2 |
Foggy Cityscapes | **Foggy Cityscapes** is a synthetic foggy dataset which simulates fog on real scenes. Each foggy image is rendered with a clear image and depth map from Cityscapes. Thus the annotations and data split in Foggy Cityscapes are inherited from Cityscapes. | Provide a detailed description of the following dataset: Foggy Cityscapes |
Vimeo90K | Vimeo-90K is a large-scale, high-quality video dataset for lower-level video processing. It supports three different video processing tasks: frame interpolation, video denoising/deblocking, and video super-resolution. | Provide a detailed description of the following dataset: Vimeo90K |
MPIIGaze | **MPIIGaze** is a dataset for appearance-based gaze estimation in the wild. It contains 213,659 images collected from 15 participants during natural everyday laptop use over more than three months. It has a large variability in appearance and illumination. | Provide a detailed description of the following dataset: MPIIGaze |
ReferItGame | The ReferIt dataset contains 130,525 expressions for referring to 96,654 objects in 19,894 images of natural scenes. | Provide a detailed description of the following dataset: ReferItGame |
MultiTHUMOS | The **MultiTHUMOS** dataset contains dense, multilabel, frame-level action annotations for 30 hours across 400 videos in the THUMOS'14 action detection dataset. It consists of 38,690 annotations of 65 action classes, with an average of 1.5 labels per frame and 10.5 action classes per video. | Provide a detailed description of the following dataset: MultiTHUMOS |
CrowdHuman | **CrowdHuman** is a large, richly annotated human detection dataset, which contains 15,000, 4,370 and 5,000 images collected from the Internet for training, validation and testing respectively. This is more than a 10× increase over previous challenging pedestrian detection datasets such as CityPersons. The tot... | Provide a detailed description of the following dataset: CrowdHuman |
MSRDailyActivity3D | The **DailyActivity3D** dataset is a daily activity dataset captured by a Kinect device. There are 16 activity types: drink, eat, read book, call cellphone, write on a paper, use laptop, use vacuum cleaner, cheer up, sit still, toss paper, play game, lay down on sofa, walk, play guitar, stand up, sit down. If possible, eac... | Provide a detailed description of the following dataset: MSRDailyActivity3D |
McMaster | The **McMaster** dataset is a dataset for color demosaicing, which contains 18 cropped images of size 500×500. | Provide a detailed description of the following dataset: McMaster |
Sketch | The **Sketch** dataset contains over 20,000 sketches evenly distributed over 250 object categories. | Provide a detailed description of the following dataset: Sketch |
Wireframe | The **Wireframe** dataset consists of 5,462 images (5,000 for training, 462 for test) of indoor and outdoor man-made scenes. | Provide a detailed description of the following dataset: Wireframe |
MNIST-M | **MNIST-M** is created by combining MNIST digits with the patches randomly extracted from color photos of BSDS500 as their background. It contains 59,001 training and 90,001 test images. | Provide a detailed description of the following dataset: MNIST-M |
tieredImageNet | The **tieredImageNet** dataset is a larger subset of ILSVRC-12 with 608 classes (779,165 images) grouped into 34 higher-level nodes in the ImageNet human-curated hierarchy. This set of nodes is partitioned into 20, 6, and 8 disjoint sets of training, validation, and testing nodes, and the corresponding classes form the... | Provide a detailed description of the following dataset: tieredImageNet |
aPY | **aPY** is a coarse-grained dataset composed of 15339 images from 3 broad categories (animals, objects and vehicles), further divided into a total of 32 subcategories (aeroplane, …, zebra). | Provide a detailed description of the following dataset: aPY |
VisDA-2017 | **VisDA-2017** is a simulation-to-real dataset for domain adaptation with over 280,000 images across 12 categories in the training, validation and testing domains. The training images are generated from the same object under different circumstances, while the validation images are collected from MSCOCO. | Provide a detailed description of the following dataset: VisDA-2017 |
ImageNet-32 | ImageNet-32 is a down-sampled version of ImageNet made up of small images. It is composed of 1,281,167 training images and 50,000 test images with 1,000 labels. | Provide a detailed description of the following dataset: ImageNet-32 |
MVTecAD | MVTec AD is a dataset for benchmarking anomaly detection methods with a focus on industrial inspection. It contains over 5000 high-resolution images divided into fifteen different object and texture categories. Each category comprises a set of defect-free training images and a test set of images with various kinds of d... | Provide a detailed description of the following dataset: MVTecAD |
Kvasir | The KVASIR Dataset was released as part of the medical multimedia challenge presented by MediaEval. It is based on images obtained from the GI tract via an endoscopy procedure. The dataset is composed of images that are annotated and verified by medical doctors, and captures 8 different classes. The classes are based o... | Provide a detailed description of the following dataset: Kvasir |
Syn2Real | **Syn2Real** is a synthetic-to-real visual domain adaptation benchmark meant to encourage further development of robust domain transfer methods. The goal is to train a model on a synthetic "source" domain and then update it so that its performance improves on a real "target" domain, without using any target annotations. ... | Provide a detailed description of the following dataset: Syn2Real |
ANLI | The Adversarial Natural Language Inference (**ANLI**, Nie et al.) dataset is a large-scale NLI benchmark collected via an iterative, adversarial human-and-model-in-the-loop procedure. In particular, the data is selected to be difficult for state-of-the-art models, including BERT and RoBERTa. | Provide a detailed description of the following dataset: ANLI |
Cityscapes | **Cityscapes** is a large-scale database which focuses on semantic understanding of urban street scenes. It provides semantic, instance-wise, and dense pixel annotations for 30 classes grouped into 8 categories (flat surfaces, humans, vehicles, constructions, objects, nature, sky, and void). The dataset consists of aro... | Provide a detailed description of the following dataset: Cityscapes |
PASCAL VOC | The PASCAL Visual Object Classes (VOC) 2012 dataset contains 20 object categories including vehicles, household, animals, and other: aeroplane, bicycle, boat, bus, car, motorbike, train, bottle, chair, dining table, potted plant, sofa, TV/monitor, bird, cat, cow, dog, horse, sheep, and person. Each image in this datase... | Provide a detailed description of the following dataset: PASCAL VOC |
VGG Face | The **VGG Face** dataset is a face identity recognition dataset that consists of 2,622 identities. It contains over 2.6 million images. | Provide a detailed description of the following dataset: VGG Face |
LibriSpeech | The **LibriSpeech** corpus is a collection of approximately 1,000 hours of audiobooks that are a part of the LibriVox project. Most of the audiobooks come from Project Gutenberg. The training data is split into 3 partitions of 100hr, 360hr, and 500hr sets while the dev and test data are split into the ’clean’ and ’... | Provide a detailed description of the following dataset: LibriSpeech |
CASIA-WebFace | The **CASIA-WebFace** dataset is used for face verification and face identification tasks. The dataset contains 494,414 face images of 10,575 real identities collected from the web. | Provide a detailed description of the following dataset: CASIA-WebFace |
Set14 | The **Set14** dataset consists of 14 images commonly used for testing the performance of image super-resolution models.<br>Image Source: [https://www.ece.rice.edu/~wakin/images/](https://www.ece.rice.edu/~wakin/images/) | Provide a detailed description of the following dataset: Set14 |
MS-Celeb-1M | The **MS-Celeb-1M** dataset is a large-scale face recognition dataset consisting of 100K identities, each with about 100 facial images. The original identity labels are obtained automatically from webpages.<br>**NOTE**: This dataset [is currently inactive](https://exposing.ai/msceleb/). | Provide a detailed description of the following dataset: MS-Celeb-1M |
UCI Machine Learning Repository | **UCI Machine Learning Repository** is a collection of over 550 datasets. | Provide a detailed description of the following dataset: UCI Machine Learning Repository |
SYNTHIA | The **SYNTHIA** dataset is a synthetic dataset that consists of 9400 multi-viewpoint photo-realistic frames rendered from a virtual city and comes with pixel-level semantic annotations for 13 classes. Each frame has resolution of 1280 × 960. | Provide a detailed description of the following dataset: SYNTHIA |
NYUv2 | The **NYU-Depth V2** data set comprises video sequences from a variety of indoor scenes as recorded by both the RGB and Depth cameras from the Microsoft Kinect. It features:<br>* 1449 densely labeled pairs of aligned RGB and depth images<br>* 464 new scenes taken from 3 cities<br>* 407,024 new unlabeled frames<br>* Eac... | Provide a detailed description of the following dataset: NYUv2 |
Urban100 | The **Urban100** dataset contains 100 images of urban scenes. It is commonly used as a test set to evaluate the performance of super-resolution models.<br>Image Source: [http://vllab.ucmerced.edu/wlai24/LapSRN/](http://vllab.ucmerced.edu/wlai24/LapSRN/) | Provide a detailed description of the following dataset: Urban100 |
VGGFace2 | The **VGGFace2** dataset is made of around 3.31 million images divided into 9131 classes, each representing a different person identity. The dataset is divided into two splits, one for training and one for testing. The latter contains around 170,000 images divided into 500 identities while all the other images belong t... | Provide a detailed description of the following dataset: VGGFace2 |
PASCAL3D+ | The Pascal3D+ multi-view dataset consists of images in the wild, i.e., images of object categories exhibiting high variability, captured under uncontrolled settings, in cluttered scenes and under many different poses. Pascal3D+ contains 12 categories of rigid objects selected from the PASCAL VOC 2012 dataset. These obj... | Provide a detailed description of the following dataset: PASCAL3D+ |
SUN RGB-D | The SUN RGBD dataset contains 10335 real RGB-D images of room scenes. Each RGB image has a corresponding depth and segmentation map. As many as 700 object categories are labeled. The training and testing sets contain 5285 and 5050 images, respectively. | Provide a detailed description of the following dataset: SUN RGB-D |
SUNCG | **SUNCG** is a large-scale dataset of synthetic 3D scenes with dense volumetric annotations.<br>The dataset is currently not available. | Provide a detailed description of the following dataset: SUNCG |
Places205 | The **Places205** dataset is a large-scale scene-centric dataset with 205 common scene categories. The training dataset contains around 2,500,000 images from these categories. In the training set, each scene category has the minimum 5,000 and maximum 15,000 images. The validation set contains 100 images per category (a... | Provide a detailed description of the following dataset: Places205 |
ModelNet | The **ModelNet40** dataset contains synthetic object point clouds. As the most widely used benchmark for point cloud analysis, ModelNet40 is popular because of its varied categories and clean, well-constructed shapes. The original ModelNet40 consists of 12,311 CAD-generated meshes in 40 categories (such as a... | Provide a detailed description of the following dataset: ModelNet |
YAGO | **Yet Another Great Ontology** (**YAGO**) is a Knowledge Graph that augments WordNet with common knowledge facts extracted from Wikipedia, converting WordNet from a primarily linguistic resource to a common knowledge base. YAGO originally consisted of more than 1 million entities and 5 million facts describing relation... | Provide a detailed description of the following dataset: YAGO |
MPI Sintel | MPI (Max Planck Institute) Sintel is a dataset for optical flow evaluation that has 1064 synthesized stereo images and ground truth data for disparity. Sintel is derived from the open-source 3D animated short film *Sintel*. The dataset has 23 different scenes. The stereo images are RGB while the disparity is grayscale. Both ... | Provide a detailed description of the following dataset: MPI Sintel |
Helen | The HELEN dataset is composed of 2330 face images of 400×400 pixels with labeled facial components generated through manually-annotated contours along eyes, eyebrows, nose, lips and jawline. | Provide a detailed description of the following dataset: Helen |
Omniglot | The Omniglot data set is designed for developing more human-like learning algorithms. It contains 1623 different handwritten characters from 50 different alphabets. Each of the 1623 characters was drawn online via Amazon's Mechanical Turk by 20 different people. Each image is paired with stroke data, a sequence of [x,... | Provide a detailed description of the following dataset: Omniglot |
FrameNet | **FrameNet** is a linguistic knowledge graph containing information about lexical and predicate argument semantics of the English language. FrameNet contains two distinct entity classes: frames and lexical units, where a frame is a meaning and a lexical unit is a single meaning for a word. | Provide a detailed description of the following dataset: FrameNet |
LSUN | The Large-scale Scene Understanding (**LSUN**) challenge aims to provide a different benchmark for large-scale scene classification and understanding. The LSUN classification dataset contains 10 scene categories, such as dining room, bedroom, kitchen, outdoor church, and so on. For training data, each category contains... | Provide a detailed description of the following dataset: LSUN |
LFPW | The **Labeled Face Parts in-the-Wild** (**LFPW**) dataset consists of 1,432 faces from images downloaded from the web using simple text queries on sites such as google.com, flickr.com, and yahoo.com. Each image was labeled by three MTurk workers, and 29 fiducial points are included in the dataset. | Provide a detailed description of the following dataset: LFPW |
CARLA | **CARLA** (CAR Learning to Act) is an open simulator for urban driving, developed as an open-source layer over Unreal Engine 4. It provides sensors in the form of RGB cameras (with customizable positions), ground truth depth maps, gro... | Provide a detailed description of the following dataset: CARLA |
OTB | Object Tracking Benchmark (**OTB**) is a visual tracking benchmark that is widely used to evaluate the performance of a visual tracking algorithm. The dataset contains a total of 100 sequences and each is annotated frame-by-frame with bounding boxes and 11 challenge attributes. [OTB-2013](otb-2013) dataset contains 51 ... | Provide a detailed description of the following dataset: OTB |
Places365 | The **Places365** dataset is a scene recognition dataset. It is composed of 10 million images comprising 434 scene classes. There are two versions of the dataset: Places365-Standard with 1.8 million train and 36000 validation images from K=365 scene classes, and Places365-Challenge-2016, in which the size of the traini... | Provide a detailed description of the following dataset: Places365 |
Extended Yale B | The **Extended Yale B** database contains 2414 frontal-face images with size 192×168 over 38 subjects and about 64 images per subject. The images were captured under different lighting conditions and various facial expressions. | Provide a detailed description of the following dataset: Extended Yale B |
IMDb Movie Reviews | The **IMDb Movie Reviews** dataset is a binary sentiment analysis dataset consisting of 50,000 reviews from the Internet Movie Database (IMDb) labeled as positive or negative. The dataset contains an even number of positive and negative reviews. Only highly polarizing reviews are considered. A negative review has a sco... | Provide a detailed description of the following dataset: IMDb Movie Reviews |
BookCorpus | **BookCorpus** is a large collection of free novel books written by unpublished authors, which contains 11,038 books (around 74M sentences and 1G words) of 16 different sub-genres (e.g., Romance, Historical, Adventure, etc.). | Provide a detailed description of the following dataset: BookCorpus |
FaceWarehouse | **FaceWarehouse** is a 3D facial expression database that provides the facial geometry of 150 subjects, covering a wide range of ages and ethnic backgrounds. | Provide a detailed description of the following dataset: FaceWarehouse |
LSP | The **Leeds Sports Pose** (**LSP**) dataset is widely used as the benchmark for human pose estimation. The original LSP dataset contains 2,000 images of sportspersons gathered from Flickr, 1000 for training and 1000 for testing. Each image is annotated with 14 joint locations, where left and right joints are consistent... | Provide a detailed description of the following dataset: LSP |
KTH | The effort to create a non-trivial and publicly available dataset for action recognition was initiated at the **KTH** Royal Institute of Technology in 2004. The KTH dataset is one of the most standard datasets, which contains six actions: walk, jog, run, box, hand-wave, and hand-clap. To account for performance nuance... | Provide a detailed description of the following dataset: KTH |
Places | The **Places** dataset is proposed for scene recognition and contains more than 2.5 million images covering more than 205 scene categories with more than 5,000 images per category. | Provide a detailed description of the following dataset: Places |
MoCap | Collection of various motion capture recordings (walking, dancing, sports, and others) performed by over 140 subjects. The database contains free motions which you can download and use. There is a zip file of all asf/amc's on the FAQs page.
Source: [https://www.re3data.org/repository/r3d100012183](https://www.re3data.... | Provide a detailed description of the following dataset: MoCap |
KIT Whole-Body Human Motion | The **KIT Whole-Body Human Motion** Database is a large-scale dataset of whole-body human motion, together with methods and tools that allow a unifying representation of captured human motion, efficient search in the database, and the transfer of subject-specific motions to robots with different embodiments. Captur... | Provide a detailed description of the following dataset: KIT Whole-Body Human Motion |
Meta-Dataset | The **Meta-Dataset** benchmark is a large few-shot learning benchmark and consists of multiple datasets of different data distributions. It does not restrict few-shot tasks to have fixed ways and shots, thus representing a more realistic scenario. It consists of 10 datasets from diverse domains:
* ILSVRC-2012 (the ... | Provide a detailed description of the following dataset: Meta-Dataset |
USF | The **USF** **Human ID Gait Challenge Dataset** is a dataset of videos for gait recognition. It contains videos from 122 subjects under up to 32 combinations of variation factors.
Source: [http://www.eng.usf.edu/cvprg/Gait_Data.html](http://www.eng.usf.edu/cvprg/Gait_Data.html) | Provide a detailed description of the following dataset: USF |
BirdSong | The **BirdSong** dataset consists of audio recordings of bird songs at the H. J. Andrews (HJA) Experimental Forest, using unattended microphones. The goal of the dataset is to provide data to automatically identify the species of bird responsible for each utterance in these recordings. The dataset contains 548 10-secon... | Provide a detailed description of the following dataset: BirdSong |
Oxford5k | Oxford5k is the **Oxford Buildings** Dataset, which contains 5,062 images collected from Flickr. It offers a set of 55 queries for 11 landmark buildings, five for each landmark. | Provide a detailed description of the following dataset: Oxford5k |
CBSD68 | The **Color BSD68** (**CBSD68**) dataset for image denoising benchmarks is part of the Berkeley Segmentation Dataset and Benchmark. It is used to measure the performance of image denoising algorithms and contains 68 images. | Provide a detailed description of the following dataset: CBSD68 |
ScribbleSup | The **PASCAL-Scribble Dataset** is an extension of the PASCAL dataset with scribble annotations for semantic segmentation. The annotations follow two different protocols. In the first protocol, the PASCAL VOC 2012 set is annotated, with 20 object categories (aeroplane, bicycle, ...) and one background category. There a... | Provide a detailed description of the following dataset: ScribbleSup |
Stanford Background | The **Stanford Background** dataset contains 715 RGB images and the corresponding label images. Images are approximately 240×320 pixels in size, and pixels are classified into eight different categories. | Provide a detailed description of the following dataset: Stanford Background |
New College | The **New College** Data is a freely available dataset collected from a robot completing several loops outdoors around the New College campus in Oxford. The data includes odometry, laser scan, and visual information. The dataset URL is not working anymore. | Provide a detailed description of the following dataset: New College |
MALF | The **MALF** dataset is a large dataset with 5,250 images annotated with multiple facial attributes, specifically constructed for fine-grained evaluation.
Source: [Pushing the Limits of Unconstrained Face Detection:a Challenge Dataset and Baseline Results](https://arxiv.org/abs/1804.10275)
Image Source: [http... | Provide a detailed description of the following dataset: MALF |
Oxford-Affine | The **Oxford-Affine** dataset is a small dataset containing 8 scenes with a sequence of 6 images per scene. The images in a sequence are related by homographies.
Source: [A Large Dataset for Improving Patch Matching](https://arxiv.org/abs/1801.01466)
Image Source: [https://www.robots.ox.ac.uk/~vgg/data/affine/](https://... | Provide a detailed description of the following dataset: Oxford-Affine |