dataset_name | description | prompt |
|---|---|---|
Argoverse | **Argoverse** is a tracking benchmark with over 30K scenarios collected in Pittsburgh and Miami. Each scenario is a sequence of frames sampled at 10 Hz. Each sequence contains one object of interest, called the “agent”, and the task is to predict the agent's future locations over a 3-second horizon. The sequences are spli... | Provide a detailed description of the following dataset: Argoverse |
MIB Dataset | You need to request access to download and use the dataset.
It contains fake and real Twitter accounts and their followers'/friends' IDs (a graph can be built from these). | Provide a detailed description of the following dataset: MIB Dataset |
CLEVR | **CLEVR** (**Compositional Language and Elementary Visual Reasoning**) is a synthetic Visual Question Answering dataset. It contains images of 3D-rendered objects; each image comes with a number of highly compositional questions that fall into different categories. Those categories fall into 5 classes of tasks: Exist, ... | Provide a detailed description of the following dataset: CLEVR |
PROBA-V | The PROBA-V Super-Resolution dataset is the official dataset of ESA's Kelvins competition for "PROBA-V Super Resolution". It contains satellite data from 74 hand-selected regions around the globe at different points in time. The data is composed of radiometrically and geometrically corrected Top-Of-Atmosphere (TOA) ref... | Provide a detailed description of the following dataset: PROBA-V |
Tai-Chi-HD | **Tai-Chi-HD** is a high-resolution dataset that can be used as a reference benchmark for evaluating frameworks for image animation and video generation. It consists of cropped videos of full human bodies performing Tai Chi actions.
Image source: [https://papers.nips.cc/paper/2019/file/31c0b36aef265d9221af80872ceb62... | Provide a detailed description of the following dataset: Tai-Chi-HD |
CMU-MOSEI | CMU Multimodal Opinion Sentiment and Emotion Intensity (**CMU-MOSEI**) is the largest dataset of sentence level sentiment analysis and emotion recognition in online videos. CMU-MOSEI contains more than 65 hours of annotated video from more than 1000 speakers and 250 topics. | Provide a detailed description of the following dataset: CMU-MOSEI |
AffectNet | **AffectNet** is a large facial expression dataset with around 0.4 million images manually labeled for the presence of eight (neutral, happy, angry, sad, fear, surprise, disgust, contempt) facial expressions along with the intensity of valence and arousal. | Provide a detailed description of the following dataset: AffectNet |
FER+ | The **FER+** dataset is an extension of the original FER dataset, where the images have been re-labelled into one of 8 emotion types: neutral, happiness, surprise, sadness, anger, disgust, fear, and contempt. | Provide a detailed description of the following dataset: FER+ |
CommonGen | CommonGen is constructed from a combination of crowdsourced and existing caption corpora and consists of 79k commonsense descriptions over 35k unique concept-sets. | Provide a detailed description of the following dataset: CommonGen |
The China Physiological Signal Challenge 2018 | The China Physiological Signal Challenge 2018 aims to encourage the development of algorithms to identify the rhythm/morphology abnormalities from 12-lead ECGs. The data used in CPSC 2018 include one normal ECG type and eight abnormal types. | Provide a detailed description of the following dataset: The China Physiological Signal Challenge 2018 |
University-1652 | Contains data of 1,652 university buildings around the world from three platforms: synthetic drones, satellites, and ground cameras. University-1652 is a drone-based geo-localization dataset and enables two new tasks: drone-view target localization and drone navigation. | Provide a detailed description of the following dataset: University-1652 |
FQuAD | A native French reading comprehension dataset of questions and answers on a set of Wikipedia articles, consisting of 25,000+ samples for the 1.0 version and 60,000+ samples for the 1.1 version. | Provide a detailed description of the following dataset: FQuAD |
HARD | The Hotel Arabic-Reviews Dataset (HARD) contains 93,700 hotel reviews in Arabic. The reviews were collected from the Booking.com website during June/July 2016 and are expressed in Modern Standard Arabic as well as dialectal Arabic. | Provide a detailed description of the following dataset: HARD |
3DFAW | **3DFAW** contains 23k images with 66 3D face keypoint annotations.
Source: [Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild](https://arxiv.org/abs/1911.11130)
Image Source: [http://mhug.disi.unitn.it/workshop/3dfaw/](http://mhug.disi.unitn.it/workshop/3dfaw/) | Provide a detailed description of the following dataset: 3DFAW |
Breakfast | The **Breakfast** Actions Dataset comprises 10 actions related to breakfast preparation, performed by 52 different individuals in 18 different kitchens. The dataset is one of the largest fully annotated datasets available. The actions are recorded “in the wild” as opposed to a single controlled lab environment. It c... | Provide a detailed description of the following dataset: Breakfast |
NELL-995 | NELL-995 KG Completion Dataset | Provide a detailed description of the following dataset: NELL-995 |
Food-101N | The Food-101N dataset is introduced in "CleanNet: Transfer Learning for Scalable Image Training with Label Noise" (CVPR'18). It is an image dataset containing 310,009 images of food recipes classified into 101 classes (categories). Food-101N and the Food-101 dataset share the same 101 classes, whereas Food-101N ha... | Provide a detailed description of the following dataset: Food-101N |
Composition-1K | Composition-1K is a large-scale image matting dataset including 49300 training images and 1000 testing images.
Image source: [https://arxiv.org/pdf/1703.03872v3.pdf](https://arxiv.org/pdf/1703.03872v3.pdf) | Provide a detailed description of the following dataset: Composition-1K |
KolektorSDD | The dataset is constructed from images of defective production items that were provided and annotated by [Kolektor Group d.o.o.](https://www.kolektordigital.com/en/advanced-visual-tecnologies). The images were captured in a controlled industrial environment in a real-world case.
The dataset consists of 399 images at... | Provide a detailed description of the following dataset: KolektorSDD |
ASLG-PC12 | An artificial corpus built using grammatical dependencies rules due to the lack of resources for Sign Language. | Provide a detailed description of the following dataset: ASLG-PC12 |
CIFAR10-DVS | **CIFAR10-DVS** is an event-stream dataset for object classification. 10,000 frame-based images that come from CIFAR-10 dataset are converted into 10,000 event streams with an event-based sensor, whose resolution is 128×128 pixels. The dataset has an intermediate difficulty with 10 different classes. The repeated close... | Provide a detailed description of the following dataset: CIFAR10-DVS |
Stanford Online Products | The **Stanford Online Products** (SOP) dataset has 22,634 classes with 120,053 product images. The first 11,318 classes (59,551 images) are used for training and the remaining 11,316 classes (60,502 images) are used for testing. | Provide a detailed description of the following dataset: Stanford Online Products |
In-Shop | The In-shop Clothes Retrieval Benchmark evaluates the performance of in-shop clothes retrieval. It is a large subset of DeepFashion, containing large pose and scale variations. It also has large diversity, large quantities, and rich annotations, including:
- 7,982 clothing items;
- 52,712 in-sho... | Provide a detailed description of the following dataset: In-Shop |
Ecoli | The **Ecoli** dataset is a dataset for protein localization. It contains 336 E.coli proteins split into 8 different classes. | Provide a detailed description of the following dataset: Ecoli |
Yeast | Yeast dataset consists of a protein-protein interaction network. Interaction detection methods have led to the discovery of thousands of interactions between proteins, and discerning relevance within large-scale data sets is important to present-day biology. | Provide a detailed description of the following dataset: Yeast |
MOT17 | The **Multiple Object Tracking 17** (**MOT17**) dataset is a dataset for multiple object tracking. Similar to its previous version MOT16, this challenge contains seven different indoor and outdoor scenes of public places with pedestrians as the objects of interest. A video for each scene is divided into two clips, one ... | Provide a detailed description of the following dataset: MOT17 |
MOT20 | **MOT20** is a dataset for multiple object tracking. The dataset contains 8 challenging video sequences (4 train, 4 test) in unconstrained environments, from crowded places such as train stations, town squares and a sports stadium.
Image Source: [https://motchallenge.net/vis/MOT20-04](https://motchallenge.net/vis/MOT20... | Provide a detailed description of the following dataset: MOT20 |
SEMAINE | The **SEMAINE** videos dataset contains spontaneous data capturing the audiovisual interaction between a human and an operator undertaking the role of an avatar with four personalities: Poppy (happy), Obadiah (gloomy), Spike (angry) and Prudence (pragmatic). The audiovisual sequences have been recorded at a video rate ... | Provide a detailed description of the following dataset: SEMAINE |
R2R | R2R is a dataset for visually-grounded natural language navigation in real buildings. The dataset requires autonomous agents to follow human-generated navigation instructions in previously unseen buildings. For training, each instruction is associated with a Matterport3D Simulator traj... | Provide a detailed description of the following dataset: R2R |
SceneNN | SceneNN is an RGB-D scene dataset consisting of more than 100 indoor scenes. The scenes are captured at various places, e.g., offices, dormitories, classrooms, and pantries, at the University of Massachusetts Boston and the Singapore University of Technology and Design.
All scenes are reconstructed into triangle meshes and ha... | Provide a detailed description of the following dataset: SceneNN |
EGTEA | Extended GTEA Gaze+
EGTEA Gaze+ is a large-scale dataset for FPV actions and gaze. It subsumes GTEA Gaze+ and comes with HD videos (1280x960), audio, gaze tracking data, frame-level action annotations, and pixel-level hand masks at sampled frames.
Specifically, EGTEA Gaze+ contains 28 hours (de-identified) of cookin... | Provide a detailed description of the following dataset: EGTEA |
GAP | **GAP** is a graph processing benchmark suite with the goal of helping to standardize graph processing evaluations. Fewer differences between graph processing evaluations will make it easier to compare different research efforts and quantify improvements. The benchmark not only specifies graph kernels, input graphs, an... | Provide a detailed description of the following dataset: GAP |
BIPED | It contains 250 outdoor images of 1280×720 pixels each. These images have been carefully annotated by experts in the computer vision field; hence, no redundancy has been considered. In spite of that, all results have been cross-checked several times in order to correct possible mistakes or wrong edges b... | Provide a detailed description of the following dataset: BIPED |
StereoSet | A large-scale natural dataset in English to measure stereotypical biases in four domains: gender, profession, race, and religion. | Provide a detailed description of the following dataset: StereoSet |
MIT-States | The **MIT-States** dataset has 245 object classes, 115 attribute classes and ∼53K images. There is a wide range of objects (e.g., fish, persimmon, room) and attributes (e.g., mossy, deflated, dirty). On average, each object instance is modified by one of the 9 attributes it affords. | Provide a detailed description of the following dataset: MIT-States |
Caltech-256 | **Caltech-256** is an object recognition dataset containing 30,607 real-world images, of different sizes, spanning 257 classes (256 object classes and an additional clutter class). Each class is represented by at least 80 images. The dataset is a superset of the Caltech-101 dataset. | Provide a detailed description of the following dataset: Caltech-256 |
SCDE | **SCDE** is a human-created sentence cloze dataset, collected from public school English examinations in China. The task requires a model to fill up multiple blanks in a passage from a shared candidate set with distractors designed by English teachers. | Provide a detailed description of the following dataset: SCDE |
VATEX | **VATEX** is a multilingual, large, linguistically complex, and diverse dataset in terms of both videos and natural language descriptions. It has two tasks for video-and-language research: (1) Multilingual Video Captioning, aimed at describing a video in various languages with a compact unified captioning model, and (2) V... | Provide a detailed description of the following dataset: VATEX |
ViGGO | The ViGGO corpus is a set of 6,900 meaning representation to natural language utterance pairs in the video game domain. The meaning representations are of 9 different dialogue acts. | Provide a detailed description of the following dataset: ViGGO |
REAL275 | REAL275 is a benchmark for category-level pose estimation. It contains 4,300 training frames, 950 validation frames, and 2,750 testing frames across 18 different real scenes. | Provide a detailed description of the following dataset: REAL275 |
ISTD | The Image Shadow Triplets dataset (**ISTD**) is a dataset for shadow understanding that contains 1870 image triplets of shadow image, shadow mask, and shadow-free image. | Provide a detailed description of the following dataset: ISTD |
LCQMC | **LCQMC** is a large-scale Chinese question matching corpus. LCQMC is more general than paraphrase corpus as it focuses on intent matching rather than paraphrase. The corpus contains 260,068 question pairs with manual annotation. | Provide a detailed description of the following dataset: LCQMC |
CoNLL-2009 | The task builds on the CoNLL-2008 task and extends it to multiple languages. The core of the task is to predict syntactic and semantic dependencies and their labeling. Data is provided for both statistical training and evaluation, which extract these labeled dependencies from manually annotated treebanks such as the Pe... | Provide a detailed description of the following dataset: CoNLL-2009 |
Ciao | The **Ciao** dataset contains rating information given by users to items, and also contains item category information. The data comes from the Epinions dataset. | Provide a detailed description of the following dataset: Ciao |
SICK | The **Sentences Involving Compositional Knowledge** (**SICK**) dataset is a dataset for compositional distributional semantics. It includes a large number of sentence pairs that are rich in the lexical, syntactic and semantic phenomena. Each pair of sentences is annotated in two dimensions: relatedness and entailment. ... | Provide a detailed description of the following dataset: SICK |
FB15k | The **FB15k** dataset contains knowledge base relation triples and textual mentions of Freebase entity pairs. It has a total of 592,213 triplets with 14,951 entities and 1,345 relationships. FB15K-237 is a variant of the original dataset where inverse relations are removed, since it was found that a large number of te... | Provide a detailed description of the following dataset: FB15k |
CJRC | The Chinese judicial reading comprehension (CJRC) dataset contains approximately 10K documents and almost 50K questions with answers. The documents come from judgment documents and the questions are annotated by law experts. | Provide a detailed description of the following dataset: CJRC |
eRisk 2017 | Data for early risk detection of depression. | Provide a detailed description of the following dataset: eRisk 2017 |
HyperLex | A dataset and evaluation resource that quantifies the extent of semantic category membership, that is, the type-of relation (also known as the hyponymy-hypernymy or lexical entailment (LE) relation) between 2,616 concept pairs. | Provide a detailed description of the following dataset: HyperLex |
DBLP | The **DBLP** is a citation network dataset. The citation data is extracted from DBLP, ACM, MAG (Microsoft Academic Graph), and other sources. The first version contains 629,814 papers and 632,752 citations. Each paper is associated with abstract, authors, year, venue, and title.
The data set can be used for clustering... | Provide a detailed description of the following dataset: DBLP |
ACM | The **ACM** dataset contains papers published in KDD, SIGMOD, SIGCOMM, MobiCOMM, and VLDB, divided into three classes (Database, Wireless Communication, Data Mining). A heterogeneous graph is constructed, which comprises 3025 papers, 5835 authors, and 56 subjects. Paper features correspond to elements of a bag-... | Provide a detailed description of the following dataset: ACM |
FNC-1 | **FNC-1** was designed as a stance detection dataset and contains 75,385 labeled headline and article pairs. The pairs are labelled as agree, disagree, discuss, or unrelated. Each headline in the dataset is phrased as a statement. | Provide a detailed description of the following dataset: FNC-1 |
GYAFC | Grammarly’s Yahoo Answers Formality Corpus (GYAFC) is the largest dataset for any style, containing a total of 110K informal/formal sentence pairs.
Yahoo Answers, a question answering forum, contains a large number of informal sentences and allows redistribution of data. The authors used the Yahoo Answers L6 corp... | Provide a detailed description of the following dataset: GYAFC |
AIDS | **AIDS** is a graph dataset. It consists of 2000 graphs representing molecular compounds which are constructed from the AIDS Antiviral Screen Database of Active Compounds. It contains 4395 chemical compounds, of which 423 belong to class CA, 1081 to CM, and the remaining compounds to CI. | Provide a detailed description of the following dataset: AIDS |
Sydney Urban Objects | This dataset contains a variety of common urban road objects scanned with a Velodyne HDL-64E LIDAR, collected in the CBD of Sydney, Australia. There are 631 individual scans of objects across classes of vehicles, pedestrians, signs and trees.
It was collected in order to test matching and classification algorithms. It... | Provide a detailed description of the following dataset: Sydney Urban Objects |
Digits | The DIGITS dataset consists of 1797 8×8 grayscale images (1439 for training and 360 for testing) of handwritten digits. | Provide a detailed description of the following dataset: Digits |
Brazil Air-Traffic | Brazil Air-Traffic | Provide a detailed description of the following dataset: Brazil Air-Traffic |
USA Air-Traffic | Leonardo Filipe Rodrigues Ribeiro, Pedro H. P. Savarese, and Daniel R. Figueiredo. struc2vec: Learning node representations from structural identity. | Provide a detailed description of the following dataset: USA Air-Traffic |
Mutagenicity | **Mutagenicity** is a chemical compound dataset of drugs, which can be categorized into two classes: mutagen and non-mutagen.
Source: [Hierarchical Graph Pooling with Structure Learning](https://arxiv.org/abs/1911.05954) | Provide a detailed description of the following dataset: Mutagenicity |
SIDER | **SIDER** contains information on marketed medicines and their recorded adverse drug reactions. The information is extracted from public documents and package inserts. The available information include side effect frequency, drug and side effect classifications as well as links to further information, for example drug–... | Provide a detailed description of the following dataset: SIDER |
RCV1 | The **RCV1** dataset is a benchmark dataset for text categorization. It is a collection of newswire articles produced by Reuters in 1996-1997. It contains 804,414 manually labeled newswire documents, categorized with respect to three controlled vocabularies: industries, topics, and regions. | Provide a detailed description of the following dataset: RCV1 |
CrossTask | The **CrossTask** dataset contains instructional videos collected for 83 different tasks. For each task an ordered list of steps with manual descriptions is provided. The dataset is divided into two parts: 18 primary and 65 related tasks. Videos for the primary tasks are collected manually and provided with annotations for ... | Provide a detailed description of the following dataset: CrossTask |
YouCook2 | **YouCook2** is the largest task-oriented, instructional video dataset in the vision community. It contains 2000 long untrimmed videos from 89 cooking recipes; on average, each distinct recipe has 22 videos. The procedure steps for each video are annotated with temporal boundaries and described by imperative English se... | Provide a detailed description of the following dataset: YouCook2 |
FaceForensics | FaceForensics is a video dataset consisting of more than 500,000 frames containing faces from 1004 videos that can be used to study image or video forgeries. All videos are downloaded from YouTube and are cut down to short continuous clips that contain mostly frontal faces. This dataset has two versions:
* Source-to... | Provide a detailed description of the following dataset: FaceForensics |
Stacked MNIST | The **Stacked MNIST** dataset is derived from the standard MNIST dataset with an increased number of discrete modes. 240,000 RGB images in the size of 32×32 are synthesized by stacking three random digit images from MNIST along the color channel, resulting in 1,000 explicit modes in a uniform distribution corresponding... | Provide a detailed description of the following dataset: Stacked MNIST |
CARPK | The Car Parking Lot Dataset (**CARPK**) contains nearly 90,000 cars from 4 different parking lots, collected via drone (PHANTOM 3 PROFESSIONAL). The images are collected from the drone view at approximately 40 meters height. The image set is annotated with a bounding box per car. All labeled bounding boxes have been w... | Provide a detailed description of the following dataset: CARPK |
Pix3D | The **Pix3D** dataset is a large-scale benchmark of diverse image-shape pairs with pixel-level 2D-3D alignment. Pix3D has wide applications in shape-related tasks including reconstruction, retrieval, viewpoint estimation, etc. | Provide a detailed description of the following dataset: Pix3D |
Cell | The CELL benchmark is made of fluorescence microscopy images of cells. | Provide a detailed description of the following dataset: Cell |
FBMS | The **Freiburg-Berkeley Motion Segmentation** Dataset (**FBMS**-59) is an extension of the BMS dataset with 33 additional video sequences. A total of 720 frames is annotated. It has pixel-accurate segmentation annotations of moving objects. FBMS-59 comes with a split into a training set and a test set. | Provide a detailed description of the following dataset: FBMS |
NVGesture | The **NVGesture** dataset focuses on touchless driver control. It contains 1532 dynamic gestures falling into 25 classes, with 1050 samples for training and 482 for testing. The videos are recorded in three modalities (RGB, depth, and infrared).
Source: [Searching Multi-Rate and Multi-Modal Temporal Enhanc... | Provide a detailed description of the following dataset: NVGesture |
SUN09 | The **SUN09** dataset consists of 12,000 annotated images with more than 200 object categories. It consists of natural, indoor and outdoor images. Each image contains an average of 7 different annotated objects and the average occupancy of each object is 5% of image size. The frequencies of object categories follow a p... | Provide a detailed description of the following dataset: SUN09 |
COIN | The **COIN** dataset (a large-scale dataset for COmprehensive INstructional video analysis) consists of 11,827 videos related to 180 different tasks in 12 domains (e.g., vehicles, gadgets, etc.) related to our daily life. The videos are all collected from YouTube. The average length of a video is 2.36 minutes. Each vid... | Provide a detailed description of the following dataset: COIN |
Kinetics-600 | The **Kinetics-600** is a large-scale action recognition dataset which consists of around 480K videos from 600 action categories. The 480K videos are divided into 390K, 30K, 60K for training, validation and test sets, respectively. Each video in the dataset is a 10-second clip of action moment annotated from raw YouTub... | Provide a detailed description of the following dataset: Kinetics-600 |
AudioSet | AudioSet is an audio event dataset consisting of over 2M human-annotated 10-second video clips. These clips are collected from YouTube, so many of them are of poor quality and contain multiple sound sources. A hierarchical ontology of 632 event classes is employed to annotate these data, which means that t... | Provide a detailed description of the following dataset: AudioSet |
DIVA-HisDB | The database consists of 150 annotated pages of three different medieval manuscripts with challenging layouts. Furthermore, we provide a layout analysis ground-truth which has been iterated on, reviewed, and refined by an expert in medieval studies. | Provide a detailed description of the following dataset: DIVA-HisDB |
VQA-RAD | VQA-RAD consists of 3,515 question–answer pairs on 315 radiology images. | Provide a detailed description of the following dataset: VQA-RAD |
TDIUC | The **Task Directed Image Understanding Challenge** (**TDIUC**) dataset is a Visual Question Answering dataset which consists of 1.6M questions and 170K images sourced from MS COCO and the Visual Genome Dataset. The image-question pairs are split into 12 categories and 4 additional evaluation metrics which help evaluate m... | Provide a detailed description of the following dataset: TDIUC |
Mall | **Mall** is a dataset for crowd counting and profiling research. Its images are collected from a publicly accessible webcam. It includes 2,000 video frames, and the head position of every pedestrian in all frames is annotated. A total of more than 60,000 pedestrians are annotated in this dataset. | Provide a detailed description of the following dataset: Mall |
A3D | A new dataset of diverse traffic accidents. | Provide a detailed description of the following dataset: A3D |
FRGC | The data for **FRGC** consists of 50,000 recordings divided into training and validation partitions. The training partition is designed for training algorithms and the validation partition is for assessing performance of an approach in a laboratory setting. The validation partition consists of data from 4,003 subject s... | Provide a detailed description of the following dataset: FRGC |
HAR | The Human Activity Recognition Dataset has been collected from 30 subjects performing six different activities (Walking, Walking Upstairs, Walking Downstairs, Sitting, Standing, Laying). It consists of inertial sensor data that was collected using a smartphone carried by the subjects. | Provide a detailed description of the following dataset: HAR |
MOT15 | MOT2015 is a dataset for multiple object tracking. It contains 11 different indoor and outdoor scenes of public places with pedestrians as the objects of interest, where camera motion, camera angle and imaging condition vary greatly. The dataset provides detections generated by the ACF-based detector. | Provide a detailed description of the following dataset: MOT15 |
CASIA-MFSD | **CASIA-MFSD** is a dataset for face anti-spoofing. It contains 50 subjects, and 12 videos for each subject under different resolutions and light conditions. Three different spoof attacks are designed: replay, warp print and cut print attacks. The database contains 600 video recordings, in which 240 videos of 20 subjec... | Provide a detailed description of the following dataset: CASIA-MFSD |
Replay-Attack | The **Replay-Attack** Database for face spoofing consists of 1300 video clips of photo and video attack attempts to 50 clients, under different lighting conditions. All videos are generated by either having a (real) client trying to access a laptop through a built-in webcam or by displaying a photo or a video recording... | Provide a detailed description of the following dataset: Replay-Attack |
Delicious | **Delicious**: This dataset contains tagged web pages retrieved from the website delicious.com.
Source: [Text segmentation on multilabel documents: A distant-supervised approach](https://arxiv.org/abs/1904.06730)
Image Source: [http://mlkd.csd.auth.gr/multilabel.html](http://mlkd.csd.auth.gr/multilabel.html) | Provide a detailed description of the following dataset: Delicious |
WeChat | The **WeChat** dataset for fake news detection contains more than 20k news labelled as fake news or not. | Provide a detailed description of the following dataset: WeChat |
KDD12 | A clickthrough prediction dataset, for more information please see the [Kaggle page](https://www.kaggle.com/c/kddcup2012-track2) | Provide a detailed description of the following dataset: KDD12 |
RAF-DB | The **Real-world Affective Faces** Database (**RAF-DB**) is a dataset for facial expression. It contains 29672 facial images tagged with basic or compound expressions by 40 independent taggers. Images in this database are of great variability in subjects' age, gender and ethnicity, head poses, lighting conditions, occl... | Provide a detailed description of the following dataset: RAF-DB |
FERG | **FERG** is a database of cartoon characters with annotated facial expressions containing 55,769 annotated face images of six characters. The images for each character are grouped into 7 types of cardinal expressions, viz. anger, disgust, fear, joy, neutral, sadness and surprise.
Source: [VGAN-Based Image Representati... | Provide a detailed description of the following dataset: FERG |
COCO-Text | The **COCO-Text** dataset is a dataset for text detection and recognition. It is based on the MS COCO dataset, which contains images of complex everyday scenes. The COCO-Text dataset contains non-text images, legible text images and illegible text images. In total there are 22184 training images and 7026 validation ima... | Provide a detailed description of the following dataset: COCO-Text |
DiscoFuse | DiscoFuse was created by applying a rule-based splitting method to two corpora: sports articles crawled from the Web, and Wikipedia. See the paper for a detailed description of the dataset generation process and evaluation.
DiscoFuse has two parts, with 44,177,443 and 16,642,323 examples sourced from Sports articl... | Provide a detailed description of the following dataset: DiscoFuse |
FIGER | The **FIGER** dataset is an entity recognition dataset where entities are labelled using a fine-grained system of 112 tags, such as *person/doctor*, *art/written_work* and *building/hotel*. The tags are derived from Freebase types. The training set consists of Wikipedia articles automatically annotated with distant supervi... | Provide a detailed description of the following dataset: FIGER |
CUHK-SYSU | The CUHK-SYSU dataset is a large-scale benchmark for person search, containing 18,184 images and 8,432 identities. Unlike previous re-id benchmarks, which match query persons against manually cropped pedestrians, this dataset is much closer to real application scenarios, as it requires searching for persons in whole images in the ga... | Provide a detailed description of the following dataset: CUHK-SYSU |
Chairs | The **Chairs** dataset contains rendered images of around 1000 different three-dimensional chair models. | Provide a detailed description of the following dataset: Chairs |
ZINC | **ZINC** is a free database of commercially-available compounds for virtual screening. ZINC contains over 230 million purchasable compounds in ready-to-dock, 3D formats. ZINC also contains over 750 million purchasable compounds that can be searched for analogs. | Provide a detailed description of the following dataset: ZINC |
QED | **QED** is a linguistically principled framework for explanations in question answering. Given a question and a passage, QED represents an explanation of the answer as a combination of discrete, human-interpretable steps:
sentence selection := identification of a sentence implying an answer to the question
referential ... | Provide a detailed description of the following dataset: QED |
MEF | Multi-exposure image fusion (MEF) is considered an effective quality enhancement technique widely adopted in consumer electronics, but little work has been dedicated to the perceptual quality assessment of multi-exposure fused images. The authors first build an MEF database and carry out a subjective user st... | Provide a detailed description of the following dataset: MEF |
DICM | **DICM** is a dataset for low-light enhancement which consists of 69 images collected with commercial digital cameras.
Source: [Deep Retinex Decomposition for Low-Light Enhancement](https://arxiv.org/abs/1808.04560)
Image Source: [GLADNet: Low-Light Enhancement Network with Global Awareness](https://ieeexplore.ieee.or... | Provide a detailed description of the following dataset: DICM |
GuessWhat?! | **GuessWhat?!** is a large-scale dataset consisting of 150K human-played games with a total of 800K visual question-answer pairs on 66K images.
GuessWhat?! is a cooperative two-player game in which both players see the picture of a rich visual scene with several objects. One player – the oracle – is randomly assign... | Provide a detailed description of the following dataset: GuessWhat?! |
ObjectNet | **ObjectNet** is a test set of images collected directly using crowd-sourcing. ObjectNet is unique as the objects are captured at unusual poses in cluttered, natural scenes, which can severely degrade recognition performance. There are 50,000 images in the test set which controls for rotation, background and viewpoint.... | Provide a detailed description of the following dataset: ObjectNet |