| dataset_name | description | prompt |
|---|---|---|
CIFAR-100N | This work presents two new benchmark datasets (CIFAR-10N, CIFAR-100N), equipping the training datasets of CIFAR-10 and CIFAR-100 with human-annotated real-world noisy labels that we collect from Amazon Mechanical Turk. | Provide a detailed description of the following dataset: CIFAR-100N |
MedMNIST v2 | MedMNIST v2 is a large-scale MNIST-like collection of standardized biomedical images, including 12 datasets for 2D and 6 datasets for 3D. All images are pre-processed into 28 x 28 (2D) or 28 x 28 x 28 (3D) with the corresponding classification labels, so that no background knowledge is required for users. Covering prim... | Provide a detailed description of the following dataset: MedMNIST v2 |
OpenBMAT | Open Broadcast Media Audio from TV (OpenBMAT) is an open, annotated dataset for the task of music detection that contains over 27 hours of TV broadcast audio from 4 countries distributed over 1647 one-minute long excerpts. It is designed to encompass several essential features for any music detection dataset and is the... | Provide a detailed description of the following dataset: OpenBMAT |
IndoNLG | IndoNLG is a benchmark to measure natural language generation (NLG) progress in three low-resource—yet widely spoken—languages of Indonesia: Indonesian, Javanese, and Sundanese. Altogether, these languages are spoken by more than 100 million native speakers, and hence constitute an important use case of NLG systems tod... | Provide a detailed description of the following dataset: IndoNLG |
Continual World | Continual World is a benchmark consisting of realistic and meaningfully diverse robotic tasks built on top of Meta-World as a testbed. | Provide a detailed description of the following dataset: Continual World |
Natural Instructions | Natural-Instructions is a dataset of 61 distinct tasks, their human-authored instructions and 193k task instances. The instructions are obtained from crowdsourcing instructions used to create existing NLP datasets and mapped to a unified schema. | Provide a detailed description of the following dataset: Natural Instructions |
Pyxis | Pyxis is a performance dataset for specialized accelerators on sparse data. Pyxis collects accelerator designs and real execution performance statistics. Currently, there are 73.8 K instances in Pyxis. | Provide a detailed description of the following dataset: Pyxis |
OPERAnet | **OPERAnet** is a multimodal activity recognition dataset acquired from radio frequency and vision-based sensors. Approximately 8 hours of annotated measurements are provided, which are collected across two different rooms from 6 participants performing 6 activities, namely, sitting down on a chair, standing from sit, ... | Provide a detailed description of the following dataset: OPERAnet |
AI-TOD | AI-TOD comes with 700,621 object instances for eight categories across 28,036 aerial images. Compared to existing object detection datasets in aerial images, the mean size of objects in AI-TOD is about 12.8 pixels, which is much smaller than others. | Provide a detailed description of the following dataset: AI-TOD |
URLB | URLB consists of two phases: reward-free pre-training and downstream task adaptation with extrinsic rewards. Building on the DeepMind Control Suite, it provides twelve continuous control tasks from three domains for evaluation. | Provide a detailed description of the following dataset: URLB |
CoVA | We labeled _7,740_ webpage screenshots spanning _408_ domains (Amazon, Walmart, Target, etc.). Each of these webpages contains exactly one labeled price, title, and image. All other web elements are labeled as background. On average, there are _90_ web elements in a webpage.<br>Webpage screenshots and bounding boxes ca... | Provide a detailed description of the following dataset: CoVA |
Persian Reverse Dictionary Dataset | The Persian Reverse Dictionary Dataset is a collection of 855217 words along with the phrases describing them. The phrases were extracted from the top three most well-known Persian dictionaries (including Amid, Moeen, and Dehkhoda), Persian Wikipedia, and a Persian Wordnet (called Farsnet). | Provide a detailed description of the following dataset: Persian Reverse Dictionary Dataset |
CADB | To the best of our knowledge, there is no prior dataset specifically constructed for composition assessment. To support the research on this task, we build a dataset upon the existing AADB dataset, from which we collect a total of 9,958 real-world photos. We adopt a composition rating scale from 1 to 5, where a larger ... | Provide a detailed description of the following dataset: CADB |
RWanda Built-up Region Segmentation | We create the Rwanda built-up regions dataset, which is different and more versatile in nature than previously available datasets. The varying structure size and formation, irregular patterns of construction, buildings in forests and deserts, and the existence of mud houses make it very challenging. A total of 787 satellite images of ... | Provide a detailed description of the following dataset: RWanda Built-up Region Segmentation |
Genome-wide miRNA detection | We've made available several genome-wide datasets, which can be used for training microRNA (miRNA) classifiers. The hairpin sequences available are from the genomes of: Homo sapiens, Arabidopsis thaliana, Anopheles gambiae, Caenorhabditis elegans and Drosophila melanogaster. Hairpins are small RNA sequences that natur... | Provide a detailed description of the following dataset: Genome-wide miRNA detection |
GSM8K | GSM8K is a dataset of 8.5K high quality linguistically diverse grade school math word problems created by human problem writers. The dataset is segmented into 7.5K training problems and 1K test problems. These problems take between 2 and 8 steps to solve, and solutions primarily involve performing a sequence of element... | Provide a detailed description of the following dataset: GSM8K |
Inter4K | A video dataset for benchmarking upsampling methods. Inter4K contains 1,000 ultra-high resolution videos with 60 frames per second (fps) from online resources. The dataset provides standardized video resolutions at ultra-high definition (UHD/4K), quad-high definition (QHD/2K), full-high definition (FHD/1080p), (standar... | Provide a detailed description of the following dataset: Inter4K |
TUDA | Overall duration per microphone: about 36 hours (31 hrs train / 2.5 hrs dev / 2.5 hrs test)<br>Count of microphones: 3 (Microsoft Kinect, Yamaha, Samson)<br>Count of wave-files per microphone: about 14500<br>Overall count of participants: 180 (130 male / 50 female) | Provide a detailed description of the following dataset: TUDA |
Market-1501-C | **Market-1501-C** is an evaluation set that consists of algorithmically generated corruptions applied to the Market-1501 test-set. These corruptions consist of Noise: Gaussian, shot, impulse, and speckle; Blur: defocus, frosted glass, motion, zoom, and Gaussian; Weather: snow, frost, fog, brightness, spatter, and rai... | Provide a detailed description of the following dataset: Market-1501-C |
MSMT17-C | **MSMT17-C** is an evaluation set that consists of algorithmically generated corruptions applied to the MSMT17 test-set. These corruptions consist of Noise: Gaussian, shot, impulse, and speckle; Blur: defocus, frosted glass, motion, zoom, and Gaussian; Weather: snow, frost, fog, brightness, spatter, and rain; Digital... | Provide a detailed description of the following dataset: MSMT17-C |
CUHK03-C | **CUHK03-C** is an evaluation set that consists of algorithmically generated corruptions applied to the CUHK03 test-set. These corruptions consist of Noise: Gaussian, shot, impulse, and speckle; Blur: defocus, frosted glass, motion, zoom, and Gaussian; Weather: snow, frost, fog, brightness, spatter, and rain; Digital... | Provide a detailed description of the following dataset: CUHK03-C |
SYSU-MM01-C | **SYSU-MM01-C** is an evaluation set that consists of algorithmically generated corruptions applied to the SYSU-MM01 test-set. These corruptions consist of Noise: Gaussian, shot, impulse, and speckle; Blur: defocus, frosted glass, motion, zoom, and Gaussian; Weather: snow, frost, fog, brightness, spatter, and rain; D... | Provide a detailed description of the following dataset: SYSU-MM01-C |
Supporting data for "Multi-Stage Malaria Parasites Recognition by Deep Learning" | Malaria, a mosquito-borne infectious disease affecting humans and other animals, is widespread in the tropical and subtropical regions. Microscopy is the most common method in diagnosing the malaria parasite from stained blood smears. However, this procedure is time-consuming, error-prone, and requires a well-trained p... | Provide a detailed description of the following dataset: Supporting data for "Multi-Stage Malaria Parasites Recognition by Deep Learning" |
Mouse Grooming Behavior | This dataset was generated to characterize mouse grooming behavior.<br>Mouse grooming serves many adaptive functions such as coat and body care, stress reduction, de-arousal, social functions, thermoregulation, nociception, as well as other functions. Alteration of this behavior is measured and used for mouse pre-clinica... | Provide a detailed description of the following dataset: Mouse Grooming Behavior |
PQ-decaNLP | Multitask learning has led to significant advances in Natural Language Processing, including the decaNLP benchmark where question answering is used to frame 10 natural language understanding tasks in a single model. PQ-decaNLP is a crowd-sourced corpus of paraphrased questions, annotated with paraphrase phenomena. This... | Provide a detailed description of the following dataset: PQ-decaNLP |
map2seq | 7,672 human written natural language navigation instructions for routes in OpenStreetMap with a focus on visual landmarks. Validated in Street View. | Provide a detailed description of the following dataset: map2seq |
DrugProt | DrugProt corpus, where domain experts have exhaustively labeled:(a) all chemical and gene mentions, and (b) all binary relationships between them corresponding to a specific set of biologically relevant relation types (DrugProt relation classes). | Provide a detailed description of the following dataset: DrugProt |
RegDB-C | RegDB-C is an evaluation set that consists of algorithmically generated corruptions applied to the RegDB test-set (color images). These corruptions consist of Noise: Gaussian, shot, impulse, and speckle; Blur: defocus, frosted glass, motion, zoom, and Gaussian; Weather: snow, frost, fog, brightness, spatter, and rain; ... | Provide a detailed description of the following dataset: RegDB-C |
Building air quality and pandemic risk simulation | The original paper contains a high-level explanation of the dataset characteristics and potential use cases of the dataset. ArchABM can help to quantify the impact of some of these building- and company policy-related measures.<br>**Baseline experiment**<br>A baseline case with no measures and reduced ventilation is ... | Provide a detailed description of the following dataset: Building air quality and pandemic risk simulation |
LSVTD | **LSVTD** is a large-scale video text dataset for promoting the video text spotting community, which contains 100 text videos from 22 different real-life scenarios. LSVTD covers a wide range of 13 indoor (e.g., bookstore, shopping mall) and 9 outdoor scenarios, which is more than 3 times the diversity of IC15. | Provide a detailed description of the following dataset: LSVTD |
DriverMHG | **Driver Micro Hand Gestures** (**DriverMHG**) is a dataset for dynamic recognition of driver micro hand gestures, which consists of RGB, depth and infrared modalities. | Provide a detailed description of the following dataset: DriverMHG |
BCI Competition Datasets | The goal of the "BCI Competition" is to validate signal processing and classification methods for Brain-Computer Interfaces (BCIs). | Provide a detailed description of the following dataset: BCI Competition Datasets |
UQuAD | Large scale machine reading comprehension dataset in Urdu language. | Provide a detailed description of the following dataset: UQuAD |
Adaptiope | Adaptiope is a domain adaptation dataset with 123 classes in three domains: synthetic, product, and real life. One of the main goals of Adaptiope is to offer a clean and well curated set of images for domain adaptation. This was necessary as many other common datasets in the area suffer from label noise and low quali... | Provide a detailed description of the following dataset: Adaptiope |
Modern Office-31 | Modern Office-31 is a refurbished version of the commonly used [Office-31](https://paperswithcode.com/dataset/office-31) dataset. Modern Office-31 rectifies many of the annotation errors and low quality images in the Amazon domain of the original Office-31 dataset. Additionally, this dataset adds another synthetic doma... | Provide a detailed description of the following dataset: Modern Office-31 |
AVASpeech-SMAD | We propose a dataset, AVASpeech-SMAD, to assist speech and music activity detection research. With frame-level music labels, the proposed dataset extends the existing AVASpeech dataset, which originally consists of 45 hours of audio and speech activity labels. To the best of our knowledge, the proposed AVASpeech-SMAD i... | Provide a detailed description of the following dataset: AVASpeech-SMAD |
Ballroom | This data set includes beat and bar annotations of the ballroom dataset, introduced by Gouyon et al. [1].<br>[1] Gouyon F., A. Klapuri, S. Dixon, M. Alonso, G. Tzanetakis, C. Uhle, and P. Cano. An experimental comparison of audio tempo induction algorithms. Transactions on Audio, Speech and Language Processing 14(5), p... | Provide a detailed description of the following dataset: Ballroom |
Beatles | This dataset includes the beat and downbeat annotations for Beatles albums. The annotations are provided by M. E. P. Davies et al. [1].<br>[1] M. E. P. Davies, N. Degara, and M. D. Plumbley, “Evaluation methods for musical audio beat tracking algorithms,” in Technical Report C4DM-TR-09-06, Centre for Digital Music, Queen ... | Provide a detailed description of the following dataset: Beatles |
Rock Corpus | This dataset contains 200 famous songs in different genres (mostly rock), and the beat and downbeat annotations are provided by T. de Clercq and D. Temperley [1].<br>[1] T. de Clercq and D. Temperley, “A corpus analysis of rock harmony,” Popular Music, vol. 30, no. 1, pp. 47–70, 2011. | Provide a detailed description of the following dataset: Rock Corpus |
Carnatic | This dataset includes music timing information, i.e., beat, bar, and meter annotations, of the Indian Carnatic music dataset. The dataset is gathered by A. Srinivasamurthy and X. Serra [1].<br>[1] A. Srinivasamurthy and X. Serra, “A supervised approach to hierarchical metrical cycle tracking from audio music recordings,” in... | Provide a detailed description of the following dataset: Carnatic |
SINGA:PURA | This repository contains the SINGA:PURA dataset, a strongly-labelled polyphonic urban sound dataset with spatiotemporal context. The data were collected via a number of recording units deployed across Singapore as a part of a wireless acoustic sensor network. These recordings were made as part of a project to identify ... | Provide a detailed description of the following dataset: SINGA:PURA |
ACAV100M | ACAV100M processes 140 million full-length videos (total duration 1,030 years) which are used to produce a dataset of 100 million 10-second clips (31 years) with high audio-visual correspondence. This is two orders of magnitude larger than the current largest video dataset used in the audio-visual learning literature, ... | Provide a detailed description of the following dataset: ACAV100M |
LAION-400M | **LAION-400M** is a dataset with CLIP-filtered 400 million image-text pairs, their CLIP embeddings and kNN indices that allow efficient similarity search.<br>**⚠️ Disclaimer & Content Warning (from the authors)**<br>*Our filtering protocol only removed NSFW images detected as illegal, but the dataset still has NSFW c... | Provide a detailed description of the following dataset: LAION-400M |
DeepNets-1M | The DeepNets-1M dataset is composed of neural network architectures represented as graphs where nodes are operations (convolution, pooling, etc.) and edges correspond to the forward pass flow of data through the network.<br>DeepNets-1M has 1 million training architectures and 1402 in-distribution (ID) and out-of-distribu... | Provide a detailed description of the following dataset: DeepNets-1M |
RLV | We provide video observations of humans performing two simple tasks in natural environments. The tasks are pushing and drawer opening. | Provide a detailed description of the following dataset: RLV |
Earth’s Mantle Convection | The dataset, generated from a scientific simulation, consists of a time series (251 steps) of 3D scalar fields on a spherical 180x201x360 grid covering 500 Myr of geological time. Each time step is 2 Myrs, and the fields are:<br>* temperature [degrees K],<br>* three Cartesian velocity components [m/s],<br>* thermal conduct... | Provide a detailed description of the following dataset: Earth’s Mantle Convection |
FEAFA+ | **FEAFA+** is a dataset for Facial expression analysis and 3D Facial animation. It includes 150 video sequences from FEAFA and [DISFA](disfa), with a total of 230,184 frames being manually annotated on floating-point intensity value of 24 redefined AUs using the Expression Quantitative Tool. | Provide a detailed description of the following dataset: FEAFA+ |
GO21 | GO21 is a biomedical knowledge graph that models genes, proteins, drugs, and the hierarchy of the biological processes they participate in. It consists of 806,136 triples with 21 relations and 89127 entities. GO21 can be used for knowledge graph completion tasks (link prediction) as well as hierarchical reasoning tasks... | Provide a detailed description of the following dataset: GO21 |
A Datacube for the analysis of wildfires in Greece | This dataset is meant to be used to develop models for next-day fire hazard forecasting in Greece. It contains data from 2009 to 2020 on a 1 km x 1 km x 1 day grid.<br>Check the [Jupyter notebook](https://github.com/DeepCube-org/uc3-public-notebooks/blob/main/1_UC3_Datacube_Access_and_Plotting.ipynb) for an example sho... | Provide a detailed description of the following dataset: A Datacube for the analysis of wildfires in Greece |
CLUES | CLUES (Constrained Language Understanding Evaluation Standard) is a benchmark for evaluating the few-shot learning capabilities of NLU models. | Provide a detailed description of the following dataset: CLUES |
Only Time Will Tell | Simulation results of time-respecting and time-ignoring horizon of code review network at Microsoft as JSON. For further details, please look at https://github.com/michaeldorner/only-time-will-tell | Provide a detailed description of the following dataset: Only Time Will Tell |
LRA | Long-range arena (LRA) is an effort toward systematic evaluation of efficient transformer models. The project aims at establishing benchmark tasks/datasets using which we can evaluate transformer-based models in a systematic way, by assessing their generalization power, computational efficiency, memory foot-print, etc.... | Provide a detailed description of the following dataset: LRA |
AdvGLUE | Adversarial GLUE (AdvGLUE) is a new multi-task benchmark to quantitatively and thoroughly explore and evaluate the vulnerabilities of modern large-scale language models under various types of adversarial attacks. In particular, we systematically apply 14 textual adversarial attack methods to [GLUE](/dataset/glue) tasks... | Provide a detailed description of the following dataset: AdvGLUE |
CoDEx Medium | CoDEx comprises a set of knowledge graph completion datasets extracted from Wikidata and Wikipedia that improve upon existing knowledge graph completion benchmarks in scope and level of difficulty. CoDEx comprises three knowledge graphs varying in size and structure, multilingual descriptions of entities and relations,... | Provide a detailed description of the following dataset: CoDEx Medium |
CoDEx Large | CoDEx comprises a set of knowledge graph completion datasets extracted from Wikidata and Wikipedia that improve upon existing knowledge graph completion benchmarks in scope and level of difficulty. CoDEx comprises three knowledge graphs varying in size and structure, multilingual descriptions of entities and relations,... | Provide a detailed description of the following dataset: CoDEx Large |
IDDA | **IDDA** is a large scale, synthetic dataset for semantic segmentation with more than 100 different source visual domains. The dataset has been created to explicitly address the challenges of domain shift between training and test data in various weather and view point conditions, in seven different city types. | Provide a detailed description of the following dataset: IDDA |
SyRIP | **SyRIP** is a hybrid synthetic and real infant pose (SyRIP) dataset with small yet diverse real infant images as well as generated synthetic infant poses, accompanied by a multi-stage invariant representation learning strategy that could transfer the knowledge from the adjacent domains of adult poses and synthetic infant imag... | Provide a detailed description of the following dataset: SyRIP |
RobustBench | **RobustBench** is a benchmark of adversarial robustness, which as accurately as possible reflects the robustness of the considered models within a reasonable computational budget. To this end, we start by considering the image classification task and introduce restrictions (possibly loosened in the future) on the allo... | Provide a detailed description of the following dataset: RobustBench |
WaveFake | WaveFake is a dataset for audio deepfake detection. The dataset consists of a large-scale dataset of over 100K generated audio clips. | Provide a detailed description of the following dataset: WaveFake |
WWU DUNEuro reference data set | The provided dataset consists of high-quality realistic head models and combined EEG/MEG data which can be used for state-of-the-art methods in brain research, such as modern finite element methods (FEM) to compute the EEG/MEG forward problems using the software toolbox DUNEuro ([http://duneuro.org](http://duneuro.org)... | Provide a detailed description of the following dataset: WWU DUNEuro reference data set |
VSLID | VSLID stands for Very Small Lego Image Dataset. It has a bit over 1800 images of piles of LEGO bricks of 85 different types. There are between 1 and 10 bricks per image. Backgrounds and lighting conditions vary. All images are annotated with a list of the visible bricks. The images can have two resolutions, so rescalin... | Provide a detailed description of the following dataset: VSLID |
WORD | **WORD** is a dataset for organ semantic segmentation that contains 150 abdominal CT volumes (30,495 slices) and each volume has 16 organs with fine pixel-level annotations and scribble-based sparse annotation, which may be the largest dataset with whole abdominal organs annotation. | Provide a detailed description of the following dataset: WORD |
REAL-M | **Real-M** is a crowd-sourced speech-separation corpus of real-life mixtures. The mixtures are recorded in different acoustic environments using a wide variety of recording devices such as laptops and smartphones, thus more closely reflecting potential application scenarios. | Provide a detailed description of the following dataset: REAL-M |
RIKEN Microstructural Imaging Metadatabase | The **RIKEN Microstructural Imaging Metadatabase** is a semantic web-based imaging database in which image metadata are described using the Resource Description Framework (RDF) and detailed biological properties observed in the images can be represented as Linked Open Data. The metadata are used to develop a large-scal... | Provide a detailed description of the following dataset: RIKEN Microstructural Imaging Metadatabase |
BOBSL | BOBSL is a large-scale dataset of British Sign Language (BSL). It comprises 1,962 episodes (approximately 1,400 hours) of BSL-interpreted BBC broadcast footage accompanied by written English subtitles. From horror, period and medical dramas, history, nature and science documentaries, sitcoms, children’s shows and progr... | Provide a detailed description of the following dataset: BOBSL |
AdobeVFR syn | Subset of AdobeVFR. The dataset contains images depicting English text and consists of 1000 synthetic images for training and 100 for testing, for each of 2383 font classes. The training and test sets are called *VFR_syn_train* and *VFR_syn_val*, respectively.<br>The other part of AdobeVFR consists of "real-world text ... | Provide a detailed description of the following dataset: AdobeVFR syn |
Explor_all | Explor_all font image dataset https://drive.google.com/file/d/1P2DbNbVw4Q__WcV1YdzE7zsDKilmd3pO/view | Provide a detailed description of the following dataset: Explor_all |
SDSS Galaxies | This is a dataset of 306,006 galaxies whose coordinates are taken from the Sloan Digital Sky Survey Data Release 7 and a modified catalogue from Brinchmann+2003 and Wilman+2010. This volume-complete sample has an r-band absolute magnitude limit of $M_r\leq-20$ and a redshift limit of $z\leq0.08$. See Arora+2019... | Provide a detailed description of the following dataset: SDSS Galaxies |
VFR-447 | A synthetic dataset containing 447 typefaces with only one font variation for each typeface, created for visual font recognition.<br>> Each class in VFR-447 and VFR-2420 has 1,000 synthetic word images, which are evenly split into 500 training and 500 testing. There are no common words between the training and testing ... | Provide a detailed description of the following dataset: VFR-447 |
VFR-2420 | A synthetic dataset containing word images of 447 typefaces with font variations for each typeface, created for visual font recognition.<br>> We collect in total 447 typefaces, each with different number of variations resulting from combinations of different styles, e.g., regular, semibold, bold, black, and italic, lea... | Provide a detailed description of the following dataset: VFR-2420 |
VFR-Wild | 325 word images intended for font recognition, whose fonts are included in [VFR-447] (and [VFR-2420]).<br>> (...) 325 real world test images for the font classes we have in the training set. These images were collected from typography forums, such as myfonts.com, where people post these images seeking help from experts... | Provide a detailed description of the following dataset: VFR-Wild |
AdobeVFR real | Subset of AdobeVFR. The dataset contains "real-world text images".<br>> We collected 201,780 text images from various typography forums, where people post these images seeking help from experts to identify the fonts. Most of them come with hand-annotated font labels which may be inaccurate. (...) Finally, we obtain 4,3... | Provide a detailed description of the following dataset: AdobeVFR real |
Federated Stack Overflow | This dataset is derived from the Stack Overflow Data hosted by kaggle.com and available to query through Kernels using the BigQuery API: https://www.kaggle.com/stackoverflow/stackoverflow | Provide a detailed description of the following dataset: Federated Stack Overflow |
BPCIS | BPCIS is a collection of 364 bacterial phase contrast images and corresponding label matrices for instance segmentation. Labels were made according to fluorescence channels where possible. Prior to manual annotation, images were automatically cropped into microcolonies and tiled into ensemble images to reduce the empty (... | Provide a detailed description of the following dataset: BPCIS |
Audio demo files | Audio files that supplement "Treatise on Hearing: The Temporal Auditory Imaging Theory Inspired by Optics and Communication". | Provide a detailed description of the following dataset: Audio demo files |
SustainBench | SustainBench is a collection of 15 benchmark tasks across 7 sustainable development goals (SDGs), including tasks related to economic development, agriculture, health, education, water and sanitation, climate action, and life on land. The goals for SustainBench are to:<br>- lower the barriers to entry for the machine l... | Provide a detailed description of the following dataset: SustainBench |
HC18 | Automated measurement of fetal head circumference using 2D ultrasound images | Provide a detailed description of the following dataset: HC18 |
BCSS | The BCSS dataset contains over 20,000 segmentation annotations of tissue regions from breast cancer images from The Cancer Genome Atlas (TCGA). This large-scale dataset was annotated through the collaborative effort of pathologists, pathology residents, and medical students using the Digital Slide Archive. It enables ... | Provide a detailed description of the following dataset: BCSS |
unarXive | A scholarly data set with publications’ full-text, annotated in-text citations, and links to metadata.<br>The unarXive data set contains:<br>* One million papers in plain text<br>* 63 million citation contexts<br>* 39 million reference strings<br>* A citation network of 16 million connections<br>The data is generated from all... | Provide a detailed description of the following dataset: unarXive |
Next2You data and results dataset | This record serves as an index to the other dataset releases that are part of the paper "Next2You: Robust Copresence Detection Based on Channel State Information" by Mikhail Fomichev, Luis F. Abanto-Leon, Max Stiegler, Alejandro Molina, Jakob Link, Matthias Hollick, in ACM Transactions on Internet of Things (2021). | Provide a detailed description of the following dataset: Next2You data and results dataset |
GRB | **Graph Robustness Benchmark** (**GRB**) provides scalable, unified, modular, and reproducible evaluation on the adversarial robustness of graph machine learning models. GRB has elaborated datasets, unified evaluation pipeline, modular coding framework, and reproducible leaderboards, which facilitate the developments o... | Provide a detailed description of the following dataset: GRB |
NAO | **Natural Adversarial Objects** (**NAO**) is a new dataset to evaluate the robustness of object detection models. NAO contains 7,934 images and 9,943 objects that are unmodified and representative of real-world scenarios, but cause state-of-the-art detection models to misclassify with high confidence. | Provide a detailed description of the following dataset: NAO |
Retinal-Lesions | Over 1.5K images selected from the public Kaggle DR Detection dataset;<br>Five DR grades (DR0 / DR1 / DR2 / DR3 / DR4), re-labeled by a panel of 45 experienced ophthalmologists;<br>Eight retinal lesion classes, including microaneurysm, intraretinal hemorrhage, hard exudate, cotton-wool spot, vitreous hemorrhage, preretinal... | Provide a detailed description of the following dataset: Retinal-Lesions |
DSurVD | A large-scale dataset, namely the Distorted Surveillance Video Database (DSurVD), which can be downloaded from: https://sites.google.com/site/sorsyuanyuan/home/dsurvd | Provide a detailed description of the following dataset: DSurVD |
WildReceipt | WildReceipt is a collection of receipts.<br>It contains, for each photo, a list of OCRs - with bounding box, text, and class.<br>It contains 1765 photos, with 25 classes, and 50000 text boxes.<br>The goal is to benchmark "key information extraction" - extracting key information from documents. There are two differen... | Provide a detailed description of the following dataset: WildReceipt |
ParsTwiner | An open, broad-coverage corpus for informal Persian named entity recognition, collected from Twitter. | Provide a detailed description of the following dataset: ParsTwiner |
ESC50 | The ESC-50 dataset is a labeled collection of 2000 environmental audio recordings suitable for benchmarking methods of environmental sound classification.<br>The dataset consists of 5-second-long recordings organized into 50 semantic classes (with 40 examples per class) loosely arranged into 5 major categories.<br>Re... | Provide a detailed description of the following dataset: ESC50 |
Kinetics-Sound | This is a subset of Kinetics-400, introduced in Look, Listen and Learn by Relja Arandjelovic and Andrew Zisserman. | Provide a detailed description of the following dataset: Kinetics-Sound |
PhysioNet Challenge 2018 | Data for this challenge were contributed by the Massachusetts General Hospital’s (MGH) Computational Clinical Neurophysiology Laboratory (CCNL), and the Clinical Data Animation Laboratory (CDAC). The dataset includes 1,985 subjects who were monitored at an MGH sleep laboratory for the diagnosis of sleep disorders. Th... | Provide a detailed description of the following dataset: PhysioNet Challenge 2018 |
MoviePlotEvents | A version of the CMU Movie Summary Corpus (http://www.cs.cmu.edu/~ark/personas/), which was originally scraped from plot summaries from Wikipedia, with some cleaning and sentences turned into events & sorted into "genres" (via LDA). | Provide a detailed description of the following dataset: MoviePlotEvents |
CMU Movie Summary Corpus | Dataset [46 M] and readme: 42,306 movie plot summaries extracted from Wikipedia + aligned metadata extracted from Freebase, including:<br>* Movie box office revenue, genre, release date, runtime, and language<br>* Character names and aligned information about the actors who portray them, including gender and estimated age at... | Provide a detailed description of the following dataset: CMU Movie Summary Corpus |
Scifi TV Shows | A collection of long-running (80+ episodes) science fiction TV show synopses, scraped from Fandom.com wikis. Collected Nov 2017. Each episode is considered a "story".<br>Contains plot summaries from:<br>* Babylon 5 (https://babylon5.fandom.com/wiki/Main_Page) - 84 stories<br>* Doctor Who (https://tardis.fandom.com/wiki... | Provide a detailed description of the following dataset: Scifi TV Shows |
Embrapa ADD 256 | [](https://zenodo.org/badge/latestdoi/419452503)
This is a detailed description of the dataset, a data sheet for the dataset as proposed by [Gebru *et al.*](https://arxiv.org/abs/1803.09010)
Motivation for Dataset Creation
-------------------------------
### Why w... | Provide a detailed description of the following dataset: Embrapa ADD 256 |
Cryptics | Official dataset of Decrypting Cryptic Crosswords: Semantically Complex Wordplay Puzzles as a Target for NLP.<br>See github.com/jsrozner/decrypt and https://doi.org/10.5061/dryad.n02v6wwzp | Provide a detailed description of the following dataset: Cryptics |
mini-ImageNet-LT | mini-ImageNet was proposed in Matching Networks for One-Shot Learning for few-shot learning evaluation, in an attempt to have a dataset like ImageNet while requiring fewer resources. Similar to the statistics for CIFAR-100-LT with an imbalance factor of 100, we construct a long-tailed variant of mini-ImageNet that feat... | Provide a detailed description of the following dataset: mini-ImageNet-LT |
SentiMix | Sentiment analysis of codemixed tweets. | Provide a detailed description of the following dataset: SentiMix |
fNIRS2MW | The Tufts fNIRS to Mental Workload (fNIRS2MW) open-access dataset is a new dataset for building machine learning classifiers that can consume a short window (30 seconds) of multivariate fNIRS recordings and predict the mental workload intensity of the user during that window.<br>You can use this dataset for tasks like ... | Provide a detailed description of the following dataset: fNIRS2MW |
Multilingual Terms of Service | The first annotated corpus for multilingual analysis of potentially unfair clauses in online Terms of Service. The data set comprises a total of 100 contracts, obtained from 25 documents annotated in four different languages: English, German, Italian, and Polish. For each contract, potentially unfair clauses for the co... | Provide a detailed description of the following dataset: Multilingual Terms of Service |
Archival bundle of the data used for "Predictive Auto-scaling with OpenStack Monasca" (UCC 2021) | Follow the instructions provided in the [companion repo](https://github.com/giacomolanciano/UCC2021-predictive-auto-scaling-openstack) to automatically download and decompress the archive. The following files are included:<br>\| File \| Description ... | Provide a detailed description of the following dataset: Archival bundle of the data used for "Predictive Auto-scaling with OpenStack Monasca" (UCC 2021) |
MONK's Problems | There are three MONK's problems. The domains for all MONK's problems are the same (described below). One of the MONK's problems has noise added. For each problem, the domain has been partitioned into a train and test set. | Provide a detailed description of the following dataset: MONK's Problems |
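
The rows above share a simple three-column schema: `dataset_name`, `description`, and `prompt`, where each prompt asks for a detailed description of the named dataset and the description serves as the reference text. As a minimal sketch of how such a table could be consumed programmatically — assuming it has been exported locally as a CSV file named `dataset_descriptions.csv`, a hypothetical filename that is not part of this release — one could pair prompts with their reference descriptions like this:

```python
# Minimal sketch: iterate a hypothetical local CSV export of the table above.
# The filename is an assumption; the column names follow the schema shown here.
import csv

with open("dataset_descriptions.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

for row in rows[:3]:
    prompt = row["prompt"]          # e.g. "Provide a detailed description of the following dataset: GSM8K"
    reference = row["description"]  # the human-written description paired with that prompt
    print(f"{row['dataset_name']}: prompt of {len(prompt)} chars, "
          f"reference of {len(reference)} chars")
```

The same pairing could, for instance, feed a text-generation model with `prompt` as input and `description` as the reference output; nothing beyond the three columns shown above is assumed.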