dataset_name: string (2–128 chars)
description: string (1–9.7k chars)
prompt: string (59–185 chars)
HC3
The HC3 (Human ChatGPT Comparison Corpus) dataset consists of nearly 40K questions and their corresponding human/ChatGPT answers. The motivation for this dataset was to study ChatGPT's answers in contrast to humans' answers. The questions span a wide variety of domains, including open-domain, financial, medical, ...
Provide a detailed description of the following dataset: HC3
STEDUCOV: A DATASET ON STANCE DETECTION IN TWEETS TOWARDS ONLINE EDUCATION DURING COVID-19 PANDEMIC
StEduCov is a dataset annotated for stances toward online education during the COVID-19 pandemic. StEduCov has 17,097 tweets gathered over 15 months, from March 2020 to May 2021, using the Twitter API and a set of relevant hashtags and keywords. The tweets are manually annotated into agree, disagree, or neutral classes. ...
Provide a detailed description of the following dataset: STEDUCOV: A DATASET ON STANCE DETECTION IN TWEETS TOWARDS ONLINE EDUCATION DURING COVID-19 PANDEMIC
WMT-SLT22
We provide separate training, development and test data. The training data is available right away. The development and test data will be released in several stages, starting with a release of the development sources only. The training data comprises two corpora, called FocusNews and SRF, see below for a more detail...
Provide a detailed description of the following dataset: WMT-SLT22
Jester (Gesture Recognition)
**Jester Gesture Recognition** dataset includes 148,092 labeled video clips of humans performing basic, pre-defined hand gestures in front of a laptop camera or webcam. It is designed for training machine learning models to recognize human hand gestures like sliding two fingers down, swiping left or right and drumming ...
Provide a detailed description of the following dataset: Jester (Gesture Recognition)
OMMO
**OMMO** is a new benchmark for several outdoor NeRF-based tasks, such as novel view synthesis, surface reconstruction, and multi-modal NeRF. It contains complex objects and scenes with calibrated images, point clouds and prompt annotations.
Provide a detailed description of the following dataset: OMMO
Govdocs1
GovDocs is a corpus of nearly 1 million documents that are freely available for research and may be, to the best of the authors' knowledge, freely redistributed. These documents were obtained by performing searches for words randomly chosen from the Unix dictionary, numbers randomly chosen between 1 and 1 million, and ...
Provide a detailed description of the following dataset: Govdocs1
OmniObject3D
**OmniObject3D** is a large vocabulary 3D object dataset with massive high-quality real-scanned 3D objects. OmniObject3D has several appealing properties: 1) Large Vocabulary: It comprises 6,000 scanned objects in 190 daily categories, sharing common classes with popular 2D datasets (e.g., ImageNet and LVIS), benef...
Provide a detailed description of the following dataset: OmniObject3D
SPEC5G
**SPEC5G** is a dataset for the analysis of natural-language 5G cellular network protocol specifications. SPEC5G contains 3,547,587 sentences with 134M words, drawn from 13,094 cellular network specifications and 13 online websites. It is designed for security-related text classification and summarisation.
Provide a detailed description of the following dataset: SPEC5G
CMMD
Breast carcinoma is the second most common cancer among women worldwide. Early detection of breast cancer has been shown to increase the survival rate, thereby significantly increasing patients' lifespans. Mammography, a noninvasive imaging tool with low cost, is widely used to diagnose breast disease at an early stage ...
Provide a detailed description of the following dataset: CMMD
WDC Products
**WDC Products** is an entity matching benchmark which provides for the systematic evaluation of matching systems along combinations of three dimensions while relying on real-world data. The three dimensions are i) the amount of corner cases, ii) generalization to unseen entities, and iii) development set size ...
Provide a detailed description of the following dataset: WDC Products
Complete data from the Barro Colorado 50-ha plot: 423617 trees, 35 years
The 50-ha plot at Barro Colorado Island was initially demarcated and fully censused in 1982, and has been fully censused 7 times since, every 5 years from 1985 through 2015. Every measurement of every stem over 8 censuses is included in this archive. Most users will need only the 8 R Analytical Tables in the format tre...
Provide a detailed description of the following dataset: Complete data from the Barro Colorado 50-ha plot: 423617 trees, 35 years
Trinity Speech-Gesture Dataset
**Trinity Gesture Dataset** includes 23 takes, totalling 244 minutes of motion capture and audio of a male native English speaker producing spontaneous speech on different topics. The actor’s motion was captured with 20 Vicon cameras at 59.94 frames per second (fps), and the skeleton includes 69 joints.
Provide a detailed description of the following dataset: Trinity Speech-Gesture Dataset
FZ queries
A set of 248 search queries annotated with the correct diagnosis. The diagnosis is referenced with a Concept Unique Identifier (CUI). In a retrieval setting, the task consists of retrieving an article from the FindZebra corpus with a CUI that matches the query CUI.
Provide a detailed description of the following dataset: FZ queries
MTTN
**MTTN** is a large-scale derived and synthesized dataset built on real prompts and indexed with popular image-text datasets like MS-COCO, Flickr, etc. MTTN consists of over 2.4M sentences divided over 5 stages, creating a combination amounting to over 12M pairs, along with a vocabulary consisting of more ...
Provide a detailed description of the following dataset: MTTN
UICaption
**UICaption** is a dataset of 114k UI images paired with descriptions of their functionality. It is designed for the tasks of UI action entailment, instruction-based UI image retrieval, grounding referring expressions, and UI entity recognition.
Provide a detailed description of the following dataset: UICaption
PIMA Diabetes Dataset with Paper, Experiments, and Code
Please refer to the following paper which includes a description of the dataset and a link to the dataset and the paper code: Alain Hennebelle, Huned Materwal, and Leila Ismail, "HealthEdge: A Machine Learning-Based Smart Healthcare Framework for Prediction of Type 2 Diabetes in an Integrated IoT, Edge, and Cloud Co...
Provide a detailed description of the following dataset: PIMA Diabetes Dataset with Paper, Experiments, and Code
BiodivTab
The BiodivTab dataset includes manually labeled tables from the biodiversity domain for Column Type Annotation (CTA) and Cell Entity Annotation (CEA).
Provide a detailed description of the following dataset: BiodivTab
PushWorld
**PushWorld** is an environment with simplistic physics that requires manipulation planning with both movable obstacles and tools. It contains more than 200 PushWorld puzzles in PDDL and in an OpenAI Gym environment.
Provide a detailed description of the following dataset: PushWorld
REN-20k Dataset
Reader Emotion News 20k Dataset
Provide a detailed description of the following dataset: REN-20k Dataset
EHT
The English Headline Treebank (EHT) consists of 1,055 manually annotated and adjudicated Universal Dependencies (UD) syntactic dependency trees, created to encourage research in improving NLP pipelines for English headlines.
Provide a detailed description of the following dataset: EHT
BANDON
**BANDON** is a dataset for building change detection from off-nadir aerial images, composed of image pairs of urban and rural areas. Overall, the BANDON dataset contains 2,283 pairs of images, 2,283 change labels, 1,891 BT-flow labels, 1,891 pairs of segmentation labels, and 1,891 pairs of ST-offs...
Provide a detailed description of the following dataset: BANDON
PubMedCite
**PubMedCite** is a domain-specific dataset with about 192K biomedical scientific papers and a large citation graph preserving 917K citation relationships between them. It is characterized by preserving the salient contents extracted from the full texts of references, and the weighted correlation between these salient contents.
Provide a detailed description of the following dataset: PubMedCite
MusicCaps
**MusicCaps** is a dataset composed of 5.5k music-text pairs, with rich text descriptions provided by human experts. For each 10-second music clip, MusicCaps provides: 1) A free-text caption consisting of four sentences on average, describing the music and 2) A list of music aspects, describing genre, mood, tem...
Provide a detailed description of the following dataset: MusicCaps
Civil Comments
At the end of 2017 the Civil Comments platform shut down and chose to make the ~2m public comments from their platform available in a lasting open archive so that researchers could understand and improve civility in online conversations for years to come. Jigsaw sponsored this effort and extended annotation of this data...
Provide a detailed description of the following dataset: Civil Comments
RealDOF
This dataset consists of 50 high-resolution image pairs captured by a dual-camera setup for single-image defocus deblurring. Please note this is not a training set but a benchmark for evaluation.
Provide a detailed description of the following dataset: RealDOF
ConsInv Dataset
ConsInv is a stereo RGB + IMU dataset designed for Dynamic SLAM testing and contains two subsets:
- **ConsInv-Indoors** contains sequences in an office setting where small objects are moved.
- **ConsInv-Outdoors** contains sequences in an urban environment, where cars and/or people move.
The novelty of ConsInv d...
Provide a detailed description of the following dataset: ConsInv Dataset
DPD (Dual-view)
The DPD dataset has two versions: single-view and dual-view. This branch is for dual-view benchmark evaluation.
Provide a detailed description of the following dataset: DPD (Dual-view)
ASHRAE energy prediction III
Assessing the value of energy efficiency improvements can be challenging as there's no way to truly know how much energy a building would have used without the improvements. The best we can do is to build counterfactual models. Once a building is overhauled the new (lower) energy consumption is compared against modeled...
Provide a detailed description of the following dataset: ASHRAE energy prediction III
MarKG
The MarKG dataset has 11,292 entities, 192 relations and 76,424 images, including 2,063 analogy entities and 27 analogy relations. The original intention of MarKG is to provide prior knowledge of analogy entities and relations for better multimodal analogical reasoning.
Provide a detailed description of the following dataset: MarKG
MARS (Multimodal Analogical Reasoning dataSet)
Analogical reasoning is fundamental to human cognition and holds an important place in various fields. However, previous studies mainly focus on single-modal analogical reasoning and ignore the advantages of structured knowledge. We introduce the new task of multimodal analogical reasoning over knowledge graphs, which...
Provide a detailed description of the following dataset: MARS (Multimodal Analogical Reasoning dataSet)
BabyLM
**BabyLM** is a dataset for small scale language modeling, human language acquisition, low-resource NLP, and cognitive modeling. In partnership with CoNLL and CMCL, it provides a platform for approaches to pretraining with a limited-size corpus sourced from data inspired by the input to children. The task has three tra...
Provide a detailed description of the following dataset: BabyLM
CMU Book Summary Dataset
This dataset contains plot summaries for 16,559 books extracted from Wikipedia, along with aligned metadata from Freebase, including book author, title, and genre. All data is released under a Creative Commons Attribution-ShareAlike License. For questions or comments, please contact David Bamman (dbamman@cs.cmu.edu)...
Provide a detailed description of the following dataset: CMU Book Summary Dataset
SQA3D
SQA3D is a dataset for embodied scene understanding, where an agent needs to comprehend the scene it is situated in from a first-person perspective and answer questions. The questions are designed to be situated, embodied, and knowledge-intensive. We offer three different modalities to represent a 3D scene: 3D scan, egoce...
Provide a detailed description of the following dataset: SQA3D
RGB Arabic Alphabets Sign Language Dataset
This paper introduces the RGB Arabic Alphabet Sign Language (AASL) dataset. AASL comprises 7,856 raw and fully labeled RGB images of the Arabic sign language alphabet, which to the best of our knowledge is the first publicly available RGB dataset. The dataset is aimed at helping those interested in developing real-life Arabic s...
Provide a detailed description of the following dataset: RGB Arabic Alphabets Sign Language Dataset
ATUE
**ATUE** is an antibody study benchmark with four real-world supervised tasks covering therapeutic antibody engineering, B cell analysis, and antibody discovery.
Provide a detailed description of the following dataset: ATUE
E-FB15k237
This dataset is based on FB15k237 and a pre-trained language-model-based KGE. The main task is to correct the wrong knowledge stored in the pre-trained model and replace the incorrect entities with alternative entities. The model can be downloaded from [here](https://drive.google.com/drive/folders/1EOHdg8rC9iwgSyKl5R...
Provide a detailed description of the following dataset: E-FB15k237
PanopTOP31K
Starting from the Panoptic Dataset, we use the PanopTOP framework to generate the PanopTOP31K dataset, consisting of 31K images from 23 different subjects recorded from diverse and challenging viewpoints, also including the top-view.
Provide a detailed description of the following dataset: PanopTOP31K
ACL-Fig
**ACL-Fig** is a large-scale automatically annotated corpus consisting of 112,052 scientific figures extracted from 56K research papers in the ACL Anthology. The ACL-Fig-pilot dataset contains 1,671 manually labeled scientific figures belonging to 19 categories.
Provide a detailed description of the following dataset: ACL-Fig
LiDAR-CS
**LiDAR-CS** is a dataset for 3D object detection in real traffic. It contains 84,000 point cloud frames under 6 groups of different sensors but with the same corresponding scenarios, captured from a hybrid realistic LiDAR simulator.
Provide a detailed description of the following dataset: LiDAR-CS
AVSBench
**AVSBench** is a pixel-level audio-visual segmentation benchmark that provides ground truth labels for sounding objects. The dataset is divided into three subsets: AVSBench-object (Single-source subset, Multi-sources subset) and AVSBench-semantic (Semantic-labels subset). Accordingly, three settings are studied: 1...
Provide a detailed description of the following dataset: AVSBench
Coffereview Dataset
The data set is based on roughly 6,000 coffee bean reviews published on the website Coffee Review, going back to 1997. All of these reviews are scored with the Q-grading scale (Coffee Review, 2021).
Provide a detailed description of the following dataset: Coffereview Dataset
Processed CMIP5 EWS Data
Processed CMIP5 data used for testing the CNN-LSTM model. Details in Zenodo description
Provide a detailed description of the following dataset: Processed CMIP5 EWS Data
GHOSTS
**GHOSTS** is the first natural-language dataset made and curated by working researchers in mathematics that (1) aims to cover graduate-level mathematics and (2) provides a holistic overview of the mathematical capabilities of language models. It is a collection of multiple datasets of prompts, totalling 728 prompts, for ...
Provide a detailed description of the following dataset: GHOSTS
Fraunhofer EZRT XXL-CT Instance Segmentation Me163
The 'Me 163' was a Second World War fighter airplane and a result of the German air force's secret developments. One of these airplanes is currently owned and displayed in the historic aircraft exhibition of the 'Deutsches Museum' in Munich, Germany. To gain insights with respect to its history, design, and state of prese...
Provide a detailed description of the following dataset: Fraunhofer EZRT XXL-CT Instance Segmentation Me163
EPIC-SOUNDS
**EPIC-SOUNDS** is a large scale dataset of audio annotations capturing temporal extents and class labels within the audio stream of the egocentric videos from EPIC-KITCHENS-100. EPIC-SOUNDS includes 78.4k categorised and 39.2k non-categorised segments of audible events and actions, distributed across 44 classes.
Provide a detailed description of the following dataset: EPIC-SOUNDS
CaRB
CaRB [Bhardwaj et al., 2019] was developed by re-annotating the dev and test splits of OIE2016 via crowd-sourcing. Besides improving annotation quality, CaRB also provides a new matching scorer. The CaRB scorer uses token-level matching: it matches relation with relation and arguments with arguments.
Provide a detailed description of the following dataset: CaRB
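The slot-aligned, token-level matching described above can be sketched as follows. This is an illustrative simplification under assumed helper names (`token_f1`, `match_tuple`), not the official CaRB scorer, which applies additional rules:

```python
def token_f1(pred: str, gold: str) -> float:
    """Token-level overlap score between a predicted and a gold tuple slot
    (illustrative simplification: set overlap, case-insensitive)."""
    p, g = pred.lower().split(), gold.lower().split()
    if not p or not g:
        return 0.0
    overlap = len(set(p) & set(g))
    prec, rec = overlap / len(p), overlap / len(g)
    return 0.0 if prec + rec == 0 else 2 * prec * rec / (prec + rec)

def match_tuple(pred, gold) -> float:
    """Match slot-by-slot in the CaRB spirit: relation with relation,
    arguments with corresponding arguments; extra slots are penalised."""
    score = sum(token_f1(p, g) for p, g in zip(pred, gold))
    return score / max(len(pred), len(gold))
```

For example, `match_tuple(("was born in", "Obama", "Hawaii"), ("was born in", "Obama", "Hawaii"))` yields a perfect score of 1.0, while partially overlapping slots degrade it gracefully instead of the all-or-nothing matching used by earlier OIE scorers.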
LSOIE
LSOIE is a large-scale OpenIE data converted from QA-SRL 2.0 in two domains, i.e., Wikipedia and Science. It is 20 times larger than the next largest human-annotated OpenIE data, and thus is reliable for fair evaluation. LSOIE provides n-ary OpenIE annotations and gold tuples are in the 〈ARG0, Relation, ARG1, . . . , A...
Provide a detailed description of the following dataset: LSOIE
Motion Capture Data for Hand Motion Embodiment
**Motion Capture Data for Hand Motion Embodiment** contains demonstrations of different hand motion recorded with the Qualisys MOCAP system.
Provide a detailed description of the following dataset: Motion Capture Data for Hand Motion Embodiment
DigiCall
We release 3,691 earnings call transcripts along with an annotated data set labeled by linguists, specifically for digital strategy maturity. https://github.com/hpataci/DigiCall
Provide a detailed description of the following dataset: DigiCall
SkinCon
**SkinCon** is a skin disease dataset densely annotated by dermatologists. SkinCon includes 3230 images from the Fitzpatrick 17k skin disease (Fitzpatrick Skin Tone) dataset densely labelled with 48 clinical concepts, 22 of which have at least 50 images representing the concept. The concepts used were chosen by two der...
Provide a detailed description of the following dataset: SkinCon
FaceOcc
**FaceOcc** is a high-quality face occlusion dataset which corrects all mislabeled occlusions in CelebAMask-HQ and supplements additional occlusions and textures from the internet. The occlusion types cover sunglasses, spectacles, hands, masks, scarves, microphones, etc.
Provide a detailed description of the following dataset: FaceOcc
FES
FES is an indoor dataset that can be used for evaluation of deep learning approaches. It consists of 301 top-view fisheye images from an indoor scene. Annotations include bounding boxes and instance segmentation masks for 6 classes.
Provide a detailed description of the following dataset: FES
Rice Grains BRRI
A balanced dataset of 200 images consisting of three classes: False Smut, Neck Blast, and healthy grain. Some of these images contain both diseases together. Field data were collected under the supervision of staff from the Bangladesh Rice Research Institute (BRRI).
Provide a detailed description of the following dataset: Rice Grains BRRI
The Copiale Cipher
The Copiale Cipher is a 105-page manuscript containing around 75,000 characters in total. Beautifully bound in green and gold brocade paper, and written on high-quality paper with two different watermarks, the manuscript can be dated to around 1750. Apart from what is obviously an owner's mark (“Philipp 1866”) and ...
Provide a detailed description of the following dataset: The Copiale Cipher
Fontenay Dataset
This data set encompasses 104 images and transcriptions of digital images of original charters from the Cistercian abbey of Fontenay in Burgundy (France), dating mainly from the 12th century up to 1213. The original data set was created as part of the ANR ORIFLAMMS (ANR-12-CORP-0010) project. Texts were transcribed in the ...
Provide a detailed description of the following dataset: Fontenay Dataset
Google1000
A collection of 1000 public domain volumes that were scanned as part of the Google Book Search project. It is being distributed to support research in a variety of disciplines. Each volume comes with the scanned images, OCR output, page tags and basic metadata. The volumes in this dataset are written in 4 languages: E...
Provide a detailed description of the following dataset: Google1000
MaxwellBlobs
The dataset consists of random electromagnetic scatterers and their associated fields when illuminated by a 1000 nm plane wave. The scatterers have a refractive index of $n=1.5$ and are surrounded by air ($n=1.0$), and the side length of the simulated area is 5.12 microns. | Type | Samples | Input data ...
Provide a detailed description of the following dataset: MaxwellBlobs
MotionID: IMU specific motion
Dataset for User Verification part of MotionID: Human Authentication Approach. Data type: bin (should be converted by attached notebook). ~50 hours of IMU (Inertial Measurement Units) data for one specific motion pattern, provided by 101 users. For data collection only one smartphone (Samsung Galaxy S20) was used....
Provide a detailed description of the following dataset: MotionID: IMU specific motion
MotionID: IMU all motions part1
Dataset (part 1/3) for Motion Patterns Identification part of MotionID: Human Authentication Approach. Data type: bin (should be converted by attached notebook). Six users each with a Samsung Galaxy S10e smartphone collected IMU data every day for 2 weeks. At the end of two weeks, the users switched smartphones wit...
Provide a detailed description of the following dataset: MotionID: IMU all motions part1
MotionID: IMU all motions part2
Dataset (part 2/3) for Motion Patterns Identification part of MotionID: Human Authentication Approach. Data type: bin (should be converted by attached notebook). Six users each with a Samsung Galaxy S10e smartphone collected IMU data every day for 2 weeks. At the end of two weeks, the users switched smartphones wit...
Provide a detailed description of the following dataset: MotionID: IMU all motions part2
MotionID: IMU all motions part3
Dataset (part 3/3) for Motion Patterns Identification part of MotionID: Human Authentication Approach. Data type: bin (should be converted by attached notebook). Six users each with a Samsung Galaxy S10e smartphone collected IMU data every day for 2 weeks. At the end of two weeks, the users switched smartphones wit...
Provide a detailed description of the following dataset: MotionID: IMU all motions part3
MOSE
**CoMplex video Object SEgmentation (MOSE)** is a dataset for studying the tracking and segmentation of objects in complex environments. MOSE contains 2,149 video clips and 5,200 objects from 36 categories, with 431,725 high-quality object segmentation masks. The most notable feature of the MOSE dataset is complex scenes with crowd...
Provide a detailed description of the following dataset: MOSE
DIV2KRK
Using the validation set (100 images) from the widely used DIV2K dataset, we blurred and subsampled each image with a different, randomly generated kernel. Kernels were 11x11 anisotropic Gaussians with random lengths λ1, λ2 ∼ U(0.6, 5), independently distributed for each axis, and rotated by a random angle θ ∼ U[−π, π].
Provide a detailed description of the following dataset: DIV2KRK
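The kernel-sampling procedure described for DIV2KRK can be sketched as follows. This is a minimal illustration that assumes the sampled lengths λ1, λ2 act as the variances along the Gaussian's principal axes; the exact parameterization in the original release may differ:

```python
import numpy as np

def random_anisotropic_gaussian_kernel(size=11, rng=None):
    """Sample one 11x11 anisotropic Gaussian blur kernel in the DIV2KRK style:
    random lengths lambda1, lambda2 ~ U(0.6, 5) per axis, rotation theta ~ U(-pi, pi)."""
    rng = np.random.default_rng() if rng is None else rng
    lam1, lam2 = rng.uniform(0.6, 5.0, size=2)   # per-axis lengths (assumed variances)
    theta = rng.uniform(-np.pi, np.pi)           # random rotation angle

    # Covariance of the rotated Gaussian: R diag(lam1, lam2) R^T
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    inv_sigma = np.linalg.inv(R @ np.diag([lam1, lam2]) @ R.T)

    # Evaluate the Gaussian on a centered grid and normalise to sum to 1
    ax = np.arange(size) - (size - 1) / 2
    xx, yy = np.meshgrid(ax, ax)
    coords = np.stack([xx, yy], axis=-1)                       # (size, size, 2)
    expo = np.einsum('...i,ij,...j->...', coords, inv_sigma, coords)
    kernel = np.exp(-0.5 * expo)
    return kernel / kernel.sum()
```

Each validation image would then be convolved with its own sampled kernel and subsampled to produce the degraded input, which is what makes the benchmark's kernels unknown and image-specific.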
ChatGPT-software-testing
## Dataset Description
Our dataset contains questions from a well-known software testing book, **Introduction to Software Testing, 2nd Edition** by Ammann and Offutt. We use all the textbook questions in Chapters 1 to 5 that have solutions available on the book’s official website. Our dataset contains 40 such que...
Provide a detailed description of the following dataset: ChatGPT-software-testing
Facebook MSC
Despite recent improvements in open-domain dialogue models, state-of-the-art models are trained and evaluated on short conversations with little context. In contrast, the long-term conversation setting has hardly been studied. In this work we collect and release a human-human dataset consisting of multiple chat sessions...
Provide a detailed description of the following dataset: Facebook MSC
IISc VEED-Dynamic
**IISc VEED-Dynamic** consists of 200 diverse indoor and outdoor scenes (see samples below). The videos are rendered using Blender, and the blend files for the scenes are obtained mainly from BlendSwap and TurboSquid. 4 different camera trajectories are added to each scene, giving a total of 800 videos. Motion ...
Provide a detailed description of the following dataset: IISc VEED-Dynamic
DIBCO 2009
DIBCO 2009 is the first International Document Image Binarization Contest organized in the context of ICDAR 2009 conference. The general objective of the contest is to identify current advances in document image binarization using established evaluation performance measures.
Provide a detailed description of the following dataset: DIBCO 2009
H-DIBCO 2010
H-DIBCO 2010 is the International Document Image Binarization Contest which is dedicated to handwritten document images organized in conjunction with ICFHR 2010 conference. The general objective of the contest is to identify current advances in handwritten document image binarization using meaningful evaluation perform...
Provide a detailed description of the following dataset: H-DIBCO 2010
IISc VEED
**IISc VEED** consists of 200 diverse indoor and outdoor scenes (see samples below). The videos are rendered with Blender, and the blend files for the scenes are obtained mainly from BlendSwap and TurboSquid. 4 different camera trajectories are added to each scene, giving a total of 800 videos. The videos are r...
Provide a detailed description of the following dataset: IISc VEED
TACRED-Revisited
The TACRED-Revisited dataset improves the crowd-sourced TACRED dataset for relation extraction by relabeling the dev and test sets using expert linguistic annotators. Relabeling focuses on the 5K most challenging instances in dev and test; in total, 51.2% of these are corrected. Published at ACL 2020. Paper (arXiv...
Provide a detailed description of the following dataset: TACRED-Revisited
SurgT
**SurgT** is a dataset for benchmarking 2D Trackers in Minimally Invasive Surgery (MIS). It contains a total of 157 stereo endoscopic videos from 20 clinical cases, along with stereo camera calibration parameters.
Provide a detailed description of the following dataset: SurgT
WUSTL_EHMS_2020
The WUSTL-EHMS-2020 dataset was created using a real-time Enhanced Healthcare Monitoring System (EHMS) testbed [1]. This testbed collects both network flow metrics and patients' biometrics, addressing the scarcity of datasets that combine the two.
Provide a detailed description of the following dataset: WUSTL_EHMS_2020
FINDSum
**FINDSum** is a large-scale dataset for long text and multi-table summarization. It is built on 21,125 annual reports from 3,794 companies and has two subsets for summarizing each company’s results of operations and liquidity.
Provide a detailed description of the following dataset: FINDSum
Apnea-ECG
The data consist of 70 records, divided into a learning set of 35 records (a01 through a20, b01 through b05, and c01 through c10), and a test set of 35 records (x01 through x35), all of which may be downloaded from this page. Recordings vary in length from slightly less than 7 hours to nearly 10 hours each. Each record...
Provide a detailed description of the following dataset: Apnea-ECG
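The learning/test partition described above follows directly from the record naming convention; a small illustrative helper (hypothetical, not part of the PhysioNet distribution) might look like:

```python
def split_apnea_ecg(record_ids):
    """Partition Apnea-ECG record names into the learning and test sets
    by the published convention: a01-a20, b01-b05, c01-c10 form the
    learning set; x01-x35 form the test set."""
    learning = [r for r in record_ids if r[0] in "abc"]
    test = [r for r in record_ids if r[0] == "x"]
    return learning, test
```

Applied to the full list of 70 record names, this yields the 35/35 split stated in the description.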
APRICOT-Mask
We present the APRICOT-Mask dataset, which augments the APRICOT dataset with pixel-level annotations of adversarial patches. We hope APRICOT-Mask along with the APRICOT dataset can facilitate the research in building defenses against physical patch attacks, especially patch detection and removal techniques.
Provide a detailed description of the following dataset: APRICOT-Mask
TTStroke-21 ME22
TTStroke-21 for MediaEval 2022. The task is of interest to researchers in the areas of machine learning (classification), visual content analysis, computer vision and sport performance. We explicitly encourage researchers focusing specifically in domains of computer-aided analysis of sport performance. Our focus is ...
Provide a detailed description of the following dataset: TTStroke-21 ME22
Tasksource
Huggingface Datasets is a great library, but it lacks standardization, and datasets require preprocessing work to be used interchangeably. tasksource automates this and facilitates reproducible multi-task learning scaling. Each dataset is standardized to either MultipleChoice, Classification, or TokenClassification ...
Provide a detailed description of the following dataset: Tasksource
TTStroke-21 ME21
This task offers researchers an opportunity to test their fine-grained classification methods for detecting and recognizing strokes in table tennis videos. (The low inter-class variability makes the task more difficult than with usual general datasets like UCF-101.) The task offers two subtasks: Subtask 1: Stroke De...
Provide a detailed description of the following dataset: TTStroke-21 ME21
HeiChole Benchmark
Analyzing the surgical workflow is a prerequisite for many applications in computer assisted surgery (CAS), such as context-aware visualization of navigation information, specifying the most probable tool required next by the surgeon or determining the remaining duration of surgery. Since laparoscopic surgeries are per...
Provide a detailed description of the following dataset: HeiChole Benchmark
Endoscapes
Cholecystectomy is a very common abdominal surgical procedure almost ubiquitously performed with a laparoscopic approach, hence guided by an endoscopic video. Deep learning models for LC video analysis have been developed with the aim of assisting surgeons during interventions, improving staff awareness and readiness,...
Provide a detailed description of the following dataset: Endoscapes
ManiSkill2
**ManiSkill2** is the next generation of the SAPIEN ManiSkill benchmark, to address critical pain points often encountered by researchers when using benchmarks for generalizable manipulation skills. It includes 20 manipulation task families with 2000+ object models and 4M+ demonstration frames, which cover stationary/m...
Provide a detailed description of the following dataset: ManiSkill2
OLIVES Dataset
Clinical diagnosis of the eye is performed over multifarious data modalities including scalar clinical labels, vectorized biomarkers, two-dimensional fundus images, and three-dimensional Optical Coherence Tomography (OCT) scans. While the clinical labels, fundus images and OCT scans are instrumental measurements, the v...
Provide a detailed description of the following dataset: OLIVES Dataset
DeePhy
DeePhy is a novel DeepFake Phylogeny dataset consisting of 5040 DeepFake videos generated using three different generation techniques. It is one of the first datasets which incorporates the concept of Deepfake Phylogeny which refers to the idea of generation of DeepFakes using multiple generation techniques in a sequen...
Provide a detailed description of the following dataset: DeePhy
WiRe57
We manually performed the task of Open Information Extraction on 5 short documents, elaborating tentative guidelines for the task, and resulting in a ground truth reference of 347 tuples. [section 1] A small corpus of 57 sentences taken from the beginning of 5 documents in English was used as the source text from wh...
Provide a detailed description of the following dataset: WiRe57
DocOIE
We manually annotate 800 sentences from 80 documents in two domains (Healthcare and Transportation) to form a DocOIE dataset for evaluation.
Provide a detailed description of the following dataset: DocOIE
Dataset for MPLP
This dataset is used for MPLP, considering time-window constraints of customers and parking spaces. To randomly generate the dataset, please visit the link: https://github.com/Yubin-Liu/Hybrid-Q-Learning-Network-Approach-for-MPLP.
Provide a detailed description of the following dataset: Dataset for MPLP
ATPChecker
A novel dataset for identifying privacy policy compliance of Android third-party libraries.
Provide a detailed description of the following dataset: ATPChecker
OSASUD
Polysomnography (PSG) is a fundamental diagnostic method for the detection of Obstructive Sleep Apnea Syndrome (OSAS). Historically, trained physicians have manually identified OSAS episodes in individuals based on PSG recordings. This task is highly important for stroke patients, since in such cases OSAS is ...
Provide a detailed description of the following dataset: OSASUD
HWU64
This project contains natural language data for human-robot interaction in the home domain, which we collected and annotated for evaluating NLU services/platforms.
Provide a detailed description of the following dataset: HWU64
DocILE
**DocILE** is a large dataset of business documents for the tasks of Key Information Localization and Extraction and Line Item Recognition. It contains 6.7k annotated business documents, 100k synthetically generated documents, and nearly 1M unlabeled documents for unsupervised pre-training. The dataset has been built w...
Provide a detailed description of the following dataset: DocILE
DTTD
**Digital-Twin Tracking Dataset (DTTD)** is a novel RGB-D dataset to enable further research of the problem and extend potential solutions towards longer ranges and mm localization accuracy. In total, 103 scenes of 10 common off-the-shelf objects with rich textures are recorded, with each frame annotated with a per-pix...
Provide a detailed description of the following dataset: DTTD
CSL-Daily
CSL-Daily (Chinese Sign Language Corpus) is a large-scale continuous SLT dataset. It provides both spoken language translations and gloss-level annotations. The topic revolves around people's daily lives (e.g., travel, shopping, medical care), the most likely SLT application scenario. [1] [Improving Sign Language Tr...
Provide a detailed description of the following dataset: CSL-Daily
PN9
PN9 is a new large-scale pulmonary nodule dataset containing 8,798 thoracic CT scans with a total of 40,439 annotated nodules.
Provide a detailed description of the following dataset: PN9
Office-Home-LMT
The dataset is for research on label distribution shift across multiple domain adaptation settings. We use **Cl**, **Pr**, and **Rw** to resample two reverse long-tailed distributions and one Gaussian distribution for each of them for BTDA with label shift.
Provide a detailed description of the following dataset: Office-Home-LMT
Prophesee GEN4 Dataset
The dataset is split into train, test and val folders. Files consist of 60-second recordings that were cut from longer recording sessions. Cuts from a single recording session are all in the same training split. Each dat file is a binary file in which events are encoded using 4 bytes (unsigned int32) for the...
Provide a detailed description of the following dataset: Prophesee GEN4 Dataset
Workshop Tools Dataset
Motivated by the need for a dataset that also includes inertial information about the objects, we contribute the following dataset. It contains 20 common workshop tools, and for each object: - a watertight triangular surface mesh; - a synthetic colored surface point-cloud; - ground truth in...
Provide a detailed description of the following dataset: Workshop Tools Dataset
ETD500
The paper used 500 scanned Electronic Theses and Dissertations (ETD) cover pages (i.e., front pages). The dataset contains several intermediate datasets, briefly discussed in the paper.
Provide a detailed description of the following dataset: ETD500
A-FB15k237
This dataset is based on FB15k237 and a pre-trained language-model-based KGE. The main task is to add the new knowledge that the pre-trained model didn't see in the previous training stage. The model can be downloaded from [here](https://drive.google.com/drive/folders/1EOHdg8rC9iwgSyKl5RnEv9z6ATW5Ntbr?usp=share_link).
Provide a detailed description of the following dataset: A-FB15k237
E-WN18RR
This dataset is based on WN18RR and a pre-trained language-model-based KGE. The main task is to correct the wrong knowledge stored in the pre-trained model and replace the incorrect entities with alternative entities. The model can be downloaded from [here](https://drive.google.com/drive/folders/1EOHdg8rC9iwgSyKl5RnE...
Provide a detailed description of the following dataset: E-WN18RR
A-WN18RR
This dataset is based on WN18RR and a pre-trained language-model-based KGE. The main task is to add the new knowledge that the pre-trained model didn't see in the previous training stage. The model can be downloaded from [here](https://drive.google.com/drive/folders/1EOHdg8rC9iwgSyKl5RnEv9z6ATW5Ntbr?usp=share_link).
Provide a detailed description of the following dataset: A-WN18RR