dataset_name stringlengths 2 128 | description stringlengths 1 9.7k | prompt stringlengths 59 185 |
|---|---|---|
CommitChronicle | **CommitChronicle** is a dataset for commit message generation (and/or completion).
Its key features:
* *large-scale and multilingual*: contains 10.7M commits from 11.9k GitHub repositories in 20 programming languages;
* *diverse*: avoids restrictive filtering on commit messages or commit diffs structure;
* *su... | Provide a detailed description of the following dataset: CommitChronicle |
MEIS | MEIS comprises a total of 2,639 images of size 1024 × 768 across two recording views (Aortic Valve (AV) and
Left Ventricle (LV)), with 1,521 (747 in AV + 774 in LV) images for training and 1,118 (559 in AV + 559 in LV) for
testing, respectively. Each view must be detected with two objects to calculate the measu... | Provide a detailed description of the following dataset: MEIS |
ImageNet-Atr | We build a new evaluation set by adding spotting words to the images of the ImageNet 2012 evaluation set. There are 1,000 categories in ImageNet. For each category c, we find its most confusing category c* and spot the category name onto every evaluation image.
This evaluation set is challenging for many CLIP models. For... | Provide a detailed description of the following dataset: ImageNet-Atr |
PUMaVOS | PUMaVOS is a dataset of challenging and practical use cases inspired by the movie production industry.
**Partial and Unusual Masks for Video Object Segmentation (PUMaVOS)** dataset has the following properties:
- **24** videos, **21187** densely-annotated frames;
- Covers complex practical use cases such as object... | Provide a detailed description of the following dataset: PUMaVOS |
PoPArt | Throughout the history of art, the pose—as the holistic abstraction of the human body's expression—has proven to be a constant in numerous studies. However, due to the enormous amount of data that so far had to be processed by hand, its crucial role in the formulaic recapitulation of art-historical motifs since antiqui... | Provide a detailed description of the following dataset: PoPArt |
ZTBus | This repository contains the Zurich Transit Bus (ZTBus) dataset, which consists of data recorded during driving missions of electric city buses in Zurich, Switzerland. The data was collected over several years on two trolley buses as part of multiple research projects. It involves more than a thousand missions spanning... | Provide a detailed description of the following dataset: ZTBus |
ISEKAI | **ISEKAI** dataset’s images are generated by Midjourney’s text-to-image model using well-crafted instructions. Images were manually selected to ensure core concept consistency. The dataset currently comprises 20 groups, and 40 categories in total (continues to grow). Each group pairs a new concept with a related real-w... | Provide a detailed description of the following dataset: ISEKAI |
MISP2021 | The MISP2021 challenge dataset is a collection of audio-visual conversational data recorded in a home TV scenario using distant multi-microphones. The dataset captures interactions between several individuals who are engaged in conversations in Chinese while watching TV and interacting with a smart speaker/TV in a livi... | Provide a detailed description of the following dataset: MISP2021 |
DivEMT | DivEMT is the first publicly available post-editing study of Neural Machine Translation (NMT) over a typologically diverse set of target languages. Using a strictly controlled setup, 18 professional translators were instructed to translate or post-edit the same set of English documents into Arabic, Dutch, Italian, Turkis... | Provide a detailed description of the following dataset: DivEMT |
UDED | This dataset is a collection of 1, 2, or 3 images from: BIPED, BSDS500, BSDS300, DIV2K, WIRE-FRAME, CID, CITYSCAPES, ADE20K, MDBD, NYUD, THANGKA, PASCAL-Context, SET14, URBAN10, and the camera-man image. The image selection process consists of computing the Inter-Quartile Range (IQR) intensity value on all the images, ... | Provide a detailed description of the following dataset: UDED |
Random Signals for Recurrent Autoencoder | The dataset contains generated random signals for autoencoding purposes.
It was used as a benchmark for autoencoder performance comparison.
All dataset files are "pickled" and placed in the folder `datasets` in https://github.com/rsusik/raesc | Provide a detailed description of the following dataset: Random Signals for Recurrent Autoencoder |
CrashD | CrashD is a test benchmark for the robustness and generalization of 3D object detection models. It contains a wide range of out-of-distribution vehicles, including damaged, classic, and sports cars. | Provide a detailed description of the following dataset: CrashD |
loaded-dice v1.4 | This repository contains the code and data to reproduce all results in "Climate uncertainty impacts on social cost of carbon and optimal mitigation pathways", Smith et al. (2023), Environmental Research Letters.
The journal article (gold open access) is at https://iopscience.iop.org/article/10.1088/1748-9326/acedc6.... | Provide a detailed description of the following dataset: loaded-dice v1.4 |
Pins Face Recognition | These images have been collected from Pinterest and cropped. There are 105 celebrities and 17,534 faces. | Provide a detailed description of the following dataset: Pins Face Recognition |
CommitPack | CommitPack is a 4TB dataset of commits scraped from permissively licensed GitHub repositories. | Provide a detailed description of the following dataset: CommitPack |
CommitPackFT | CommitPackFT is a 2GB filtered version of CommitPack to contain only high-quality commit messages that resemble natural language instructions. | Provide a detailed description of the following dataset: CommitPackFT |
HumanEvalPack | HumanEvalPack is an extension of OpenAI's HumanEval to cover 6 total languages across 3 tasks. The evaluation suite is fully created by humans. | Provide a detailed description of the following dataset: HumanEvalPack |
Sparrow | Sparrow-V0: A Reinforcement Learning Friendly Simulator for Mobile Robot
Features:
Vectorizable (enables fast data collection; a single environment is also supported)
Domain Randomization (control interval, control delay, maximum velocity, inertia, friction, the magnitude of sensor noise and maps can be randomized wh... | Provide a detailed description of the following dataset: Sparrow |
ChatHaruhi | **ChatHaruhi** is a dataset covering 32 Chinese / English TV / anime characters with over 54k simulated dialogues. | Provide a detailed description of the following dataset: ChatHaruhi |
WorldView-3 PAirMax | The PAirMax dataset is a collection of images for evaluating the performance of pansharpening algorithms. This data collection includes nine test cases at full resolution, acquired by different sensors belonging to Maxar's constellation of high-resolution satellites. Nine related test cases at reduced resolution, simul... | Provide a detailed description of the following dataset: WorldView-3 PAirMax |
WorldView-2 PairMax | This dataset refers to the two images acquired by the WorldView-2 satellite, representing Miami.
The PAirMax dataset is a collection of images for evaluating the performance of pansharpening algorithms. This data collection includes nine test cases at full resolution, acquired by different sensors belonging to Maxar... | Provide a detailed description of the following dataset: WorldView-2 PairMax |
GeoEye-1 PairMax | This dataset refers to the two images acquired by the GeoEye-1 satellite, representing London and Trenton, respectively.
The PAirMax dataset is a collection of images for evaluating the performance of pansharpening algorithms. This data collection includes nine test cases at full resolution, acquired by different se... | Provide a detailed description of the following dataset: GeoEye-1 PairMax |
CMB | **CMB** is a comprehensive, multi-level Medical Benchmark in Chinese. It encompasses 280,839 multiple-choice questions and 74 complex case consultation questions, covering all clinical medical specialties and various professional levels. The platform aims to holistically evaluate a model's medical knowledge and clinica... | Provide a detailed description of the following dataset: CMB |
EgoSchema | **EgoSchema** is a very long-form video question-answering dataset and benchmark to evaluate the long-video understanding capabilities of modern vision and language systems. Derived from Ego4D, EgoSchema consists of over 5000 human-curated multiple-choice question-answer pairs, spanning over 250 hours of real video data, co... | Provide a detailed description of the following dataset: EgoSchema |
Spatial LibriSpeech | **Spatial LibriSpeech** is a spatial audio dataset with over 650 hours of 19-channel audio, first-order ambisonics, and optional distractor noise. Spatial LibriSpeech is designed for machine learning model training, and it includes labels for source position, speaking direction, room acoustics and geometry. | Provide a detailed description of the following dataset: Spatial LibriSpeech |
NIH 3T3 microtubule cell dataset | The data consists of 21 images of microtubules in PFA-fixed NIH 3T3 mouse embryonic fibroblasts (DSMZ: ACC59) labeled with a mouse anti-alpha-tubulin monoclonal IgG1 antibody (Thermofisher A11126, primary antibody) and visualized by a blue-fluorescent Alexa Fluor 405 goat anti-mouse IgG antibody (Thermofisher A-31553, ... | Provide a detailed description of the following dataset: NIH 3T3 microtubule cell dataset |
LaRS | LaRS is the largest and most diverse **panoptic** maritime obstacle detection dataset.
Highlights:
* Diverse scenes from manual capture, public online videos and existing datasets
* USV-centric point of view
* **4000+** manually per-pixel labelled frames:
* **3 stuff** categories and **8 thing** (dyn... | Provide a detailed description of the following dataset: LaRS |
HarmfulQA | [**Paper**](https://arxiv.org/abs/2308.09662) | [**Github**](https://github.com/declare-lab/red-instruct) | [**Dataset**](https://huggingface.co/datasets/declare-lab/HarmfulQA)| [**Model**](https://huggingface.co/declare-lab/starling-7B)
As a part of our research efforts toward making LLMs more safe for public use, ... | Provide a detailed description of the following dataset: HarmfulQA |
CONG | A dataset for position-constrained robot grasp planning. | Provide a detailed description of the following dataset: CONG |
WanJuan | **WanJuan** is a large-scale training corpus that includes multiple modalities. The dataset incorporates text, image-text, and video modalities, with a total volume exceeding 2TB. | Provide a detailed description of the following dataset: WanJuan |
VIDIMU: Multimodal video and IMU kinematic dataset on daily life activities using affordable devices | Human activity recognition and clinical biomechanics are challenging problems in physical telerehabilitation medicine. However, most publicly available datasets on human body movements cannot be used to study both problems in an out-of-the-lab movement acquisition setting. The objective of the VIDIMU dataset is to pave... | Provide a detailed description of the following dataset: VIDIMU: Multimodal video and IMU kinematic dataset on daily life activities using affordable devices |
sim-combi | # Data simulator for polypharmacies / drug combinations
## TL;DR
```python create_dataset.py [--config path/to/config.json --seed your_seed]```
Template of `config.json` in `configs/`
## Example end result
| Rx1 | Rx2 | Rx3 | Rx4 | ... | RxN | RR |
|-----|-----|-----|-----|-----|-----|------|
| 1 | 0 ... | Provide a detailed description of the following dataset: sim-combi |
StoryBench | StoryBench is a multi-task benchmark to reliably evaluate the ability of text-to-video models to generate stories from a sequence of captions and their duration. It includes three datasets (DiDeMo, Oops, UVO) and three video generation tasks of increasing difficulty: action execution, where the next action must be gene... | Provide a detailed description of the following dataset: StoryBench |
SONAR | **SONAR** is a new multilingual and multimodal fixed-size sentence embedding space with a full suite of speech and text encoders and decoders. It substantially outperforms existing sentence embeddings such as LASER3 and LaBSE on the xsim and xsim++ multilingual similarity search tasks. | Provide a detailed description of the following dataset: SONAR |
Accompnaying Dataset for: Chemical Heredity as Group Selection at the Molecular Level | Accompnaying Dataset for: Chemical Heredity as Group Selection at the Molecular Level. File descriptions are provided in the Appendix of [Markovitch, Witkowski and Virgo; Chemical Heredity as Group Selection at the Molecular Level, arXiv (2018)] (https://arxiv.org/abs/1802.08024). | Provide a detailed description of the following dataset: Accompnaying Dataset for: Chemical Heredity as Group Selection at the Molecular Level |
Accompanying dataset for: Predicting Species Emergence in Simulated Complex Pre-Biotic Networks | This is the accompanying data and code for the publication [Markovitch & Krasnogor: Predicting Species Emergence in Simulated Complex Pre-Biotic Networks] containing the full set of 10,000 lognormal networks studied, their network communities and the compotype species observed during simulations with the GARD model. De... | Provide a detailed description of the following dataset: Accompanying dataset for: Predicting Species Emergence in Simulated Complex Pre-Biotic Networks |
AutoPoster dataset | Dataset proposed by ACM MM 2023 paper "AutoPoster: A Highly Automatic and Content-aware Design System for Advertising Poster Generation"
We gather 76537 advertising posters from an e-commerce advertising platform. The posters are designed manually and cover a broad range of product categories, resulting in a diverse... | Provide a detailed description of the following dataset: AutoPoster dataset |
VA (Virtual Apartment) | A synthetic depth estimation dataset for benchmark rendered from a high-quality CAD indoor environment
- About 3.5K RGBD pairs with left-right stereo
- Challenging viewing direction
- Challenging different light condition | Provide a detailed description of the following dataset: VA (Virtual Apartment) |
Voxceleb-3D | A dataset for voice and 3D face structure study. It contains about 1.4K identities with their 3D face models and voice data. 3D face models are fitted from VGGFace using BFM 3D models, and voice data are processed from Voxceleb | Provide a detailed description of the following dataset: Voxceleb-3D |
USPTO-30K | We introduce USPTO-30K, a large-scale benchmark dataset of annotated molecule images, which overcomes these limitations. It is created using pairs of images and MolFiles published by the United States Patent and Trademark Office. Each molecule was independently selected among all the available documents from 2001 to 2020. Th... | Provide a detailed description of the following dataset: USPTO-30K |
MolGrapher-Synthetic-300K | The set is created using molecule SMILES retrieved from the database PubChem. Images are then generated from SMILES using the molecule drawing library RDKit. The synthetic set is augmented at multiple levels:
Molecule level: Molecules are randomly transformed by: (1) displaying explicit hydrogens, (2) reducing of th... | Provide a detailed description of the following dataset: MolGrapher-Synthetic-300K |
WRV | # G2LP
Wire-removal Dataset in G2LP-Net: Global to Local Progressive Video Inpainting Network
# Wire-removal Dataset
The WRV dataset has been specifically curated for the challenges of video inpainting in irregularly slender regions. It encompasses 150 video clips (now 190) that are extracted from movies and TV seri... | Provide a detailed description of the following dataset: WRV |
BEE23 | We collected 32 videos that record bee colony activity from different periods on several sunny days.
The total size of the dataset is 3,562 frames and 43,169 annotations. | Provide a detailed description of the following dataset: BEE23 |
GMOT-40 | GMOT-40 is the first public dense dataset for Generic Multiple Object Tracking (GMOT). It contains 40 carefully annotated sequences evenly distributed among 10 object categories. Beyond the data, a challenging protocol, one-shot GMOT, is adopted and a series of baseline algorithms is introduced. GMOT-40 is featured in
... | Provide a detailed description of the following dataset: GMOT-40 |
NeRF-MVL | We establish an object-centric **m**ulti-**v**iew **L**iDAR dataset, which we
dub the **NeRF-MVL** dataset, containing carefully calibrated sensor poses,
acquired from multi-LiDAR sensor data from real autonomous vehicles. It contains
more than **76k frames** covering two types of collecting vehicles, three LiDAR
s... | Provide a detailed description of the following dataset: NeRF-MVL |
EchoNet LVH | Echocardiography, or cardiac ultrasound, is the most widely used and readily available imaging modality for assessing cardiac function and structure. Combining portable instrumentation, rapid image acquisition, and high temporal resolution, all without the risks of ionizing radiation, echocardiography is one of the most freque... | Provide a detailed description of the following dataset: EchoNet LVH |
PubChemQA | PubChemQA consists of molecules and their corresponding textual descriptions from PubChem. It contains a single type of question, i.e., please describe the molecule. We remove molecules that cannot be processed by RDKit [Landrum et al., 2021] to generate 2D molecular graphs. We also remove texts with fewer than 4 words,... | Provide a detailed description of the following dataset: PubChemQA |
UniProtQA | UniProtQA consists of proteins and textual queries about their functions and properties. The dataset is constructed from UniProt and consists of 4 types of questions regarding functions, official names, protein families, and sub-cellular locations. We collect a total of 569,516 proteins and 1,891,506 question-ans... | Provide a detailed description of the following dataset: UniProtQA |
DEEP-VOICE: DeepFake Voice Recognition | # DEEP-VOICE: Real-time Detection of AI-Generated Speech for DeepFake Voice Conversion
This dataset contains examples of real human speech and DeepFake versions of those speeches generated using Retrieval-based Voice Conversion.
*Can machine learning be used to detect when speech is AI-generated?*
## Introduction
... | Provide a detailed description of the following dataset: DEEP-VOICE: DeepFake Voice Recognition |
TYC Dataset | We introduce the trapped yeast cell (TYC) dataset, a novel dataset for understanding instance-level semantics and motions of cells in microstructures. We release 105 densely annotated high-resolution brightfield microscopy images, including about 19k instance masks. We also release 261 curated video clips composed ... | Provide a detailed description of the following dataset: TYC Dataset |
MusicQA | MusicQA dataset, designed for training the MU-LLaMA model | Provide a detailed description of the following dataset: MusicQA |
DAIR-V2X-Seq | An extension of DAIR-V2X with additional temporal information. | Provide a detailed description of the following dataset: DAIR-V2X-Seq |
MusicQA Dataset | We propose the MusicQA dataset for training music-enabled question-answering models; it is used for training and evaluating our MU-LLaMA model. This dataset is generated using the MusicCaps and MagnaTagATune datasets. We utilize the descriptions/tags from existing datasets to prompt the MPT-7B Chat model to generate questi... | Provide a detailed description of the following dataset: MusicQA Dataset |
NERDS 360 | We present a large-scale dataset for 3D urban scene understanding. Compared to existing datasets, our dataset consists of 75 outdoor urban scenes with diverse backgrounds, encompassing over 15,000 images. These scenes offer 360° hemispherical views, capturing diverse foreground objects illuminated under various lightin... | Provide a detailed description of the following dataset: NERDS 360 |
Pylon Benchmark | We create a new dataset from GitTables, a data lake of 1.7M tables extracted from CSV files on GitHub. The benchmark comprises 1,746 tables including union-able table subsets under topics selected from Schema.org: scholarly article, job posting, and music playlist. We end up with these three topics since we can find a ... | Provide a detailed description of the following dataset: Pylon Benchmark |
Quechua-SER | Quechua Collao corpus for automatic emotion recognition in speech. Audios are provided, alongside csv files with labels from 4 annotators for valence, arousal, and dominance values, using a 1 to 5 scale.
Categorical labels are also included, as well as the script used for recording. This script contains sets of wor... | Provide a detailed description of the following dataset: Quechua-SER |
HardZiPA Dataset | The HardZiPA folder contains illuminance and RGB data as well as CO2 and TVOC data for five sensing devices. | Provide a detailed description of the following dataset: HardZiPA Dataset |
SD7K | SD7K is currently the only large-scale, high-resolution dataset that satisfies all important data features for document shadow, covering a large number of document shadow images. Mean resolution is $2462 \times 3699$. | Provide a detailed description of the following dataset: SD7K |
WikiFANE_Gold | The gold-standard and automatically-developed fine-grained Arabic named entity corpora are resources created by annotating Named Entities into 50 fine-grained classes.
The annotation uses a two-level taxonomy in which each entity is annotated into coarse- and fine-grained classes. | Provide a detailed description of the following dataset: WikiFANE_Gold |
TVIL | Temporal Video Inpainting Localization Dataset. | Provide a detailed description of the following dataset: TVIL |
Do-Not-Answer | **Do-Not-Answer** is a dataset to evaluate safeguards in large language models, and deploy safer open-source LLMs at a low cost. The dataset is curated and filtered to consist only of instructions that responsible language models should not follow. We annotate and assess the responses of six popular LLMs to these instr... | Provide a detailed description of the following dataset: Do-Not-Answer |
OVDEval | **OVDEval** includes 9 sub-tasks and introduces evaluations on commonsense knowledge, attribute understanding, position understanding, object relation comprehension, and more. The dataset is meticulously created to provide hard negatives that challenge models' true understanding of visual and linguistic input. | Provide a detailed description of the following dataset: OVDEval |
DiaASQ | DiaASQ is a fine-grained Aspect-based Sentiment Analysis (ABSA) benchmark under the conversation scenario. It challenges existing ABSA methods by 1) extracting quadruple of target-aspect-opinion-sentiment in a dialogue, and 2) modeling the dialogue discourse structures. The dataset is constructed by systematically craw... | Provide a detailed description of the following dataset: DiaASQ |
WebVid-CoVR | The WebVid-CoVR dataset is a collection of video-text-video triplets that can be used for the task of composed video retrieval (CoVR). CoVR is a task that involves searching for videos that match both a query image and a query text. The text typically specifies the desired modification to the query image.
The WebVi... | Provide a detailed description of the following dataset: WebVid-CoVR |
MIMIC-GAZE-JPG | 1083 cases from the MIMIC-CXR dataset. For each case, a grayscale X-ray image of around 3000×3000 pixels, eye-gaze data, and ground-truth classification labels are provided. These cases are classified into 3 categories: Normal, Congestive Heart Failure (CHF), and Pneumonia. | Provide a detailed description of the following dataset: MIMIC-GAZE-JPG |
Taobao (TGN Style) | Taobao dataset which is pre-processed in TGN Style. | Provide a detailed description of the following dataset: Taobao (TGN Style) |
ML25m (TGN Style) | ML25m dataset which is pre-processed in TGN Style. | Provide a detailed description of the following dataset: ML25m (TGN Style) |
DGraphFin (TGN Style) | DGraphFin dataset which is pre-processed in TGN Style. | Provide a detailed description of the following dataset: DGraphFin (TGN Style) |
World Across Time | The **World Across Time (WAT)** dataset used in the paper "CLNeRF: Continual Learning Meets NeRF". It contains multiple colmap reconstructed scenes used for continual learning of NeRFs. For each scene, we provide multiple scans captured at different times where the same scene has different appearance and geometry conditions... | Provide a detailed description of the following dataset: World Across Time |
TUR2SQL | The field of converting natural language into corresponding SQL queries using deep learning techniques has attracted significant attention in recent years. While existing Text-to-SQL datasets primarily focus on English and other languages such as Chinese, there is a lack of resources for the Turkish language. In this s... | Provide a detailed description of the following dataset: TUR2SQL |
MedShapeNet | MedShapeNet contains over 100,000 medical shapes, including bones, organs, vessels, muscles, etc., as well as surgical instruments. You can search, display them in 3D and download the individual shapes by using our shape search engine. Note that MedShapeNet is provided for research and educational purposes only. | Provide a detailed description of the following dataset: MedShapeNet |
WeatherBench 2 | **WeatherBench 2** is an update to the global, medium-range (1–14 day) weather forecasting benchmark proposed by Rasp et al. (2020), designed to accelerate progress in data-driven weather modeling. WeatherBench 2 consists of an open-source evaluation framework, publicly available training, ground truth... | Provide a detailed description of the following dataset: WeatherBench 2 |
DGL Version of OpenCatalyst (OC20) ISRE | We provide DGL compatible graphs in lmdb format for the OpenCatalyst IS2RE task based on the OC20 dataset. | Provide a detailed description of the following dataset: DGL Version of OpenCatalyst (OC20) ISRE |
MatSci-NLP Benchmark Dataset | We present MatSci-NLP, a natural language benchmark for evaluating the performance of natural language processing (NLP) models on materials science text. We construct the benchmark from publicly available materials science text data to encompass seven different NLP tasks, including conventional NLP tasks like named ent... | Provide a detailed description of the following dataset: MatSci-NLP Benchmark Dataset |
Belebele | Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. This dataset enables the evaluation of mono- and multi-lingual models in high-, medium-, and low-resource languages. Each question has four multiple-choice answers and is linked to a short passage from the FLORES-2... | Provide a detailed description of the following dataset: Belebele |
Tennessee Eastman Process | This dataset contains simulations of a complex, large-scale chemical plant proposed by Downs and Vogel (1993). As described by Reinartz, Kulahci and Ravn (2021):
The process involves the production of two liquid product components G and H from four gaseous reactants A, C, D and E with an additional inert B and a byp... | Provide a detailed description of the following dataset: Tennessee Eastman Process |
ACSPublicCoverage | ACSPublicCoverage: predict whether an individual is covered by public health insurance, after filtering the ACS PUMS data sample to only include individuals under the age of 65, and those with an income of less than $30,000. This filtering focuses the prediction problem on low-income individuals who are not eligible fo... | Provide a detailed description of the following dataset: ACSPublicCoverage |
EMDB | EMDB contains in-the-wild videos of human activity recorded with a hand-held iPhone. It features reference SMPL body pose and shape parameters, as well as global body root and camera trajectories. The reference 3D poses were obtained by jointly fitting SMPL to 12 body-worn electromagnetic sensors and image data. For t... | Provide a detailed description of the following dataset: EMDB |
RemFX | Audio samples processed with sound effects, to evaluate effect removal models. The audio effects applied are from the set (Distortion, Delay, Dynamic Range Compressor, Phasor, Reverb) and randomly sampled without replacement for each example; the targets are the original audio.
The audio samples are sourced from Voca... | Provide a detailed description of the following dataset: RemFX |
OCB | OCB contains two graph datasets, Ckt-Bench-101 and Ckt-Bench-301, for representation learning over analog circuits. Ckt-Bench-101 and Ckt-Bench-301 contain graphs (DAGs) that represent analog circuits and provide their corresponding graph-level properties: DC gain (Gain), bandwidth (BW), phase margin (PM), Figure of Mer... | Provide a detailed description of the following dataset: OCB |
The HYPSO-1 Sea-Land-Cloud-Labeled Dataset | Hyperspectral imaging, employed in satellites for space remote sensing such as HYPSO-1, is constrained by the scarcity of labeled datasets, which affects the training of AI models that demand ground-truth annotations. In this work, we introduce The HYPSO-1 Sea-Land-Cloud-Labeled Dataset, an open dataset with 200 diverse hype... | Provide a detailed description of the following dataset: The HYPSO-1 Sea-Land-Cloud-Labeled Dataset |
Defects4J | Defects4J is a collection of reproducible bugs and a supporting infrastructure with the goal of advancing software engineering research.
Defects4J contains 835 bugs (plus 29 deprecated bugs) from the following open-source projects:
| Identifier | Project name | Number of active bugs | Active bu... | Provide a detailed description of the following dataset: Defects4J |
Gait3D-Parsing | **Gait3D-Parsing** is a dataset for gait recognition in the wild. It is an extension of the large-scale and challenging Gait-3D dataset which is collected from an in-the-wild environment. The train set has 3,000 IDs, and the test set has 1,000 IDs. Meanwhile, 1,000 sequences in the test set are taken as the query set, ... | Provide a detailed description of the following dataset: Gait3D-Parsing |
BioCoder | **BioCoder** is a benchmark developed to evaluate existing pre-trained models in generating bioinformatics code. In relation to function-code generation, BioCoder covers potential package dependencies, class declarations, and global variables. It incorporates 1026 functions and 1243 methods in Python and Java from GitH... | Provide a detailed description of the following dataset: BioCoder |
Iridium Message Headers (25MS/s) | Labelled dataset of Iridium “ring alert” downlink messages, including message headers captured at 25MS/s. Message metadata includes satellite and transmitter identifier, satellite position, timestamp, and estimated noise level. The dataset contains 1,706,556 messages.
The dataset has been split into numpy files for ea... | Provide a detailed description of the following dataset: Iridium Message Headers (25MS/s) |
SatIQ Model Weights | Model weights for use with the SatIQ fingerprinting models used in the paper “Watch This Space: Securing Satellite Communication through Resilient Transmitter Fingerprinting”. The models are used to authenticate Iridium satellites from high sample rate message headers.
The data collection and model code can be found... | Provide a detailed description of the following dataset: SatIQ Model Weights |
CongNaMul | CongNaMul Dataset | Provide a detailed description of the following dataset: CongNaMul |
TILT corpus | A corpus of GDPR machine-readable transparency information powered by the Transparency Information Language and Toolkit (TILT). These statements were extracted from real-world services for academic research purposes. They contain information about the collection, processing, and use of personal data in accordance with ... | Provide a detailed description of the following dataset: TILT corpus |
DeepFakeFace | The rise of deepfake images, especially of well-known personalities, poses a serious threat to the dissemination of authentic information. To tackle this, we present a thorough investigation into how deepfakes are produced and how they can be identified. The cornerstone of our research is a rich collection of artificia... | Provide a detailed description of the following dataset: DeepFakeFace |
AI-ready multiplex IHC-IF dataset | We introduce a new AI-ready computational pathology dataset containing restained and co-registered digitized images from eight head-and-neck squamous cell carcinoma patients. Specifically, the same tumor sections were stained with the expensive multiplex immunofluorescence (mIF) assay first and then restained with chea... | Provide a detailed description of the following dataset: AI-ready multiplex IHC-IF dataset |
Facial Skeletal angles | Facial Skeletal Angles (Glabella and Maxilla Angle and Length and Width of Piriformis) | Provide a detailed description of the following dataset: Facial Skeletal angles |
FormAI Dataset | FormAI is a novel AI-generated dataset comprising 112,000 compilable and independent C programs. All the programs in the dataset were generated by GPT-3.5-turbo using a dynamic zero-shot prompting technique, and the dataset comprises programs with varying levels of complexity. Some programs handle complicated tasks such as network ma... | Provide a detailed description of the following dataset: FormAI Dataset |
Unity Synthetic Humans | A package for creating Unity Perception compatible synthetic people. | Provide a detailed description of the following dataset: Unity Synthetic Humans |
Sound-Dr | As the burden of respiratory diseases on society continues to grow worldwide, this paper proposes a high-quality and reliable dataset of human sounds for studying respiratory illnesses, including pneumonia and COVID-19. It consists of coughing, mouth breathing, and nose breathing sounds together with metadata on relate... | Provide a detailed description of the following dataset: Sound-Dr |
dacl10k | dacl10k stands for damage classification 10k images and is a **multi-label semantic segmentation** dataset for **19 classes (13 damages and 6 objects)** present on bridges.
The dacl10k dataset includes images collected during concrete bridge inspections acquired from databases at authorities and engineering offices,... | Provide a detailed description of the following dataset: dacl10k |
CLPD | The CLPD dataset comprises 1200 images that encompass various regions within mainland China. These images were sourced from diverse origins, including the internet, mobile devices, and in-car recording devices. While the majority of the images were recorded during daylight hours, a portion of them were captured at nigh... | Provide a detailed description of the following dataset: CLPD |
CD-HARD | CD-HARD comprises 102 images featuring vehicles with oblique license plates sourced from the Cars dataset. Each image within this dataset exclusively depicts a single vehicle and was captured during daylight hours. While the dataset encompasses images from diverse geographic regions, it predominantly consists of images... | Provide a detailed description of the following dataset: CD-HARD |
CSPRD | The Chinese Stock Policy Retrieval Dataset (CSPRD) contains a Chinese policy corpus of 10,002 articles and 709 prospectus examples from 545 companies listed on China’s Science and Technology Innovation Board (STAR Market). CSPRD is bilingual in Chinese and English (translated by ChatGPT) and is annotated by experienced... | Provide a detailed description of the following dataset: CSPRD |
DUDE | DUDE is formulated as an instance of Document Question Answering (DocQA) to evaluate how well current solutions deal with multi-page documents, if they can navigate and reason over the layout, and if they can generalize these skills to different document types and domains. Since we cannot provide question-answer pairs ... | Provide a detailed description of the following dataset: DUDE |
VIST-E | VIST-E consists of 49,913 training samples, 4,963 validation samples, and 5,030 test samples, and is derived from the VIST dataset. As every sample in VIST contains a story of five sentences, each sample in VIST-E contains the story ending, the ending-related image, and the first four sentences in the story as the story c... | Provide a detailed description of the following dataset: VIST-E |