dataset_name | description | prompt |
|---|---|---|
WikiCaps | **WikiCaps** is a large-scale multilingual but non-parallel data set for multimodal machine translation and retrieval. The image-caption data was extracted from Wikimedia Commons and is thus representative of the largely non-descriptive image-caption pairs widely available on the web. The current version of th... | Provide a detailed description of the following dataset: WikiCaps |
WikiCLIR | **WikiCLIR** is a large-scale (German-English) retrieval data set for Cross-Language Information Retrieval (CLIR). It contains a total of 245,294 German single-sentence queries with 3,200,393 automatically extracted relevance judgments for 1,226,741 English Wikipedia articles as documents. Queries are well-formed natur... | Provide a detailed description of the following dataset: WikiCLIR |
CPM-Synt-1 | **CPM-Synt-1** is a dataset of 5555 synthesized makeup images with pattern segmentation masks. | Provide a detailed description of the following dataset: CPM-Synt-1 |
CPM-Synt-2 | **CPM-Synt-2** is a dataset of 1625 synthesized image triplets: makeup, non-makeup, and ground truth. | Provide a detailed description of the following dataset: CPM-Synt-2 |
Stickers | Stickers is a dataset consisting of 577 high-quality sticker images with an alpha channel.<br>Image source: [https://github.com/VinAIResearch/CPM](https://github.com/VinAIResearch/CPM) | Provide a detailed description of the following dataset: Stickers |
CPM-Real | **CPM-Real** is a dataset consisting of 3895 images of real makeup styles.<br>Image source: [https://github.com/VinAIResearch/CPM](https://github.com/VinAIResearch/CPM) | Provide a detailed description of the following dataset: CPM-Real |
BAIR Robot Pushing | Dataset of 64x64 images of a robot pushing objects on a table top. From Berkeley AI Research (BAIR). | Provide a detailed description of the following dataset: BAIR Robot Pushing |
BTAD | The **BTAD** (beanTech Anomaly Detection) dataset is a real-world industrial anomaly dataset. The dataset contains a total of 2830 real-world images of 3 industrial products showcasing body and surface defects. | Provide a detailed description of the following dataset: BTAD |
COPA-HR | The COPA-HR dataset (Choice of plausible alternatives in Croatian) is a translation of the English [COPA dataset](copa) by following the [XCOPA dataset](xcopa) translation methodology. The dataset consists of 1000 premises (My body cast a shadow over the grass), each given a question (What is the cause?), and two choic... | Provide a detailed description of the following dataset: COPA-HR |
CASP13 MQA | CASP13 MQA is a dataset that contains predicted models for CASP13 targets and their scores. | Provide a detailed description of the following dataset: CASP13 MQA |
ACID | ACID consists of thousands of aerial drone videos of coastline and nature scenes collected from YouTube. Structure-from-motion is used to estimate camera poses.<br>Image source: [https://arxiv.org/pdf/2012.09855v2.pdf](https://arxiv.org/pdf/2012.09855v2.pdf) | Provide a detailed description of the following dataset: ACID |
KazakhTTS | **KazakhTTS** is an open-source speech synthesis dataset for Kazakh, a low-resource language spoken by over 13 million people worldwide. The dataset consists of about 91 hours of transcribed audio recordings spoken by two professional speakers (female and male). It is the first publicly available large-scale dataset de... | Provide a detailed description of the following dataset: KazakhTTS |
italki NLI | A large, crowd-sourced dataset for the Native Language Identification (NLI) task. People learning English as a second language write practice Notebooks, which can be used to classify the author's native language using word choice, spelling mistakes and other language features.<br>The dataset has:<br>- 11 languages (Arab... | Provide a detailed description of the following dataset: italki NLI |
Toronto NeuroFace Dataset | Toronto NeuroFace Dataset: A New Dataset for Facial Motion Analysis in Individuals with Neurological Disorders.<br>The Toronto NeuroFace Dataset is a public dataset with videos of oro-facial gestures performed by individuals with oro-facial impairment due to neurological disorders, such as amyotrophic lateral sclerosis (ALS... | Provide a detailed description of the following dataset: Toronto NeuroFace Dataset |
GE852 | GE852 is a dataset of 852 game engine repositories mined from GitHub in two languages, namely Java and C++. The dataset contains metadata of all the mined repositories including commits, pull requests, issues and so on. This dataset lays the foundation for empirical investigation in the area of game engines. | Provide a detailed description of the following dataset: GE852 |
Newer College | The **Newer College Dataset** is a large dataset with a variety of mobile mapping sensors collected using a handheld device carried at typical walking speeds for nearly 2.2 km through New College, Oxford. The dataset includes data from two commercially available devices - a stereoscopic-inertial camera and a multi-beam... | Provide a detailed description of the following dataset: Newer College |
VideoSet | **VideoSet** is a large-scale compressed video quality dataset based on just-noticeable-difference (JND) measurement.<br>The dataset consists of 220 5-second sequences in four resolutions (i.e., 1920×1080, 1280×720, 960×540 and 640×360). Each of the 880 video clips is encoded using the H.264 codec with QP=1,⋯,51 and me... | Provide a detailed description of the following dataset: VideoSet |
Cable TV News | **Cable TV news** is a data set of nearly 24/7 video, audio, and text captions from three U.S. cable TV networks (CNN, FOX, and MSNBC) from January 2010 to July 2019. Using machine learning tools, the authors detect faces in 244,038 hours of video, label each face's presented gender, identify prominent public figures, ... | Provide a detailed description of the following dataset: Cable TV News |
AISHELL-3 | AISHELL-3 is a large-scale, high-fidelity multi-speaker Mandarin speech corpus which can be used to train multi-speaker Text-to-Speech (TTS) systems. The corpus contains roughly 85 hours of emotion-neutral recordings spoken by 218 native Mandarin Chinese speakers, totaling 88,035 utterances. Their auxiliary attribu... | Provide a detailed description of the following dataset: AISHELL-3 |
Election2020 | **Election2020** is a Twitter dataset on the 2020 US presidential elections. To facilitate the understanding of political discourse and try to empower the Computational Social Science research community, the authors decided to publicly release this massive-scale, longitudinal dataset of U.S. politics- and election-rela... | Provide a detailed description of the following dataset: Election2020 |
Wikipedia Citations | Wikipedia Citations is a comprehensive dataset of citations extracted from Wikipedia. A total of 29.3M citations were extracted from 6.1M English Wikipedia articles as of May 2020, and classified as being to books, journal articles or Web contents. We were thus able to extract 4.0M citations to scholarly publications w... | Provide a detailed description of the following dataset: Wikipedia Citations |
OMD | The **Oxford Multimotion Dataset** (OMD) provides a number of multimotion estimation problems of varying complexity. It includes both complex problems that challenge existing algorithms as well as a number of simpler problems to support development. These include observations from both static and dynamic sensors, a var... | Provide a detailed description of the following dataset: OMD |
Cry Wolf | **Cry Wolf** is a dataset for cyber security analysis tasks. It is an open-access dataset of 73 true and false Intrusion Detection System (IDS) alarms derived from real-world examples of "impossible travel" scenarios. | Provide a detailed description of the following dataset: Cry Wolf |
IITM-Bandersnatch | **IITM-Bandersnatch** is a dataset to evaluate traffic analysis techniques. The dataset comprises data points of the form {encrypted traces, ground truth choices}. To collect each data point, we asked the viewer to watch Bandersnatch from the beginning and note down the choices they made. At the same time, we col... | Provide a detailed description of the following dataset: IITM-Bandersnatch |
Hate Counter | This dataset is built from Twitter and contains 1290 hate tweet and counterspeech reply pairs. After the annotation process, the dataset consists of 558 unique hate tweets from 548 users and 1290 counterspeech replies from 1239 users. | Provide a detailed description of the following dataset: Hate Counter |
Alexa Domains | This dataset is composed of the URLs of the top 1 million websites. The domains are ranked using the Alexa traffic ranking, which is determined using a combination of the browsing behavior of users on the website, the number of unique visitors, and the number of pageviews. In more detail, unique visitors are the num... | Provide a detailed description of the following dataset: Alexa Domains |
ChestX-Det | ChestX-Det is a chest X-Ray dataset with instance-level annotations (boxes and masks). ChestX-Det is a subset of the public dataset [NIH ChestX-ray14](chestx-ray14). It contains ~3500 images of 13 common disease categories labeled by three board-certified radiologists. | Provide a detailed description of the following dataset: ChestX-Det |
MAI | **MAI** is a dataset for multi-scene recognition in single aerial images. It consists of 3,923 labelled large-scale images from Google Earth imagery that covers the United States, Germany, and France. The size of each image is 512 ×512, and spatial resolutions vary from 0.3 m/pixel to 0.6 m/pixel. After capturing aeria... | Provide a detailed description of the following dataset: MAI |
Event-Human3.6m | **Event-Human3.6m** is a challenging dataset for event-based human pose estimation by simulating events from the RGB [Human3.6m](human3-6m) dataset. It is built by converting the RGB recordings of Human3.6m into events and synchronising raw joints ground-truth with events frames through interpolation. | Provide a detailed description of the following dataset: Event-Human3.6m |
MLDS | **MLDS** is a collection of thousands of trained neural networks labelled with the data used to train them. MLDS allows meta weight-space analysis across thousands of networks trained with identical or similar training data. | Provide a detailed description of the following dataset: MLDS |
CaseHOLD | **CaseHOLD** (Case Holdings On Legal Decisions) is a law dataset comprising over 53,000 multiple-choice questions to identify the relevant holding of a cited case. This dataset presents a fundamental task to lawyers and is both legally meaningful and difficult from an NLP perspective (F1 of 0.4 with a BiLSTM baselin... | Provide a detailed description of the following dataset: CaseHOLD |
Hateful Users on Twitter | This is a Twitter dataset of 100,386 users, collected with a random-walk-based crawler on the retweet graph along with up to 200 tweets from each user's timeline; a subsample of 4,972 users is manually annotated as hateful or not through crowdsourcing. The dataset can be used to examine the difference between user activity patter... | Provide a detailed description of the following dataset: Hateful Users on Twitter |
Alibaba Cluster Trace | **Alibaba Cluster Trace** captures detailed statistics for the co-located workloads of long-running and batch jobs over a course of 24 hours. The trace consists of three parts: (1) statistics of the studied homogeneous cluster of 1,313 machines, including each machine’s hardware configuration, and the runtime {CPU, Mem... | Provide a detailed description of the following dataset: Alibaba Cluster Trace |
Continuous Defect Prediction | Continuous Defect Prediction (CDP) is a dataset of more than 11 million data rows, representing files involved in Continuous Integration (CI) builds, that synthesize the results of CI builds with data mined from software repositories. The dataset embraces 1,265 software projects, 30,022 distinct commit authors and seve... | Provide a detailed description of the following dataset: Continuous Defect Prediction |
BB-MAS | **BB-MAS** is a behavioural biometrics dataset. It consists of data collected from 117 subjects for typing (both fixed and free text), gait (walking, upstairs and downstairs) and touch on Desktop, Tablet and Phone. The dataset consists of a total of about: 3.5 million keystroke events; 57.1 million data-points for acceler... | Provide a detailed description of the following dataset: BB-MAS |
UK-DALE | **UK-DALE** is an open-access dataset from the UK recording Domestic Appliance-Level Electricity to conduct research on disaggregation algorithms, with data describing not just the aggregate demand per building but also the 'ground truth' demand of individual appliances. It was built at a sample rate of 16 kHz for the ... | Provide a detailed description of the following dataset: UK-DALE |
RAE | The Rainforest Automation Energy (RAE) dataset was created to help smart grid researchers test their algorithms which make use of smart meter data. This initial release of RAE contains 1 Hz data (mains and sub-meters) from two residential houses. In addition to power data, environmental and sensor data from the house's t... | Provide a detailed description of the following dataset: RAE |
Dataset of Dockerfiles | This dataset consists of approximately 178,000 unique Dockerfiles collected from GitHub to facilitate sophisticated semantics-aware static analysis of Dockerfiles. To enhance the usability of this data, the authors use five representations for working with, mining from, and analyzing these Dockerfiles. Each Dockerfile represent... | Provide a detailed description of the following dataset: Dataset of Dockerfiles |
LIDDI | LInked Drug-Drug Interactions (LIDDI) is a public nanopublication-based RDF dataset with trusty URIs that encompasses some of the most cited prediction methods and sources to provide researchers a resource for leveraging the work of others into their prediction methods. As one of the main issues to overcome the usage o... | Provide a detailed description of the following dataset: LIDDI |
UofTPed50 | **UofTPed50** is an object detection and tracking dataset which uses GPS to ground truth the position and velocity of a pedestrian. It can be used for benchmarking the positional accuracy of 3D pedestrian detection. It contains accurate positioning information by attaching a GPS system to the pedestrian itself. This... | Provide a detailed description of the following dataset: UofTPed50 |
Credibility Factors 2020 | This dataset focuses on 50 articles about climate science, which were annotated completely by 49 students, 26 Upwork workers, 3 science and 3 journalism experts.<br>Before participation, crowd raters filled out a demographic survey followed by committing to an Annotator Code of Conduct of performing their duties in as ... | Provide a detailed description of the following dataset: Credibility Factors 2020 |
Acticipate | **Acticipate** is a publicly available dataset with recordings of human body-motion and eye-gaze, acquired in an experimental scenario with an actor interacting with three subjects. It contains synchronised and labelled video+gaze and body motion in a dyadic scenario of interaction. | Provide a detailed description of the following dataset: Acticipate |
Human Optical Flow | A synthetic dataset of videos of human action sequences and the corresponding optical flow. | Provide a detailed description of the following dataset: Human Optical Flow |
Overruling | The **Overruling** dataset is a law dataset corresponding to the task of determining when a sentence is overruling a prior decision. This is a binary classification task, where positive examples are overruling sentences and negative examples are non-overruling sentences extracted from legal opinions. In law, an overrul... | Provide a detailed description of the following dataset: Overruling |
Terms of Service | The **Terms of Service** dataset is a law dataset corresponding to the task of identifying whether contractual terms are potentially unfair. This is a binary classification task, where positive examples are potentially unfair contractual terms (clauses) from the terms of service in consumer contracts. Article 3 of the ... | Provide a detailed description of the following dataset: Terms of Service |
WebQuestionsSP | The WebQuestionsSP dataset is released as part of our ACL-2016 paper “The Value of Semantic Parse Labeling for Knowledge Base Question Answering” [Yih, Richardson, Meek, Chang & Suh, 2016], in which we evaluated the value of gathering semantic parses, vs. answers, for a set of questions that originally comes from WebQu... | Provide a detailed description of the following dataset: WebQuestionsSP |
NBA SportVU | The NBA SportVU dataset contains player and ball trajectories for 631 games from the 2015-2016 NBA season. The raw tracking data is in the JSON format, and each moment includes information about the identities of the players on the court, the identities of the teams, the period, the game clock, and the shot clock. | Provide a detailed description of the following dataset: NBA SportVU |
robo-vln | The Robo-VLN dataset is a continuous control formulation of the VLN-CE dataset by [Krantz et al](https://arxiv.org/pdf/2004.02857.pdf) ported over from Room-to-Room (R2R) dataset created by [Anderson et al](http://openaccess.thecvf.com/content_cvpr_2018/papers/Anderson_Vision-and-Language_Navigation_Interpreting_CVPR_2... | Provide a detailed description of the following dataset: robo-vln |
2devs | **2devs** is a publicly available dataset of fine-grained untangled code changes collected by recording the development sessions of two developers over the course of four months, and the corresponding manual clustering. | Provide a detailed description of the following dataset: 2devs |
ToN_IoT | The **TON_IoT** datasets are new generations of Internet of Things (IoT) and Industrial IoT (IIoT) datasets for evaluating the fidelity and efficiency of different cybersecurity applications based on Artificial Intelligence (AI). The datasets have been called ‘ToN_IoT’ as they include heterogeneous data sources collect... | Provide a detailed description of the following dataset: ToN_IoT |
APND | **APND** (Arm Point Nav Dataset) is a dataset for the generalizable object manipulation task called ARMPOINTNAV, which consists of moving an object in the scene from a source location to a target location.<br>The dataset consists of 30 kitchen scenes in AI2-THOR that include more than 150 object categories (69 interact... | Provide a detailed description of the following dataset: APND |
Mid-level perceptual musical features | This dataset contains annotations for 5000 music files on the following music properties:<br>* Melodiousness<br>* Articulation<br>* Rhythmic stability<br>* Rhythmic complexity<br>* Dissonance<br>* Tonal stability<br>* Modality<br>The annotations were given by musicians and collected through a crowd-sourcing platform (Toloka). | Provide a detailed description of the following dataset: Mid-level perceptual musical features |
Warblr | **Warblr** is a dataset for the acoustic detection of birds. The dataset comes from a UK bird-sound crowdsourcing research spinout called Warblr. From this initiative the authors collected over 10,000 ten-second smartphone audio recordings from around the UK. The audio totals around 28 hours duration.<br>The audio cove... | Provide a detailed description of the following dataset: Warblr |
JoCAD | **JoCAD** is a dataset for anomaly detection in citation networks. | Provide a detailed description of the following dataset: JoCAD |
Hand Poses | This is a dataset for benchmarking in-hand manipulation on different robot platforms. | Provide a detailed description of the following dataset: Hand Poses |
BEIR | **BEIR** (Benchmarking IR) is a heterogeneous benchmark containing different information retrieval (IR) tasks. Through BEIR, it is possible to systematically study the zero-shot generalization capabilities of multiple neural retrieval approaches.<br>The benchmark contains a total of 9 information retrieval tasks (Fact... | Provide a detailed description of the following dataset: BEIR |
KUMC | The **KUMC** dataset for polyp detection and classification was collected from the University of Kansas Medical Center. It contains 80 colonoscopy video sequences which are manually labeled with bounding boxes as well as the polyp classes for the entire dataset. | Provide a detailed description of the following dataset: KUMC |
SumeCzech-NER | SumeCzech-NER contains named entity annotations of SumeCzech 1.0, a Czech news-based summarization dataset. | Provide a detailed description of the following dataset: SumeCzech-NER |
Signal-1M Related Tweets | Signal-1M Related Tweets is a TREC-like data collection to evaluate approaches for the task of related-tweet retrieval for news articles.<br>Learn more about the data collection process [here](https://research.signal-ai.com/datasets/signal1m-tweetir.html). | Provide a detailed description of the following dataset: Signal-1M Related Tweets |
MTST | The Mobile Turkish Scene Text (MTST 200) dataset consists of 200 indoor and outdoor Turkish scene text images. The images were collected with mobile phones and downsized to 576 × 1024 (portrait) or 1024 × 576 (landscape) pixels. The text lines are horizontal or near horizontal, some with slight in- and out-of-plane ... | Provide a detailed description of the following dataset: MTST |
LHQ | A dataset of 90,000 high-resolution nature landscape images, crawled from Unsplash and Flickr and preprocessed with Mask R-CNN and Inception V3. | Provide a detailed description of the following dataset: LHQ |
SemEval2017 | DOI: [10.18653/v1/S17-2091](http://dx.doi.org/10.18653/v1/S17-2091) | Provide a detailed description of the following dataset: SemEval2017 |
Inspec | Paper: Improved automatic keyword extraction given more linguistic knowledge<br>DOI: [10.3115/1119355.1119383](https://doi.org/10.3115/1119355.1119383) | Provide a detailed description of the following dataset: Inspec |
Essays | J. W. Pennebaker and L. A. King, “Linguistic styles: Language use as an individual difference,” J. Pers. Soc. Psychol., vol. 77, no. 6, pp. 1296–1312, Dec. 1999, doi: [10.1037/0022-3514.77.6.1296](https://psycnet.apa.org/doi/10.1037/0022-3514.77.6.1296). | Provide a detailed description of the following dataset: Essays |
MIT Indoor Scenes | Context<br>This is the original data provided by MIT.<br>Indoor scene recognition is a challenging open problem in high level vision. Most scene recognition models that work well for outdoor scenes perform poorly in the indoor domain. The main difficulty is that while some indoor scenes (e.g. corridors) can be well char... | Provide a detailed description of the following dataset: MIT Indoor Scenes |
Intel Image Classification | Context<br>This is image data of natural scenes around the world.<br>Content<br>This data contains around 25k images of size 150x150 distributed under 6 categories: {'buildings' -> 0, 'forest' -> 1, 'glacier' -> 2, 'mountain' -> 3, 'sea' -> 4, 'street' -> 5}.<br>The Train, Test and Prediction data is separated in eac... | Provide a detailed description of the following dataset: Intel Image Classification |
UPFD | For benchmarking, please refer to its variants [UPFD-POL](https://paperswithcode.com/dataset/upfd-pol) and [UPFD-GOS](https://paperswithcode.com/dataset/upfd-gos).<br>The dataset has been integrated with [Pytorch Geometric](https://github.com/rusty1s/pytorch_geometric/blob/master/examples/upfd.py) (PyG) and [Deep Graph ... | Provide a detailed description of the following dataset: UPFD |
InfographicVQA | **InfographicVQA** is a dataset that comprises a diverse collection of infographics along with natural language questions and answers annotations. The collected questions require methods to jointly reason over the document layout, textual content, graphical elements, and data visualizations. We curate the dataset with ... | Provide a detailed description of the following dataset: InfographicVQA |
LDV | **LDV** is a dataset for video enhancement. It contains 240 videos with diverse categories of content, different kinds of motion and various frame-rates. | Provide a detailed description of the following dataset: LDV |
Earnings-21 | Earnings-21 is a 39-hour corpus of earnings calls containing entity-dense speech from nine different financial sectors. This corpus is intended to benchmark ASR (Automatic Speech Recognition) systems in the wild with special attention towards named entity recognition. | Provide a detailed description of the following dataset: Earnings-21 |
Comparative Question Completion | **Comparative Question Completion** is a dataset to evaluate what large language models learn.<br>The dataset includes short questions in natural language that make comparisons between entity pairs, for example, “is a cockroach or beetle more dangerous?”<br>The questions are in three subject domains: animals, cities... | Provide a detailed description of the following dataset: Comparative Question Completion |
ECTF | **ECTF** is a dataset for Twitter fake news detection in the Covid-19 domain. | Provide a detailed description of the following dataset: ECTF |
Extended UCF Crime | The **Extended UCF Crime** extends the [UCF Crime](ucf-crime) data set that consists of 13 anomaly classes. The extension adds two different anomaly classes to the data set, which are "molotov bomb" and "protest" classes. It also adds 33 videos to the fighting class. In total, the extension adds 216 videos to the train... | Provide a detailed description of the following dataset: Extended UCF Crime |
EntailmentBank | **EntailmentBank** is a dataset that contains multistep entailment trees. At each node in the tree (typically) two or more facts compose together to produce a new conclusion. Given a hypothesis (question + answer), three increasingly difficult explanation tasks are defined: generate a valid entailment tree given (a) al... | Provide a detailed description of the following dataset: EntailmentBank |
AM2iCo | **AM2iCo** is a wide-coverage and carefully designed cross-lingual and multilingual evaluation set. It aims to assess the ability of state-of-the-art representation models to reason over cross-lingual lexical-level concept alignment in context for 14 language pairs.<br>English (EN), German (DE), Russian (RU), Japane... | Provide a detailed description of the following dataset: AM2iCo |
TED-talks | In order to create the TED-talks dataset, 3,035 YouTube videos were downloaded using the "TED talks" query. From these initial candidates, videos in which the upper part of the person is visible for at least 64 frames, and the height of the person bounding box was at least 384 pixels were selected. Static videos were ... | Provide a detailed description of the following dataset: TED-talks |
ExampleStack | This is a dataset of code snippets in StackOverflow that have been used in GitHub repositories by extending and adapting them. The dataset links SO posts to GitHub counterparts based on clone detection, time stamp analysis, and explicit URL references.<br>The authors qualitatively inspected 400 SO examples and their G... | Provide a detailed description of the following dataset: ExampleStack |
MGif | MGif is a dataset of videos containing movements of different cartoon animals. Each video is a moving gif file. The dataset consists of 1000 videos. The dataset is particularly challenging because of the high appearance variation and motion diversity. | Provide a detailed description of the following dataset: MGif |
Android Common Libraries | This dataset was constructed from an analysis of about 1.5 million apps from Google Play to identify a set of common libraries, to facilitate Android app analysis. It contains 1,113 libraries supporting common functionalities and 240 libraries for advertisement. | Provide a detailed description of the following dataset: Android Common Libraries |
Workflow Trace Archive | The Workflow Trace Archive (WTA) is an open-access archive of workflow traces from diverse computing infrastructures. The WTA includes >48 million workflows captured from >10 computing infrastructures, representing a broad diversity of trace domains and characteristics. | Provide a detailed description of the following dataset: Workflow Trace Archive |
Dex-Net 2.0 | **Dex-Net 2.0** is a dataset associating 6.7 million point clouds and analytic grasp quality metrics with parallel-jaw grasps planned using robust quasi-static GWS analysis on a dataset of 1,500 3D object models. | Provide a detailed description of the following dataset: Dex-Net 2.0 |
Italian disinformation | This is a large-scale dataset of tweets associated to thousands of news articles published on Italian disinformation websites in the context of 2019 European elections. | Provide a detailed description of the following dataset: Italian disinformation |
MUSAN | **MUSAN** is a corpus of music, speech and noise. This dataset is suitable for training models for voice activity detection (VAD) and music/speech discrimination. The dataset consists of music from several genres, speech from twelve languages, and a wide assortment of technical and non-technical noises. | Provide a detailed description of the following dataset: MUSAN |
BLEBeacon | The BLEBeacon dataset is a collection of Bluetooth Low Energy (BLE) advertisement packets/traces generated from BLE beacons carried by people following their daily routine inside a university building for a whole month. A network of Raspberry Pi 3 (RPi)-based edge devices was deployed inside a multi-floor facility con... | Provide a detailed description of the following dataset: BLEBeacon |
AdobeIndoorNav | **AdobeIndoorNav** is a dataset collected in real-world to facilitate the research in DRL based visual navigation. The dataset includes 3D reconstruction for real-world scenes as well as densely captured real 2D images from the scenes. It provides high-quality visual inputs with real-world scene complexity to the robot... | Provide a detailed description of the following dataset: AdobeIndoorNav |
nicolingua-0003-west-african-radio-corpus | This dataset contains 17,090 audio clips of length 30 seconds sampled from archives collected from 6 Guinean radio stations. The broadcasts consist of news and various radio shows in languages including French, Guerze, Koniaka, Kissi, Kono, Maninka, Mano, Pular, Susu, and Toma. Some radio shows include phone calls, bac... | Provide a detailed description of the following dataset: nicolingua-0003-west-african-radio-corpus |
nicolingua-0004-west-african-va-asr-corpus | This dataset contains 10,083 utterances in French, Maninka, Pular and Susu from 49 speakers (16 female and 33 male) ranging from 5 to 76 years old, recorded on a variety of devices.<br>Please see our paper for more details on this dataset. Additional resources can be found in the following git repository: https://github... | Provide a detailed description of the following dataset: nicolingua-0004-west-african-va-asr-corpus |
GermanQuAD | **GermanQuAD** is a Question Answering (QA) dataset of 13,722 extractive question/answer pairs in German. | Provide a detailed description of the following dataset: GermanQuAD |
GermanDPR | **GermanDPR** is a dataset for passage retrieval in German. GermanDPR comprises 8,245 question/answer pairs in the training set, 1,030 pairs in the development set, and 1,025 pairs in the test set. For each pair, there are one positive context and three hard negative contexts. | Provide a detailed description of the following dataset: GermanDPR |
LoED | **LoED** (LoRaWAN at the Edge Dataset) is a dataset from nine LoRaWAN gateways collected in an urban environment. The dataset contains raw payload information, along with other metadata from the gateway. The dataset contains packet header information and all physical layer properties reported by gateways such as the CR... | Provide a detailed description of the following dataset: LoED |
BuGL | **BuGL** is a large-scale cross-language dataset for bug localization in code. BuGL comprises more than 10,000 bug reports drawn from open-source projects written in four programming languages, namely C, C++, Java, and Python. The dataset's information includes both bug reports and pull requests. BuGL a... | Provide a detailed description of the following dataset: BuGL
VOICES | The VOICES corpus is a dataset to promote speech and signal processing research on speech recorded by far-field microphones in noisy room conditions.
For this corpus, audio was recorded in furnished rooms with background noise played in conjunction with foreground speech selected from the LibriSpeech corpus. Multip... | Provide a detailed description of the following dataset: VOICES |
AndroZoo | **AndroZoo** is a growing collection of Android apps gathered from several sources, including the official Google Play app market, together with a growing collection of metadata about those apps, aimed at facilitating Android-related research.
It currently contains 15,097,876 different APKs, each of... | Provide a detailed description of the following dataset: AndroZoo |
BIRD (Big Impulse Response Dataset) | **BIRD** (Big Impulse Response Dataset) is an open dataset that consists of 100,000 multichannel room impulse responses (RIRs) generated from simulations using the Image Method, making it the largest multichannel open dataset currently available. These RIRs can be used to perform efficient online data augmentation for ... | Provide a detailed description of the following dataset: BIRD (Big Impulse Response Dataset) |
BCSD | The dataset consists of images of 158 filled-out bank checks with various complex backgrounds, containing handwritten text and signatures in the respective fields, along with both pixel-level and patch-level segmentation masks for the signatures on the checks. Please visit the dataset homepage for more details.
If yo... | Provide a detailed description of the following dataset: BCSD |
AbuseAnalyzer Dataset | **AbuseAnalyzer** is a dataset of 7,601 Gab posts classified on three different aspects: abuse presence, abuse severity, and abuse target.
The binary label distribution is as follows:
* Abusive Posts: 4,120
* Non-Abusive Posts: 3,481
The abuse severity label distribution is as follows:
* Biased Attitude: 1,830
* Act o... | Provide a detailed description of the following dataset: AbuseAnalyzer Dataset
MoGaze | **MoGaze** is a dataset of full-body motion for everyday manipulation tasks, which includes 1) long sequences of manipulation tasks, 2) the 3D model of the workspace geometry, and 3) eye-gaze. The motion data was captured using a traditional motion capture system based on reflective markers. The eye-gaze was captured u... | Provide a detailed description of the following dataset: MoGaze |
Software Heritage Graph Dataset | Software Heritage is the largest existing public archive of software source code and accompanying development history. It spans more than five billion unique source code files and one billion unique commits, coming from more than 80 million software projects. These software artifacts were retrieved from major collabor... | Provide a detailed description of the following dataset: Software Heritage Graph Dataset
MM-COVID | **MM-COVID** is a dataset for fake news detection related to COVID-19. This dataset provides multilingual fake news and the relevant social context. It contains 3,981 pieces of fake news content and 7,192 pieces of trustworthy information in six languages: English, Spanish, Portuguese, Hindi, French, and Italian. | Provide a detailed description of the following dataset: MM-COVID
Vent | The Vent dataset is a large annotated dataset of text, emotions, and social connections. It comprises more than 33 million posts by nearly a million users, together with their social connections. Each post has an associated emotion. There are 705 different emotions, organized in 63 "emotion categories", forming a... | Provide a detailed description of the following dataset: Vent
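Some of the entries above describe a fixed record structure; the GermanDPR row, for example, states that each question comes with one positive context and three hard negative contexts. A minimal, hypothetical sketch of what a single GermanDPR-style record might look like, assuming the common DPR JSON layout (the field names and the German example text are illustrative, not taken from the actual release):

```python
# Hypothetical GermanDPR-style training record, assuming the usual DPR JSON
# layout: a question, its answers, one positive context, and hard negatives.
record = {
    "question": "Wann wurde die Berliner Mauer gebaut?",
    "answers": ["1961"],
    "positive_ctxs": [
        {"title": "Berliner Mauer",
         "text": "Die Berliner Mauer wurde 1961 errichtet ..."},
    ],
    "hard_negative_ctxs": [
        {"title": "Berlin",
         "text": "Berlin ist die Hauptstadt Deutschlands ..."},
        {"title": "Mauerfall",
         "text": "Der Fall der Mauer im Jahr 1989 ..."},
        {"title": "Kalter Krieg",
         "text": "Der Kalte Krieg teilte Europa in zwei Bloecke ..."},
    ],
}

# Per the GermanDPR description: exactly one positive and three hard negatives.
assert len(record["positive_ctxs"]) == 1
assert len(record["hard_negative_ctxs"]) == 3
```

Records in this shape can be fed directly to a DPR-style trainer, which contrasts the positive context against the hard negatives for each question.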