---
language:
- en
license: cc-by-sa-4.0
tags:
- retrieval
- text-retrieval
- beir
- programming
- stack-exchange
- duplicate-question-detection
- community-question-answering
- benchmark
pretty_name: BEIR CQADupStack — Programmers (retrieval)
size_categories:
- 10K<n<100K
---

**Note:** Full retrieval evaluation also requires the **corpus** (passage text keyed by ID). This dataset card describes the **query + qrels** side as prepared for CTERA-style evaluation rows; align corpus IDs with the same **BEIR CQADupStack / Programmers** corpus you use for indexing.

## Data format (this repository)

Each record includes:

| Field | Description |
|-------|-------------|
| `id` | UUID for this example row. |
| `input` | The **query text** (question). |
| `expected_output` | JSON string: a list of objects `{"id": "<doc_id>", "score": <int>}`. Scores follow the BEIR qrels convention (typically `1` for relevant in binary settings). |
| `metadata.query_id` | Original BEIR / CQADupStack query identifier (string). |
| `metadata.split` | Split name (here: `test`). |
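As a minimal sketch of consuming this schema, the snippet below decodes `expected_output` (a JSON *string*, so it needs one extra parse step) and folds it into the nested `{query_id: {doc_id: score}}` shape that BEIR-style qrels evaluators such as pytrec_eval conventionally expect. The record literal reuses Example 1 from this card; the qrels shape is an assumption about your downstream evaluator, not something this repository ships.

```python
import json

# One row from this dataset (Example 1 on this card).
row = {
    "id": "4fc8cccf-f1d8-4685-adcc-6506b470d0c1",
    "input": "How do I write a specification?",
    "expected_output": "[{\"id\": \"34356\", \"score\": 1}]",
    "metadata.query_id": "132074",
    "metadata.split": "test",
}

# `expected_output` is a JSON string: decode it into a list of judgments.
judged = json.loads(row["expected_output"])

# Fold into the {query_id: {doc_id: score}} layout used by BEIR-style
# evaluators (e.g. pytrec_eval) for qrels.
qrels = {row["metadata.query_id"]: {j["id"]: j["score"] for j in judged}}

print(qrels)  # {'132074': {'34356': 1}}
```

From here, `qrels` can be passed alongside your retrieval run to whichever qrels-based scorer you use; only the decode step is specific to this repository's row format.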
### Example 1 — single annotated duplicate

```json
{
  "id": "4fc8cccf-f1d8-4685-adcc-6506b470d0c1",
  "input": "How do I write a specification?",
  "expected_output": "[{\"id\": \"34356\", \"score\": 1}]",
  "metadata.query_id": "132074",
  "metadata.split": "test"
}
```

### Example 2 — multiple duplicate targets

```json
{
  "id": "0e2e4cb0-3d07-4208-94bb-bc20761f4bb6",
  "input": "Is this a violation of the Liskov Substitution Principle?",
  "expected_output": "[{\"id\": \"224350\", \"score\": 1}, {\"id\": \"254398\", \"score\": 1}, {\"id\": \"229549\", \"score\": 1}, {\"id\": \"189222\", \"score\": 1}, {\"id\": \"177831\", \"score\": 1}, {\"id\": \"145941\", \"score\": 1}, {\"id\": \"82682\", \"score\": 1}, {\"id\": \"237843\", \"score\": 1}, {\"id\": \"132612\", \"score\": 1}, {\"id\": \"107723\", \"score\": 1}, {\"id\": \"231300\", \"score\": 1}]",
  "metadata.query_id": "170138",
  "metadata.split": "test"
}
```

## References

### CQADupStack (original dataset)

**Doris Hoogeveen, Karin M. Verspoor, Timothy Baldwin**
*CQADupStack: A Benchmark Data Set for Community Question-Answering Research*
Proceedings of the 20th Australasian Document Computing Symposium (ADCS 2015), pages 3:1–3:8.

**Abstract (summary):** The paper presents CQADupStack, a resource derived from Stack Exchange with **duplicate question annotations** across multiple communities, together with standard splits and evaluation tooling for **retrieval** and **classification** experiments on duplicate detection in cQA.

- Paper: [ACM DL 10.1145/2838931.2838934](https://doi.org/10.1145/2838931.2838934) — IR Anthology entry: [hoogeveen-2015-cqadupstack](https://ir.webis.de/anthology/2015.adcs_conference-2015.3/).
- Original data landing page (University of Melbourne NLP): [CQADupStack resources](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/).
### BEIR benchmark (CQADupStack as a subset)

**Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, Iryna Gurevych**
*BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models*
NeurIPS 2021 (Datasets and Benchmarks Track).

**Abstract (from arXiv):** *“Existing neural information retrieval (IR) models have often been studied in homogeneous and narrow settings, which has considerably limited insights into their out-of-distribution (OOD) generalization capabilities. To address this, and to facilitate researchers to broadly evaluate the effectiveness of their models, we introduce Benchmarking-IR (BEIR), a robust and heterogeneous evaluation benchmark for information retrieval. We leverage a careful selection of 18 publicly available datasets from diverse text retrieval tasks and domains and evaluate 10 state-of-the-art retrieval systems including lexical, sparse, dense, late-interaction and re-ranking architectures on the BEIR benchmark. Our results show BM25 is a robust baseline and re-ranking and late-interaction-based models on average achieve the best zero-shot performances, however, at high computational costs. In contrast, dense and sparse-retrieval models are computationally more efficient but often underperform other approaches, highlighting the considerable room for improvement in their generalization capabilities.”*

- Paper: [arXiv:2104.08663](https://arxiv.org/abs/2104.08663) — [OpenReview](https://openreview.net/forum?id=wCu6T5xFjeJ); code and data: [BEIR on GitHub](https://github.com/beir-cellar/beir).

### Related resources

- **MTEB** packages the same retrieval task as [`mteb/cqadupstack-programmers`](https://huggingface.co/datasets/mteb/cqadupstack-programmers) (corpus + queries + qrels) with descriptive statistics.
- **BEIR-style mirrors** on Hugging Face provide raw JSONL / TSV layouts (e.g. `BeIR/cqadupstack-*` datasets where applicable) — see the upstream BEIR project for the canonical file layout.
## Citation

If you use **CQADupStack**, cite the ADCS 2015 paper. If you use the **BEIR** benchmark packaging, cite the BEIR NeurIPS 2021 paper. BibTeX for CQADupStack is available from the [IR Anthology](https://ir.webis.de/anthology/2015.adcs_conference-2015.3/) (BibTeX download on that page).

## License

Stack Exchange content is typically shared under **Creative Commons** terms. This card marks **`cc-by-sa-4.0`** as a common license for Stack Exchange–derived text; verify against your corpus snapshot and upstream terms if strict compliance is required.

---

*Dataset card maintained for the `orgrctera/beir_cqadupstack_programmers` Hub repository.*