---
language:
- en
license: cc-by-sa-4.0
tags:
- retrieval
- text-retrieval
- beir
- programming
- stack-exchange
- duplicate-question-detection
- community-question-answering
- benchmark
pretty_name: BEIR CQADupStack — Programmers (retrieval)
size_categories:
- 10K<n<100K
task_categories:
- text-retrieval
---
# CQADupStack / Programmers (BEIR) — programming Q&A retrieval

## Dataset description
CQADupStack is a benchmark for community question answering (cQA) built from Stack Exchange data. It was introduced by Hoogeveen, Verspoor, and Baldwin at ADCS 2015 to support research on duplicate questions: finding earlier posts that match or subsume a newly asked question, so users can reuse existing answers instead of opening redundant threads.
The full CQADupStack release aggregates twelve Stack Exchange subcommunities (subforums). Each subforum is distributed as its own slice with pre-defined splits and duplicate annotations (which posts are duplicates of which), enabling comparable retrieval and classification experiments. The Programmers slice corresponds to the historical Programmers.StackExchange community (software engineering and professional programming topics, distinct from “pure code” Q&A).
BEIR (Benchmarking IR) repackaged CQADupStack — including the Programmers sub-benchmark — as part of a heterogeneous zero-shot IR benchmark spanning many tasks and domains. In BEIR’s retrieval setting, each slice is a standard corpus + queries + qrels collection: systems must rank corpus documents so that human-annotated duplicates appear at the top.
This repository (orgrctera/beir_cqadupstack_programmers) exposes the BEIR CQADupStack / Programmers test split in Parquet form for retrieval evaluation pipelines. Each row is one query with relevance judgments (qrels) pointing at corpus document identifiers, aligned with the BEIR release.
## Scale (BEIR / Programmers retrieval setting)
For the Programmers task as used in embedding benchmarks (e.g. MTEB’s “CQADupstackProgrammersRetrieval”), the test split is on the order of:
- ~32k unique documents in the corpus (Stack Exchange question posts).
- 876 queries in the official test set.
- ~1.9 relevant documents per query on average (some queries have many duplicates annotated).
Exact counts depend on the upstream BEIR snapshot; see BEIR on GitHub for version-precise figures.
## Task: retrieval (CQADupStack Programmers)
The task is ad hoc passage (or document) retrieval for duplicate question finding:
- Input: a natural-language question (the query) posted on the Programmers forum.
- Output: a ranked list of document IDs from the Programmers corpus (or scores over the full collection), such that relevant IDs — posts marked as duplicates of the query in the official qrels — receive high rank.
Evaluation uses standard IR metrics (e.g. nDCG@k, Recall@k, MRR), as in BEIR’s evaluation utilities or frameworks such as Pyserini / MTEB.
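For reference, the two most commonly reported of these metrics can be computed directly from a ranking and a binary qrels mapping. This is a minimal illustrative sketch (not the official BEIR evaluation code, which handles ties and multi-graded relevance more carefully):

```python
import math

def ndcg_at_k(ranked_ids, qrels, k=10):
    """Binary-relevance nDCG@k for one query.

    ranked_ids: system ranking, best first.
    qrels: {doc_id: relevance}, as in BEIR qrels (1 = relevant).
    """
    dcg = sum(
        qrels.get(doc_id, 0) / math.log2(rank + 2)
        for rank, doc_id in enumerate(ranked_ids[:k])
    )
    # Ideal DCG: all relevant documents ranked first.
    ideal = sorted(qrels.values(), reverse=True)[:k]
    idcg = sum(rel / math.log2(rank + 2) for rank, rel in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

def recall_at_k(ranked_ids, qrels, k=100):
    """Fraction of relevant documents retrieved in the top k."""
    relevant = {doc_id for doc_id, rel in qrels.items() if rel > 0}
    if not relevant:
        return 0.0
    return len(relevant & set(ranked_ids[:k])) / len(relevant)
```

For example, with qrels `{"34356": 1}` and a ranking that places `"34356"` second, nDCG@10 is `1 / log2(3) ≈ 0.631`. For benchmark-comparable numbers, prefer established tooling such as `pytrec_eval` or BEIR's `EvaluateRetrieval`.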
Note: Full retrieval evaluation also requires the corpus (passage text keyed by ID). This dataset card describes the query + qrels side as prepared for CTERA-style evaluation rows; align corpus IDs with the same BEIR CQADupStack / Programmers corpus you use for indexing.
## Data format (this repository)
Each record includes:

| Field | Description |
|---|---|
| `id` | UUID for this example row. |
| `input` | The query text (question). |
| `expected_output` | JSON string: list of objects `{"id": "<corpus-doc-id>", "score": <relevance>}`. Scores follow the BEIR qrels convention (typically 1 for relevant in binary settings). |
| `metadata.query_id` | Original BEIR / CQADupStack query identifier (string). |
| `metadata.split` | Split name (here: `test`). |
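Because `expected_output` is stored as a JSON-encoded string rather than a nested structure, consumers typically decode it into a qrels mapping before evaluation. A small illustrative helper (the function name is ours, not part of any dataset tooling):

```python
import json

def parse_expected_output(expected_output: str) -> dict:
    """Decode the JSON-encoded qrels list into {corpus_doc_id: score}."""
    return {item["id"]: item["score"] for item in json.loads(expected_output)}
```

Applied to the first example below, `parse_expected_output('[{"id": "34356", "score": 1}]')` yields `{"34356": 1}`.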
### Example 1 — single annotated duplicate

```json
{
  "id": "4fc8cccf-f1d8-4685-adcc-6506b470d0c1",
  "input": "How do I write a specification?",
  "expected_output": "[{\"id\": \"34356\", \"score\": 1}]",
  "metadata.query_id": "132074",
  "metadata.split": "test"
}
```
### Example 2 — multiple duplicate targets

```json
{
  "id": "0e2e4cb0-3d07-4208-94bb-bc20761f4bb6",
  "input": "Is this a violation of the Liskov Substitution Principle?",
  "expected_output": "[{\"id\": \"224350\", \"score\": 1}, {\"id\": \"254398\", \"score\": 1}, {\"id\": \"229549\", \"score\": 1}, {\"id\": \"189222\", \"score\": 1}, {\"id\": \"177831\", \"score\": 1}, {\"id\": \"145941\", \"score\": 1}, {\"id\": \"82682\", \"score\": 1}, {\"id\": \"237843\", \"score\": 1}, {\"id\": \"132612\", \"score\": 1}, {\"id\": \"107723\", \"score\": 1}, {\"id\": \"231300\", \"score\": 1}]",
  "metadata.query_id": "170138",
  "metadata.split": "test"
}
```
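Evaluation frameworks in the BEIR/`pytrec_eval` tradition expect qrels as a nested mapping `{query_id: {doc_id: score}}`, keyed by the original query identifier rather than the row UUID. The rows above can be converted with a sketch like this (helper name is illustrative):

```python
import json

def rows_to_qrels(rows):
    """Convert card rows into BEIR-style qrels: {query_id: {doc_id: score}}."""
    qrels = {}
    for row in rows:
        qrels[row["metadata.query_id"]] = {
            item["id"]: item["score"]
            for item in json.loads(row["expected_output"])
        }
    return qrels
```

With the two example rows above, `rows_to_qrels(rows)["132074"]` is `{"34356": 1}` and the entry for `"170138"` holds eleven judged duplicates. Remember that scoring a run also requires the matching corpus text for indexing, as noted earlier.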
## References

### CQADupStack (original dataset)
Doris Hoogeveen, Karin M. Verspoor, Timothy Baldwin
CQADupStack: A Benchmark Data Set for Community Question-Answering Research
Proceedings of the 20th Australasian Document Computing Symposium (ADCS 2015), pages 3:1–3:8.
Abstract (summary): The paper presents CQADupStack, a resource derived from Stack Exchange with duplicate question annotations across multiple communities, together with standard splits and evaluation tooling for retrieval and classification experiments on duplicate detection in cQA.
- Paper: ACM DL 10.1145/2838931.2838934 — IR Anthology entry: hoogeveen-2015-cqadupstack.
- Original data landing page (University of Melbourne NLP): CQADupStack resources.
### BEIR benchmark (CQADupStack as a subset)
Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, Iryna Gurevych
BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models
NeurIPS 2021 (Datasets and Benchmarks Track).
Abstract (from arXiv): “Existing neural information retrieval (IR) models have often been studied in homogeneous and narrow settings, which has considerably limited insights into their out-of-distribution (OOD) generalization capabilities. To address this, and to facilitate researchers to broadly evaluate the effectiveness of their models, we introduce Benchmarking-IR (BEIR), a robust and heterogeneous evaluation benchmark for information retrieval. We leverage a careful selection of 18 publicly available datasets from diverse text retrieval tasks and domains and evaluate 10 state-of-the-art retrieval systems including lexical, sparse, dense, late-interaction and re-ranking architectures on the BEIR benchmark. Our results show BM25 is a robust baseline and re-ranking and late-interaction-based models on average achieve the best zero-shot performances, however, at high computational costs. In contrast, dense and sparse-retrieval models are computationally more efficient but often underperform other approaches, highlighting the considerable room for improvement in their generalization capabilities.”
- Paper: arXiv:2104.08663 — OpenReview; code and data: BEIR on GitHub.
### Related resources

- MTEB packages the same retrieval task as `mteb/cqadupstack-programmers` (corpus + queries + qrels) with descriptive statistics.
- BEIR-style mirrors on Hugging Face provide raw JSONL / TSV layouts (e.g. `BeIR/cqadupstack-*` datasets where applicable) — see the upstream BEIR project for the canonical file layout.
## Citation
If you use CQADupStack, cite the ADCS 2015 paper. If you use the BEIR benchmark packaging, cite the BEIR NeurIPS 2021 paper. BibTeX for CQADupStack is available from the IR Anthology (BibTeX download on that page).
## License
Stack Exchange content is shared under Creative Commons terms. This card marks `cc-by-sa-4.0` as the common license for Stack Exchange–derived text, but the applicable CC BY-SA version depends on when the posts were made; verify against your corpus snapshot and the upstream Stack Exchange terms if strict compliance is required.
*Dataset card maintained for the `orgrctera/beir_cqadupstack_programmers` Hub repository.*