---
language:
  - en
license: cc-by-sa-4.0
tags:
  - retrieval
  - text-retrieval
  - beir
  - programming
  - stack-exchange
  - duplicate-question-detection
  - community-question-answering
  - benchmark
pretty_name: BEIR CQADupStack / Programmers (retrieval)
size_categories: "10K<n<100K"
task_categories:
  - text-retrieval
---

# CQADupStack / Programmers (BEIR) — programming Q&A retrieval

## Dataset description

**CQADupStack** is a benchmark for **community question answering (cQA)** built from Stack Exchange data. It was introduced by Hoogeveen, Verspoor, and Baldwin at **ADCS 2015** to support research on **duplicate questions**: finding earlier posts that match or subsume a newly asked question, so users can reuse existing answers instead of opening redundant threads.

The full CQADupStack release aggregates **twelve** Stack Exchange subcommunities (subforums). Each subforum is distributed as its own slice with **pre-defined splits** and **duplicate annotations** (which posts are duplicates of which), enabling comparable retrieval and classification experiments. The **Programmers** slice corresponds to the historical **Programmers.StackExchange** community (software engineering and professional programming topics, distinct from “pure code” Q&A).

**BEIR** (*Benchmarking IR*) repackaged CQADupStack — including the **Programmers** sub-benchmark — as part of a **heterogeneous zero-shot IR benchmark** spanning many tasks and domains. In BEIR’s retrieval setting, each slice is a standard **corpus + queries + qrels** collection: systems must rank corpus documents so that human-annotated duplicates appear at the top.

This repository (`orgrctera/beir_cqadupstack_programmers`) exposes the **BEIR CQADupStack / Programmers** **test** split in **Parquet** form for retrieval evaluation pipelines. Each row is one **query** with **relevance judgments** (`qrels`) pointing at corpus document identifiers, aligned with the BEIR release.

### Scale (BEIR / Programmers retrieval setting)

For the **Programmers** task as used in embedding benchmarks (e.g. MTEB’s “CQADupstackProgrammersRetrieval”), the **test** split is on the order of:

- **~32k** unique **documents** in the corpus (Stack Exchange question posts).
- **876** **queries** in the official test set.
- **~1.9** relevant documents per query on average (some queries have many duplicates annotated).

Exact counts depend on the upstream BEIR snapshot; see [BEIR on GitHub](https://github.com/beir-cellar/beir) for version-precise figures.

## Task: retrieval (CQADupStack Programmers)

The task is **ad hoc passage (or document) retrieval** for **duplicate question finding**:

1. **Input:** a natural-language **question** (the query) posted on the Programmers forum.
2. **Output:** a ranked list of **document IDs** from the Programmers corpus (or scores over the full collection), such that **relevant** IDs — posts marked as duplicates of the query in the official qrels — receive high rank.

Evaluation uses standard IR metrics (e.g. **nDCG@k**, **Recall@k**, **MRR**), as in BEIR’s evaluation utilities or frameworks such as Pyserini / MTEB.
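As a minimal illustration of the headline metric (not BEIR's official tooling, which handles tie-breaking and aggregation more carefully), nDCG@k over one ranked list and a binary qrels dict can be computed in a few lines of pure Python. The ranking and the second qrels entry below are made-up toy data:

```python
import math

def ndcg_at_k(ranked_ids, qrels, k=10):
    """nDCG@k given a ranked list of doc IDs and a {doc_id: relevance} dict."""
    gains = [qrels.get(doc_id, 0) for doc_id in ranked_ids[:k]]
    dcg = sum(g / math.log2(i + 2) for i, g in enumerate(gains))
    # Ideal DCG: the same gains sorted into the best possible order.
    ideal = sorted(qrels.values(), reverse=True)[:k]
    idcg = sum(g / math.log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

# Toy query: two annotated duplicates, one retrieved at rank 1, one missed.
qrels = {"34356": 1, "99999": 1}
run = ["34356", "12345", "67890"]
print(round(ndcg_at_k(run, qrels, k=10), 4))  # → 0.6131
```

The score is below 1.0 because the second relevant document never appears in the ranking, so the ideal DCG cannot be reached.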

> **Note:** Full retrieval evaluation also requires the **corpus** (passage text keyed by ID). This dataset card describes the **query + qrels** side as prepared for CTERA-style evaluation rows; align corpus IDs with the same **BEIR CQADupStack / Programmers** corpus you use for indexing.

## Data format (this repository)

Each record includes:

| Field | Description |
|--------|-------------|
| `id` | UUID for this example row. |
| `input` | The **query text** (question). |
| `expected_output` | JSON string: list of objects `{"id": "<corpus-doc-id>", "score": <relevance>}`. Scores follow the BEIR qrels convention (typically `1` for relevant in binary settings). |
| `metadata.query_id` | Original BEIR / CQADupStack query identifier (string). |
| `metadata.split` | Split name (here: `test`). |
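Because `expected_output` is stored as a JSON *string* rather than a structured column, consumers must decode it before use. A minimal sketch with the standard library, using the first example row from this card:

```python
import json

row = {
    "input": "How do I write a specification?",
    "expected_output": "[{\"id\": \"34356\", \"score\": 1}]",
    "metadata.query_id": "132074",
}

# Decode the JSON string into a {doc_id: relevance} mapping for this query.
judgments = {j["id"]: j["score"] for j in json.loads(row["expected_output"])}
print(judgments)  # → {'34356': 1}
```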

### Example 1 — single annotated duplicate

```json
{
  "id": "4fc8cccf-f1d8-4685-adcc-6506b470d0c1",
  "input": "How do I write a specification?",
  "expected_output": "[{\"id\": \"34356\", \"score\": 1}]",
  "metadata.query_id": "132074",
  "metadata.split": "test"
}
```

### Example 2 — multiple duplicate targets

```json
{
  "id": "0e2e4cb0-3d07-4208-94bb-bc20761f4bb6",
  "input": "Is this a violation of the Liskov Substitution Principle?",
  "expected_output": "[{\"id\": \"224350\", \"score\": 1}, {\"id\": \"254398\", \"score\": 1}, {\"id\": \"229549\", \"score\": 1}, {\"id\": \"189222\", \"score\": 1}, {\"id\": \"177831\", \"score\": 1}, {\"id\": \"145941\", \"score\": 1}, {\"id\": \"82682\", \"score\": 1}, {\"id\": \"237843\", \"score\": 1}, {\"id\": \"132612\", \"score\": 1}, {\"id\": \"107723\", \"score\": 1}, {\"id\": \"231300\", \"score\": 1}]",
  "metadata.query_id": "170138",
  "metadata.split": "test"
}
```
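Evaluation libraries that follow the BEIR/TREC convention (e.g. `pytrec_eval`) expect qrels as a nested `{query_id: {doc_id: score}}` dict, so the per-row JSON strings need to be aggregated across the split. A sketch over toy rows shaped like the examples above:

```python
import json

def rows_to_qrels(rows):
    """Aggregate dataset rows into a BEIR-style {query_id: {doc_id: score}} dict."""
    qrels = {}
    for row in rows:
        qid = row["metadata.query_id"]
        qrels[qid] = {j["id"]: j["score"] for j in json.loads(row["expected_output"])}
    return qrels

rows = [
    {"metadata.query_id": "132074",
     "expected_output": '[{"id": "34356", "score": 1}]'},
    {"metadata.query_id": "170138",
     "expected_output": '[{"id": "224350", "score": 1}, {"id": "254398", "score": 1}]'},
]
qrels = rows_to_qrels(rows)
print(qrels["170138"])  # → {'224350': 1, '254398': 1}
```

The resulting dict can be passed directly to a TREC-style evaluator alongside a run dict of the same shape containing retrieval scores.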

## References

### CQADupStack (original dataset)

**Doris Hoogeveen, Karin M. Verspoor, Timothy Baldwin**  
*CQADupStack: A Benchmark Data Set for Community Question-Answering Research*  
Proceedings of the 20th Australasian Document Computing Symposium (ADCS 2015), pages 3:1–3:8.

**Abstract (summary):** The paper presents CQADupStack, a resource derived from Stack Exchange with **duplicate question annotations** across multiple communities, together with standard splits and evaluation tooling for **retrieval** and **classification** experiments on duplicate detection in cQA.

- Paper: [ACM DL 10.1145/2838931.2838934](https://doi.org/10.1145/2838931.2838934) — IR Anthology entry: [hoogeveen-2015-cqadupstack](https://ir.webis.de/anthology/2015.adcs_conference-2015.3/).
- Original data landing page (University of Melbourne NLP): [CQADupStack resources](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/).

### BEIR benchmark (CQADupStack as a subset)

**Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, Iryna Gurevych**  
*BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models*  
NeurIPS 2021 (Datasets and Benchmarks Track).

**Abstract (from arXiv):** *“Existing neural information retrieval (IR) models have often been studied in homogeneous and narrow settings, which has considerably limited insights into their out-of-distribution (OOD) generalization capabilities. To address this, and to facilitate researchers to broadly evaluate the effectiveness of their models, we introduce Benchmarking-IR (BEIR), a robust and heterogeneous evaluation benchmark for information retrieval. We leverage a careful selection of 18 publicly available datasets from diverse text retrieval tasks and domains and evaluate 10 state-of-the-art retrieval systems including lexical, sparse, dense, late-interaction and re-ranking architectures on the BEIR benchmark. Our results show BM25 is a robust baseline and re-ranking and late-interaction-based models on average achieve the best zero-shot performances, however, at high computational costs. In contrast, dense and sparse-retrieval models are computationally more efficient but often underperform other approaches, highlighting the considerable room for improvement in their generalization capabilities.”*

- Paper: [arXiv:2104.08663](https://arxiv.org/abs/2104.08663) — [OpenReview](https://openreview.net/forum?id=wCu6T5xFjeJ); code and data: [BEIR on GitHub](https://github.com/beir-cellar/beir).

### Related resources

- **MTEB** packages the same retrieval task as [`mteb/cqadupstack-programmers`](https://huggingface.co/datasets/mteb/cqadupstack-programmers) (corpus + queries + qrels) with descriptive statistics.
- **BEIR-style mirrors** on Hugging Face for raw JSONL / TSV layouts (e.g. `BeIR/cqadupstack-*` datasets where applicable) — see the upstream BEIR project for the canonical file layout.

## Citation

If you use **CQADupStack**, cite the ADCS 2015 paper. If you use the **BEIR** benchmark packaging, cite the BEIR NeurIPS 2021 paper. BibTeX for CQADupStack is available from the [IR Anthology](https://ir.webis.de/anthology/2015.adcs_conference-2015.3/) (BibTeX download on that page).

## License

Stack Exchange content is shared under **Creative Commons** attribution–share-alike terms (the exact version depends on when the content was posted). This card marks **`cc-by-sa-4.0`** as the license commonly applied to Stack Exchange–derived text; if strict compliance is required, verify the applicable version against your corpus snapshot and the upstream Stack Exchange terms.

---

*Dataset card maintained for the `orgrctera/beir_cqadupstack_programmers` Hub repository.*