orgrctera committed on
Commit cdd1d6b · verified · 1 Parent(s): ec042fe

Upload README.md with huggingface_hub

Files changed (1): README.md (+117 -18)
README.md CHANGED
@@ -1,28 +1,127 @@
  ---
- tags: ["benchmark", "beir", "cqadupstack_programmers", "retrieval"]
  task_categories:
- - question-answering
  - text-retrieval
- size_categories:
- - 1K<n<10K
  ---

- # beir_cqadupstack_programmers

- BEIR CQADupStack/programmers test split

- | Field | Value |
- |-------|-------|
- | Benchmark | beir |
- | Sub-benchmark | cqadupstack_programmers |
- | Type | retrieval |
- | Total items | 876 |
- | Splits | 1 |

- ## Splits

- | Split | Items |
- |-------|-------|
- | test | 876 |

- Exported from Langfuse.
 
  ---
+ language:
+ - en
+ license: cc-by-sa-4.0
+ tags:
+ - retrieval
+ - text-retrieval
+ - beir
+ - programming
+ - stack-exchange
+ - duplicate-question-detection
+ - community-question-answering
+ - benchmark
+ pretty_name: BEIR CQADupStack — Programmers (retrieval)
+ size_categories:
+ - 10K<n<100K
  task_categories:
  - text-retrieval
  ---

+ # CQADupStack / Programmers (BEIR) — programming Q&A retrieval
+
+ ## Dataset description
+
+ **CQADupStack** is a benchmark for **community question answering (cQA)** built from Stack Exchange data. It was introduced by Hoogeveen, Verspoor, and Baldwin at **ADCS 2015** to support research on **duplicate questions**: finding earlier posts that match or subsume a newly asked question, so users can reuse existing answers instead of opening redundant threads.
+
+ The full CQADupStack release aggregates **twelve** Stack Exchange subcommunities (subforums). Each subforum is distributed as its own slice with **pre-defined splits** and **duplicate annotations** (which posts are duplicates of which), enabling comparable retrieval and classification experiments. The **Programmers** slice corresponds to the historical **Programmers.StackExchange** community (software engineering and professional programming topics, distinct from “pure code” Q&A).
+
+ **BEIR** (*Benchmarking IR*) repackaged CQADupStack — including the **Programmers** sub-benchmark — as part of a **heterogeneous zero-shot IR benchmark** spanning many tasks and domains. In BEIR’s retrieval setting, each slice is a standard **corpus + queries + qrels** collection: systems must rank corpus documents so that human-annotated duplicates appear at the top.
+
+ This repository (`orgrctera/beir_cqadupstack_programmers`) exposes the **BEIR CQADupStack / Programmers** **test** split in **Parquet** form for retrieval evaluation pipelines. Each row is one **query** with **relevance judgments** (`qrels`) pointing at corpus document identifiers, aligned with the BEIR release.
+
+ ### Scale (BEIR / Programmers retrieval setting)
+
+ For the **Programmers** task as used in embedding benchmarks (e.g. MTEB’s “CQADupstackProgrammersRetrieval”), the **test** split is on the order of:
+
+ - **~32k** unique **documents** in the corpus (Stack Exchange question posts).
+ - **876** **queries** in the official test set.
+ - **~1.9** relevant documents per query on average (some queries have many duplicates annotated).
+
+ Exact counts depend on the upstream BEIR snapshot; see [BEIR on GitHub](https://github.com/beir-cellar/beir) for version-precise figures.
+
+ ## Task: retrieval (CQADupStack Programmers)
+
+ The task is **ad hoc passage (or document) retrieval** for **duplicate question finding**:
+
+ 1. **Input:** a natural-language **question** (the query) posted on the Programmers forum.
+ 2. **Output:** a ranked list of **document IDs** from the Programmers corpus (or scores over the full collection), such that **relevant** IDs — posts marked as duplicates of the query in the official qrels — receive high rank.
+
+ Evaluation uses standard IR metrics (e.g. **nDCG@k**, **Recall@k**, **MRR**), as in BEIR’s evaluation utilities or frameworks such as Pyserini / MTEB.
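As a toy illustration of the primary metric (a minimal sketch, not the official BEIR evaluator; the doc IDs, ranking, and qrels below are invented for illustration):

```python
import math

def ndcg_at_k(ranked_ids, qrels, k=10):
    """nDCG@k for a single query.

    ranked_ids: list of doc IDs, best first (the system's ranking).
    qrels: dict mapping relevant doc ID -> graded relevance (1 = relevant).
    """
    # DCG of the system ranking, truncated at rank k (rank is 0-based here).
    dcg = sum(
        qrels.get(doc_id, 0) / math.log2(rank + 2)
        for rank, doc_id in enumerate(ranked_ids[:k])
    )
    # Ideal DCG: all relevant docs ranked first, highest grades on top.
    ideal = sorted(qrels.values(), reverse=True)[:k]
    idcg = sum(rel / math.log2(rank + 2) for rank, rel in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

# Hypothetical query with two annotated duplicates; the system found one at
# rank 1 and the other at rank 3.
qrels = {"34356": 1, "98765": 1}
ranking = ["34356", "11111", "98765", "22222"]
print(round(ndcg_at_k(ranking, qrels, k=10), 4))  # 0.9197
```

Production evaluation should use BEIR's or MTEB's own evaluators, which handle tie-breaking and aggregation across all 876 queries consistently.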
+
+ > **Note:** Full retrieval evaluation also requires the **corpus** (passage text keyed by ID). This dataset card describes the **query + qrels** side as prepared for CTERA-style evaluation rows; align corpus IDs with the same **BEIR CQADupStack / Programmers** corpus you use for indexing.
+
+ ## Data format (this repository)
+
+ Each record includes:
+
+ | Field | Description |
+ |-------|-------------|
+ | `id` | UUID for this example row. |
+ | `input` | The **query text** (question). |
+ | `expected_output` | JSON string: list of objects `{"id": "<corpus-doc-id>", "score": <relevance>}`. Scores follow the BEIR qrels convention (typically `1` for relevant in binary settings). |
+ | `metadata.query_id` | Original BEIR / CQADupStack query identifier (string). |
+ | `metadata.split` | Split name (here: `test`). |
+
+ ### Example 1 — single annotated duplicate
+
+ ```json
+ {
+   "id": "4fc8cccf-f1d8-4685-adcc-6506b470d0c1",
+   "input": "How do I write a specification?",
+   "expected_output": "[{\"id\": \"34356\", \"score\": 1}]",
+   "metadata.query_id": "132074",
+   "metadata.split": "test"
+ }
+ ```
+
+ ### Example 2 — multiple duplicate targets
+
+ ```json
+ {
+   "id": "0e2e4cb0-3d07-4208-94bb-bc20761f4bb6",
+   "input": "Is this a violation of the Liskov Substitution Principle?",
+   "expected_output": "[{\"id\": \"224350\", \"score\": 1}, {\"id\": \"254398\", \"score\": 1}, {\"id\": \"229549\", \"score\": 1}, {\"id\": \"189222\", \"score\": 1}, {\"id\": \"177831\", \"score\": 1}, {\"id\": \"145941\", \"score\": 1}, {\"id\": \"82682\", \"score\": 1}, {\"id\": \"237843\", \"score\": 1}, {\"id\": \"132612\", \"score\": 1}, {\"id\": \"107723\", \"score\": 1}, {\"id\": \"231300\", \"score\": 1}]",
+   "metadata.query_id": "170138",
+   "metadata.split": "test"
+ }
+ ```
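Because `expected_output` is a JSON-encoded string, consumers need one decode step to recover BEIR-style qrels. A minimal standard-library sketch (field names taken from the table above; the row literal mirrors Example 1):

```python
import json

def row_to_qrels(row):
    """Convert one dataset row into a BEIR-style qrels entry:
    (query_id, {doc_id: relevance_score})."""
    judgments = json.loads(row["expected_output"])  # decode the JSON string
    return row["metadata.query_id"], {j["id"]: j["score"] for j in judgments}

# Row mirroring Example 1 above.
row = {
    "input": "How do I write a specification?",
    "expected_output": "[{\"id\": \"34356\", \"score\": 1}]",
    "metadata.query_id": "132074",
    "metadata.split": "test",
}
query_id, qrels = row_to_qrels(row)
print(query_id, qrels)  # 132074 {'34356': 1}
```

In a full pipeline one would build this mapping over every row of the Parquet split and pass the resulting `{query_id: {doc_id: score}}` dict, together with the system's rankings, to an IR evaluator.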
+
+ ## References
+
+ ### CQADupStack (original dataset)
+
+ **Doris Hoogeveen, Karin M. Verspoor, Timothy Baldwin**
+ *CQADupStack: A Benchmark Data Set for Community Question-Answering Research*
+ Proceedings of the 20th Australasian Document Computing Symposium (ADCS 2015), pages 3:1–3:8.
+
+ **Abstract (summary):** The paper presents CQADupStack, a resource derived from Stack Exchange with **duplicate question annotations** across multiple communities, together with standard splits and evaluation tooling for **retrieval** and **classification** experiments on duplicate detection in cQA.
+
+ - Paper: [ACM DL 10.1145/2838931.2838934](https://doi.org/10.1145/2838931.2838934) — IR Anthology entry: [hoogeveen-2015-cqadupstack](https://ir.webis.de/anthology/2015.adcs_conference-2015.3/).
+ - Original data landing page (University of Melbourne NLP): [CQADupStack resources](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/).
+
+ ### BEIR benchmark (CQADupStack as a subset)
+
+ **Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, Iryna Gurevych**
+ *BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models*
+ NeurIPS 2021 (Datasets and Benchmarks Track).
+
+ **Abstract (from arXiv):** *“Existing neural information retrieval (IR) models have often been studied in homogeneous and narrow settings, which has considerably limited insights into their out-of-distribution (OOD) generalization capabilities. To address this, and to facilitate researchers to broadly evaluate the effectiveness of their models, we introduce Benchmarking-IR (BEIR), a robust and heterogeneous evaluation benchmark for information retrieval. We leverage a careful selection of 18 publicly available datasets from diverse text retrieval tasks and domains and evaluate 10 state-of-the-art retrieval systems including lexical, sparse, dense, late-interaction and re-ranking architectures on the BEIR benchmark. Our results show BM25 is a robust baseline and re-ranking and late-interaction-based models on average achieve the best zero-shot performances, however, at high computational costs. In contrast, dense and sparse-retrieval models are computationally more efficient but often underperform other approaches, highlighting the considerable room for improvement in their generalization capabilities.”*
+
+ - Paper: [arXiv:2104.08663](https://arxiv.org/abs/2104.08663) — [OpenReview](https://openreview.net/forum?id=wCu6T5xFjeJ); code and data: [BEIR on GitHub](https://github.com/beir-cellar/beir).
+
+ ### Related resources
+
+ - **MTEB** packages the same retrieval task as [`mteb/cqadupstack-programmers`](https://huggingface.co/datasets/mteb/cqadupstack-programmers) (corpus + queries + qrels) with descriptive statistics.
+ - **BEIR-style mirrors** on Hugging Face for raw JSONL / TSV layouts (e.g. `BeIR/cqadupstack-*` datasets where applicable) — see the upstream BEIR project for the canonical file layout.
+
+ ## Citation
+
+ If you use **CQADupStack**, cite the ADCS 2015 paper. If you use the **BEIR** benchmark packaging, cite the BEIR NeurIPS 2021 paper. BibTeX for CQADupStack is available from the [IR Anthology](https://ir.webis.de/anthology/2015.adcs_conference-2015.3/) (BibTeX download on that page).
+
+ ## License
+
+ Stack Exchange content is typically shared under **Creative Commons** terms. This card marks **`cc-by-sa-4.0`** as a common license for Stack Exchange–derived text; verify against your corpus snapshot and upstream terms if compliance is strict.
+
+ ---
+
+ *Dataset card maintained for the `orgrctera/beir_cqadupstack_programmers` Hub repository.*