---
license: apache-2.0
task_categories:
  - text-retrieval
  - question-answering
language:
  - en
tags:
  - retrieval
  - rlvr
  - search
  - distractor-mining
size_categories:
  - 100K<n<1M
---

# RLVR-Env-Retrieval-Source-code-search-net-python

RLVR-ready retrieval environment derived from [Nan-Do/code-search-net-python](https://huggingface.co/datasets/Nan-Do/code-search-net-python).

**Author:** [Aman Priyanshu](https://huggingface.co/AmanPriyanshu)

## What Is This

A 100k-row retrieval QA dataset where each row contains a question, ground-truth chunks, and pre-mined distractor chunks (random + semantically similar). Designed for training and evaluating retrieval agents in an RLVR (Reinforcement Learning with Verifiable Rewards) setup — the agent searches through distractors to find the correct chunk(s).

**Domain:** Python open-source functions from GitHub (CodeSearchNet)
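The RLVR loop above reduces to a simple membership check: the agent picks a chunk from a shuffled pool, and the reward is verifiable against `gt_chunks`. A minimal sketch with a toy row (the `episode_reward` helper and sample texts are illustrative, not part of the dataset):

```python
import json
import random

def episode_reward(row, retrieved_chunk):
    """Verifiable reward: 1.0 iff the retrieved chunk is a ground-truth chunk."""
    return 1.0 if retrieved_chunk in set(json.loads(row["gt_chunks"])) else 0.0

# Toy row mirroring the qa.parquet schema (chunk columns are JSON-encoded strings).
row = {
    "gt_chunks": json.dumps(["def add(a, b):\n    return a + b"]),
    "random_chunks": json.dumps(["def sub(a, b):\n    return a - b"]),
    "similar_chunks": json.dumps(["def add3(a, b, c):\n    return a + b + c"]),
}

# The agent searches a shuffled pool of ground truth + distractors.
pool = (json.loads(row["gt_chunks"])
        + json.loads(row["random_chunks"])
        + json.loads(row["similar_chunks"]))
random.shuffle(pool)
```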

## Source

Derived from [Nan-Do/code-search-net-python](https://huggingface.co/datasets/Nan-Do/code-search-net-python) (455,243 unique functions).
Original license: **Apache 2.0** — retained here.

## Schema

### qa.parquet (100,000 rows)

| Column | Type | Description |
|---|---|---|
| `qa_id` | string | Unique ID (`search_py_0`, `search_py_1`, ...) |
| `question` | string | The retrieval query |
| `gt_chunks` | JSON string | List of ground-truth chunk texts. 1 target code chunk per question (the function matching the summary) |
| `random_chunks` | JSON string | List of random distractor texts. ~500 random code chunks (>=20 chars, deduplicated against gt and similar) |
| `similar_chunks` | JSON string | List of hard-negative distractor texts. ~178 similar chunks via MiniLM cosine (<0.97) + char trigram edit-distance (<0.97 seq ratio), deduplicated |
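The hard-negative cutoffs for `similar_chunks` can be sketched as a filter: keep a candidate only if both similarity signals stay below 0.97. This is a simplified stand-in (it uses `difflib.SequenceMatcher` on raw characters rather than the card's char-trigram variant, and assumes the cosine similarity is precomputed from MiniLM embeddings):

```python
from difflib import SequenceMatcher

def passes_near_duplicate_filter(candidate, target, cosine_sim, cutoff=0.97):
    """Keep a candidate as a hard negative only if it is similar but not a
    near-duplicate: embedding cosine and character sequence ratio both < cutoff."""
    char_ratio = SequenceMatcher(None, candidate, target).ratio()
    return cosine_sim < cutoff and char_ratio < cutoff
```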

### metadata.parquet (100,000 rows)

| Column | Type | Description |
|---|---|---|
| `qa_id` | string | Matches `qa.parquet` |

Remaining columns: `chunk_idx`, `func_name`, `repo`, `char_count`.

### chunks.parquet

455,243 code chunks with MiniLM embeddings. Kept for reference — not needed at inference time.

## Deduplication

Within each row: gt > similar > random priority. No chunk text appears in more than one column per row. Similar chunks are internally deduplicated. Random chunks are filtered against both gt and similar.
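The per-row invariant can be asserted directly on any row. A toy checker (not the original pipeline code; the sample row is fabricated for illustration):

```python
import json

def check_row_dedup(row):
    """Assert no chunk text appears in more than one of gt / similar / random,
    and that similar chunks are internally unique."""
    gt = json.loads(row["gt_chunks"])
    sim = json.loads(row["similar_chunks"])
    rnd = json.loads(row["random_chunks"])
    assert len(set(sim)) == len(sim), "similar_chunks not internally deduplicated"
    assert set(gt).isdisjoint(sim), "overlap between gt and similar"
    assert set(gt).isdisjoint(rnd), "overlap between gt and random"
    assert set(sim).isdisjoint(rnd), "overlap between similar and random"
    return True

# Toy row satisfying the invariant.
row = {
    "gt_chunks": json.dumps(["def f(): pass"]),
    "similar_chunks": json.dumps(["def g(): pass"]),
    "random_chunks": json.dumps(["x = 1"]),
}
```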

## How To Use

```python
import json
import pyarrow.parquet as pq

t = pq.read_table("qa.parquet")
# Materialize the first row; the chunk columns are JSON-encoded strings.
row = {col: t.column(col)[0].as_py() for col in t.column_names}
gt = json.loads(row["gt_chunks"])
distractors = json.loads(row["random_chunks"]) + json.loads(row["similar_chunks"])
```

## License

Apache 2.0 (inherited from source dataset).