Update dataset card to reflect filtered release and noise-filter rules
Browse files

README.md CHANGED

@@ -210,73 +210,48 @@ license: apache-2.0

# cornstack-samples

-`cornstack-samples` is a compact, sampled version of CoRNStack for code-focused embedding training, reranker training, and retrieval experiments.

Source dataset and paper:
- CoRNStack collection: https://huggingface.co/collections/nomic-ai/cornstack
- CoRNStack paper: https://huggingface.co/papers/2412.01007

-## Important

-## Config

-Each language is published as
- `{lang}-v1-pair-2M`
- `{lang}-v1-hard-negatives-100k`

-Fields:
-- `query`
-- `document`

-Target rows per language: ~2,000,000

-Fields:
-- `query`
-- `positive`
-- `negative_1`
-- `negative_2`
-- `negative_3`
-- `negative_4`
-- `negative_5`
-- `negative_6`
-- `negative_7`

-##

-All subsets are published with split `train`.

-| Subset (config name) | split | num_examples | type |
-| --- | --- | ---: | --- |
-| `go-v1-pair-2M` | `train` | 2,000,000 | pair |
-| `go-v1-hard-negatives-100k` | `train` | 100,000 | hard-negatives |
-| `java-v1-pair-2M` | `train` | 2,000,000 | pair |
-| `java-v1-hard-negatives-100k` | `train` | 100,000 | hard-negatives |
-| `javascript-v1-pair-2M` | `train` | 2,000,000 | pair |
-| `javascript-v1-hard-negatives-100k` | `train` | 100,000 | hard-negatives |
-| `php-v1-pair-2M` | `train` | 2,000,000 | pair |
-| `php-v1-hard-negatives-100k` | `train` | 100,000 | hard-negatives |
-| `python-v1-pair-2M` | `train` | 2,000,000 | pair |
-| `python-v1-hard-negatives-100k` | `train` | 100,000 | hard-negatives |
-| `ruby-v1-pair-2M` | `train` | 2,000,000 | pair |
-| `ruby-v1-hard-negatives-100k` | `train` | 100,000 | hard-negatives |

-## Quick usage

```python
from datasets import load_dataset
@@ -288,26 +263,48 @@ print(pair_ds.column_names, len(pair_ds))
print(hard_ds.column_names, len(hard_ds))
```

-## Creation process (rough)

-For each language:
-1. Read CoRNStack shard files (`shard-*.jsonl.gz`).
-2. Count lines per shard and allocate per-shard targets.
-3. Build pair data via random line-index sampling with valid string `query`/`document`.
-4. Build hard-negative data from disjoint rows:
-   - valid string `query`/`document`
-   - `negatives` contains at least 7 unique non-empty strings
-   - sample 7 negatives per row
-   - cap output via reservoir sampling

-Default seed is fixed (`42`) for reproducibility under identical source snapshots and parameters.

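A minimal sketch of the reservoir-sampling cap mentioned in step 4 above, assuming rows arrive as an iterator of parsed JSON objects; the function names and the example cap size are illustrative, not the exact script used to build the release.

```python
import gzip
import json
import random

def iter_shard(path):
    """Yield parsed JSON rows from one CoRNStack shard file (shard-*.jsonl.gz)."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

def reservoir_sample(rows, k, seed=42):
    """Keep a uniform random sample of at most k rows from an iterable (Algorithm R)."""
    rng = random.Random(seed)
    reservoir = []
    for i, row in enumerate(rows):
        if i < k:
            reservoir.append(row)
        else:
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = row
    return reservoir

# Illustrative: cap the hard-negative rows drawn from one shard at 5,000.
# sampled = reservoir_sample(iter_shard("shard-00000.jsonl.gz"), k=5000)
```
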
## License

This dataset follows CoRNStack and is released under **Apache-2.0**.

-## Citation

If you use this dataset, please cite and attribute CoRNStack:
- Paper: https://huggingface.co/papers/2412.01007
- Collection: https://huggingface.co/collections/nomic-ai/cornstack

# cornstack-samples

+Filtered CoRNStack sample subsets for code retrieval training.

Source dataset and paper:
- CoRNStack collection: https://huggingface.co/collections/nomic-ai/cornstack
- CoRNStack paper: https://huggingface.co/papers/2412.01007

+## What This Release Contains

+This release keeps the original subset layout (six languages, each with a pair config and a hard-negatives config) and applies a deterministic, rule-based noise filter.

+Important:
+- Counts are post-filter counts, so they are slightly smaller than the nominal 2M / 100k targets.
+- Data is published in a normalized IR format.

+## Config Layout and Schema

+Each language is published as two configs with split `train`:
- `{lang}-v1-pair-2M`
- `{lang}-v1-hard-negatives-100k`

+Schema:
+- Pair configs: `query`, `pos`
+- Hard-negative configs: `query`, `pos`, `negs` (list[string])
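
For illustration, rows in the two configs would look roughly like the following; the field names match the schema above, but the query and code strings are invented placeholders.

```python
# Pair config row: a natural-language query and one positive code document.
pair_row = {
    "query": "Parse a duration string like '1h30m' into seconds.",
    "pos": "def parse_duration(s):\n    ...",
}

# Hard-negatives config row: the same fields plus a list of negative documents.
hard_negative_row = {
    "query": "Parse a duration string like '1h30m' into seconds.",
    "pos": "def parse_duration(s):\n    ...",
    "negs": [
        "def format_duration(seconds):\n    ...",
        "def parse_date(s):\n    ...",
    ],
}
```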

+## Subsets and Row Counts (Post-filter)

+| Subset (config name) | split | num_examples |
+| --- | --- | ---: |
+| `go-v1-pair-2M` | `train` | 1,992,985 |
+| `go-v1-hard-negatives-100k` | `train` | 99,663 |
+| `java-v1-pair-2M` | `train` | 1,752,593 |
+| `java-v1-hard-negatives-100k` | `train` | 87,504 |
+| `javascript-v1-pair-2M` | `train` | 1,960,276 |
+| `javascript-v1-hard-negatives-100k` | `train` | 98,025 |
+| `php-v1-pair-2M` | `train` | 1,710,537 |
+| `php-v1-hard-negatives-100k` | `train` | 85,460 |
+| `python-v1-pair-2M` | `train` | 1,990,051 |
+| `python-v1-hard-negatives-100k` | `train` | 99,535 |
+| `ruby-v1-pair-2M` | `train` | 1,583,047 |
+| `ruby-v1-hard-negatives-100k` | `train` | 79,040 |

+## Quick Usage

```python
from datasets import load_dataset
# ... (unchanged lines between the two hunks are not shown)
print(hard_ds.column_names, len(hard_ds))
```

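A filled-in version of the snippet above might look like the following; the repository id is a placeholder, while the config names and the `train` split come from the table above.

```python
from datasets import load_dataset

# Placeholder repo id; substitute the actual Hub path of this dataset.
repo_id = "<namespace>/cornstack-samples"

# Pair config: columns `query` and `pos`.
pair_ds = load_dataset(repo_id, "python-v1-pair-2M", split="train")
print(pair_ds.column_names, len(pair_ds))

# Hard-negatives config: columns `query`, `pos`, and `negs` (a list of strings).
hard_ds = load_dataset(repo_id, "python-v1-hard-negatives-100k", split="train")
print(hard_ds.column_names, len(hard_ds))
```
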
## License

This dataset follows CoRNStack and is released under **Apache-2.0**.

+## Citation and Attribution

If you use this dataset, please cite and attribute CoRNStack:
- Paper: https://huggingface.co/papers/2412.01007
- Collection: https://huggingface.co/collections/nomic-ai/cornstack

+## Noise Filtering Algorithm (Rule-based)

+The following deterministic rules are applied before publishing this release.

+1. Prefix-based noisy query removal
+   A row is dropped if `query` starts with any of the following prefixes:
+   - TODO
+   - GET /
+   - POST /
+   - PUT /
+   - DELETE /
+   - Display a listing of the resource.
+   - Store a newly created resource in storage.
+   - Show the form for editing the specified resource.
+   - Update the specified resource in storage.
+   - Show the form for creating a new resource.
+   - Remove the specified resource from storage.
+   - Display the specified resource.
+   - Transform the resource into an array.
+   - Autogenerated method stub
+   - Auto generated
+   - this down() migration is autogenerated
+   - this up() migration is autogenerated
+   - "/ renamed from:"
+   - "/ access modifiers changed from:"

+
2. Minimum positive-document length
|
| 303 |
+
A row is dropped if the positive side is shorter than 30 characters.
|
| 304 |
+
- Pair task: `document` length >= 30 required
|
| 305 |
+
- Hard-negatives task: `positive` length >= 30 required
|
| 306 |
+
|
| 307 |
+
3. Hard-negative validity constraint
|
| 308 |
+
For hard-negative configs, at least one valid negative must remain after normalization (`min_negs = 1`).
|
| 309 |
+
|
| 310 |
+
This filtering is purely rule-based (no model scoring), targeting high-noise templates and low-information positives while preserving broad retrieval coverage.
|