---
license: mit
task_categories:
  - text-classification
tags:
  - code
  - vulnerability-detection
  - embeddings
  - codebert
  - positive-unlabeled-learning
language:
  - code
size_categories:
  - 100K<n<1M
---

# PrimeVul CodeBERT Embeddings

This dataset provides pre-extracted [CLS] token embeddings from microsoft/codebert-base for every function in the PrimeVul v0.1 vulnerability detection dataset, together with the raw PrimeVul v0.1 JSONL source files.

## Embeddings (.npz files)

Each .npz file contains frozen CodeBERT embeddings (768-dimensional vectors) for C/C++ functions, along with their labels and CWE type annotations. The embeddings were extracted once with a frozen CodeBERT model, so downstream PU (positive-unlabeled) learning experiments can run on them without GPU access.

| File | Functions | Vulnerable | Shape |
|------|-----------|-----------|-------|
| train.npz | 175,797 | 4,862 (2.77%) | (175797, 768) |
| valid.npz | 23,948 | 593 (2.48%) | (23948, 768) |
| test.npz | 24,788 | 549 (2.21%) | (24788, 768) |
| test_paired.npz | 870 | 435 (50%) | (870, 768) |

Arrays in each .npz:

- embeddings: (N, 768) float32 -- CodeBERT [CLS] token vectors
- labels: (N,) int32 -- 0 = benign, 1 = vulnerable
- cwe_types: (N,) U20 string -- CWE category (e.g., "CWE-119") or "unknown"
- idxs: (N,) int64 -- original PrimeVul record index for traceability

### How to load

```python
import numpy as np

data = np.load("train.npz")
X = data["embeddings"]  # (175797, 768)
y = data["labels"]       # (175797,)
cwes = data["cwe_types"] # (175797,)
```

No special flags needed. All arrays use standard numpy dtypes (float32, int32, U20, int64).
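The labels array makes it easy to set up the positive-unlabeled scenario mentioned above: keep a fraction of the known vulnerable functions as labeled positives and treat everything else as unlabeled. This is an illustrative sketch only; the helper name and the 50% labeling fraction are assumptions, not part of the dataset.

```python
import numpy as np

def make_pu_labels(y, labeled_frac=0.5, seed=0):
    """Build observed PU labels s from true labels y:
    s = 1 for a random subset of the positives (labeled),
    s = 0 for everything else (unlabeled mix of positives and negatives)."""
    rng = np.random.default_rng(seed)
    s = np.zeros_like(y)
    pos = np.flatnonzero(y == 1)
    labeled = rng.choice(pos, size=int(len(pos) * labeled_frac), replace=False)
    s[labeled] = 1
    return s

# Toy labels standing in for data["labels"]; with the real arrays you would
# pass y = np.load("train.npz")["labels"].
y = np.array([0, 1, 1, 0, 1, 1], dtype=np.int32)
s = make_pu_labels(y, labeled_frac=0.5)
```

A PU classifier is then trained on (embeddings, s) rather than (embeddings, y), with y held out for evaluation.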

## Raw PrimeVul v0.1 data (raw/ folder)

The raw/ folder contains the original PrimeVul v0.1 JSONL files from the PrimeVul project. Each line is a JSON object with fields including func (source code), target (0/1 label), cwe (list of CWE strings), cve (CVE identifier), and project metadata.

| File | Records |
|------|---------|
| raw/primevul_train.jsonl | 175,797 |
| raw/primevul_valid.jsonl | 23,948 |
| raw/primevul_test.jsonl | 24,788 |
| raw/primevul_train_paired.jsonl | 9,724 |
| raw/primevul_valid_paired.jsonl | 870 |
| raw/primevul_test_paired.jsonl | 870 |
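The JSONL files can be streamed one record per line with the standard library. A minimal sketch, where the sample record below is fabricated for illustration (real records contain full function bodies and CVE metadata):

```python
import io
import json

# Stand-in for open("raw/primevul_train.jsonl"); one JSON object per line.
sample = io.StringIO(
    '{"idx": 0, "func": "int add(int a, int b) { return a + b; }", '
    '"target": 0, "cwe": [], "cve": null, "project": "example"}\n'
)

records = [json.loads(line) for line in sample]
vulnerable = [r for r in records if r["target"] == 1]
```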

## Extraction details

- Model: microsoft/codebert-base (RoBERTa architecture, 125M parameters)
- Extraction: frozen model, [CLS] token from final layer
- Tokenization: max_length=512, truncation=True, padding=max_length
- Source data: PrimeVul v0.1 (chronological train/valid/test splits)
- Extracted on: Google Colab, A100 GPU, ~23 minutes for all splits
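The extraction loop can be sketched as follows. This is an assumed reconstruction from the details above, not the authors' exact script; the function names and batch size are illustrative. It tokenizes with max_length=512, truncation, and max-length padding, runs the frozen model, and keeps the first ([CLS]) position of the final hidden layer.

```python
import numpy as np

def batched(seq, size):
    """Yield successive fixed-size batches from a list."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def extract_cls_embeddings(functions, batch_size=32):
    """Return an (N, 768) float32 array of frozen CodeBERT [CLS] vectors."""
    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
    model = AutoModel.from_pretrained("microsoft/codebert-base").eval()

    chunks = []
    with torch.no_grad():  # frozen model: no gradients, no fine-tuning
        for batch in batched(functions, batch_size):
            enc = tok(batch, max_length=512, truncation=True,
                      padding="max_length", return_tensors="pt")
            out = model(**enc)
            # [CLS] is the first token of the final hidden layer.
            chunks.append(out.last_hidden_state[:, 0].cpu().numpy())
    return np.concatenate(chunks).astype(np.float32)
```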

## Citation

If you use this data, please cite the PrimeVul dataset:

```bibtex
@article{ding2024primevul,
  title={Vulnerability Detection with Code Language Models: How Far Are We?},
  author={Ding, Yangruibo and Fu, Yanjun and Ibrahim, Omniyyah and Sitawarin, Chawin and Chen, Xinyun and Alomair, Basel and Wagner, David and Ray, Baishakhi and Chen, Yizheng},
  journal={arXiv preprint arXiv:2403.18624},
  year={2024}
}
```

## License

MIT (same as PrimeVul)