Language Models are Universal Embedders
Paper: arXiv:2310.08232
This dataset was created by DeepSim (deep learning code functional similarity). I downloaded `googlejam4.tar.gz` from `parasol-aser/deepsim`, fixed the encoding of `6/googlejam6.p261.Round1B.java` and `1/googlejam1.p815.MushroomMonster.java`, and re-compressed the archive.
The `all` split (all 12 problems) appears to be consistent with their paper.
The `test` split (problems 5, 6, 7, 8, and 12) is used in the experiments of *Language Models are Universal Embedders*.
| problem | number of code files |
|---|---|
| 1 | 478 |
| 2 | 88 |
| 3 | 242 |
| 4 | 38 |
| 5 | 2 |
| 6 | 435 |
| 7 | 27 |
| 8 | 245 |
| 9 | 68 |
| 10 | 18 |
| 11 | 20 |
| 12 | 4 |
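The relationship between the table and the splits can be checked with a small sketch. This is not the dataset's loading script; it just hard-codes the per-problem counts from the table above and derives the size of the `test` split (problems 5, 6, 7, 8, 12) and of the `all` split.

```python
# Per-problem file counts, copied from the table above.
counts = {
    1: 478, 2: 88, 3: 242, 4: 38, 5: 2, 6: 435,
    7: 27, 8: 245, 9: 68, 10: 18, 11: 20, 12: 4,
}

# Problems that make up the `test` split, as stated in the card.
test_problems = {5, 6, 7, 8, 12}

test_total = sum(counts[p] for p in test_problems)
all_total = sum(counts.values())

print(test_total)  # 713 files in the `test` split
print(all_total)   # 1665 files across all 12 problems
```

Any pair of files within the same problem is a functionally similar (positive) pair; files from different problems form negative pairs, which is how DeepSim-style code-similarity benchmarks are typically evaluated.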