---
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: source
    dtype: string
  - name: file_name
    dtype: string
  - name: cwe
    sequence: string
  splits:
  - name: train
    num_bytes: 1015823
    num_examples: 113
  download_size: 405079
  dataset_size: 1015823
---
A dataset of 76 Python programs taken from real open source Python projects (the top 100 on GitHub), where each program is a single file containing exactly one vulnerability as detected by the Semgrep static analyzer. It was used in the paper [Patched MOA: optimizing inference for diverse software development tasks](https://huggingface.co/papers/2407.18521).
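
Each instance exposes the vulnerable `source` code, its `file_name`, and the list of `cwe` identifiers flagged by Semgrep. A minimal sketch of loading the data with the `datasets` library (the repo id below is assumed from this card):

```python
from datasets import load_dataset

# Load the train split (repo id assumed from this dataset card).
ds = load_dataset("patched-codes/static-analysis-eval", split="train")

example = ds[0]
print(example["file_name"])     # name of the vulnerable file
print(example["cwe"])           # CWE identifiers flagged by Semgrep
print(example["source"][:200])  # start of the vulnerable Python source
```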
|
OpenAI used [synth-vuln-fixes](https://huggingface.co/datasets/patched-codes/synth-vuln-fixes) to fine-tune a new version of gpt-4o, which is now the SOTA on this benchmark. More details and code are available in their [repo](https://github.com/openai/build-hours/tree/main/5-4o_fine_tuning).
|
| |  |
| |
|
More details on the benchmark are available in our [blog](https://www.patched.codes/blog/the-static-analysis-evaluation-benchmark-measuring-llm-performance-in-fixing-software-vulnerabilities).
|
# New Version of Static Analysis Eval (Aug 20, 2024)
|
We have created a new version of the benchmark with instances that are harder than those in the previous one. There has been a lot of progress in models over the last year, and as a result the previous version of the benchmark was saturated. The methodology is the same, and we have also released the dataset generation script, which scans the top 100 Python projects to generate the instances. You can see it [here](_script_for_gen.py). The same [eval script](_script_for_eval.py) works as before. You no longer need to log in to Semgrep, as we only use their OSS rules for this version of the benchmark.
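
As a rough illustration of the selection methodology (not the actual contents of `_script_for_gen.py`), building an instance boils down to scanning candidate files with Semgrep and keeping only those with exactly one finding; the exact Semgrep invocation below is an assumption:

```python
import json
import subprocess

def count_findings(path: str) -> int:
    """Scan a file with Semgrep and count the reported findings."""
    result = subprocess.run(
        ["semgrep", "scan", "--config", "auto", "--json", "--quiet", path],
        capture_output=True, text=True,
    )
    return len(json.loads(result.stdout).get("results", []))

def is_benchmark_candidate(path: str) -> bool:
    # Keep files that contain exactly one detected vulnerability.
    return count_findings(path) == 1
```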
|
The highest score a model can get on this benchmark is 100%; you can see the oracle run log [here](oracle-0-shot_semgrep_1.85.0_20240820_174931.log).
|
# New Evaluation
|
| Model | Score | Logs |
|:-----:|:-----:|:----:|
| o1-mini-2024-09-12 | 51.33 | [link](o1-mini-0-shot_semgrep_1.85.0_20240913_155514.log) |
| gpt-4o-mini | 52.21 | [link](gpt-4o-mini-0-shot_semgrep_1.85.0_20240820_201236.log) |
| gpt-4o-mini + 3-shot prompt | 53.10 | [link](gpt-4o-mini-3-shot_semgrep_1.85.0_20240820_213814.log) |
| gpt-4o-mini + rag (embedding & reranking) | 58.41 | [link](gpt-4o-mini-3-shot-sim_semgrep_1.85.0_20240821_023541.log) |
| gpt-4o-mini + fine-tuned with [synth-vuln-fixes](https://huggingface.co/datasets/patched-codes/synth-vuln-fixes) | 53.98 | [link](ft_gpt-4o-mini-2024-07-18_patched_patched_9yhVV00P-0-shot_semgrep_1.85.0_20240821_082958.log) |
|
| Model | Score | Logs |
|:-----:|:-----:|:----:|
| gpt-4o | 53.10 | [link](gpt-4o-0-shot_semgrep_1.85.0_20240820_210136.log) |
| gpt-4o + 3-shot prompt | 53.98 | [link](gpt-4o-3-shot_semgrep_1.85.0_20240820_215534.log) |
| gpt-4o + rag (embedding & reranking) | 56.64 | [link](gpt-4o-3-shot-sim_semgrep_1.85.0_20240821_025455.log) |
| gpt-4o + fine-tuned with [synth-vuln-fixes](https://huggingface.co/datasets/patched-codes/synth-vuln-fixes) | 61.06 | [link](ft_gpt-4o-2024-08-06_patched_patched_9yhZp9nn-0-shot_semgrep_1.85.0_20240821_084452.log) |
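
The "rag (embedding & reranking)" rows correspond to the `--use_similarity` option of the eval script: instead of fixed few-shot examples, the prompt is built from the dataset examples most similar to the file under test. A hedged sketch of that selection step (the embedding model and helper names are illustrative, not the script's actual internals):

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    # The embedding model here is an assumption; any embedding model works.
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

def top_k_examples(target: str, examples: list[str], k: int = 3) -> list[str]:
    """Return the k examples most similar to the target by cosine similarity."""
    vectors = embed([target] + examples)
    target_vec, example_vecs = vectors[0], vectors[1:]
    sims = example_vecs @ target_vec / (
        np.linalg.norm(example_vecs, axis=1) * np.linalg.norm(target_vec)
    )
    return [examples[i] for i in np.argsort(sims)[::-1][:k]]
```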
|
## Mixture of Agents (MOA)
|
We also benchmarked gpt-4o with [Patched MOA](https://arxiv.org/abs/2407.18521). This demonstrates that an inference optimization technique like MOA can improve performance without fine-tuning.
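
At a high level, MOA samples several candidate responses and then has an aggregator pass synthesize them into a final answer. A minimal sketch of that pattern (prompts and parameters are illustrative; see the paper for the actual Patched MOA pipeline):

```python
from openai import OpenAI

client = OpenAI()

def moa_answer(prompt: str, model: str = "gpt-4o", n_candidates: int = 3) -> str:
    # Proposer layer: sample several independent candidate answers.
    candidates = [
        client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,
        ).choices[0].message.content
        for _ in range(n_candidates)
    ]
    # Aggregator layer: synthesize the candidates into one final answer.
    aggregator_prompt = (
        f"Task:\n{prompt}\n\nCandidate responses:\n\n"
        + "\n\n---\n\n".join(candidates)
        + "\n\nSynthesize the best single response to the task."
    )
    return client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": aggregator_prompt}],
    ).choices[0].message.content
```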
|
| Model | Score | Logs |
|:-----:|:-----:|:----:|
| moa-gpt-4o | 53.98 | [link](moa-gpt-4o-2024-08-06-0-shot_semgrep_1.85.0_20240824_032808.log) |
| moa-gpt-4o + 3-shot prompt | 60.18 | [link](moa-gpt-4o-2024-08-06-3-shot_semgrep_1.85.0_20240824_035842.log) |
| moa-gpt-4o + rag (embedding & reranking) | 61.06 | [link](moa-gpt-4o-2024-08-06-3-shot-sim_semgrep_1.85.0_20240824_043304.log) |
|
# Static Analysis Eval Benchmark
|
You can run the `_script_for_eval.py` script to check the results.
|
```
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python _script_for_eval.py
```
|
For all supported options, run with `--help`:
|
```
usage: _script_for_eval.py [-h] [--model MODEL] [--cache] [--n_shot N_SHOT] [--use_similarity] [--oracle]

Run Static Analysis Evaluation

options:
  -h, --help        show this help message and exit
  --model MODEL     OpenAI model to use
  --cache           Enable caching of results
  --n_shot N_SHOT   Number of examples to use for few-shot learning
  --use_similarity  Use similarity for fetching dataset examples
  --oracle          Run in oracle mode (assume all vulnerabilities are fixed)
```
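
Conceptually, the evaluation asks the model for a fixed version of each file, re-scans the candidate with Semgrep, and scores an instance as passed when the finding disappears. A simplified sketch of that loop (the actual `_script_for_eval.py` also handles prompting, caching, and retries; the Semgrep invocation is an assumption):

```python
import json
import subprocess
import tempfile

def semgrep_clean(source: str) -> bool:
    """Return True if Semgrep reports no findings for the given source."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    result = subprocess.run(
        ["semgrep", "scan", "--config", "auto", "--json", "--quiet", path],
        capture_output=True, text=True,
    )
    return len(json.loads(result.stdout).get("results", [])) == 0

def score(dataset, generate_fix) -> float:
    # generate_fix(source) is the model call that returns a candidate fixed file.
    passed = sum(semgrep_clean(generate_fix(ex["source"])) for ex in dataset)
    return 100 * passed / len(dataset)
```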
|
We need to use the logged-in version of Semgrep to get access to more rules for vulnerability detection, so make sure you log in before running the eval script.
|
```
% semgrep login
API token already exists in /Users/user/.semgrep/settings.yml. To login with a different token logout use `semgrep logout`
```
|
After the run, the script will also create a log file that captures the stats for the run and the files that were fixed. You can see an example [here](gpt-4o-mini_semgrep_1.85.0_20240818_215254.log). Because recent versions of Semgrep no longer detect a few of the samples in the dataset as vulnerable, the maximum score possible on the benchmark is 77.63%. You can see the oracle run log [here](oracle-0-shot_semgrep_1.85.0_20240819_022711.log).
|
## Evaluation

We ran a set of detailed evaluations on 19/08/2024:
|
| Model | Score | Logs |
|:-----:|:-----:|:----:|
| gpt-4o-mini | 67.11 | [link](gpt-4o-mini_semgrep_1.85.0_20240818_215254.log) |
| gpt-4o-mini + 3-shot prompt | 71.05 | [link](gpt-4o-mini-3-shot_semgrep_1.85.0_20240818_234709.log) |
| gpt-4o-mini + rag (embedding & reranking) | 72.37 | [link](gpt-4o-mini-1-shot-sim_semgrep_1.85.0_20240819_013810.log) |
| gpt-4o-mini + fine-tuned with [synth-vuln-fixes](https://huggingface.co/datasets/patched-codes/synth-vuln-fixes) | 77.63 | [link](ft_gpt-4o-mini-2024-07-18_patched_patched_9uUpKXcm_semgrep_1.85.0_20240818_220158.log) |
|
| Model | Score | Logs |
|:-----:|:-----:|:----:|
| gpt-4o | 68.42 | [link](gpt-4o-0-shot_semgrep_1.85.0_20240819_015355.log) |
| gpt-4o + 3-shot prompt | 77.63 | [link](gpt-4o-3-shot_semgrep_1.85.0_20240819_020525.log) |
| gpt-4o + rag (embedding & reranking) | 77.63 | [link](gpt-4o-1-shot-sim_semgrep_1.85.0_20240819_023323.log) |
| gpt-4o + fine-tuned with [synth-vuln-fixes](https://huggingface.co/datasets/patched-codes/synth-vuln-fixes) | 77.63 | [link](ft_gpt-4o-2024-05-13_patched_patched-4o_9xp8XOM9-0-shot_semgrep_1.85.0_20240819_075205.log) |
|
# Leaderboard
|
The top models on the leaderboard were all fine-tuned with the same dataset that we released, called [synth-vuln-fixes](https://huggingface.co/datasets/patched-codes/synth-vuln-fixes). You can read about our experience with fine-tuning them on our [blog](https://www.patched.codes/blog/a-comparative-study-of-fine-tuning-gpt-4o-mini-gemini-flash-1-5-and-llama-3-1-8b). You can also explore the leaderboard with this [interactive visualization](https://claude.site/artifacts/5656c16d-9751-407c-9631-a3526c259354).

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f32eab52ad88c930bb3f3b/hs0O5F2NsY8gqTd0PZOJZ.png)
|
| Model | StaticAnalysisEval (%) | Time (mm:ss) | Price (USD) |
|:-------------------------:|:----------------------:|:-------------:|:-----------:|
| gpt-4o-mini-fine-tuned | 77.63 | 21:00 | 0.21 |
| gemini-1.5-flash-fine-tuned | 73.68 | 18:00 | |
| Llama-3.1-8B-Instruct-fine-tuned | 69.74 | 23:00 | |
| gpt-4o | 69.74 | 24:00 | 0.12 |
| gpt-4o-mini | 68.42 | 20:00 | 0.07 |
| gemini-1.5-flash-latest | 68.42 | 18:02 | 0.07 |
| Llama-3.1-405B-Instruct | 65.78 | 40:12 | |
| Llama-3-70B-instruct | 65.78 | 35:02 | |
| Llama-3-8B-instruct | 65.78 | 31:34 | |
| gemini-1.5-pro-latest | 64.47 | 34:40 | |
| gpt-4-1106-preview | 64.47 | 27:56 | 3.04 |
| gpt-4 | 63.16 | 26:31 | 6.84 |
| claude-3-5-sonnet-20240620| 59.21 | 23:59 | 0.70 |
| moa-gpt-3.5-turbo-0125 | 53.95 | 49:26 | |
| gpt-4-0125-preview | 53.94 | 34:40 | |
| patched-coder-7b | 51.31 | 45:20 | |
| patched-coder-34b | 46.05 | 33:58 | 0.87 |
| patched-mix-4x7b | 46.05 | 60:00+ | 0.80 |
| Mistral-Large | 40.80 | 60:00+ | |
| Gemini-pro | 39.47 | 16:09 | 0.23 |
| Mistral-Medium | 39.47 | 60:00+ | 0.80 |
| Mistral-Small | 30.26 | 30:09 | |
| gpt-3.5-turbo-0125 | 28.95 | 21:50 | |
| claude-3-opus-20240229 | 25.00 | 60:00+ | |
| Llama-3-8B-instruct.Q4_K_M| 21.05 | 60:00+ | |
| Gemma-7b-it | 19.73 | 36:40 | |
| gpt-3.5-turbo-1106 | 17.11 | 13:00 | 0.23 |
| Codellama-70b-Instruct | 10.53 | 30:32 | |
| CodeLlama-34b-Instruct | 7.89 | 23:16 | |
|
The price is calculated by assuming 1000 input and output tokens per call, as all examples in the dataset are < 512 tokens (OpenAI cl100k_base tokenizer).

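A quick sketch of that estimate (the per-token rates below are placeholders, not current OpenAI pricing):

```python
# Price estimate: 1000 input + 1000 output tokens per call, one call per example.
N_EXAMPLES = 76                 # instances in the original benchmark
INPUT_RATE = 0.15 / 1_000_000   # assumed USD per input token
OUTPUT_RATE = 0.60 / 1_000_000  # assumed USD per output token

price = N_EXAMPLES * (1000 * INPUT_RATE + 1000 * OUTPUT_RATE)
print(f"${price:.2f}")  # about $0.06 with these placeholder rates
```
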
Some models timed out during the run or had intermittent API errors; in such cases we retry each example up to 3 times, which is why some runs are reported as longer than 1 hour (60:00+ mins).

If you want to add your model to the leaderboard, you can send in a PR to this repo with the log file from the evaluation run.