https://docs.gitlab.com/user/application_security/dast/browser/configuration
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/dast/browser/_index.md

---
type: reference, howto
stage: Application Security Testing
group: Dynamic Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Configuration
---

- [Requirements](../_index.md)
- [Enabling the analyzer](enabling_the_analyzer.md)
- [Customize analyzer settings](customize_settings.md)
- [Overriding analyzer jobs](overriding_analyzer_jobs.md)
- [Available CI/CD variables](variables.md)
- [Authentication configuration](authentication.md)
- [Offline configuration](offline_configuration.md)
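For orientation, the pieces listed above typically come together in a project's `.gitlab-ci.yml`. A minimal sketch, assuming the stock `DAST.gitlab-ci.yml` template and a hypothetical staging URL; see [Enabling the analyzer](enabling_the_analyzer.md) for the authoritative steps and [Available CI/CD variables](variables.md) for variable names:

```yaml
# Sketch only: enable browser-based DAST and point it at a target.
# The target URL below is hypothetical; set it to your deployed environment.
include:
  - template: DAST.gitlab-ci.yml

dast:
  variables:
    DAST_TARGET_URL: "https://staging.example.com"
```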
https://docs.gitlab.com/user/application_security/dast/browser/variables
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/dast/browser/variables.md

---
type: reference, howto
stage: Application Security Testing
group: Dynamic Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Available CI/CD variables
---
<!--
This documentation is auto generated by a script.
Please do not edit this file directly.
To edit the introductory text, modify `tooling/dast_variables/docs/templates/default.md.haml`.
To edit information about the variables, modify `lib/gitlab/security/dast_variables.rb`.
Run `bundle exec rake gitlab:dast_variables:compile_docs` or check the `compile_docs` task in `lib/tasks/gitlab/dast_variables.rake`.
-->

These CI/CD variables are specific to the browser-based DAST analyzer. Use them to customize the behavior of DAST to your requirements.

## Scanner behavior

These variables control how the scan is conducted and where its results are stored.

| CI/CD variable | Type | Example | Description |
| :------------- | :--- | ------- | :---------- |
| `DAST_CHECKS_TO_EXCLUDE` | string | `552.2,78.1` | Comma-separated list of check identifiers to exclude from the scan. For identifiers, see [vulnerability checks](../checks/_index.md). |
| `DAST_CHECKS_TO_RUN` | List of strings | `16.1,16.2,16.3` | Comma-separated list of check identifiers to use for the scan. For identifiers, see [vulnerability checks](../checks/_index.md). |
| `DAST_CRAWL_GRAPH` | boolean | `true` | Set to `true` to generate an SVG graph of navigation paths visited during the crawl phase of the scan. You must also define `gl-dast-crawl-graph.svg` as a CI job artifact to be able to access the generated graph. Defaults to `false`. |
| `DAST_FULL_SCAN` | boolean | `true` | Set to `true` to run both passive and active checks. Default: `false`. |
| `DAST_LOG_BROWSER_OUTPUT` | boolean | `true` | Set to `true` to log Chromium `STDOUT` and `STDERR`. |
| `DAST_LOG_CONFIG` | List of strings | `brows:debug,auth:debug` | A list of modules and their intended logging level for use in the console log. |
| `DAST_LOG_DEVTOOLS_CONFIG` | string | `Default:messageAndBody,truncate:2000` | Set to log protocol messages between DAST and the Chromium browser. |
| `DAST_LOG_FILE_CONFIG` | List of strings | `brows:debug,auth:debug` | A list of modules and their intended logging level for use in the file log. |
| `DAST_LOG_FILE_PATH` | string | `/output/browserker.log` | Set to the path of the file log. Default: `gl-dast-scan.log`. |
| `SECURE_ANALYZERS_PREFIX` | URL | `registry.organization.com` | Set the Docker registry base address from which to download the analyzer. |
| `SECURE_LOG_LEVEL` | string | `debug` | Set the default level for the file log. See [SECURE_LOG_LEVEL](../troubleshooting.md#secure_log_level). |

## Elements, actions, and timeouts

These variables tell the scanner where to look for certain elements, which actions to take, and how long to wait for operations to complete.

| CI/CD variable | Type | Example | Description |
| :------------- | :--- | ------- | :---------- |
| `DAST_ACTIVE_SCAN_TIMEOUT` | [Duration string](https://pkg.go.dev/time#ParseDuration) | `3h` | The maximum amount of time to wait for the active scan phase of the scan to complete. Defaults to `3h`. |
| `DAST_ACTIVE_SCAN_WORKER_COUNT` | number | `3` | The number of active checks to run in parallel. Defaults to `3`. |
| `DAST_CRAWL_EXTRACT_ELEMENT_TIMEOUT` | [Duration string](https://pkg.go.dev/time#ParseDuration) | `5s` | The maximum amount of time to allow the browser to extract newly found elements or navigations. Defaults to `5s`. |
| `DAST_CRAWL_MAX_ACTIONS` | number | `10000` | The maximum number of actions that the crawler performs. Example actions include selecting a link, or filling out a form. Defaults to `10000`. |
| `DAST_CRAWL_MAX_DEPTH` | number | `10` | The maximum number of chained actions that the crawler takes. For example, `Click, Form Fill, Click` is a depth of three. Defaults to `10`. |
| `DAST_CRAWL_SEARCH_ELEMENT_TIMEOUT` | [Duration string](https://pkg.go.dev/time#ParseDuration) | `3s` | The maximum amount of time to allow the browser to search for new elements or user actions. Defaults to `3s`. |
| `DAST_CRAWL_TIMEOUT` | [Duration string](https://pkg.go.dev/time#ParseDuration) | `5m` | The maximum amount of time to wait for the crawl phase of the scan to complete. Defaults to `24h`. |
| `DAST_CRAWL_WORKER_COUNT` | number | `3` | The maximum number of concurrent browser instances to use. For instance runners on GitLab.com, we recommend a maximum of three. Private runners with more resources may benefit from a higher number, but are likely to see little benefit after five to seven instances. The default value is dynamic, equal to the number of usable logical CPUs. |
| `DAST_PAGE_DOM_READY_TIMEOUT` | [Duration string](https://pkg.go.dev/time#ParseDuration) | `7s` | The maximum amount of time to wait for a browser to consider a page loaded and ready for analysis after a navigation completes. Defaults to `6s`. |
| `DAST_PAGE_DOM_STABLE_WAIT` | [Duration string](https://pkg.go.dev/time#ParseDuration) | `200ms` | Define how long to wait for updates to the DOM before checking a page is stable. Defaults to `500ms`. |
| `DAST_PAGE_ELEMENT_READY_TIMEOUT` | [Duration string](https://pkg.go.dev/time#ParseDuration) | `600ms` | The maximum amount of time to wait for an element before determining it is ready for analysis. Defaults to `300ms`. |
| `DAST_PAGE_IS_LOADING_ELEMENT` | [selector](authentication.md#finding-an-elements-selector) | `css:#page-is-loading` | Selector that, when no longer visible on the page, indicates to the analyzer that the page has finished loading and the scan can continue. Cannot be used with `DAST_PAGE_IS_READY_ELEMENT`. |
| `DAST_PAGE_IS_READY_ELEMENT` | [selector](authentication.md#finding-an-elements-selector) | `css:#page-is-ready` | Selector that, when detected as visible on the page, indicates to the analyzer that the page has finished loading and the scan can continue. Cannot be used with `DAST_PAGE_IS_LOADING_ELEMENT`. |
| `DAST_PAGE_MAX_RESPONSE_SIZE_MB` | number | `15` | The maximum size of an HTTP response body. Responses with bodies larger than this are blocked by the browser. Defaults to `10` MB. |
| `DAST_PAGE_READY_AFTER_ACTION_TIMEOUT` | [Duration string](https://pkg.go.dev/time#ParseDuration) | `7s` | The maximum amount of time to wait for a browser to consider a page loaded and ready for analysis. Defaults to `7s`. |
| `DAST_PAGE_READY_AFTER_NAVIGATION_TIMEOUT` | [Duration string](https://pkg.go.dev/time#ParseDuration) | `15s` | The maximum amount of time to wait for a browser to navigate from one page to another. Defaults to `15s`. |
| `DAST_PASSIVE_SCAN_WORKER_COUNT` | number | `5` | The number of workers that run passive checks in parallel. Defaults to the number of available CPUs. |
| `DAST_PKCS12_CERTIFICATE_BASE64` | string | `ZGZkZ2p5NGd...` | The PKCS12 certificate used for sites that require Mutual TLS. Must be encoded as base64 text. |
| `DAST_PKCS12_PASSWORD` | string | `password` | The password of the certificate used in `DAST_PKCS12_CERTIFICATE_BASE64`. Create sensitive [custom CI/CD variables](../../../../../ci/variables/_index.md#define-a-cicd-variable-in-the-ui) using the GitLab UI. |
| `DAST_REQUEST_ADVERTISE_SCAN` | boolean | `true` | Set to `true` to add a `Via: GitLab DAST <version>` header to every request sent, advertising that the request was sent as part of a GitLab DAST scan. Default: `false`. |
| `DAST_REQUEST_COOKIES` | dictionary | `abtesting_group:3,region:locked` | A cookie name and value to be added to every request. |
| `DAST_REQUEST_HEADERS` | string | `Cache-control:no-cache` | Set to a comma-separated list of request header names and values. The following headers are not supported: `content-length`, `cookie2`, `keep-alive`, `hosts`, `trailer`, `transfer-encoding`, and all headers with a `proxy-` prefix. |
| `DAST_SCOPE_ALLOW_HOSTS` | List of strings | `site.com,another.com` | Hostnames included in this variable are considered in scope when crawled. By default the `DAST_TARGET_URL` hostname is included in the allowed hosts list. Headers set using `DAST_REQUEST_HEADERS` are added to every request made to these hostnames. |
| `DAST_SCOPE_EXCLUDE_ELEMENTS` | [selector](authentication.md#finding-an-elements-selector) | `a[href='2.html'],css:.no-follow` | Comma-separated list of selectors that are ignored when scanning. |
| `DAST_SCOPE_EXCLUDE_HOSTS` | List of strings | `site.com,another.com` | Hostnames included in this variable are considered excluded and connections are forcibly dropped. |
| `DAST_SCOPE_IGNORE_HOSTS` | List of strings | `site.com,another.com` | Hostnames included in this variable are accessed, not attacked, and not reported against. |
| `DAST_TARGET_CHECK_SKIP` | boolean | `true` | Set to `true` to prevent DAST from checking that the target is available before scanning. Default: `false`. |
| `DAST_TARGET_CHECK_TIMEOUT` | number | `60` | Time limit in seconds to wait for target availability. Default: `60` seconds. |
| `DAST_TARGET_PATHS_FILE` | string | `/builds/project/urls.txt` | Scan only these paths instead of crawling the whole site. Set to a file path containing a list of URL paths relative to `DAST_TARGET_URL`. The file must be plain text with one path per line. When this is set, `DAST_CRAWL_MAX_DEPTH` defaults to 1. To prevent this, set `DAST_OVERRIDE_MAX_DEPTH: false`. |
| `DAST_TARGET_PATHS` | string | `/page1.html,/category1/page3.html` | Scan only these paths instead of crawling the whole site. Set to a comma-separated list of URL paths relative to `DAST_TARGET_URL`. When this is set, `DAST_CRAWL_MAX_DEPTH` defaults to 1. To prevent this, set `DAST_OVERRIDE_MAX_DEPTH: false`. |
| `DAST_TARGET_URL` | URL | `https://site.com` | The URL of the website to scan. |
| `DAST_USE_CACHE` | boolean | `true` | Set to `false` to disable caching. Default: `true`. **Note**: Disabling cache can cause OOM events or DAST job timeouts. |

### Authentication

These variables tell the scanner how to authenticate with your application.

| CI/CD variable | Type | Example | Description |
| :------------- | :--- | ------- | :---------- |
| `DAST_AUTH_AFTER_LOGIN_ACTIONS` | string | `select(option=id:accept-yes),click(on=css:.continue)` | A comma-separated list of actions to take after login but before login verification. Supports `click` and `select` actions. See [Taking additional actions after submitting the login form](authentication.md#taking-additional-actions-after-submitting-the-login-form). |
| `DAST_AUTH_BEFORE_LOGIN_ACTIONS` | [selector](authentication.md#finding-an-elements-selector) | `css:.user,id:show-login-form` | A comma-separated list of selectors representing elements to click on prior to entering the `DAST_AUTH_USERNAME` and `DAST_AUTH_PASSWORD` into the login form. |
| `DAST_AUTH_CLEAR_INPUT_FIELDS` | boolean | `true` | Disables clearing of username and password fields before attempting manual login. Set to `false` by default. |
| `DAST_AUTH_COOKIE_NAMES` | string | `sessionID,groupName` | Set to a comma-separated list of cookie names to specify which cookies are used for authentication. |
| `DAST_AUTH_FIRST_SUBMIT_FIELD` | [selector](authentication.md#finding-an-elements-selector) | `css:input[type=submit]` | A selector describing the element that is clicked on to submit the username form of a multi-page login process. |
| `DAST_AUTH_NEGOTIATE_DELEGATION` | string | `*.example.com,example.com,*.EXAMPLE.COM,EXAMPLE.COM` | Which servers should be allowed for integrated authentication and delegation. This property sets two Chromium policies: [AuthServerAllowlist](https://chromeenterprise.google/policies/#AuthServerAllowlist) and [AuthNegotiateDelegateAllowlist](https://chromeenterprise.google/policies/#AuthNegotiateDelegateAllowlist). [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/502476) in GitLab 17.6. |
| `DAST_AUTH_OTP_FIELD` | [selector](authentication.md#finding-an-elements-selector) | `name:otp` | A selector describing the element used to enter the one-time password on the login form. |
| `DAST_AUTH_OTP_KEY` | string | `I5UXITDBMIQEIQKTKQFA====` | The Base32 encoded secret key to use when generating a one-time password to authenticate to the website. |
| `DAST_AUTH_OTP_SUBMIT_FIELD` | [selector](authentication.md#finding-an-elements-selector) | `css:input[type=submit]` | A selector describing the element that is clicked on to submit the OTP form when it is separate from the username form. |
| `DAST_AUTH_PASSWORD` | string | `P@55w0rd!` | The password used to authenticate to the website. |
| `DAST_AUTH_PASSWORD_FIELD` | [selector](authentication.md#finding-an-elements-selector) | `name:password` | A selector describing the element used to enter the password on the login form. |
| `DAST_AUTH_SUBMIT_FIELD` | [selector](authentication.md#finding-an-elements-selector) | `css:input[type=submit]` | A selector describing the element clicked on to submit the login form for a single-page login form, or the password form for a multi-page login form. |
| `DAST_AUTH_SUCCESS_IF_AT_URL` | URL | `https://www.site.com/welcome*` | A URL that is compared to the URL in the browser to determine if authentication has succeeded after the login form is submitted. Wildcard `*` can be used to match a dynamic URL. |
| `DAST_AUTH_SUCCESS_IF_ELEMENT_FOUND` | [selector](authentication.md#finding-an-elements-selector) | `css:.user-avatar` | A selector describing an element whose presence is used to determine if authentication has succeeded after the login form is submitted. |
| `DAST_AUTH_SUCCESS_IF_NO_LOGIN_FORM` | boolean | `true` | Verifies successful authentication by checking for the absence of a login form after the login form has been submitted. This success check is enabled by default. |
| `DAST_AUTH_TYPE` | string | `basic-digest` | The authentication type to use. |
| `DAST_AUTH_URL` | URL | `https://www.site.com/login` | The URL of the page containing the login form on the target website. `DAST_AUTH_USERNAME` and `DAST_AUTH_PASSWORD` are submitted with the login form to create an authenticated scan. |
| `DAST_AUTH_USERNAME` | string | `user@email.com` | The username used to authenticate to the website. |
| `DAST_AUTH_USERNAME_FIELD` | [selector](authentication.md#finding-an-elements-selector) | `name:username` | A selector describing the element used to enter the username on the login form. |
| `DAST_SCOPE_EXCLUDE_URLS` | URLs | `https://site.com/.*/sign-out` | The URLs to skip during the authenticated scan; comma-separated. Regular expression syntax can be used to match multiple URLs. For example, `.*` matches an arbitrary character sequence. |
| `DAST_AUTH_REPORT` | boolean | `true` | Set to `true` to generate a report detailing steps taken during the authentication process. You must also define `gl-dast-debug-auth-report.html` as a CI job artifact to be able to access the generated report. The report's content aids when debugging authentication failures. Defaults to `false`. |
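Several of the variables above are typically combined in a single job definition. A sketch of an authenticated scan, using hypothetical URLs and selectors; credentials should come from masked CI/CD variables (here `TEST_USERNAME` and `TEST_PASSWORD`, defined in the GitLab UI), never the file itself:

```yaml
# Sketch only: an authenticated browser-based DAST scan.
# All URLs and selectors below are hypothetical examples.
dast:
  variables:
    DAST_TARGET_URL: "https://staging.example.com"
    DAST_FULL_SCAN: "true"                              # run passive and active checks
    DAST_AUTH_URL: "https://staging.example.com/login"  # page containing the login form
    DAST_AUTH_USERNAME: "$TEST_USERNAME"                # masked CI/CD variable
    DAST_AUTH_PASSWORD: "$TEST_PASSWORD"                # masked CI/CD variable
    DAST_AUTH_USERNAME_FIELD: "name:username"
    DAST_AUTH_PASSWORD_FIELD: "name:password"
    DAST_AUTH_SUBMIT_FIELD: "css:input[type=submit]"
    DAST_AUTH_SUCCESS_IF_ELEMENT_FOUND: "css:.user-avatar"
    DAST_AUTH_REPORT: "true"
  artifacts:
    paths:
      - gl-dast-debug-auth-report.html                  # required to access the auth report
    when: always
```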
---
url: https://docs.gitlab.com/user/application_security/triage
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/_index.md
date_extracted: 2025-08-13
stage: Application Security Testing
group: Static Analysis
title: Triage
description: Vulnerability separation by status.
---
Triage is the second phase of the vulnerability management lifecycle: detect, triage, analyze, remediate.

Triage is an ongoing process of evaluating each vulnerability to decide which need attention now and which are not as critical. High-risk vulnerabilities are separated from medium or low-risk threats. It may not be possible or feasible to analyze and remediate every vulnerability. As part of a risk management framework, triage helps ensure resources are applied where they're most effective. It's best to triage vulnerabilities often, so that the number of vulnerabilities per triage cycle is small and manageable.

The objective of the triage phase is to either confirm or dismiss each vulnerability. A confirmed vulnerability continues to the analysis phase but a dismissed vulnerability does not.

Use the data contained in the [security dashboard](../security_dashboard/_index.md), the [security inventory](../security_inventory/_index.md), and the [vulnerability report](../vulnerability_report/_index.md) to help triage vulnerabilities efficiently and effectively.

## Scope

The scope of the triage phase is all those vulnerabilities that have not been triaged. To list these vulnerabilities, use the following filter criteria in the vulnerability report:

- **Status**: Needs triage

## Risk analysis

You should conduct vulnerability triage according to a risk assessment framework. Depending on your industry or geographical location, compliance with a framework might be required by law. If not, you should use a respected risk assessment framework, for example:

- SANS Institute [Vulnerability Management Framework](https://www.sans.org/blog/the-vulnerability-assessment-framework/)
- OWASP [Threat and Safeguard Matrix (TaSM)](https://owasp.org/www-project-threat-and-safeguard-matrix/)

Generally, the amount of time and effort spent on a vulnerability should be proportional to its risk. For example, your triage strategy might be that only vulnerabilities of critical and high risk continue to the analysis phase and the remainder are dismissed. You should make this decision according to your risk threshold for vulnerabilities.

After you triage a vulnerability you should change its status to either:

- **Confirmed**: You have triaged this vulnerability and decided it requires analysis.
- **Dismissed**: You have triaged this vulnerability and decided against analysis. When you dismiss a vulnerability you must provide a brief comment that states why it has been dismissed. Dismissed vulnerabilities are ignored if detected in subsequent scans.

Vulnerability records are permanent but you can change a vulnerability's status at any time.

## Triage strategies

Use a risk assessment framework to help guide your vulnerability triage process. The following strategies may also help.

### Prioritize vulnerabilities of significant risk

Prioritize vulnerabilities according to their risk.

- Use the [Vulnerability Prioritizer CI/CD component](../vulnerabilities/risk_assessment_data.md#vulnerability-prioritizer) to help prioritize vulnerabilities. For example, vulnerabilities in the CISA Known Exploited Vulnerabilities (KEV) catalogue should be analyzed and remediated as highest priority because these are known to have been exploited.
- For each group, go to the **Security inventory** to visualize the assets you need to secure and to understand the actions that need to be taken to improve your security posture.
- For each group, go to the **Security dashboard** and view the **Project security status** panel. This groups projects by their highest-severity vulnerability. Use this grouping to prioritize triaging vulnerabilities in each project.
- Prioritize vulnerability triage on your highest-priority projects - for example, applications deployed to customers.
- For each project, view the vulnerability report. Group the vulnerabilities by severity and change the status of all vulnerabilities of critical and high severity to "Confirmed".

### Dismiss vulnerabilities of low risk

To ensure you focus on the right vulnerabilities it can help to triage in bulk those that are of low risk.

- Vulnerabilities are sometimes detected but no longer detected in subsequent CI/CD pipelines. In this instance the vulnerability's activity is labeled as **No longer detected**. You might choose to dismiss these vulnerabilities if their severity is **low** or **info**. In the vulnerability report, use the filter criteria **Activity: No longer detected** and then bulk dismiss them. You can also automate this by using a [vulnerability management policy](../policies/vulnerability_management_policy.md).
- Dismiss vulnerabilities by identifier. If a vulnerability is mitigated by controls outside the application layer, you might choose to dismiss them. In the vulnerability report, use the **Identifier** filter to select all vulnerabilities matching the specific identifier and then bulk dismiss them.
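The automated handling of **No longer detected** low-risk findings mentioned above can be sketched as a vulnerability management policy. The policy name and scanner list here are illustrative, and the policy auto-resolves matching findings rather than requiring a manual bulk dismissal:

```yaml
# Illustrative sketch of a vulnerability management policy in policy.yml.
vulnerability_management_policy:
- name: Auto-resolve low-risk findings no longer detected
  description: Resolve low/info findings that stop appearing in scan results.
  enabled: true
  rules:
  - type: no_longer_detected
    scanners:
    - sast
    - dependency_scanning
    severity_levels:
    - low
    - info
  actions:
  - type: auto_resolve
```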
---
url: https://docs.gitlab.com/user/application_security/vulnerability_management_policy_schema
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/vulnerability_management_policy_schema.md
date_extracted: 2025-08-13
stage: Security Risk Management
group: Security Insights
title: Vulnerability management policy schema
---
The YAML file with vulnerability management policies consists of an array of objects matching the vulnerability management policy schema nested under the `vulnerability_management_policy` key.

When you save a vulnerability management policy, its content is validated against the vulnerability management policy schema. If you're not familiar with how to read [JSON schemas](https://json-schema.org/), the following sections and tables provide an alternative.

| Field | Type | Required | Description |
|-----------------------------------|--------------------------------------------|----------|-------------|
| `vulnerability_management_policy` | `array` of vulnerability management policy | true | List of vulnerability management policies (maximum 5) |

## Vulnerability management policy

| Field | Type | Required | Description |
|----------------|----------------------------------------------|----------|-------------|
| `name` | `string` | true | Name of the policy. Maximum of 255 characters. |
| `description` | `string` | false | Description of the policy. |
| `enabled` | `boolean` | true | Flag to enable (`true`) or disable (`false`) the policy. |
| `rules` | `array` of rules | true | List of rules that define the policy's criteria. |
| `policy_scope` | `object` of [`policy_scope`](_index.md#configure-the-policy-scope) | false | Scope of the policy, based on the projects, groups, or compliance framework labels you specify. |
| `actions` | `array` of actions | true | Action to be taken on vulnerabilities matching the policy. |

### `no_longer_detected` rule

This rule defines the criteria for the policy.

| Field | Type | Required | Possible values | Description |
|-------|------|----------|-----------------|-------------|
| `type` | `string` | true | `no_longer_detected` | The rule's type. |
| `scanners` | `array` | true | `sast`, `secret_detection`, `dependency_scanning`, `container_scanning`, `dast`, `coverage_fuzzing`, `api_fuzzing` | Specifies the scanners for which this policy is enforced. |
| `severity_levels` | `array` | true | `critical`, `high`, `medium`, `low`, `info`, `unknown` | Specifies the severity levels for which this policy is enforced. |

### `auto_resolve` action

This action resolves vulnerabilities matching the policy's rules and scope.

| Field | Type | Required | Possible values | Description |
|--------|----------|----------|-----------------|-------------|
| `type` | `string` | true | `auto_resolve` | The action's type. |
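Putting the fields above together, a complete policy entry conforming to this schema might look like the following sketch (the policy name and the chosen scanners and severity levels are illustrative):

```yaml
# Illustrative policy.yml entry combining the required schema fields.
vulnerability_management_policy:
- name: Resolve container findings no longer detected
  description: Automatically resolve container scanning findings that disappear from results.
  enabled: true
  rules:
  - type: no_longer_detected        # the only rule type in this schema
    scanners:
    - container_scanning
    - dependency_scanning
    severity_levels:
    - medium
    - low
    - info
  actions:
  - type: auto_resolve              # the only action type in this schema
```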
---
url: https://docs.gitlab.com/user/application_security/scheduled_pipeline_execution_policies
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/scheduled_pipeline_execution_policies.md
date_extracted: 2025-08-13
stage: Security Risk Management
group: Security Policies
title: Scheduled pipeline execution policies
---
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
- Status: Experiment

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/14147) as an experiment in GitLab 18.0 with a flag named `scheduled_pipeline_execution_policy_type` defined in the `policy.yml` file.

{{< /history >}}

{{< alert type="flag" >}}

The availability of this feature is controlled by a feature flag. For more information, see the history. This feature is available for testing, but not ready for production use.

{{< /alert >}}

Pipeline execution policies enforce custom CI/CD jobs in your projects' pipelines. With scheduled pipeline execution policies, you can extend this enforcement to run the CI/CD jobs on a regular cadence (daily, weekly, or monthly), ensuring that compliance scripts, security scans, or other custom CI/CD jobs are executed even when there are no new commits.

## Scheduling your pipeline execution policies

Unlike regular pipeline execution policies that inject or override jobs in existing pipelines, scheduled policies create new pipelines that run independently on the schedule you define.

Common use cases include:

- Enforce security scans on a regular cadence to meet compliance requirements.
- Check project configurations periodically.
- Run dependency scans on inactive repositories to detect newly discovered vulnerabilities.
- Execute compliance reporting scripts on a schedule.

## Enable scheduled pipeline execution policies

Scheduled pipeline execution policies are available as an experimental feature. To enable this feature in your environment, enable the `pipeline_execution_schedule_policy` experiment in the security policy configuration.
The `.gitlab/security-policies/policy.yml` YAML configuration file is stored in your security policy project:

```yaml
experiments:
  pipeline_execution_schedule_policy:
    enabled: true
```

{{< alert type="note" >}}

This feature is experimental and may change in future releases. You should test it thoroughly in a non-production environment only. You should not use this feature in production environments as it may be unstable.

{{< /alert >}}

## Configure scheduled pipeline execution policies

To configure a scheduled pipeline execution policy, add additional configuration fields to the `pipeline_execution_schedule_policy` section of your security policy project's `.gitlab/security-policies/policy.yml` file:

```yaml
pipeline_execution_schedule_policy:
- name: Scheduled Pipeline Execution Policy
  description: ''
  enabled: true
  content:
    include:
    - project: your-group/your-project
      file: security-scan.yml
  schedules:
  - type: daily
    start_time: '10:00'
    time_window:
      value: 600
      distribution: random
```

### Schedule configuration schema

The `schedules` section allows you to configure when security policy jobs run automatically. You can create daily, weekly, or monthly schedules with specific execution times and distribution windows.

### Schedules configuration options

The `schedules` section supports the following options:

| Parameter | Description |
|-----------|-------------|
| `type` | Schedule type: `daily`, `weekly`, or `monthly` |
| `start_time` | Time to start the schedule in 24-hour format (HH:MM) |
| `time_window` | Time window in which to distribute the pipeline executions |
| `time_window.value` | Duration in seconds (minimum: 600, maximum: 2629746) |
| `time_window.distribution` | Distribution method (currently, only `random` is supported) |
| `timezone` | IANA timezone identifier (defaults to UTC if not specified) |
| `branches` | Optional array with names of the branches to schedule pipelines on. If `branches` is specified, pipelines run only on the specified branches and only if they exist in the project. If not specified, pipelines run only on the default branch. You can provide a maximum of five unique branch names per schedule. |
| `days` | Use with weekly schedules only: Array of days when the schedule runs (for example, `["Monday", "Friday"]`) |
| `days_of_month` | Use with monthly schedules only: Array of dates when the schedule runs (for example, `[1, 15]`, can include values from 1 to 31) |
| `snooze` | Optional configuration to temporarily pause the schedule |
| `snooze.until` | ISO8601 date and time when the schedule resumes after the snooze (format: `2025-06-13T20:20:00+00:00`) |
| `snooze.reason` | Optional documentation explaining why the schedule is snoozed |

### Schedule examples

Use daily, weekly, or monthly schedules.

#### Daily schedule example

```yaml
schedules:
- type: daily
  start_time: "01:00"
  time_window:
    value: 3600 # 1 hour window
    distribution: random
  timezone: "America/New_York"
  branches:
  - main
  - develop
  - staging
```

#### Weekly schedule example

```yaml
schedules:
- type: weekly
  days:
  - Monday
  - Wednesday
  - Friday
  start_time: "04:30"
  time_window:
    value: 7200 # 2 hour window
    distribution: random
  timezone: "Europe/Berlin"
```

#### Monthly schedule example

```yaml
schedules:
- type: monthly
  days_of_month:
  - 1
  - 15
  start_time: "02:15"
  time_window:
    value: 14400 # 4 hour window
    distribution: random
  timezone: "Asia/Tokyo"
```

### Time window distribution

To prevent overwhelming your CI/CD infrastructure when applying policies to multiple projects, scheduled pipeline execution policies distribute the creation of the pipelines across a time window with some common rules:

- All pipelines are scheduled at `random`. Pipelines are randomly distributed during the specified time window.
- The minimum time window is 10 minutes (600 seconds), and the maximum is approximately 1 month (2,629,746 seconds).
- For monthly schedules, if you specify dates that don't exist in certain months (like 31 for February), those runs are skipped.
- A scheduled policy can only have one schedule configuration at a time.

## Snooze scheduled pipeline execution policies

You can temporarily pause scheduled pipeline execution policies using the snooze feature. Use the snooze feature during maintenance windows, holidays, or when you need to prevent scheduled pipelines from running for a specific time period.

### How snoozing works

When you snooze a scheduled pipeline execution policy:

- No new scheduled pipelines are created during the snooze period.
- Pipelines that were created before the snooze continue to execute.
- The policy remains enabled but in a snoozed state.
- After the snooze period expires, scheduled pipeline execution resumes automatically.

### Configuring snooze

To snooze a scheduled pipeline execution policy, add a `snooze` section to the schedule configuration:

```yaml
pipeline_execution_schedule_policy:
- name: Weekly Security Scan
  description: 'Run security scans every week'
  enabled: true
  content:
    include:
    - project: your-group/your-project
      file: security-scan.yml
  schedules:
  - type: weekly
    start_time: '02:00'
    time_window:
      value: 3600
      distribution: random
    timezone: UTC
    days:
    - Monday
    snooze:
      until: "2025-06-26T16:27:00+00:00" # ISO8601 format
      reason: "Critical production deployment"
```

The `snooze.until` parameter specifies when the snooze period ends using the ISO8601 format: `YYYY-MM-DDThh:mm:ss+00:00` where:

- `YYYY-MM-DD`: Year, month, and day
- `T`: Separator between date and time
- `hh:mm:ss`: Hours, minutes, and seconds in 24-hour format
- `+00:00`: Time zone offset from UTC (or `Z` for UTC)

For example, `2025-06-26T16:27:00+00:00` represents June 26, 2025, at 4:27 PM UTC.

### Removing a snooze

To remove a snooze before its expiration time, remove the `snooze` section from the policy configuration or set a date in the past for the `until` value.
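The second removal option above, setting a past date, can be sketched as follows (the date and reason are illustrative):

```yaml
# Illustrative: an `until` value in the past ends the snooze, so the schedule is active again.
snooze:
  until: "2020-01-01T00:00:00+00:00" # already elapsed
  reason: "Snooze ended early"
```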
## Schedule pipelines for specific branches

By default, schedules run on the default branch only. Scheduled pipeline execution policies support branch filtering, which allows you to schedule pipelines for additional branches. Use the `branches` property to perform regular scans or checks on other important branches in your project.

When you configure the `branches` property in your schedule:

- If you don't specify any branches, the scheduled pipeline runs only on the default branch.
- If you specify branches, the policy schedules pipelines for each specified branch that actually exists in the project.
- You can specify a maximum of five unique branch names per schedule.
- You must specify each branch name in full. Wildcard matching is not supported.

### Branch filtering example

```yaml
pipeline_execution_schedule_policy:
- name: Scan Multiple Branches
  description: 'Run security scans on main, staging and develop branches'
  enabled: true
  content:
    include:
    - project: your-group/your-project
      file: security-scan.yml
  schedules:
  - type: weekly
    days:
    - Monday
    start_time: '02:00'
    time_window:
      value: 3600
      distribution: random
    branches:
    - main
    - staging
    - develop
    - feature/new-authentication
```

In this example, if all of the specified branches exist in the project, the policy creates four separate pipelines (one for each branch).

## Requirements

To use scheduled pipeline execution policies:

1. Store your CI/CD configuration in your security policy project.
1. In your security policy project's **Settings** > **General** > **Visibility, project features, permissions** section, enable the **Grant security policy project access to CI/CD configuration** setting.
1. Ensure your CI/CD configuration includes appropriate workflow rules for scheduled pipelines.

The security policy bot is a system account that GitLab automatically creates to handle the execution of security policies.
When you enable the appropriate settings, this bot is granted the necessary permissions to access CI/CD configurations and run scheduled pipelines. The permissions are only necessary if the CI/CD configuration is not in a public project.

Note these limitations:

- If no branches are specified, scheduled pipeline execution policies only run on the default branch.
- You can specify up to five unique branch names in the `branches` array.
- Time windows must be at least 10 minutes (600 seconds) to ensure proper distribution of pipelines.
- The maximum number of scheduled pipeline execution policies per security policy project is limited to 1 policy with 1 schedule.
- This feature is experimental and may change in future releases.
- Scheduled pipelines can be delayed if there are insufficient runners available.
- The maximum frequency for schedules is daily.

## Troubleshooting

If your scheduled pipelines are not running as expected, follow these troubleshooting steps:

1. **Verify experimental flag**: Ensure that the `pipeline_execution_schedule_policy: enabled: true` flag is set in the `experiments` section of your `policy.yml` file.
1. **Check policy access**: Verify that your security policy project has access to the CI/CD configuration:
   - Go to the security policy project's **Settings** > **General** > **Visibility, project features, permissions** and ensure the "Pipeline execution policies" setting is enabled.
1. **Validate CI configuration**:
   - Check that the CI/CD configuration file exists at the specified path.
   - Verify the configuration is valid by running a manual pipeline.
   - Ensure the configuration includes appropriate workflow rules for scheduled pipelines.
1. **Verify policy configuration**:
   - Ensure the policy is enabled (`enabled: true`).
   - Verify that the schedule configuration has the correct format and valid values.
   - If you've specified branches, verify that the branches exist in the project.
   - Verify that the time zone setting is correct (if specified).
1. **Review logs and activity**:
   - Check the security policy project's CI/CD pipeline logs for any errors.
1. **Check runner availability**:
   - Ensure that runners are available and configured properly.
   - Verify that runners have the capacity to handle the scheduled jobs.
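For the requirement that the included CI/CD configuration carries appropriate workflow rules for scheduled pipelines, a minimal `security-scan.yml` sketch might look like the following. The job name and script are placeholders, and because the exact pipeline source value for policy-created schedules is not documented here, a permissive workflow rule is used as an assumption:

```yaml
# Sketch only: permissive workflow rule so policy-created pipelines are not filtered out.
workflow:
  rules:
    - when: always

security_scan:
  stage: test
  script:
    - echo "Run the scheduled security scan here" # placeholder command
```

In a real configuration you would typically narrow the `workflow:rules` to the pipeline sources you expect, rather than allowing every source.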
--- stage: Security Risk Management group: Security Policies info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Scheduled pipeline execution policies breadcrumbs: - doc - user - application_security - policies --- {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated - Status: Experiment {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/14147) as an experiment in GitLab 18.0 with a flag named `scheduled_pipeline_execution_policy_type` defined in the `policy.yml` file. {{< /history >}} {{< alert type="flag" >}} The availability of this feature is controlled by a feature flag. For more information, see the history. This feature is available for testing, but not ready for production use. {{< /alert >}} Pipeline execution policies enforce custom CI/CD jobs in your projects' pipelines. With scheduled pipeline execution policies, you can extend this enforcement to run the CI/CD job on a regular cadence (daily, weekly, or monthly), ensuring that compliance scripts, security scans, or other custom CI/CD job are executed even when there are no new commits. ## Scheduling your pipeline execution policies Unlike regular pipeline execution policies that inject or override jobs in existing pipelines, scheduled policies create new pipelines that run independently on the schedule you define. Common use cases include: - Enforce security scans on a regular cadence to meet compliance requirements. - Check project configurations periodically. - Run dependency scans on inactive repositories to detect newly discovered vulnerabilities. - Execute compliance reporting scripts on a schedule. ## Enable scheduled pipeline execution policies Scheduled pipeline execution policies are available as an experimental feature. 
To enable this feature in your environment, enable the `pipeline_execution_schedule_policy` experiment in the security policy configuration. The `.gitlab/security-policies/policy.yml` YAML configuration file is stored in your Security Policy Project: ```yaml experiments: pipeline_execution_schedule_policy: enabled: true ``` {{< alert type="note" >}} This feature is experimental and may change in future releases. You should test it thoroughly in a non-production environment only. You should not use this feature in production environments as it may be unstable. {{< /alert >}} ## Configure schedule pipeline execution policies To configure a scheduled pipeline execution policy, add additional configuration fields to the `pipeline_execution_schedule_policy` section of your security policy project's `.gitlab/security-policies/policy.yml` file: ```yaml pipeline_execution_schedule_policy: - name: Scheduled Pipeline Execution Policy description: '' enabled: true content: include: - project: your-group/your-project file: security-scan.yml schedules: - type: daily start_time: '10:00' time_window: value: 600 distribution: random ``` ### Schedule configuration schema The `schedules` section allows you to configure when security policy jobs run automatically. You can create daily, weekly, or monthly schedules with specific execution times and distribution windows. 
### Schedules configuration options The `schedules` section supports the following options: | Parameter | Description | |-----------|-------------| | `type` | Schedule type: `daily`, `weekly`, or `monthly` | | `start_time` | Time to start the schedule in 24-hour format (HH:MM) | | `time_window` | Time window in which to distribute the pipeline executions | | `time_window.value` | Duration in seconds (minimum: 600, maximum: 2629746) | | `time_window.distribution` | Distribution method (currently, only `random` is supported) | | `timezone` | IANA timezone identifier (defaults to UTC if not specified) | | `branches` | Optional array with names of the branches to schedule pipelines on. If `branches` is specified, pipelines run only on the specified branches and only if they exist in the project. If not specified, pipelines run only on the default branch. You can provide a maximum of five unique branch names per schedule. | | `days` | Use with weekly schedules only: Array of days when the schedule runs (for example, `["Monday", "Friday"]`) | | `days_of_month` | Use with monthly schedules only: Array of dates when the schedule runs (for example, `[1, 15]`, can include values from 1 to 31) | | `snooze` | Optional configuration to temporarily pause the schedule | | `snooze.until` | ISO8601 date and time when the schedule resumes after the snooze (format: `2025-06-13T20:20:00+00:00`) | | `snooze.reason` | Optional documentation explaining why the schedule is snoozed | ### Schedule examples Use daily, weekly, or monthly schedules. 
#### Daily schedule example

```yaml
schedules:
- type: daily
  start_time: "01:00"
  time_window:
    value: 3600 # 1 hour window
    distribution: random
  timezone: "America/New_York"
  branches:
  - main
  - develop
  - staging
```

#### Weekly schedule example

```yaml
schedules:
- type: weekly
  days:
  - Monday
  - Wednesday
  - Friday
  start_time: "04:30"
  time_window:
    value: 7200 # 2 hour window
    distribution: random
  timezone: "Europe/Berlin"
```

#### Monthly schedule example

```yaml
schedules:
- type: monthly
  days_of_month:
  - 1
  - 15
  start_time: "02:15"
  time_window:
    value: 14400 # 4 hour window
    distribution: random
  timezone: "Asia/Tokyo"
```

### Time window distribution

To prevent overwhelming your CI/CD infrastructure when applying policies to multiple projects, scheduled pipeline execution policies distribute the creation of pipelines across a time window according to these rules:

- All pipelines use the `random` distribution method: pipeline creation is distributed randomly across the specified time window.
- The minimum time window is 10 minutes (600 seconds), and the maximum is approximately 1 month (2,629,746 seconds).
- For monthly schedules, if you specify dates that don't exist in certain months (like 31 for February), those runs are skipped.
- A scheduled policy can only have one schedule configuration at a time.

## Snooze scheduled pipeline execution policies

You can temporarily pause scheduled pipeline execution policies using the snooze feature. Use the snooze feature during maintenance windows, holidays, or when you need to prevent scheduled pipelines from running for a specific time period.

### How snoozing works

When you snooze a scheduled pipeline execution policy:

- No new scheduled pipelines are created during the snooze period.
- Pipelines that were created before the snooze continue to execute.
- The policy remains enabled but in a snoozed state.
- After the snooze period expires, scheduled pipeline execution resumes automatically.
### Configuring snooze To snooze a scheduled pipeline execution policy, add a `snooze` section to the schedule configuration: ```yaml pipeline_execution_schedule_policy: - name: Weekly Security Scan description: 'Run security scans every week' enabled: true content: include: - project: your-group/your-project file: security-scan.yml schedules: - type: weekly start_time: '02:00' time_window: value: 3600 distribution: random timezone: UTC days: - Monday snooze: until: "2025-06-26T16:27:00+00:00" # ISO8601 format reason: "Critical production deployment" ``` The `snooze.until` parameter specifies when the snooze period ends using the ISO8601 format: `YYYY-MM-DDThh:mm:ss+00:00` where: - `YYYY-MM-DD`: Year, month, and day - `T`: Separator between date and time - `hh:mm:ss`: Hours, minutes, and seconds in 24-hour format - `+00:00`: Time zone offset from UTC (or Z for UTC) For example, `2025-06-26T16:27:00+00:00` represents June 26, 2025, at 4:27 PM UTC. ### Removing a snooze To remove a snooze before its expiration time, remove the `snooze` section from the policy configuration or set a date in the past for the `until` value. ## Schedule pipelines for specific branches By default, schedules run on the default branch only. Scheduled pipeline execution policies support branch filtering, which allows you to schedule pipelines for additional branches. Use the `branches` property to perform regular scans or checks on other important branches in your project. When you configure the `branches` property in your schedule: - If you don't specify any branches, the scheduled pipeline runs only on the default branch. - If you specify branches, the policy schedules pipelines for each specified branch that actually exists in the project. - You can specify a maximum of five unique branch names per schedule. - You must specify each branch name in full. Wildcard matching is not supported. 
### Branch filtering example

```yaml
pipeline_execution_schedule_policy:
- name: Scan Multiple Branches
  description: 'Run security scans on main, staging and develop branches'
  enabled: true
  content:
    include:
    - project: your-group/your-project
      file: security-scan.yml
  schedules:
  - type: weekly
    days:
    - Monday
    start_time: '02:00'
    time_window:
      value: 3600
      distribution: random
    branches:
    - main
    - staging
    - develop
    - feature/new-authentication
```

In this example, if all of the specified branches exist in the project, the policy creates four separate pipelines (one for each branch).

## Requirements

To use scheduled pipeline execution policies:

1. Store your CI/CD configuration in your security policy project.
1. In your security policy project's **Settings** > **General** > **Visibility, project features, permissions** section, enable the **Grant security policy project access to CI/CD configuration** setting.
1. Ensure your CI/CD configuration includes appropriate workflow rules for scheduled pipelines.

The security policy bot is a system account that GitLab automatically creates to handle the execution of security policies. When you enable the appropriate settings, this bot is granted the necessary permissions to access CI/CD configurations and run scheduled pipelines. The permissions are only necessary if the CI/CD configuration is not in a public project.

Note these limitations:

- If no branches are specified, scheduled pipeline execution policies only run on the default branch.
- You can specify up to five unique branch names in the `branches` array.
- Time windows must be at least 10 minutes (600 seconds) to ensure proper distribution of pipelines.
- The maximum number of scheduled pipeline execution policies per security policy project is limited to 1 policy with 1 schedule.
- This feature is experimental and may change in future releases.
- Scheduled pipelines can be delayed if there are insufficient runners available.
- The maximum frequency for schedules is daily.

## Troubleshooting

If your scheduled pipelines are not running as expected, follow these troubleshooting steps:

1. **Verify experimental flag**: Ensure that the `pipeline_execution_schedule_policy: enabled: true` flag is set in the `experiments` section of your `policy.yml` file.
1. **Check policy access**: Verify that your security policy project has access to the CI/CD configuration:
   - Go to the security policy project's **Settings** > **General** > **Visibility, project features, permissions** and ensure the **Pipeline execution policies** setting is enabled.
1. **Validate CI configuration**:
   - Check that the CI/CD configuration file exists at the specified path.
   - Verify the configuration is valid by running a manual pipeline.
   - Ensure the configuration includes appropriate workflow rules for scheduled pipelines.
1. **Verify policy configuration**:
   - Ensure the policy is enabled (`enabled: true`).
   - Verify that the schedule configuration has the correct format and valid values.
   - If you've specified branches, verify that the branches exist in the project.
   - Verify that the time zone setting is correct (if specified).
1. **Review logs and activity**:
   - Check the security policy project's CI/CD pipeline logs for any errors.
1. **Check runner availability**:
   - Ensure that runners are available and configured properly.
   - Verify that runners have the capacity to handle the scheduled jobs.
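Part of the "verify policy configuration" step can be automated. This sketch is a hypothetical validator (not a GitLab tool) that checks a `schedules` entry against the constraints documented on this page:

```python
# Hypothetical validator (not shipped with GitLab): checks a single
# `schedules` entry against the documented limits.
MIN_WINDOW, MAX_WINDOW = 600, 2_629_746  # 10 minutes to ~1 month, in seconds
MAX_BRANCHES = 5

def schedule_errors(schedule: dict) -> list[str]:
    errors = []
    if schedule.get("type") not in ("daily", "weekly", "monthly"):
        errors.append("type must be daily, weekly, or monthly")
    window = schedule.get("time_window", {}).get("value", 0)
    if not MIN_WINDOW <= window <= MAX_WINDOW:
        errors.append("time_window.value must be 600-2629746 seconds")
    if len(set(schedule.get("branches", []))) > MAX_BRANCHES:
        errors.append("at most five unique branch names are allowed")
    return errors

ok = {"type": "daily", "start_time": "01:00",
      "time_window": {"value": 3600, "distribution": "random"}}
bad = {"type": "hourly", "time_window": {"value": 60}}
print(schedule_errors(ok))   # []
print(schedule_errors(bad))  # two error messages
```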
--- redirect_to: enforcement/compliance_and_security_policy_groups.md remove_date: '2025-07-23' breadcrumbs: - doc - user - application_security - policies --- <!-- markdownlint-disable --> This document was moved to [another location](enforcement/compliance_and_security_policy_groups.md). <!-- This redirect file can be deleted after 2025-10-23. --> <!-- Redirects that point to other docs in the same project expire in three months. --> <!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. --> <!-- Before deletion, see: https://docs.gitlab.com/development/documentation/redirects -->
---
stage: Security Risk Management
group: Security Policies
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Pipeline execution policies
breadcrumbs:
- doc
- user
- application_security
- policies
---
{{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/13266) in GitLab 17.2 [with a flag](../../../administration/feature_flags/_index.md) named `pipeline_execution_policy_type`. Enabled by default. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/454278) in GitLab 17.3. Feature flag `pipeline_execution_policy_type` removed. {{< /history >}} Use pipeline execution policies to manage and enforce CI/CD jobs for multiple projects with a single configuration. - <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For a video walkthrough, see [Security Policies: Pipeline Execution Policy Type](https://www.youtube.com/watch?v=QQAOpkZ__pA). ## Schema {{< history >}} - [Enabled](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/159858) the `suffix` field in GitLab 17.4. - [Changed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/165096) pipeline execution so later stages wait for the `.pipeline-policy-pre` stage to complete in GitLab 17.7. {{< /history >}} The YAML file with pipeline execution policies consists of an array of objects matching pipeline execution policy schema nested under the `pipeline_execution_policy` key. You can configure a maximum of five policies under the `pipeline_execution_policy` key per security policy project. Any other policies configured after the first five are not applied. When you save a new policy, GitLab validates its contents against [this JSON schema](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/validators/json_schemas/security_orchestration_policy.json). If you're not familiar with how to read [JSON schemas](https://json-schema.org/), the following sections and tables provide an alternative. 
| Field | Type | Required | Description | |-------|------|----------|-------------| | `pipeline_execution_policy` | `array` of pipeline execution policy | true | List of pipeline execution policies (maximum five) | ## `pipeline_execution_policy` schema | Field | Type | Required | Description | |-------|------|----------|-------------| | `name` | `string` | true | Name of the policy. Maximum of 255 characters.| | `description` (optional) | `string` | true | Description of the policy. | | `enabled` | `boolean` | true | Flag to enable (`true`) or disable (`false`) the policy. | | `content` | `object` of [`content`](#content-type) | true | Reference to the CI/CD configuration to inject into project pipelines. | | `pipeline_config_strategy` | `string` | false | Can be `inject_policy`, `inject_ci` (deprecated), or `override_project_ci`. See [pipeline strategies](#pipeline-configuration-strategies) for more information. | | `policy_scope` | `object` of [`policy_scope`](_index.md#configure-the-policy-scope) | false | Scopes the policy based on projects, groups, or compliance framework labels you specify. | | `suffix` | `string` | false | Can either be `on_conflict` (default), or `never`. Defines the behavior for handling job naming conflicts. `on_conflict` applies a unique suffix to the job names for jobs that would break the uniqueness. `never` causes the pipeline to fail if the job names across the project and all applicable policies are not unique. | | `skip_ci` | `object` of [`skip_ci`](pipeline_execution_policies.md#skip_ci-type) | false | Defines whether users can apply the `skip-ci` directive. By default, the use of `skip-ci` is ignored and as a result, pipelines with pipeline execution policies cannot be skipped. | | `variables_override` | `object` of [`variables_override`](pipeline_execution_policies.md#variables_override-type) | false | Controls whether users can override the behavior of policy variables. 
By default, the policy variables are enforced with the highest precedence and users cannot override them. | Note the following: - Users that trigger a pipeline must have at least read access to the pipeline execution file specified in the pipeline execution policy, otherwise the pipelines do not start. - If the pipeline execution file gets deleted or renamed, the pipelines in projects with the policy enforced might stop working. - Pipeline execution policy jobs can be assigned to one of the two reserved stages: - `.pipeline-policy-pre` at the beginning of the pipeline, before the `.pre` stage. - `.pipeline-policy-post` at the very end of the pipeline, after the `.post` stage. - Injecting jobs in any of the reserved stages is guaranteed to always work. Execution policy jobs can also be assigned to any standard (build, test, deploy) or user-declared stages. However, in this case, the jobs may be ignored depending on the project pipeline configuration. - It is not possible to assign jobs to reserved stages outside of a pipeline execution policy. - Choose unique job names for pipeline execution policies. Some CI/CD configurations are based on job names, which can lead to unwanted results if a job name exists multiple times in the same pipeline. For example, the `needs` keyword makes one job dependent on another. If there are multiple jobs with the name `example`, a job that `needs` the `example` job name depends on only one of the `example` job instances at random. - Pipeline execution policies remain in effect even if the project lacks a CI/CD configuration file. - The order of the policies matters for the applied suffix. - If any policy applied to a given project has `suffix: never`, the pipeline fails if another job with the same name is already present in the pipeline. - Pipeline execution policies are enforced on all branches and pipeline sources. You can use [workflow rules](../../../ci/yaml/workflow.md) to control when pipeline execution policies are enforced. 
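For example, a policy CI/CD configuration can pin its jobs to the reserved stages described above, so they run regardless of the project's own stage layout. The job names and scripts here are illustrative, not a prescribed configuration:

```yaml
# policy-ci.yml (illustrative job names and scripts)
compliance-check:
  stage: .pipeline-policy-pre
  script:
    - echo "Runs before all project stages"

compliance-report:
  stage: .pipeline-policy-post
  script:
    - echo "Runs after all project stages"
```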
### `.pipeline-policy-pre` stage Jobs in the `.pipeline-policy-pre` stage always execute. This stage is designed for security and compliance use cases. Jobs in the pipeline do not begin until the `.pipeline-policy-pre` stage completes. If you don't require this behavior for your workflow, you can use the `.pre` stage or a custom stage instead. #### Ensure that `.pipeline-policy-pre` succeeds {{< details >}} - Status: Experiment {{< /details >}} {{< alert type="note" >}} This feature is experimental and might change in future releases. Test it thoroughly in non-production environments only, as it might be unstable in production. {{< /alert >}} To ensure that `.pipeline-policy-pre` completes and succeeds, enable the `ensure_pipeline_policy_pre_succeeds` experiment in the security policy configuration. The `.gitlab/security-policies/policy.yml` YAML configuration file is stored in your security policy project: ```yaml experiments: ensure_pipeline_policy_pre_succeeds: enabled: true ``` If the `.pipeline-policy-pre` stage fails or all jobs in the stage are skipped, all jobs in later stages are skipped, including: - Jobs with `needs: []` - Jobs with `when: always` When multiple pipeline execution policies apply, the experiment takes effect if enabled in any of them, ensuring that `.pipeline-policy-pre` must succeed. ### Job naming best practice {{< history >}} - Naming conflict handling [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/473189) in GitLab 17.4. {{< /history >}} There is no visible indicator that a job was generated by a security policy. To make it easier to identify jobs that were created by policies and avoid job name collisions, add a unique prefix or suffix to the job name. Examples: - Use: `policy1:deployments:sast`. This name is likely unique across all other policies and projects. - Don't use: `sast`. This name is likely to be duplicated in other policies and projects. 
Pipeline execution policies handle naming conflicts depending on the `suffix` attribute. If there are multiple jobs with the same name:

- Using `on_conflict` (default), a suffix is added to a job if its name conflicts with another job in the pipeline.
- Using `never`, no suffix is added in the event of a conflict and the pipeline fails.

The suffix is added based on the order in which the jobs are merged onto the main pipeline.

The order is as follows:

1. Project pipeline jobs
1. Project policy jobs (if applicable)
1. Group policy jobs (if applicable, ordered by hierarchy; the top-level group is applied last)

The applied suffix has the following format: `:policy-<security-policy-project-id>-<policy-index>`.

Example of the resulting job: `sast:policy-123456-0`.

If multiple policies in one security policy project define the same job name, the numerical suffix corresponds to the index of the conflicting policy.

Example of the resulting jobs:

- `sast:policy-123456-0`
- `sast:policy-123456-1`

### Job stage best practice

Jobs defined in a pipeline execution policy can use any [stage](../../../ci/yaml/_index.md#stage) defined in the project's CI/CD configuration, as well as the reserved stages `.pipeline-policy-pre` and `.pipeline-policy-post`.

{{< alert type="note" >}}

If your policy contains jobs only in the `.pre` and `.post` stages, the policy's pipeline is evaluated as `empty`. It is not merged with the project's pipeline. To use the `.pre` and `.post` stages in a pipeline execution policy, you must include at least one other job that runs in a different stage. For example: `.pipeline-policy-pre`.

{{< /alert >}}

When you use the `inject_policy` [pipeline strategy](#pipeline-configuration-strategies), if a target project does not contain its own `.gitlab-ci.yml` file, all policy stages are injected into the pipeline.
When you use the (deprecated) `inject_ci` [pipeline strategy](#pipeline-configuration-strategies), if a target project does not contain its own `.gitlab-ci.yml` file, then the only stages available are the default pipeline stages and the reserved stages.

When you enforce pipeline execution policies over projects with CI/CD configurations that you do not have permissions to modify, you should define jobs in the `.pipeline-policy-pre` and `.pipeline-policy-post` stages. These stages are always available, regardless of any project's CI/CD configuration.

When you use the `override_project_ci` [pipeline strategy](#pipeline-configuration-strategies) with multiple pipeline execution policies and with custom stages, the stages must be defined in the same relative order to be compatible with each other:

Valid configuration example:

```yaml
- override-policy-1 stages: [build, test, policy-test, deploy]
- override-policy-2 stages: [test, deploy]
```

Invalid configuration example:

```yaml
- override-policy-1 stages: [build, test, policy-test, deploy]
- override-policy-2 stages: [deploy, test]
```

The pipeline fails if one or more `override_project_ci` policies have an invalid `stages` configuration.

### `content` type

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `project` | `string` | true | The full GitLab project path to a project on the same GitLab instance. |
| `file` | `string` | true | A full file path relative to the root directory (/). The YAML files must have the `.yml` or `.yaml` extension. |
| `ref` | `string` | false | The ref to retrieve the file from. Defaults to the HEAD of the project when not specified. |

Use the `content` type in a policy to reference a CI/CD configuration stored in another repository. This allows you to reuse the same CI/CD configuration across multiple policies, reducing the overhead of maintaining these configurations.
For example, if you have a custom secret detection CI/CD configuration you want to enforce in policy A and policy B, you can create a single YAML configuration file and reference the configuration in both policies.

Prerequisites:

- In projects that enforce pipeline execution policies, users that trigger pipelines must have at least read-only access to the project that contains the CI/CD configuration.

In GitLab 17.4 and later, you can grant the required read-only access for the CI/CD configuration file specified in a security policy project using the `content` type. To do so, enable the setting **Pipeline execution policies** in the general settings of the security policy project.

Enabling this setting grants the user who triggered the pipeline access to read the CI/CD configuration file enforced by the pipeline execution policy. This setting does not grant the user access to any other parts of the project where the configuration file is stored. For more details, see [Grant access automatically](#grant-access-automatically).

### `skip_ci` type

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/173480) in GitLab 17.7.

{{< /history >}}

Pipeline execution policies offer control over who can use the `[skip ci]` directive. You can specify certain users or service accounts that are allowed to use `[skip ci]` while still ensuring critical security and compliance checks are performed.

Use the `skip_ci` keyword to specify whether users are allowed to apply the `skip_ci` directive to skip the pipelines. When the keyword is not specified, the `skip_ci` directive is ignored, preventing all users from bypassing the pipeline execution policies.
| Field | Type | Possible values | Description | |-------------------------|----------|--------------------------|-------------| | `allowed` | `boolean` | `true`, `false` | Flag to allow (`true`) or prevent (`false`) the use of the `skip-ci` directive for pipelines with enforced pipeline execution policies. | | `allowlist` | `object` | `users` | Specify users who are always allowed to use `skip-ci` directive, regardless of the `allowed` flag. Use `users:` followed by an array of objects with `id` keys representing user IDs. | ### `variables_override` type {{< history >}} - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/16430) in GitLab 18.1. {{< /history >}} | Field | Type | Possible values | Description | |-------------------------|----------|--------------------------|-------------| | `allowed` | `boolean` | `true`, `false` | When `true`, other configurations can override policy variables. When `false`, other configurations cannot override policy variables. | | `exceptions` | `array` | `array` of `string` | Variables that are exceptions to the global rule. When `allowed: false`, the `exceptions` are an allowlist. When `allowed: true`, the `exceptions` are a denylist. | This option controls how user-defined variables are handled in pipelines with policies enforced. This feature allows you to: - Deny user-defined variables by default (recommended), which provides stronger security, but requires that you add all of the variables that should be customizable to the `exceptions` allowlist. - Allow user-defined variables by default, which provides more flexibility but lower security, as you must add variables that can affect policy enforcement to the `exceptions` denylist. - Define exceptions to the `allowed` global rule. User-defined variables can affect the behavior of any policy jobs in the pipeline and can come from various sources: - [Pipeline variables](../../../ci/variables/_index.md#use-pipeline-variables). 
- [Project variables](../../../ci/variables/_index.md#for-a-project). - [Group variables](../../../ci/variables/_index.md#for-a-group). - [Instance variables](../../../ci/variables/_index.md#for-an-instance). When the `variables_override` option is not specified, the "highest precedence" behavior is maintained. For more information about this behavior, see [precedence of variables in pipeline execution policies](#precedence-of-variables-in-pipeline-execution-policies). When the pipeline execution policy controls variable precedence, the job logs include the configured `variables_override` options and the policy name. To view these logs, `gitlab-runner` must be updated to version 18.1 or later. #### Example `variables_override` configuration Add the `variables_override` option to your pipeline execution policy configuration: ```yaml pipeline_execution_policy: - name: Security Scans description: 'Enforce security scanning' enabled: true pipeline_config_strategy: inject_policy content: include: - project: gitlab-org/security-policies file: security-scans.yml variables_override: allowed: false exceptions: - CS_IMAGE - SAST_EXCLUDED_ANALYZERS ``` ##### Enforcing security scans while allowing container customization (allowlist approach) To enforce security scans but allow project teams to specify their own container image: ```yaml variables_override: allowed: false exceptions: - CS_IMAGE ``` This configuration blocks all user-defined variables except `CS_IMAGE`, ensuring that security scans cannot be disabled, while allowing teams to customize the container image. ##### Prevent specific security variable overrides (denylist approach) To allow most variables, but prevent disabling security scans: ```yaml variables_override: allowed: true exceptions: - SECRET_DETECTION_DISABLED - SAST_DISABLED - DEPENDENCY_SCANNING_DISABLED - DAST_DISABLED - CONTAINER_SCANNING_DISABLED ``` This configuration allows all user-defined variables except those that could disable security scans. 
{{< alert type="warning" >}}

While this configuration can provide flexibility, it is discouraged due to the security implications. Any variable that is not explicitly listed in the `exceptions` can be injected by users. As a result, the policy configuration is not as well protected as when using the `allowlist` approach.

{{< /alert >}}

### `policy_scope` schema

To customize policy enforcement, you can define a policy's scope to either include, or exclude, specified projects, groups, or compliance framework labels. For more details, see [Scope](_index.md#configure-the-policy-scope).

## Manage access to the CI/CD configuration

When you enforce pipeline execution policies on a project, users that trigger pipelines must have at least read-only access to the project that contains the policy CI/CD configuration. You can grant access to the project manually or automatically.

### Grant access manually

To allow users or groups to run pipelines with enforced pipeline execution policies, you can invite them to the project that contains the policy CI/CD configuration.

### Grant access automatically

You can automatically grant access to the policy CI/CD configuration for all users who run pipelines in projects with enforced pipeline execution policies.

Prerequisites:

- Make sure the pipeline execution policy CI/CD configuration is stored in a security policy project.
- In the general settings of the security policy project, enable the **Pipeline execution policies** setting.

If you don't yet have a security policy project and you want to create the first pipeline execution policy, create an empty project and link it as a security policy project. To link the project:

1. In the group or project where you want to enforce the policy, select **Secure** > **Policies** > **Edit policy project**.
1. Select the security policy project.

The project becomes a security policy project, and the setting becomes available.
{{< alert type="note" >}} To create downstream pipelines using `$CI_JOB_TOKEN`, you need to make sure that projects and groups are authorized to request the security policy project. In the security policy project, go to **Settings > CI/CD > Job token permissions** and add the authorized groups and projects to the allowlist. If you don't see the **CI/CD** settings, go to **Settings > General > Visibility, project features, permissions** and enable **CI/CD**. {{< /alert >}} #### Configuration 1. In the policy project, select **Settings** > **General** > **Visibility, project features, permissions**. 1. Enable the setting **Pipeline execution policies: Grant access to the CI/CD configurations for projects linked to this security policy project as the source for security policies**. 1. In the policy project, create a file for the policy CI/CD configuration. ```yaml # policy-ci.yml policy-job: script: ... ``` 1. In the group or project where you want to enforce the policy, create a pipeline execution policy and specify the CI/CD configuration file for the security policy project. ```yaml pipeline_execution_policy: - name: My pipeline execution policy description: Enforces CI/CD jobs enabled: true pipeline_config_strategy: inject_policy content: include: - project: my-group/my-security-policy-project file: policy-ci.yml ``` ## Pipeline configuration strategies Pipeline configuration strategy defines the method for merging the policy configuration with the project pipeline. Pipeline execution policies execute the jobs defined in the `.gitlab-ci.yml` file in isolated pipelines, which are merged into the pipelines of the target projects. ### `inject_policy` type {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/475152) in GitLab 17.9. {{< /history >}} This strategy adds custom CI/CD configurations into the existing project pipeline without completely replacing the project's original CI/CD configuration. 
It is suitable when you want to enhance or extend the current pipeline with additional steps, such as adding new security scans, compliance checks, or custom scripts.

Unlike the deprecated `inject_ci` strategy, `inject_policy` allows you to inject custom policy stages into your pipeline, giving you more granular control over where policy rules are applied in your CI/CD workflow.

If you have multiple policies enabled, this strategy injects all jobs from each policy.

When you use this strategy, a project CI/CD configuration cannot override any behavior defined in the policy pipelines because each pipeline has an isolated YAML configuration.

For projects without a `.gitlab-ci.yml` file, this strategy creates a `.gitlab-ci.yml` file implicitly. The executed pipeline contains only the jobs defined in the pipeline execution policy.

{{< alert type="note" >}}

When a pipeline execution policy uses workflow rules that prevent policy jobs from running, the only jobs that run are the project's CI/CD jobs. If the project uses workflow rules that prevent project CI/CD jobs from running, the only jobs that run are the pipeline execution policy jobs.

{{< /alert >}}

#### Stages injection

The stages for the policy pipeline follow the usual CI/CD configuration. You define the order in which a custom policy stage is injected into the project pipeline by providing the stages before and after the custom stages.

The project and policy pipeline stages are represented as a Directed Acyclic Graph (DAG), where nodes are stages and edges represent dependencies. When you combine pipelines, the individual DAGs are merged into a single, larger DAG. Afterward, a topological sorting is performed, which determines the order in which stages from all pipelines should execute. This sorting ensures that all dependencies are respected in the final order. If there are conflicting dependencies, the pipeline fails to run.
To fix the dependencies, ensure that stages used across the project and policies are aligned. If a stage isn't explicitly defined in the policy pipeline configuration, the pipeline uses the default stages `stages: [build, test, deploy]`. If these stages are included, but listed in a different order, the pipeline fails with a `Cyclic dependencies detected when enforcing policies` error. The following examples demonstrate this behavior. All examples assume the following project CI/CD configuration: ```yaml # .gitlab-ci.yml stages: [build, test, deploy] project-build-job: stage: build script: ... project-test-job: stage: test script: ... project-deploy-job: stage: deploy script: ... ``` ##### Example 1 ```yaml # policy-ci.yml stages: [test, policy-stage, deploy] policy-job: stage: policy-stage script: ... ``` In this example, the `policy-stage` stage: - Must be injected after `test` stage, if present. - Must be injected before `deploy` stage, if present. Result: The pipeline contains the following stages: `[build, test, policy-stage, deploy]`. Special cases: - If the `.gitlab-ci.yml` specified the stages as `[build, deploy, test]`, the pipeline would fail with the error `Cyclic dependencies detected when enforcing policies` because the constraints cannot be satisfied. To fix the failure, adjust the project configuration to align the stages with the policies. - If the `.gitlab-ci.yml` specified stages as `[build]`, the resulting pipeline has the following stages: `[build, policy-stage]`. ##### Example 2 ```yaml # policy-ci.yml stages: [policy-stage, deploy] policy-job: stage: policy-stage script: ... ``` In this example, the `policy-stage` stage: - Must be injected before `deploy` stage, if present. Result: The pipeline contains the following stages: `[build, test, policy-stage, deploy]`. Special cases: - If the `.gitlab-ci.yml` specified the stages as `[build, deploy, test]`, the resulting pipeline stages would be: `[build, policy-stage, deploy, test]`. 
- If there is no `deploy` stage in the project pipeline, the `policy-stage` stage is injected at the end of the pipeline, just before `.pipeline-policy-post`.

##### Example 3

```yaml
# policy-ci.yml
stages: [test, policy-stage]

policy-job:
  stage: policy-stage
  script: ...
```

In this example, the `policy-stage` stage:

- Must be injected after `test` stage, if present.

Result: The pipeline contains the following stages: `[build, test, deploy, policy-stage]`.

Special cases:

- If there is no `test` stage in the project pipeline, the `policy-stage` stage is injected at the end of the pipeline, just before `.pipeline-policy-post`.

##### Example 4

```yaml
# policy-ci.yml
stages: [policy-stage]

policy-job:
  stage: policy-stage
  script: ...
```

In this example, the `policy-stage` stage has no constraints.

Result: The pipeline contains the following stages: `[build, test, deploy, policy-stage]`.

##### Example 5

```yaml
# policy-ci.yml
stages: [check, lint, test, policy-stage, deploy, verify, publish]

policy-job:
  stage: policy-stage
  script: ...
```

In this example, the `policy-stage` stage:

- Must be injected after the stages `check`, `lint`, `test`, if present.
- Must be injected before the stages `deploy`, `verify`, `publish`, if present.

Result: The pipeline contains the following stages: `[build, test, policy-stage, deploy]`.

Special cases:

- If the `.gitlab-ci.yml` specified stages as `[check, publish]`, the resulting pipeline has the following stages: `[check, policy-stage, publish]`.

### `inject_ci` (deprecated)

{{< alert type="warning" >}}

This feature was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/475152) in GitLab 17.9. Use [`inject_policy`](#inject_policy-type) instead as it supports the enforcement of custom policy stages.

{{< /alert >}}

This strategy adds custom CI/CD configurations into the existing project pipeline without completely replacing the project's original CI/CD configuration.
It is suitable when you want to enhance or extend the current pipeline with additional steps, such as adding new security scans, compliance checks, or custom scripts. Having multiple policies enabled injects all jobs additively. When you use this strategy, a project CI/CD configuration cannot override any behavior defined in the policy pipelines because each pipeline has an isolated YAML configuration. For projects without a `.gitlab-ci.yml` file, this strategy creates a `.gitlab-ci.yml` file implicitly. This allows a pipeline containing only the jobs defined in the pipeline execution policy to execute. {{< alert type="note" >}} When a pipeline execution policy uses workflow rules that prevent policy jobs from running, the only jobs that run are the project's CI/CD jobs. If the project uses workflow rules that prevent project CI/CD jobs from running, the only jobs that run are the pipeline execution policy jobs. {{< /alert >}} ### `override_project_ci` {{< history >}} - Updated handling of workflow rules [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/175088) in GitLab 17.8 [with a flag](../../../administration/feature_flags/_index.md) named `policies_always_override_project_ci`. Enabled by default. - Updated [handling of `override_project_ci`](https://gitlab.com/gitlab-org/gitlab/-/issues/504434) to allow scan execution policies to run together with pipeline execution policies, in GitLab 17.9. - Updated handling of workflow rules [generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/512877) in GitLab 17.10. Feature flag `policies_always_override_project_ci` removed. {{< /history >}} This strategy replaces the project's existing CI/CD configuration with a new one defined by the pipeline execution policy. This strategy is ideal when the entire pipeline needs to be standardized or replaced, like when you want to enforce organization-wide CI/CD standards or compliance requirements in a highly regulated industry. 
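As a sketch, a policy using this strategy might look like the following. The project path and file name are hypothetical placeholders:

```yaml
pipeline_execution_policy:
  - name: Enforced compliance pipeline
    description: Replaces the project CI/CD configuration entirely
    enabled: true
    pipeline_config_strategy: override_project_ci
    content:
      include:
        - project: my-group/my-security-policy-project # hypothetical path
          file: compliance-ci.yml                      # hypothetical file
```

With a policy like this in place, the jobs defined in the referenced CI/CD file run instead of the jobs from the project's own `.gitlab-ci.yml`.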
To override the pipeline configuration, define the CI/CD jobs and do not use `include:project`.

The strategy takes precedence over other policies that use the `inject_ci` or `inject_policy` strategy. If a policy with `override_project_ci` applies, the project CI/CD configuration is ignored. However, other security policy configurations are not overridden.

When you use `override_project_ci` in a pipeline execution policy together with a scan execution policy, the CI/CD configurations are merged and both policies are applied to the resulting pipeline.

Alternatively, you can merge the policy's CI/CD configuration with the project's `.gitlab-ci.yml` instead of overriding it. To merge the configuration, use `include:project`. This strategy allows users to include the project CI/CD configuration in the pipeline execution policy configuration, enabling them to customize the policy jobs. For example, they can combine the policy and project CI/CD configuration into one YAML file to override the `before_script` configuration, or define required variables, such as `CS_IMAGE`, to define the required path to the container to scan. Here's a [short demo](https://youtu.be/W8tubneJ1X8) of this behavior.
The following diagram illustrates how variables defined at the project and policy levels are selected in the resulting pipeline: ```mermaid %%{init: { "fontFamily": "GitLab Sans" }}%% graph TB classDef yaml text-align:left ActualPolicyYAML["<pre> variables: MY_VAR: 'policy' policy-job: stage: test </pre>"] class ActualPolicyYAML yaml ActualProjectYAML["<pre> variables: MY_VAR: 'project' project-job: stage: test </pre>"] class ActualProjectYAML yaml PolicyVariablesYAML["<pre> variables: MY_VAR: 'policy' </pre>"] class PolicyVariablesYAML yaml ProjectVariablesYAML["<pre> variables: MY_VAR: 'project' </pre>"] class ProjectVariablesYAML yaml ResultingPolicyVariablesYAML["<pre> variables: MY_VAR: 'policy' </pre>"] class ResultingPolicyVariablesYAML yaml ResultingProjectVariablesYAML["<pre> variables: MY_VAR: 'project' </pre>"] class ResultingProjectVariablesYAML yaml PolicyCiYAML(Policy CI YAML) --> ActualPolicyYAML ProjectCiYAML(<code>.gitlab-ci.yml</code>) --> ActualProjectYAML subgraph "Policy Pipeline" subgraph "Test stage" subgraph "<code>policy-job</code>" PolicyVariablesYAML end end end subgraph "Project Pipeline" subgraph "Test stage" subgraph "<code>project-job</code>" ProjectVariablesYAML end end end ActualPolicyYAML -- "Used as source" --> PolicyVariablesYAML ActualProjectYAML -- "Used as source" --> ProjectVariablesYAML subgraph "Resulting Pipeline" subgraph "Test stage" subgraph "<code>policy-job</code> " ResultingPolicyVariablesYAML end subgraph "<code>project-job</code> " ResultingProjectVariablesYAML end end end PolicyVariablesYAML -- "Inject <code>policy-job</code> if Test Stage exists" --> ResultingPolicyVariablesYAML ProjectVariablesYAML -- "Basis of the resulting pipeline" --> ResultingProjectVariablesYAML ``` {{< alert type="note" >}} The workflow rules in the pipeline execution policy override the project's original CI/CD configuration. 
By defining workflow rules in the policy, you can set rules that are enforced across all linked projects, like preventing the use of branch pipelines. {{< /alert >}} ### Include a project's CI/CD configuration in the pipeline execution policy configuration When you use the `override_project_ci` strategy, the project configuration can be included into the pipeline execution policy configuration: ```yaml include: - project: $CI_PROJECT_PATH ref: $CI_COMMIT_SHA file: $CI_CONFIG_PATH rules: - exists: paths: - '$CI_CONFIG_PATH' project: '$CI_PROJECT_PATH' ref: '$CI_COMMIT_SHA' compliance_job: ... ``` ## CI/CD variables {{< alert type="warning" >}} Don't store sensitive information or credentials in variables because they are stored as part of the plaintext policy configuration in a Git repository. {{< /alert >}} Pipeline execution jobs are executed in isolation. Variables defined in another policy or in the project's `.gitlab-ci.yml` file are not available in the pipeline execution policy and cannot be overwritten from the outside, unless permitted by the [variables_override type](#variables_override-type) type. Variables can be shared with pipeline execution policies using group or project settings, which follow the standard [CI/CD variable precedence](../../../ci/variables/_index.md#cicd-variable-precedence) rules. However, the precedence rules are more complex when using a pipeline execution policy as they can vary depending on the pipeline execution policy strategy: - `inject_policy` strategy: If the variable is defined in the pipeline execution policy, the job always uses this value. If a variable is not defined in a pipeline execution policy, the job applies the value from the group or project settings. - `inject_ci` strategy: If the variable is defined in the pipeline execution policy, the job always uses this value. If a variable is not defined in a pipeline execution policy, the job applies the value from the group or project settings. 
- `override_project_ci` strategy: All jobs in the resulting pipeline are treated as policy jobs. Variables defined in the policy (including those in included files) take precedence over project and group variables. This means that variables from jobs in the CI/CD configuration of the included project take precedence over the variables defined in the project and group settings.

For more details on variables in pipeline execution policies, see [precedence of variables in pipeline execution policies](#precedence-of-variables-in-pipeline-execution-policies).

You can [define project or group variables in the UI](../../../ci/variables/_index.md#define-a-cicd-variable-in-the-ui).

### Precedence of variables in pipeline execution policies

When you use pipeline execution policies, especially with the `override_project_ci` strategy, the precedence of variable values defined in multiple places can differ from standard GitLab CI/CD pipelines. These are some important points to understand:

- When using `override_project_ci`, all jobs in the resulting pipeline are considered policy jobs, including those from the CI/CD configurations of included projects.
- Variables defined in a policy pipeline (for the entire instance or for a job) take precedence over variables defined in the project or group settings.
- This behavior applies to all jobs, including those included from the project's CI/CD configuration file (`.gitlab-ci.yml`).

#### Example

If a variable in a project's CI/CD configuration and a job variable defined in an included `.gitlab-ci.yml` file have the same name, the job variable takes precedence when using `override_project_ci`.
In the project's CI/CD settings, a `MY_VAR` variable is defined: - Key: `MY_VAR` - Value: `Project configuration variable value` In `.gitlab-ci.yml` of the included project, the same variable is defined: ```yaml project-job: variables: MY_VAR: "Project job variable value" script: - echo $MY_VAR # This will output "Project job variable value" ``` In this case, the job variable value `Project job variable value` takes precedence. ## Behavior with `[skip ci]` By default, to prevent a regular pipeline from triggering, users can push a commit to a protected branch with `[skip ci]` in the commit message. However, jobs defined with a pipeline execution policy are always triggered, as the policy ignores the `[skip ci]` directive. This prevents developers from skipping the execution of jobs defined in the policy, which ensures that critical security and compliance checks are always performed. For more flexible control over `[skip ci]` behavior, see the [`skip_ci` type](#skip_ci-type) section. ## Examples These examples demonstrate what you can achieve with pipeline execution policies. ### Pipeline execution policy You can use the following example in a `.gitlab/security-policies/policy.yml` file stored in a [security policy project](enforcement/security_policy_projects.md): ```yaml --- pipeline_execution_policy: - name: My pipeline execution policy description: Enforces CI/CD jobs enabled: true pipeline_config_strategy: override_project_ci content: include: - project: my-group/pipeline-execution-ci-project file: policy-ci.yml ref: main # optional policy_scope: projects: including: - id: 361 ``` ### Customize enforced jobs based on project variables You can customize enforced jobs, based on the presence of a project variable. In this example, the value of `CS_IMAGE` is defined in the policy as `alpine:latest`. However, if the project also defines the value of `PROJECT_CS_IMAGE`, that value is used instead. 
The CI/CD variable must be a predefined project variable, not defined in the project's `.gitlab-ci.yml` file.

```yaml
variables:
  CS_ANALYZER_IMAGE: "$CI_TEMPLATE_REGISTRY_HOST/security-products/container-scanning:8"
  CS_IMAGE: alpine:latest

policy::container-security:
  stage: .pipeline-policy-pre
  rules:
    - if: $PROJECT_CS_IMAGE
      variables:
        CS_IMAGE: $PROJECT_CS_IMAGE
    - when: always
  script:
    - echo "CS_ANALYZER_IMAGE:$CS_ANALYZER_IMAGE"
    - echo "CS_IMAGE:$CS_IMAGE"
```

### Customize enforced jobs using `.gitlab-ci.yml` and artifacts

Because policy pipelines run in isolation, pipeline execution policies cannot read variables from `.gitlab-ci.yml` directly. If you want to use the variables in `.gitlab-ci.yml` instead of defining them in the project's CI/CD settings, you can use artifacts to pass variables from the `.gitlab-ci.yml` configuration to the pipeline execution policy's pipeline.

```yaml
# .gitlab-ci.yml
build-job:
  stage: build
  script:
    - echo "BUILD_VARIABLE=value_from_build_job" >> build.env
  artifacts:
    reports:
      dotenv: build.env
```

```yaml
# pipeline execution policy CI/CD configuration
stages:
  - build
  - test

test-job:
  stage: test
  script:
    - echo "$BUILD_VARIABLE" # Prints "value_from_build_job"
```

### Customize security scanner's behavior with `before_script` in project configurations

To customize the behavior of a security job enforced by a policy in the project's `.gitlab-ci.yml`, you can override `before_script`. To do so, use the `override_project_ci` strategy in the policy and include the project's CI/CD configuration. Example pipeline execution policy configuration:

```yaml
# policy.yml
type: pipeline_execution_policy
name: Secret detection
description: >-
  This policy enforces secret detection and allows projects to override the
  behavior of the scanner.
enabled: true
pipeline_config_strategy: override_project_ci
content:
  include:
    - project: gitlab-org/pipeline-execution-policies/compliance-project
      file: secret-detection.yml
```

```yaml
# secret-detection.yml
include:
  - project: $CI_PROJECT_PATH
    ref: $CI_COMMIT_SHA
    file: $CI_CONFIG_PATH
  - template: Jobs/Secret-Detection.gitlab-ci.yml
```

In the project's `.gitlab-ci.yml`, you can define `before_script` for the scanner:

```yaml
include:
  - template: Jobs/Secret-Detection.gitlab-ci.yml

secret_detection:
  before_script:
    - echo "Before secret detection"
```

Using `override_project_ci` and including the project's configuration allows the project and policy YAML configurations to be merged.

### Configure resource-specific variable control

You can allow teams to set global variables that can override pipeline execution policy variables, while still permitting job-specific overrides. This allows teams to set appropriate defaults for security scans, but use appropriate resources for other jobs.

Include in your `resource-optimized-scans.yml`:

```yaml
variables:
  # Default resource settings for all jobs
  KUBERNETES_MEMORY_REQUEST: 4Gi
  KUBERNETES_MEMORY_LIMIT: 4Gi
  # Default values that teams can override via project variables
  SAST_KUBERNETES_MEMORY_REQUEST: 4Gi

sast:
  variables:
    SAST_EXCLUDED_ANALYZERS: 'spotbugs'
    KUBERNETES_MEMORY_REQUEST: $SAST_KUBERNETES_MEMORY_REQUEST
    KUBERNETES_MEMORY_LIMIT: $SAST_KUBERNETES_MEMORY_REQUEST
```

Include in your `policy.yml`:

```yaml
pipeline_execution_policy:
  - name: Resource-Optimized Security Policy
    description: Enforces security scans with efficient resource management
    enabled: true
    pipeline_config_strategy: inject_ci
    content:
      include:
        - project: security/policy-templates
          file: resource-optimized-scans.yml
          ref: main
    variables_override:
      allowed: false
      exceptions:
        # Allow scan-specific resource overrides
        - SAST_KUBERNETES_MEMORY_REQUEST
        - SECRET_DETECTION_KUBERNETES_MEMORY_REQUEST
        - CS_KUBERNETES_MEMORY_REQUEST
        # Allow necessary scan customization
        - CS_IMAGE
        - SAST_EXCLUDED_PATHS
```

This approach allows teams to set scan-specific resource variables (like `SAST_KUBERNETES_MEMORY_REQUEST`) using variable overrides without affecting all jobs in their pipeline, which provides better resource management for large projects. This example also shows the use of other common scan customization options that you can extend to developers. Make sure you document the available variables so your development teams can leverage them.

### Use group or project variables in a pipeline execution policy

You can use group or project variables in a pipeline execution policy.

With a project variable of `PROJECT_VAR="I'm a project"` the following pipeline execution policy job results in: `I'm a project`.

```yaml
pipeline execution policy job:
  stage: .pipeline-policy-pre
  script:
    - echo "$PROJECT_VAR"
```

### Enforce a variable's value by using a pipeline execution policy

The value of a variable defined in a pipeline execution policy overrides the value of a group or project variable with the same name. In this example, the project value of variable `PROJECT_VAR` is overwritten and the job results in: `I'm a pipeline execution policy`.

```yaml
variables:
  PROJECT_VAR: "I'm a pipeline execution policy"

pipeline execution policy job:
  stage: .pipeline-policy-pre
  script:
    - echo "$PROJECT_VAR"
```

### Example `policy.yml` with security policy scopes

In this example, the security policy's `policy_scope`:

- Includes any project with compliance frameworks with an ID of `9` applied to them.
- Excludes projects with an ID of `456`.
```yaml
pipeline_execution_policy:
  - name: Pipeline execution policy
    description: ''
    enabled: true
    pipeline_config_strategy: inject_policy
    content:
      include:
        - project: my-group/pipeline-execution-ci-project
          file: policy-ci.yml
    policy_scope:
      compliance_frameworks:
        - id: 9
      projects:
        excluding:
          - id: 456
```

### Configure `skip_ci` in a pipeline execution policy

In the following example, the pipeline execution policy is enforced, and [skipping CI](#skip_ci-type) is disallowed except for the user with ID `75`.

```yaml
pipeline_execution_policy:
  - name: My pipeline execution policy with ci.skip exceptions
    description: 'Enforces CI/CD jobs'
    enabled: true
    pipeline_config_strategy: inject_policy
    content:
      include:
        - project: group-a/project1
          file: README.md
    skip_ci:
      allowed: false
      allowlist:
        users:
          - id: 75
```

### Configure the `exists` condition

Use the `exists` rule to configure the pipeline execution policy to include the CI/CD configuration file from the project when a certain file exists. In the following example, the pipeline execution policy includes the CI/CD configuration from the project if a `Dockerfile` exists. You must set the `exists` rule to use `'$CI_PROJECT_PATH'` as the `project`, otherwise GitLab evaluates whether the file exists in the project that holds the security policy CI/CD configuration.

```yaml
include:
  - project: $CI_PROJECT_PATH
    ref: $CI_COMMIT_SHA
    file: $CI_CONFIG_PATH
    rules:
      - exists:
          paths:
            - 'Dockerfile'
          project: '$CI_PROJECT_PATH'
```

To use this approach, the group or project must use the `override_project_ci` strategy.

### Enforce a container scanning `component` using a pipeline execution policy

You can use security scan components to improve the handling and enforcement of versioning.
```yaml include: - component: gitlab.com/components/container-scanning/container-scanning@main inputs: cs_image: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA container_scanning: # override component with additional configuration variables: CS_REGISTRY_USER: $CI_REGISTRY_USER CS_REGISTRY_PASSWORD: $CI_REGISTRY_PASSWORD SECURE_LOG_LEVEL: debug # add for verbose debugging of the container scanner before_script: - echo $CS_IMAGE # optionally add a before_script for additional debugging ```
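A pipeline execution policy could then enforce this CI/CD configuration from a security policy project. The following sketch assumes the configuration above is saved in the policy project; the project path and file name are hypothetical:

```yaml
pipeline_execution_policy:
  - name: Enforce container scanning component
    description: Enforces the container scanning CI/CD component
    enabled: true
    pipeline_config_strategy: inject_policy
    content:
      include:
        - project: my-group/my-security-policy-project # hypothetical path
          file: container-scanning-component.yml       # hypothetical file
```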
---
stage: Security Risk Management
group: Security Policies
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Pipeline execution policies
---

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/13266) in GitLab 17.2 [with a flag](../../../administration/feature_flags/_index.md) named `pipeline_execution_policy_type`. Enabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/454278) in GitLab 17.3. Feature flag `pipeline_execution_policy_type` removed.

{{< /history >}}

Use pipeline execution policies to manage and enforce CI/CD jobs for multiple projects with a single configuration.

- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For a video walkthrough, see [Security Policies: Pipeline Execution Policy Type](https://www.youtube.com/watch?v=QQAOpkZ__pA).

## Schema

{{< history >}}

- [Enabled](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/159858) the `suffix` field in GitLab 17.4.
- [Changed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/165096) pipeline execution so later stages wait for the `.pipeline-policy-pre` stage to complete in GitLab 17.7.

{{< /history >}}

The YAML file with pipeline execution policies consists of an array of objects matching pipeline execution policy schema nested under the `pipeline_execution_policy` key. You can configure a maximum of five policies under the `pipeline_execution_policy` key per security policy project. Any other policies configured after the first five are not applied.
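For orientation, a minimal `.gitlab/security-policies/policy.yml` file with a single policy in the array might look like the following sketch. The project path and file name are placeholders:

```yaml
pipeline_execution_policy:
  - name: My pipeline execution policy
    description: Enforces CI/CD jobs
    enabled: true
    pipeline_config_strategy: inject_policy
    content:
      include:
        - project: my-group/my-security-policy-project # placeholder path
          file: policy-ci.yml                          # placeholder file
```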
When you save a new policy, GitLab validates its contents against [this JSON schema](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/validators/json_schemas/security_orchestration_policy.json). If you're not familiar with how to read [JSON schemas](https://json-schema.org/), the following sections and tables provide an alternative.

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `pipeline_execution_policy` | `array` of pipeline execution policy | true | List of pipeline execution policies (maximum five). |

## `pipeline_execution_policy` schema

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `name` | `string` | true | Name of the policy. Maximum of 255 characters. |
| `description` (optional) | `string` | true | Description of the policy. |
| `enabled` | `boolean` | true | Flag to enable (`true`) or disable (`false`) the policy. |
| `content` | `object` of [`content`](#content-type) | true | Reference to the CI/CD configuration to inject into project pipelines. |
| `pipeline_config_strategy` | `string` | false | Can be `inject_policy`, `inject_ci` (deprecated), or `override_project_ci`. See [pipeline strategies](#pipeline-configuration-strategies) for more information. |
| `policy_scope` | `object` of [`policy_scope`](_index.md#configure-the-policy-scope) | false | Scopes the policy based on projects, groups, or compliance framework labels you specify. |
| `suffix` | `string` | false | Can either be `on_conflict` (default), or `never`. Defines the behavior for handling job naming conflicts. `on_conflict` applies a unique suffix to the job names for jobs that would break the uniqueness. `never` causes the pipeline to fail if the job names across the project and all applicable policies are not unique. |
| `skip_ci` | `object` of [`skip_ci`](pipeline_execution_policies.md#skip_ci-type) | false | Defines whether users can apply the `[skip ci]` directive. By default, the use of `[skip ci]` is ignored and, as a result, pipelines with pipeline execution policies cannot be skipped. |
| `variables_override` | `object` of [`variables_override`](pipeline_execution_policies.md#variables_override-type) | false | Controls whether users can override the behavior of policy variables. By default, the policy variables are enforced with the highest precedence and users cannot override them. |

Note the following:

- Users that trigger a pipeline must have at least read access to the pipeline execution file specified in the pipeline execution policy, otherwise the pipelines do not start.
- If the pipeline execution file gets deleted or renamed, the pipelines in projects with the policy enforced might stop working.
- Pipeline execution policy jobs can be assigned to one of the two reserved stages:
  - `.pipeline-policy-pre` at the beginning of the pipeline, before the `.pre` stage.
  - `.pipeline-policy-post` at the very end of the pipeline, after the `.post` stage.
- Injecting jobs in any of the reserved stages is guaranteed to always work. Execution policy jobs can also be assigned to any standard (build, test, deploy) or user-declared stages. However, in this case, the jobs may be ignored depending on the project pipeline configuration.
- It is not possible to assign jobs to reserved stages outside of a pipeline execution policy.
- Choose unique job names for pipeline execution policies. Some CI/CD configurations are based on job names, which can lead to unwanted results if a job name exists multiple times in the same pipeline. For example, the `needs` keyword makes one job dependent on another. If there are multiple jobs with the name `example`, a job that `needs` the `example` job name depends on only one of the `example` job instances at random.
- Pipeline execution policies remain in effect even if the project lacks a CI/CD configuration file.
- The order of the policies matters for the applied suffix.
- If any policy applied to a given project has `suffix: never`, the pipeline fails if another job with the same name is already present in the pipeline. - Pipeline execution policies are enforced on all branches and pipeline sources. You can use [workflow rules](../../../ci/yaml/workflow.md) to control when pipeline execution policies are enforced. ### `.pipeline-policy-pre` stage Jobs in the `.pipeline-policy-pre` stage always execute. This stage is designed for security and compliance use cases. Jobs in the pipeline do not begin until the `.pipeline-policy-pre` stage completes. If you don't require this behavior for your workflow, you can use the `.pre` stage or a custom stage instead. #### Ensure that `.pipeline-policy-pre` succeeds {{< details >}} - Status: Experiment {{< /details >}} {{< alert type="note" >}} This feature is experimental and might change in future releases. Test it thoroughly in non-production environments only, as it might be unstable in production. {{< /alert >}} To ensure that `.pipeline-policy-pre` completes and succeeds, enable the `ensure_pipeline_policy_pre_succeeds` experiment in the security policy configuration. The `.gitlab/security-policies/policy.yml` YAML configuration file is stored in your security policy project: ```yaml experiments: ensure_pipeline_policy_pre_succeeds: enabled: true ``` If the `.pipeline-policy-pre` stage fails or all jobs in the stage are skipped, all jobs in later stages are skipped, including: - Jobs with `needs: []` - Jobs with `when: always` When multiple pipeline execution policies apply, the experiment takes effect if enabled in any of them, ensuring that `.pipeline-policy-pre` must succeed. ### Job naming best practice {{< history >}} - Naming conflict handling [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/473189) in GitLab 17.4. {{< /history >}} There is no visible indicator that a job was generated by a security policy. 
To make it easier to identify jobs that were created by policies and avoid job name collisions, add a unique prefix or suffix to the job name.

Examples:

- Use: `policy1:deployments:sast`. This name is likely unique across all other policies and projects.
- Don't use: `sast`. This name is likely to be duplicated in other policies and projects.

Pipeline execution policies handle naming conflicts depending on the `suffix` attribute. If there are multiple jobs with the same name:

- Using `on_conflict` (default), a suffix is added to a job if its name conflicts with another job in the pipeline.
- Using `never`, no suffix is added in the event of a conflict and the pipeline fails.

The suffix is added based on the order in which the jobs are merged onto the main pipeline.

The order is as follows:

1. Project pipeline jobs
1. Project policy jobs (if applicable)
1. Group policy jobs (if applicable, ordered by hierarchy; the top-level group is applied last)

The applied suffix has the following format: `:policy-<security-policy-project-id>-<policy-index>`.

Example of the resulting job: `sast:policy-123456-0`.

If multiple policies in one security policy project define the same job name, the numerical suffix corresponds to the index of the conflicting policy.

Example of the resulting jobs:

- `sast:policy-123456-0`
- `sast:policy-123456-1`

### Job stage best practice

Jobs defined in a pipeline execution policy can use any [stage](../../../ci/yaml/_index.md#stage) defined in the project's CI/CD configuration, including the reserved stages `.pipeline-policy-pre` and `.pipeline-policy-post`.

{{< alert type="note" >}}

If your policy contains jobs only in the `.pre` and `.post` stages, the policy's pipeline is evaluated as `empty`. It is not merged with the project's pipeline. To use the `.pre` and `.post` stages in a pipeline execution policy, you must include at least one other job that runs in a different stage. For example: `.pipeline-policy-pre`.
{{< /alert >}}

When you use the `inject_policy` [pipeline strategy](#pipeline-configuration-strategies), if a target project does not contain its own `.gitlab-ci.yml` file, all policy stages are injected into the pipeline.

When you use the (deprecated) `inject_ci` [pipeline strategy](#pipeline-configuration-strategies), if a target project does not contain its own `.gitlab-ci.yml` file, then the only stages available are the default pipeline stages and the reserved stages.

When you enforce pipeline execution policies over projects with CI/CD configurations that you do not have permissions to modify, you should define jobs in the `.pipeline-policy-pre` and `.pipeline-policy-post` stages. These stages are always available, regardless of any project's CI/CD configuration.

When you use the `override_project_ci` [pipeline strategy](#pipeline-configuration-strategies) with multiple pipeline execution policies and with custom stages, the stages must be defined in the same relative order to be compatible with each other:

Valid configuration example:

```yaml
- override-policy-1 stages: [build, test, policy-test, deploy]
- override-policy-2 stages: [test, deploy]
```

Invalid configuration example:

```yaml
- override-policy-1 stages: [build, test, policy-test, deploy]
- override-policy-2 stages: [deploy, test]
```

The pipeline fails if one or more `override_project_ci` policies have an invalid `stages` configuration.

### `content` type

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `project` | `string` | true | The full GitLab project path to a project on the same GitLab instance. |
| `file` | `string` | true | A full file path relative to the root directory (/). The YAML files must have the `.yml` or `.yaml` extension. |
| `ref` | `string` | false | The ref to retrieve the file from. Defaults to the HEAD of the project when not specified.
|

Use the `content` type in a policy to reference a CI/CD configuration stored in another repository. This allows you to reuse the same CI/CD configuration across multiple policies, reducing the overhead of maintaining these configurations. For example, if you have a custom secret detection CI/CD configuration you want to enforce in policy A and policy B, you can create a single YAML configuration file and reference the configuration in both policies.

Prerequisites:

- In projects that enforce pipeline execution policies, users must have at least read-only access to the project that contains the CI/CD configuration to trigger the pipeline.

In GitLab 17.4 and later, you can grant the required read-only access for the CI/CD configuration file specified in a security policy project using the `content` type. To do so, enable the setting **Pipeline execution policies** in the general settings of the security policy project. Enabling this setting grants the user who triggered the pipeline access to read the CI/CD configuration file enforced by the pipeline execution policy. This setting does not grant the user access to any other parts of the project where the configuration file is stored. For more details, see [Grant access automatically](#grant-access-automatically).

### `skip_ci` type

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/173480) in GitLab 17.7.

{{< /history >}}

Pipeline execution policies offer control over who can use the `[skip ci]` directive. You can specify certain users or service accounts that are allowed to use `[skip ci]` while still ensuring critical security and compliance checks are performed.

Use the `skip_ci` keyword to specify whether users are allowed to apply the `skip_ci` directive to skip the pipelines.
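For example, a pipeline execution policy entry can block `[skip ci]` for all users except an approved service account. This is a minimal sketch based on the schema in this section; the project path and the user ID `123` are placeholders:

```yaml
pipeline_execution_policy:
  - name: Enforced scans
    description: 'Only the release service account can skip CI'
    enabled: true
    pipeline_config_strategy: inject_policy
    content:
      include:
        # Placeholder paths; point these at your security policy project.
        - project: my-group/security-policy-project
          file: policy-ci.yml
    skip_ci:
      # Block the [skip ci] directive for everyone...
      allowed: false
      allowlist:
        users:
          # ...except this user ID (placeholder).
          - id: 123
```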
When the keyword is not specified, the `skip_ci` directive is ignored, preventing all users from bypassing the pipeline execution policies. | Field | Type | Possible values | Description | |-------------------------|----------|--------------------------|-------------| | `allowed` | `boolean` | `true`, `false` | Flag to allow (`true`) or prevent (`false`) the use of the `skip-ci` directive for pipelines with enforced pipeline execution policies. | | `allowlist` | `object` | `users` | Specify users who are always allowed to use `skip-ci` directive, regardless of the `allowed` flag. Use `users:` followed by an array of objects with `id` keys representing user IDs. | ### `variables_override` type {{< history >}} - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/16430) in GitLab 18.1. {{< /history >}} | Field | Type | Possible values | Description | |-------------------------|----------|--------------------------|-------------| | `allowed` | `boolean` | `true`, `false` | When `true`, other configurations can override policy variables. When `false`, other configurations cannot override policy variables. | | `exceptions` | `array` | `array` of `string` | Variables that are exceptions to the global rule. When `allowed: false`, the `exceptions` are an allowlist. When `allowed: true`, the `exceptions` are a denylist. | This option controls how user-defined variables are handled in pipelines with policies enforced. This feature allows you to: - Deny user-defined variables by default (recommended), which provides stronger security, but requires that you add all of the variables that should be customizable to the `exceptions` allowlist. - Allow user-defined variables by default, which provides more flexibility but lower security, as you must add variables that can affect policy enforcement to the `exceptions` denylist. - Define exceptions to the `allowed` global rule. 
User-defined variables can affect the behavior of any policy jobs in the pipeline and can come from various sources: - [Pipeline variables](../../../ci/variables/_index.md#use-pipeline-variables). - [Project variables](../../../ci/variables/_index.md#for-a-project). - [Group variables](../../../ci/variables/_index.md#for-a-group). - [Instance variables](../../../ci/variables/_index.md#for-an-instance). When the `variables_override` option is not specified, the "highest precedence" behavior is maintained. For more information about this behavior, see [precedence of variables in pipeline execution policies](#precedence-of-variables-in-pipeline-execution-policies). When the pipeline execution policy controls variable precedence, the job logs include the configured `variables_override` options and the policy name. To view these logs, `gitlab-runner` must be updated to version 18.1 or later. #### Example `variables_override` configuration Add the `variables_override` option to your pipeline execution policy configuration: ```yaml pipeline_execution_policy: - name: Security Scans description: 'Enforce security scanning' enabled: true pipeline_config_strategy: inject_policy content: include: - project: gitlab-org/security-policies file: security-scans.yml variables_override: allowed: false exceptions: - CS_IMAGE - SAST_EXCLUDED_ANALYZERS ``` ##### Enforcing security scans while allowing container customization (allowlist approach) To enforce security scans but allow project teams to specify their own container image: ```yaml variables_override: allowed: false exceptions: - CS_IMAGE ``` This configuration blocks all user-defined variables except `CS_IMAGE`, ensuring that security scans cannot be disabled, while allowing teams to customize the container image. 
##### Prevent specific security variable overrides (denylist approach) To allow most variables, but prevent disabling security scans: ```yaml variables_override: allowed: true exceptions: - SECRET_DETECTION_DISABLED - SAST_DISABLED - DEPENDENCY_SCANNING_DISABLED - DAST_DISABLED - CONTAINER_SCANNING_DISABLED ``` This configuration allows all user-defined variables except those that could disable security scans. {{< alert type="warning" >}} While this configuration can provide flexibility, it is discouraged due to the security implications. Any variable that is not explicitly listed in the `exceptions` can be injected by the users. As a result, the policy configuration is not as well protected as when using the `allowlist` approach. {{< /alert >}} ### `policy scope` schema To customize policy enforcement, you can define a policy's scope to either include, or exclude, specified projects, groups, or compliance framework labels. For more details, see [Scope](_index.md#configure-the-policy-scope). ## Manage access to the CI/CD configuration When you enforce pipeline execution policies on a project, users that trigger pipelines must have at least read-only access to the project that contains the policy CI/CD configuration. You can grant access to the project manually or automatically. ### Grant access manually To allow users or groups to run pipelines with enforced pipeline execution policies, you can invite them to the project that contains the policy CI/CD configuration. ### Grant access automatically You can automatically grant access to the policy CI/CD configuration for all users who run pipelines in projects with enforced pipeline execution policies. Prerequisites: - Make sure the pipeline execution policy CI/CD configuration is stored in a security policy project. - In the general settings of the security policy project, enable the **Pipeline execution policies** setting. 
If you don't yet have a security policy project and you want to create the first pipeline execution policy, create an empty project and link it as a security policy project. To link the project: 1. In the group or project where you want to enforce the policy, select **Secure** > **Policies** > **Edit policy project**. 1. Select the security policy project. The project becomes a security policy project, and the setting becomes available. {{< alert type="note" >}} To create downstream pipelines using `$CI_JOB_TOKEN`, you need to make sure that projects and groups are authorized to request the security policy project. In the security policy project, go to **Settings > CI/CD > Job token permissions** and add the authorized groups and projects to the allowlist. If you don't see the **CI/CD** settings, go to **Settings > General > Visibility, project features, permissions** and enable **CI/CD**. {{< /alert >}} #### Configuration 1. In the policy project, select **Settings** > **General** > **Visibility, project features, permissions**. 1. Enable the setting **Pipeline execution policies: Grant access to the CI/CD configurations for projects linked to this security policy project as the source for security policies**. 1. In the policy project, create a file for the policy CI/CD configuration. ```yaml # policy-ci.yml policy-job: script: ... ``` 1. In the group or project where you want to enforce the policy, create a pipeline execution policy and specify the CI/CD configuration file for the security policy project. ```yaml pipeline_execution_policy: - name: My pipeline execution policy description: Enforces CI/CD jobs enabled: true pipeline_config_strategy: inject_policy content: include: - project: my-group/my-security-policy-project file: policy-ci.yml ``` ## Pipeline configuration strategies Pipeline configuration strategy defines the method for merging the policy configuration with the project pipeline. 
Pipeline execution policies execute the jobs defined in the `.gitlab-ci.yml` file in isolated pipelines, which are merged into the pipelines of the target projects.

### `inject_policy` type

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/475152) in GitLab 17.9.

{{< /history >}}

This strategy adds custom CI/CD configurations into the existing project pipeline without completely replacing the project's original CI/CD configuration. It is suitable when you want to enhance or extend the current pipeline with additional steps, such as adding new security scans, compliance checks, or custom scripts.

Unlike the deprecated `inject_ci` strategy, `inject_policy` allows you to inject custom policy stages into your pipeline, giving you more granular control over where policy rules are applied in your CI/CD workflow.

If you have multiple policies enabled, this strategy injects all jobs from each policy.

When you use this strategy, a project CI/CD configuration cannot override any behavior defined in the policy pipelines because each pipeline has an isolated YAML configuration.

For projects without a `.gitlab-ci.yml` file, this strategy creates a `.gitlab-ci.yml` file implicitly. The executed pipeline contains only the jobs defined in the pipeline execution policy.

{{< alert type="note" >}}

When a pipeline execution policy uses workflow rules that prevent policy jobs from running, the only jobs that run are the project's CI/CD jobs. If the project uses workflow rules that prevent project CI/CD jobs from running, the only jobs that run are the pipeline execution policy jobs.

{{< /alert >}}

#### Stages injection

The stages for the policy pipeline follow the usual CI/CD configuration. You define the order in which a custom policy stage is injected into the project pipeline by providing the stages before and after the custom stages.
The project and policy pipeline stages are represented as a Directed Acyclic Graph (DAG), where nodes are stages and edges represent dependencies. When you combine pipelines, the individual DAGs are merged into a single, larger DAG. Afterward, a topological sorting is performed, which determines the order in which stages from all pipelines should execute. This sorting ensures that all dependencies are respected in the final order. If there are conflicting dependencies, the pipeline fails to run. To fix the dependencies, ensure that stages used across the project and policies are aligned. If a stage isn't explicitly defined in the policy pipeline configuration, the pipeline uses the default stages `stages: [build, test, deploy]`. If these stages are included, but listed in a different order, the pipeline fails with a `Cyclic dependencies detected when enforcing policies` error. The following examples demonstrate this behavior. All examples assume the following project CI/CD configuration: ```yaml # .gitlab-ci.yml stages: [build, test, deploy] project-build-job: stage: build script: ... project-test-job: stage: test script: ... project-deploy-job: stage: deploy script: ... ``` ##### Example 1 ```yaml # policy-ci.yml stages: [test, policy-stage, deploy] policy-job: stage: policy-stage script: ... ``` In this example, the `policy-stage` stage: - Must be injected after `test` stage, if present. - Must be injected before `deploy` stage, if present. Result: The pipeline contains the following stages: `[build, test, policy-stage, deploy]`. Special cases: - If the `.gitlab-ci.yml` specified the stages as `[build, deploy, test]`, the pipeline would fail with the error `Cyclic dependencies detected when enforcing policies` because the constraints cannot be satisfied. To fix the failure, adjust the project configuration to align the stages with the policies. 
- If the `.gitlab-ci.yml` specified stages as `[build]`, the resulting pipeline has the following stages: `[build, policy-stage]`. ##### Example 2 ```yaml # policy-ci.yml stages: [policy-stage, deploy] policy-job: stage: policy-stage script: ... ``` In this example, the `policy-stage` stage: - Must be injected before `deploy` stage, if present. Result: The pipeline contains the following stages: `[build, test, policy-stage, deploy]`. Special cases: - If the `.gitlab-ci.yml` specified the stages as `[build, deploy, test]`, the resulting pipeline stages would be: `[build, policy-stage, deploy, test]`. - If there is no `deploy` stage in the project pipeline, the `policy-stage` stage is injected at the end of the pipeline, just before `.pipeline-policy-post`. ##### Example 3 ```yaml # policy-ci.yml stages: [test, policy-stage] policy-job: stage: policy-stage script: ... ``` In this example, the `policy-stage` stage: - Must be injected after `test` stage, if present. Result: The pipeline contains the following stages: `[build, test, deploy, policy-stage]`. Special cases: - If there is no `test` stage in the project pipeline, the `policy-stage` stage is injected at the end of the pipeline, just before `.pipeline-policy-post`. ##### Example 4 ```yaml # policy-ci.yml stages: [policy-stage] policy-job: stage: policy-stage script: ... ``` In this example, the `policy-stage` stage has no constraints. Result: The pipeline contains the following stages: `[build, test, deploy, policy-stage]`. ##### Example 5 ```yaml # policy-ci.yml stages: [check, lint, test, policy-stage, deploy, verify, publish] policy-job: stage: policy-stage script: ... ``` In this example, the `policy-stage` stage: - Must be injected after the stages `check`, `lint`, `test`, if present. - Must be injected before the stages `deploy`, `verify`, `publish`, if present. Result: The pipeline contains the following stages: `[build, test, policy-stage, deploy]`. 
Special cases:

- If the `.gitlab-ci.yml` specified stages as `[check, publish]`, the resulting pipeline has the following stages: `[check, policy-stage, publish]`.

### `inject_ci` (deprecated)

{{< alert type="warning" >}}

This feature was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/475152) in GitLab 17.9. Use [`inject_policy`](#inject_policy-type) instead as it supports the enforcement of custom policy stages.

{{< /alert >}}

This strategy adds custom CI/CD configurations into the existing project pipeline without completely replacing the project's original CI/CD configuration. It is suitable when you want to enhance or extend the current pipeline with additional steps, such as adding new security scans, compliance checks, or custom scripts.

If you have multiple policies enabled, all jobs are injected additively.

When you use this strategy, a project CI/CD configuration cannot override any behavior defined in the policy pipelines because each pipeline has an isolated YAML configuration.

For projects without a `.gitlab-ci.yml` file, this strategy creates a `.gitlab-ci.yml` file implicitly. This allows a pipeline containing only the jobs defined in the pipeline execution policy to execute.

{{< alert type="note" >}}

When a pipeline execution policy uses workflow rules that prevent policy jobs from running, the only jobs that run are the project's CI/CD jobs. If the project uses workflow rules that prevent project CI/CD jobs from running, the only jobs that run are the pipeline execution policy jobs.

{{< /alert >}}

### `override_project_ci`

{{< history >}}

- Updated handling of workflow rules [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/175088) in GitLab 17.8 [with a flag](../../../administration/feature_flags/_index.md) named `policies_always_override_project_ci`. Enabled by default.
- Updated [handling of `override_project_ci`](https://gitlab.com/gitlab-org/gitlab/-/issues/504434) to allow scan execution policies to run together with pipeline execution policies, in GitLab 17.9.
- Updated handling of workflow rules [generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/512877) in GitLab 17.10. Feature flag `policies_always_override_project_ci` removed.

{{< /history >}}

This strategy replaces the project's existing CI/CD configuration with a new one defined by the pipeline execution policy. This strategy is ideal when the entire pipeline needs to be standardized or replaced, like when you want to enforce organization-wide CI/CD standards or compliance requirements in a highly regulated industry. To override the pipeline configuration, define the CI/CD jobs and do not use `include:project`.

The strategy takes precedence over other policies that use the `inject_ci` or `inject_policy` strategy. If a policy with `override_project_ci` applies, the project CI/CD configuration is ignored. However, other security policy configurations are not overridden.

When you use `override_project_ci` in a pipeline execution policy together with a scan execution policy, the CI/CD configurations are merged and both policies are applied to the resulting pipeline.

Alternatively, you can merge the project's `.gitlab-ci.yml` into the policy's CI/CD configuration instead of overriding it. To merge the configuration, use `include:project`. This strategy allows users to include the project CI/CD configuration in the pipeline execution policy configuration, enabling the users to customize the policy jobs. For example, they can combine the policy and project CI/CD configuration into one YAML file to override the `before_script` configuration or define required variables, such as `CS_IMAGE`, to define the required path to the container to scan. Here's a [short demo](https://youtu.be/W8tubneJ1X8) of this behavior.
The following diagram illustrates how variables defined at the project and policy levels are selected in the resulting pipeline: ```mermaid %%{init: { "fontFamily": "GitLab Sans" }}%% graph TB classDef yaml text-align:left ActualPolicyYAML["<pre> variables: MY_VAR: 'policy' policy-job: stage: test </pre>"] class ActualPolicyYAML yaml ActualProjectYAML["<pre> variables: MY_VAR: 'project' project-job: stage: test </pre>"] class ActualProjectYAML yaml PolicyVariablesYAML["<pre> variables: MY_VAR: 'policy' </pre>"] class PolicyVariablesYAML yaml ProjectVariablesYAML["<pre> variables: MY_VAR: 'project' </pre>"] class ProjectVariablesYAML yaml ResultingPolicyVariablesYAML["<pre> variables: MY_VAR: 'policy' </pre>"] class ResultingPolicyVariablesYAML yaml ResultingProjectVariablesYAML["<pre> variables: MY_VAR: 'project' </pre>"] class ResultingProjectVariablesYAML yaml PolicyCiYAML(Policy CI YAML) --> ActualPolicyYAML ProjectCiYAML(<code>.gitlab-ci.yml</code>) --> ActualProjectYAML subgraph "Policy Pipeline" subgraph "Test stage" subgraph "<code>policy-job</code>" PolicyVariablesYAML end end end subgraph "Project Pipeline" subgraph "Test stage" subgraph "<code>project-job</code>" ProjectVariablesYAML end end end ActualPolicyYAML -- "Used as source" --> PolicyVariablesYAML ActualProjectYAML -- "Used as source" --> ProjectVariablesYAML subgraph "Resulting Pipeline" subgraph "Test stage" subgraph "<code>policy-job</code> " ResultingPolicyVariablesYAML end subgraph "<code>project-job</code> " ResultingProjectVariablesYAML end end end PolicyVariablesYAML -- "Inject <code>policy-job</code> if Test Stage exists" --> ResultingPolicyVariablesYAML ProjectVariablesYAML -- "Basis of the resulting pipeline" --> ResultingProjectVariablesYAML ``` {{< alert type="note" >}} The workflow rules in the pipeline execution policy override the project's original CI/CD configuration. 
By defining workflow rules in the policy, you can set rules that are enforced across all linked projects, like preventing the use of branch pipelines.

{{< /alert >}}

### Include a project's CI/CD configuration in the pipeline execution policy configuration

When you use the `override_project_ci` strategy, the project configuration can be included in the pipeline execution policy configuration:

```yaml
include:
  - project: $CI_PROJECT_PATH
    ref: $CI_COMMIT_SHA
    file: $CI_CONFIG_PATH
    rules:
      - exists:
          paths:
            - '$CI_CONFIG_PATH'
          project: '$CI_PROJECT_PATH'
          ref: '$CI_COMMIT_SHA'

compliance_job:
  ...
```

## CI/CD variables

{{< alert type="warning" >}}

Don't store sensitive information or credentials in variables because they are stored as part of the plaintext policy configuration in a Git repository.

{{< /alert >}}

Pipeline execution jobs are executed in isolation. Variables defined in another policy or in the project's `.gitlab-ci.yml` file are not available in the pipeline execution policy and cannot be overwritten from the outside, unless permitted by the [`variables_override` type](#variables_override-type).

Variables can be shared with pipeline execution policies using group or project settings, which follow the standard [CI/CD variable precedence](../../../ci/variables/_index.md#cicd-variable-precedence) rules. However, the precedence rules are more complex when using a pipeline execution policy as they can vary depending on the pipeline execution policy strategy:

- `inject_policy` strategy: If the variable is defined in the pipeline execution policy, the job always uses this value. If a variable is not defined in a pipeline execution policy, the job applies the value from the group or project settings.
- `inject_ci` strategy: If the variable is defined in the pipeline execution policy, the job always uses this value. If a variable is not defined in a pipeline execution policy, the job applies the value from the group or project settings.
- `override_project_ci` strategy: All jobs in the resulting pipeline are treated as policy jobs. Variables defined in the policy (including those in included files) take precedence over project and group variables. This means that variables from jobs in the CI/CD configuration of the included project take precedence over the variables defined in the project and group settings.

For more details on variables in pipeline execution policies, see [precedence of variables in pipeline execution policies](#precedence-of-variables-in-pipeline-execution-policies).

You can [define project or group variables in the UI](../../../ci/variables/_index.md#define-a-cicd-variable-in-the-ui).

### Precedence of variables in pipeline execution policies

When you use pipeline execution policies, especially with the `override_project_ci` strategy, the precedence of variable values defined in multiple places can differ from standard GitLab CI/CD pipelines. These are some important points to understand:

- When using `override_project_ci`, all jobs in the resulting pipeline are considered policy jobs, including those from the CI/CD configurations of included projects.
- Variables defined in a policy pipeline (for the entire instance or for a job) take precedence over variables defined in the project or group settings.
- This behavior applies to all jobs, including those included from the project's CI/CD configuration file (`.gitlab-ci.yml`).

#### Example

If a variable in a project's CI/CD configuration and a job variable defined in an included `.gitlab-ci.yml` file have the same name, the job variable takes precedence when using `override_project_ci`.
In the project's CI/CD settings, a `MY_VAR` variable is defined: - Key: `MY_VAR` - Value: `Project configuration variable value` In `.gitlab-ci.yml` of the included project, the same variable is defined: ```yaml project-job: variables: MY_VAR: "Project job variable value" script: - echo $MY_VAR # This will output "Project job variable value" ``` In this case, the job variable value `Project job variable value` takes precedence. ## Behavior with `[skip ci]` By default, to prevent a regular pipeline from triggering, users can push a commit to a protected branch with `[skip ci]` in the commit message. However, jobs defined with a pipeline execution policy are always triggered, as the policy ignores the `[skip ci]` directive. This prevents developers from skipping the execution of jobs defined in the policy, which ensures that critical security and compliance checks are always performed. For more flexible control over `[skip ci]` behavior, see the [`skip_ci` type](#skip_ci-type) section. ## Examples These examples demonstrate what you can achieve with pipeline execution policies. ### Pipeline execution policy You can use the following example in a `.gitlab/security-policies/policy.yml` file stored in a [security policy project](enforcement/security_policy_projects.md): ```yaml --- pipeline_execution_policy: - name: My pipeline execution policy description: Enforces CI/CD jobs enabled: true pipeline_config_strategy: override_project_ci content: include: - project: my-group/pipeline-execution-ci-project file: policy-ci.yml ref: main # optional policy_scope: projects: including: - id: 361 ``` ### Customize enforced jobs based on project variables You can customize enforced jobs, based on the presence of a project variable. In this example, the value of `CS_IMAGE` is defined in the policy as `alpine:latest`. However, if the project also defines the value of `PROJECT_CS_IMAGE`, that value is used instead. 
The CI/CD variable must be a predefined project variable, not defined in the project's `.gitlab-ci.yml` file.

```yaml
variables:
  CS_ANALYZER_IMAGE: "$CI_TEMPLATE_REGISTRY_HOST/security-products/container-scanning:8"
  CS_IMAGE: alpine:latest

policy::container-security:
  stage: .pipeline-policy-pre
  rules:
    - if: $PROJECT_CS_IMAGE
      variables:
        CS_IMAGE: $PROJECT_CS_IMAGE
    - when: always
  script:
    - echo "CS_ANALYZER_IMAGE:$CS_ANALYZER_IMAGE"
    - echo "CS_IMAGE:$CS_IMAGE"
```

### Customize enforced jobs using `.gitlab-ci.yml` and artifacts

Because policy pipelines run in isolation, pipeline execution policies cannot read variables from `.gitlab-ci.yml` directly. If you want to use the variables in `.gitlab-ci.yml` instead of defining them in the project's CI/CD settings, you can use artifacts to pass variables from the `.gitlab-ci.yml` configuration to the pipeline execution policy's pipeline.

```yaml
# .gitlab-ci.yml
build-job:
  stage: build
  script:
    - echo "BUILD_VARIABLE=value_from_build_job" >> build.env
  artifacts:
    reports:
      dotenv: build.env
```

```yaml
# pipeline execution policy configuration
stages:
  - build
  - test

test-job:
  stage: test
  script:
    - echo "$BUILD_VARIABLE" # Prints "value_from_build_job"
```

### Customize security scanner's behavior with `before_script` in project configurations

To customize the behavior of a security job enforced by a policy in the project's `.gitlab-ci.yml`, you can override `before_script`. To do so, use the `override_project_ci` strategy in the policy and include the project's CI/CD configuration. Example pipeline execution policy configuration:

```yaml
# policy.yml
type: pipeline_execution_policy
name: Secret detection
description: >-
  This policy enforces secret detection and allows projects to override the behavior of the scanner.
enabled: true
pipeline_config_strategy: override_project_ci
content:
  include:
    - project: gitlab-org/pipeline-execution-policies/compliance-project
      file: secret-detection.yml
```

```yaml
# secret-detection.yml
include:
  - project: $CI_PROJECT_PATH
    ref: $CI_COMMIT_SHA
    file: $CI_CONFIG_PATH
  - template: Jobs/Secret-Detection.gitlab-ci.yml
```

In the project's `.gitlab-ci.yml`, you can define `before_script` for the scanner:

```yaml
include:
  - template: Jobs/Secret-Detection.gitlab-ci.yml

secret_detection:
  before_script:
    - echo "Before secret detection"
```

By using `override_project_ci` and including the project's configuration, you allow the two YAML configurations to be merged.

### Configure resource-specific variable control

You can allow teams to set global variables that can override pipeline execution policy variables, while still permitting job-specific overrides. This allows teams to set appropriate defaults for security scans, but use appropriate resources for other jobs.

Include in your `resource-optimized-scans.yml`:

```yaml
variables:
  # Default resource settings for all jobs
  KUBERNETES_MEMORY_REQUEST: 4Gi
  KUBERNETES_MEMORY_LIMIT: 4Gi
  # Default values that teams can override via project variables
  SAST_KUBERNETES_MEMORY_REQUEST: 4Gi

sast:
  variables:
    SAST_EXCLUDED_ANALYZERS: 'spotbugs'
    KUBERNETES_MEMORY_REQUEST: $SAST_KUBERNETES_MEMORY_REQUEST
    KUBERNETES_MEMORY_LIMIT: $SAST_KUBERNETES_MEMORY_REQUEST
```

Include in your `policy.yml`:

```yaml
pipeline_execution_policy:
  - name: Resource-Optimized Security Policy
    description: Enforces security scans with efficient resource management
    enabled: true
    pipeline_config_strategy: inject_ci
    content:
      include:
        - project: security/policy-templates
          file: resource-optimized-scans.yml
          ref: main
    variables_override:
      allowed: false
      exceptions:
        # Allow scan-specific resource overrides
        - SAST_KUBERNETES_MEMORY_REQUEST
        - SECRET_DETECTION_KUBERNETES_MEMORY_REQUEST
        - CS_KUBERNETES_MEMORY_REQUEST
        # Allow necessary scan customization
        - CS_IMAGE
        - SAST_EXCLUDED_PATHS
```

This approach allows teams to set scan-specific resource variables (like `SAST_KUBERNETES_MEMORY_REQUEST`) using variable overrides without affecting all jobs in their pipeline, which provides better resource management for large projects.

This example also shows the use of other common scan customization options that you can extend to developers. Make sure you document the available variables so your development teams can leverage them.

### Use group or project variables in a pipeline execution policy

You can use group or project variables in a pipeline execution policy.

With a project variable of `PROJECT_VAR="I'm a project"`, the following pipeline execution policy job results in: `I'm a project`.

```yaml
pipeline execution policy job:
  stage: .pipeline-policy-pre
  script:
    - echo "$PROJECT_VAR"
```

### Enforce a variable's value by using a pipeline execution policy

The value of a variable defined in a pipeline execution policy overrides the value of a group or project variable with the same name. In this example, the project value of variable `PROJECT_VAR` is overwritten and the job results in: `I'm a pipeline execution policy`.

```yaml
variables:
  PROJECT_VAR: "I'm a pipeline execution policy"

pipeline execution policy job:
  stage: .pipeline-policy-pre
  script:
    - echo "$PROJECT_VAR"
```

### Example `policy.yml` with security policy scopes

In this example, the security policy's `policy_scope`:

- Includes any project with a compliance framework with an ID of `9` applied to it.
- Excludes projects with an ID of `456`.
```yaml
pipeline_execution_policy:
  - name: Pipeline execution policy
    description: ''
    enabled: true
    pipeline_config_strategy: inject_policy
    content:
      include:
        - project: my-group/pipeline-execution-ci-project
          file: policy-ci.yml
    policy_scope:
      compliance_frameworks:
        - id: 9
      projects:
        excluding:
          - id: 456
```

### Configure `ci_skip` in a pipeline execution policy

In the following example, the pipeline execution policy is enforced, and [skipping CI](#skip_ci-type) is disallowed except for the user with ID `75`.

```yaml
pipeline_execution_policy:
  - name: My pipeline execution policy with ci.skip exceptions
    description: 'Enforces CI/CD jobs'
    enabled: true
    pipeline_config_strategy: inject_policy
    content:
      include:
        - project: group-a/project1
          file: README.md
    skip_ci:
      allowed: false
      allowlist:
        users:
          - id: 75
```

### Configure the `exists` condition

Use the `exists` rule to configure the pipeline execution policy to include the CI/CD configuration file from the project when a certain file exists. In the following example, the pipeline execution policy includes the CI/CD configuration from the project if a `Dockerfile` exists.

You must set the `exists` rule to use `'$CI_PROJECT_PATH'` as the `project`, otherwise GitLab evaluates whether the file exists in the project that contains the security policy CI/CD configuration.

```yaml
include:
  - project: $CI_PROJECT_PATH
    ref: $CI_COMMIT_SHA
    file: $CI_CONFIG_PATH
    rules:
      - exists:
          paths:
            - 'Dockerfile'
          project: '$CI_PROJECT_PATH'
```

To use this approach, the group or project must use the `override_project_ci` strategy.

### Enforce a container scanning `component` using a pipeline execution policy

You can use security scan components to improve the handling and enforcement of versioning.
```yaml include: - component: gitlab.com/components/container-scanning/container-scanning@main inputs: cs_image: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA container_scanning: # override component with additional configuration variables: CS_REGISTRY_USER: $CI_REGISTRY_USER CS_REGISTRY_PASSWORD: $CI_REGISTRY_PASSWORD SECURE_LOG_LEVEL: debug # add for verbose debugging of the container scanner before_script: - echo $CS_IMAGE # optionally add a before_script for additional debugging ```
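Because components are versioned, you can also pin the enforced scan to a specific release of the component instead of tracking `main`, which keeps the enforced configuration reproducible. This is a sketch; the version tag is illustrative, so use a release actually published by the component project:

```yaml
include:
  # Pin to a released version instead of the moving `main` ref.
  # `1.2.3` is an illustrative tag, not a real release.
  - component: gitlab.com/components/container-scanning/container-scanning@1.2.3
    inputs:
      cs_image: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
```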
# Scan execution policies
{{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - Support for custom CI/CD variables in the scan execution policies editor [introduced](https://gitlab.com/groups/gitlab-org/-/epics/9566) in GitLab 16.2. - Enforcement of scan execution policies on projects with an existing GitLab CI/CD configuration [introduced](https://gitlab.com/groups/gitlab-org/-/epics/6880) in GitLab 16.2 [with a flag](../../../administration/feature_flags/_index.md) named `scan_execution_policy_pipelines`. Feature flag `scan_execution_policy_pipelines` removed in GitLab 16.5. - Overriding predefined variables in scan execution policies [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/440855) in GitLab 16.10 [with a flag](../../../administration/feature_flags/_index.md) named `allow_restricted_variables_at_policy_level`. Enabled by default. Feature flag `allow_restricted_variables_at_policy_level` removed in GitLab 17.5. {{< /history >}} Scan execution policies enforce GitLab security scans based on the default or latest [security CI/CD templates](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gitlab/ci/templates/Jobs). You can deploy scan execution policies as part of the pipeline or on a specified schedule. Scan execution policies are enforced across all projects that are linked to the security policy project and are in the scope of the policy. For projects without a `.gitlab-ci.yml` file, or where AutoDevOps is disabled, security policies create the `.gitlab-ci.yml` file implicitly. The `.gitlab-ci.yml` file ensures policies that run secret detection, static analysis, or other scanners that do not require a build in the project can always run and be enforced. Both scan execution policies and pipeline execution policies can configure GitLab security scans across multiple projects to manage security and compliance. Scan execution policies are faster to configure, but are not customizable. 
If any of the following cases are true, use [pipeline execution policies](pipeline_execution_policies.md) instead: - You require advanced configuration settings. - You want to enforce custom CI/CD jobs or scripts. - You want to enable third-party security scans through an enforced CI/CD job. - <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For a video walkthrough, see [How to set up Security Scan Policies in GitLab](https://youtu.be/ZBcqGmEwORA?si=aeT4EXtmHjosgjBY). - <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> Learn more about [enforcing scan execution policies on projects with no GitLab CI/CD configuration](https://www.youtube.com/watch?v=sUfwQQ4-qHs). ## Restrictions - You can assign a maximum of five rules to each policy. - You can assign a maximum of five scan execution policies to each security policy project. - Local project YAML files cannot override scan execution policies. These policies take precedence over any configurations defined for a pipeline, even if you use the same job name in your project's CI/CD configuration. - Scan execution policies with `type: pipeline` rules do not create pipelines if the project's `.gitlab-ci.yml` file contains [`workflow:rules`](../../../ci/yaml/workflow.md) that prevent the creation of pipelines. This limitation does not apply to `type: schedule` rules. ## Jobs Policy jobs for scans, other than DAST scans, are created in the `test` stage of the pipeline. If you remove the `test` stage from the default pipeline, jobs run in the `scan-policies` stage instead. This stage is injected into the CI/CD pipeline at evaluation time if it doesn't exist. If the `build` stage exists, `scan-policies` is injected just after the `build` stage, otherwise it is injected at the beginning of the pipeline. DAST scans always run in the `dast` stage. If the `dast` stage does not exist, then a `dast` stage is injected at the end of the pipeline. 
To avoid job name conflicts, a hyphen and a number are appended to the job name. Each number is a unique value for each policy action. For example, `secret-detection` becomes `secret-detection-1`. ## Scan execution policy editor {{< history >}} - `Merge Request Security Template` [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/541689) in GitLab 18.2 [with a flag](../../../administration/feature_flags/_index.md) named `flexible_scan_execution`. Disabled by default. {{< /history >}} Use the scan execution policy editor to create or edit a scan execution policy. Prerequisites: - By default, only group, subgroup, or project Owners have the [permissions](../../permissions.md#application-security) required to create or assign a security policy project. Alternatively, you can create a custom role with the permission to [manage security policy links](../../custom_roles/abilities.md#security-policy-management). When you create your first scan execution policies, we provide you with templates to get started quickly with some of the most common use cases: - Merge Request Security Template - Use case: You want security scans to run only when merge requests are created, not on every commit. - When to use: For projects using merge request pipelines that need security scans to run on source branches targeting default or protected branches. - Best for: Teams that want to align with merge request approval policies and reduce infrastructure costs by avoiding scans on every branch. - Pipeline sources: Primarily merge request pipelines. - Scheduled Scanning Template - Use case: You want security scans to run automatically on a schedule (like daily or weekly) regardless of code changes. - When to use: For security scanning on a regular cadence, independent of development activity. - Best for: Compliance requirements, baseline security monitoring, or projects with infrequent commits. - Pipeline sources: Scheduled pipelines. 
- Merge Release Security Template
  - Use case: You want security scans to run on all changes to your `main` or release branches.
  - When to use: For projects that need comprehensive scanning before releases, or on protected branches.
  - Best for: Release-gated workflows, production deployments, or high-security environments.
  - Pipeline sources: Push pipelines to protected branches, release pipelines.

If the available templates do not meet your needs, or you require more customized scan execution policies, you can:

- Select the **Custom** option and create your own scan execution policy with custom requirements.
- Access more customizable options for security scan and CI enforcement using [pipeline execution policies](pipeline_execution_policies.md).

Once your policy is complete, save it by selecting **Configure with a merge request** at the bottom of the editor. You are redirected to the merge request on the project's configured security policy project. If no security policy project is linked to your project, one is created automatically.

You can remove existing policies from the editor interface by selecting **Delete policy** at the bottom of the editor. This action creates a merge request to remove the policy from your `policy.yml` file.

Most policy changes take effect as soon as the merge request is merged. Any changes committed directly to the default branch instead of a merge request require up to 10 minutes before the policy changes take effect.

![Scan Execution Policy Editor Rule Mode](img/scan_execution_policy_rule_mode_v17_5.png)

{{< alert type="note" >}}

For DAST execution policies, the way you apply site and scanner profiles in the rule mode editor depends on where the policy is defined:

- For policies in projects, in the rule mode editor, choose from a list of profiles that are already defined in the project.
- For policies in groups, you must type in the names of the profiles to use.
To prevent pipeline errors, profiles with matching names must exist in all of the group's projects. {{< /alert >}} ## Scan execution policies schema A YAML configuration with scan execution policies consists of an array of objects matching the scan execution policy schema. Objects are nested under the `scan_execution_policy` key. You can configure a maximum of five policies under the `scan_execution_policy` key. Any other policies configured after the first five are not applied. When you save a new policy, GitLab validates the policy's contents against [this JSON schema](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/validators/json_schemas/security_orchestration_policy.json). If you're not familiar with [JSON schemas](https://json-schema.org/), the following sections and tables provide an alternative. | Field | Type | Required | Possible values | Description | |-------|------|----------|-----------------|-------------| | `scan_execution_policy` | `array` of scan execution policy | true | | List of scan execution policies (maximum 5) | ## Scan execution policy schema {{< history >}} - Limit of actions per policy [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/472213) in GitLab 17.4 [with flags](../../../administration/feature_flags/_index.md) named `scan_execution_policy_action_limit` (for projects) and `scan_execution_policy_action_limit_group` (for groups). Disabled by default. - Limit of actions per policy [generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/535605) in GitLab 18.0. Feature flags `scan_execution_policy_action_limit` (for projects) and `scan_execution_policy_action_limit_group` (for groups) removed. {{< /history >}} {{< alert type="flag" >}} This feature is controlled by a feature flag. For more information, see the history. 
{{< /alert >}} | Field | Type | Required | Description | |----------------|----------------------------------------------|----------|-------------| | `name` | `string` | true | Name of the policy. Maximum of 255 characters. | | `description` | `string` | false | Description of the policy. | | `enabled` | `boolean` | true | Flag to enable (`true`) or disable (`false`) the policy. | | `rules` | `array` of rules | true | List of rules that the policy applies. | | `actions` | `array` of actions | true | List of actions that the policy enforces. Limited to a maximum of 10 in GitLab 18.0 and later. | | `policy_scope` | `object` of [`policy_scope`](_index.md#configure-the-policy-scope) | false | Defines the scope of the policy based on the projects, groups, or compliance framework labels you specify. | | `skip_ci` | `object` of [`skip_ci`](#skip_ci-type) | false | Defines whether users can apply the `skip-ci` directive. | ### `skip_ci` type {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/482952) in GitLab 17.9. {{< /history >}} Scan execution policies offer control over who can use the `[skip ci]` directive. You can specify certain users or service accounts that are allowed to use `[skip ci]` while still ensuring critical security and compliance checks are performed. Use the `skip_ci` keyword to specify whether users are allowed to apply the `skip_ci` directive to skip the pipelines. When the keyword is not specified, the `skip_ci` directive is ignored, preventing all users from bypassing the pipeline execution policies. | Field | Type | Possible values | Description | |-------------------------|----------|--------------------------|-------------| | `allowed` | `boolean` | `true`, `false` | Flag to allow (`true`) or prevent (`false`) the use of the `skip-ci` directive for pipelines with enforced pipeline execution policies. 
| | `allowlist` | `object` | `users` | Specify users who are always allowed to use `skip-ci` directive, regardless of the `allowed` flag. Use `users:` followed by an array of objects with `id` keys representing user IDs. | {{< alert type="note" >}} Scan execution policies that have the rule type `schedule` always ignore the `skip_ci` option. Scheduled scans run at their configured times regardless of whether `[skip ci]` (or any of its variations) appear in the last commit message. This ensures that security scans occur on a predictable schedule even when CI/CD pipelines are otherwise skipped. {{< /alert >}} ## `pipeline` rule type {{< history >}} - The `branch_type` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/404774) in GitLab 16.1 [with a flag](../../../administration/feature_flags/_index.md) named `security_policies_branch_type`. Generally available in GitLab 16.2. Feature flag removed. - The `branch_exceptions` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418741) in GitLab 16.3 [with a flag](../../../administration/feature_flags/_index.md) named `security_policies_branch_exceptions`. Generally available in GitLab 16.5. Feature flag removed. - The `pipeline_sources` field and the `branch_type` options `target_default` and `target_protected` were [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/541689) in GitLab 18.2 [with a flag](../../../administration/feature_flags/_index.md) named `flexible_scan_execution`. Disabled by default. {{< /history >}} {{< alert type="flag" >}} The availability of this feature is controlled by a feature flag. For more information, see the history. {{< /alert >}} This rule enforces the defined actions whenever the pipeline runs for a selected branch. | Field | Type | Required | Possible values | Description | |-------|------|----------|-----------------|-------------| | `type` | `string` | true | `pipeline` | The rule's type. 
| | `branches` <sup>1</sup> | `array` of `string` | true if `branch_type` field does not exist | `*` or the branch's name | The branch the given policy applies to (supports wildcard). For compatibility with merge request approval policies, you should target all branches to include the scans in the feature branch and default branch | | `branch_type` <sup>1</sup> | `string` | true if `branches` field does not exist | `default`, `protected`, `all`, `target_default` <sup>2</sup>, or `target_protected` <sup>2</sup> | The types of branches the given policy applies to. | | `branch_exceptions` | `array` of `string` | false | Names of branches | Branches to exclude from this rule. | | `pipeline_sources` <sup>2</sup> | `array` of `string` | false | `api`, `chat`, `external`, `external_pull_request_event`, `merge_request_event` <sup>3</sup>, `pipeline`, `push` <sup>3</sup>, `schedule`, `trigger`, `unknown`, `web` | The pipeline source that determines when the scan execution job triggers. See the [documentation](../../../ci/jobs/job_rules.md#ci_pipeline_source-predefined-variable) for more information. | 1. You must specify either `branches` or `branch_type`, but not both. 1. Some options are only available with the `flexible_scan_execution` feature flag enabled. See the history for details. 1. When the `branch_type` options `target_default` or `target_protected` are specified, the `pipeline_sources` field supports only the `merge_request_event` and `push` fields. ## `schedule` rule type {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/404774) the `branch_type` field in GitLab 16.1 [with a flag](../../../administration/feature_flags/_index.md) named `security_policies_branch_type`. Generally available in GitLab 16.2. Feature flag removed. 
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418741) the `branch_exceptions` field in GitLab 16.3 [with a flag](../../../administration/feature_flags/_index.md) named `security_policies_branch_exceptions`. Generally available in GitLab 16.5. Feature flag removed. - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/147691) a new `scan_execution_pipeline_worker` worker to scheduled scans to create pipelines in GitLab 16.11 [with a flag](../../../administration/feature_flags/_index.md). - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/152855) a new application setting `security_policy_scheduled_scans_max_concurrency` in GitLab 17.1. The concurrency limit applies when both the `scan_execution_pipeline_worker` and `scan_execution_pipeline_concurrency_control` are enabled. - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/158636) a concurrency limit for scan execution scheduled jobs in GitLab 17.3 [with a flag](../../../administration/feature_flags/_index.md) named `scan_execution_pipeline_concurrency_control`. - [Enabled](https://gitlab.com/gitlab-org/gitlab/-/issues/451890) the `scan_execution_pipeline_worker` feature flag on GitLab.com in GitLab 17.5. - [Feature flag](https://gitlab.com/gitlab-org/gitlab/-/issues/451890) `scan_execution_pipeline_worker` removed in GitLab 17.6. - [Feature flag](https://gitlab.com/gitlab-org/gitlab/-/issues/463802) `scan_execution_pipeline_concurrency_control` removed in GitLab 17.9. - [Removed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/178892) a new application setting `security_policy_scheduled_scans_max_concurrency` in GitLab 17.11 {{< /history >}} {{< alert type="warning" >}} In GitLab 16.1 and earlier, you should not use [direct transfer](../../../administration/settings/import_and_export_settings.md#enable-migration-of-groups-and-projects-by-direct-transfer) with scheduled scan execution policies. 
If you must use direct transfer, first upgrade to GitLab 16.2 and ensure security policy bots are enabled in the projects you are enforcing.

{{< /alert >}}

Use the `schedule` rule type to run security scanners on a schedule. A scheduled pipeline:

- Runs only the scanners defined in the policy, not the jobs defined in the project's `.gitlab-ci.yml` file.
- Runs according to the schedule defined in the `cadence` field.
- Runs under a `security_policy_bot` user account in the project, with the Guest role and permissions to create pipelines and read the repository's content from a CI/CD job. This account is created when the policy is linked to a group or project.
- On GitLab.com, only the first 10 `schedule` rules in a scan execution policy are enforced. Rules that exceed the limit have no effect.

| Field | Type | Required | Possible values | Description |
|------------|------|----------|-----------------|-------------|
| `type` | `string` | true | `schedule` | The rule's type. |
| `branches` <sup>1</sup> | `array` of `string` | true if neither the `branch_type` nor the `agents` field exists | `*` or the branch's name | The branch the given policy applies to (supports wildcard). |
| `branch_type` <sup>1</sup> | `string` | true if neither the `branches` nor the `agents` field exists | `default`, `protected` or `all` | The types of branches the given policy applies to. |
| `branch_exceptions` | `array` of `string` | false | Names of branches | Branches to exclude from this rule. |
| `cadence` | `string` | true | Cron expression with limited options. For example, `0 0 * * *` creates a schedule to run every day at midnight (12:00 AM). | A whitespace-separated string containing five fields that represents the scheduled time. |
| `timezone` | `string` | false | Time zone identifier (for example, `America/New_York`) | Time zone to apply to the cadence. Value must be an IANA Time Zone Database identifier. |
| `time_window` | `object` | false | | Distribution and duration settings for scheduled security scans. |
| `agents` <sup>1</sup> | `object` | true if neither the `branch_type` nor the `branches` field exists | | The name of the [GitLab agents for Kubernetes](../../clusters/agent/_index.md) where [Operational Container Scanning](../../clusters/agent/vulnerabilities.md) runs. The object key is the name of the Kubernetes agent configured for your project in GitLab. |

1. You must specify only one of `branches`, `branch_type`, or `agents`.

### Cadence

Use the `cadence` field to schedule when you want the policy's actions to run. The `cadence` field uses [cron syntax](../../../topics/cron/_index.md), but with some restrictions:

- Only the following types of cron syntax are supported:
  - A daily cadence of once per day at a specified hour, for example: `0 18 * * *`
  - A weekly cadence of once per week on a specified day and at a specified hour, for example: `0 13 * * 0`
- Commas (,), hyphens (-), and step operators (/) are not supported for minutes and hours. Any scheduled pipeline using these characters is skipped.

Consider the following when choosing a value for the `cadence` field:

- Timing is based on UTC for GitLab.com and GitLab Dedicated, and on the GitLab host's system time for GitLab Self-Managed. When testing new policies, pipelines may appear to run at incorrect times because they are scheduled in your server's time zone, not your local time zone.
- A scheduled pipeline doesn't start until the required resources become available to create it. In other words, the pipeline may not begin precisely at the timing specified in the policy.

When using the `schedule` rule type with the `agents` field:

- The GitLab agent for Kubernetes checks every 30 seconds to see if there is an applicable policy. When the agent finds a policy, the scans execute according to the defined `cadence`.
- The cron expression is evaluated using the system time of the Kubernetes agent pod. When using the `schedule` rule type with the `branches` field: - The cron worker runs on 15 minute intervals and starts any pipelines that were scheduled to run during the previous 15 minutes. Therefore, scheduled pipelines may run with an offset of up to 15 minutes. - If a policy is enforced on a large number of projects or branches, the policy is processed in batches, and may take some time to create all pipelines. ![A diagram showing how scheduled security scans are processed and executed with potential delays.](img/scheduled_scan_execution_policies_diagram_v15_10.png) ### `agent` schema Use this schema to define `agents` objects in the [`schedule` rule type](#schedule-rule-type). | Field | Type | Required | Description | |--------------|---------------------|----------|-------------| | `namespaces` | `array` of `string` | true | The namespace that is scanned. If empty, all namespaces are scanned. | #### `agent` example ```yaml - name: Enforce Container Scanning in cluster connected through my-gitlab-agent for default and kube-system namespaces enabled: true rules: - type: schedule cadence: '0 10 * * *' agents: <agent-name>: namespaces: - 'default' - 'kube-system' actions: - scan: container_scanning ``` The keys for a schedule rule are: - `cadence` (required): a [Cron expression](../../../topics/cron/_index.md) for when the scans are run. - `agents:<agent-name>` (required): The name of the agent to use for scanning. - `agents:<agent-name>:namespaces` (optional): The Kubernetes namespaces to scan. If omitted, all namespaces are scanned. ### `time_window` schema Define how scheduled scans are distributed over time with the `time_window` object in the [`schedule` rule type](#schedule-rule-type). You can configure `time_window` only in YAML mode of the policy editor. 
| Field | Type | Required | Description | |----------------|-----------|----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | `distribution` | `string` | true | Distribution pattern for schedule scans. Supports only `random`, where scans are distributed randomly in the interval defined by the `value` key of the `time_window`. | | `value` | `integer` | true | The time window in seconds the schedule scans should run. Enter a value between 3600 (1 hour) and 86400 (24 hours). | #### `time_window` example ```yaml - name: Enforce Container Scanning with a time window of 1 hour enabled: true rules: - type: schedule cadence: '0 10 * * *' time_window: value: 3600 distribution: random actions: - scan: container_scanning ``` ### Optimize scheduled pipelines for projects at scale Consider performance when enabling scheduled scans across many projects. If the `scan_execution_pipeline_concurrency_control` feature flag is not enabled: - Scheduled pipelines run simultaneously across all projects and branches enforced by the policy. - The first scheduled pipeline execution in each project creates a security bot user responsible for executing the schedules for each project. To optimize performance for projects at scale: - Roll out scheduled scan execution policies gradually, starting with a subset of projects. You can leverage security policy scopes to target specific groups, projects, or projects containing a given compliance framework label. - You can configure the policy to run the schedules on runners with a specified `tag`. Consider setting up a dedicated runner in each project to handle schedules enforced from a policy to reduce impact to other runners. - Test your implementation in a staging or lower environment before deploying to production. Monitor performance and adjust your rollout plan based on results. 
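As a sketch of such a staged rollout, the following policy combines a `policy_scope` with runner `tags` so that scheduled scans run only for a pilot set of projects on a dedicated runner. The project IDs and the runner tag name are illustrative:

```yaml
scan_execution_policy:
  - name: Scheduled secret detection (pilot rollout)
    description: Daily scheduled scan, limited to pilot projects and a dedicated runner
    enabled: true
    rules:
      - type: schedule
        cadence: '0 2 * * *'
        branch_type: default
    actions:
      - scan: secret_detection
        tags:
          - policy-schedules  # illustrative tag of a dedicated runner
    policy_scope:
      projects:
        including:
          - id: 123  # illustrative pilot project IDs
          - id: 456
```

After monitoring the pilot, you can widen the `policy_scope` incrementally rather than enabling the schedule everywhere at once.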
### Concurrency control GitLab applies concurrency control when: - The `scan_execution_pipeline_concurrency_control` feature flag is enabled - You set the `time_window` property The concurrency control distributes the scheduled pipelines according to the [`time_window` settings](#time_window-schema) defined in the policy. ## `scan` action type {{< history >}} - Scan Execution Policies variable precedence was [changed](https://gitlab.com/gitlab-org/gitlab/-/issues/424028) in GitLab 16.7 [with a flag](../../../administration/feature_flags/_index.md) named `security_policies_variables_precedence`. Enabled by default. [Feature flag removed in GitLab 16.8](https://gitlab.com/gitlab-org/gitlab/-/issues/435727). - Selection of security templates for given action (for projects) was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/415427) in GitLab 17.1 [with feature flag](../../../administration/feature_flags/_index.md) named `scan_execution_policies_with_latest_templates`. Disabled by default. - Selection of security templates for given action (for groups) was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/468981) in GitLab 17.2 [with feature flag](../../../administration/feature_flags/_index.md) named `scan_execution_policies_with_latest_templates_group`. Disabled by default. - Selection of security templates for given action (for projects and groups) was enabled on GitLab Self-Managed, and GitLab Dedicated ([1](https://gitlab.com/gitlab-org/gitlab/-/issues/461474), [2](https://gitlab.com/gitlab-org/gitlab/-/issues/468981)) in GitLab 17.2. - Selection of security templates for given action (for projects and groups) was generally available in GitLab 17.3. Feature flags `scan_execution_policies_with_latest_templates` and `scan_execution_policies_with_latest_templates_group` removed. {{< /history >}} This action executes the selected `scan` with additional parameters when conditions for at least one rule in the defined policy are met. 
| Field | Type | Possible values | Description |
|-------|------|-----------------|-------------|
| `scan` | `string` | `sast`, `sast_iac`, `dast`, `secret_detection`, `container_scanning`, `dependency_scanning` | The action's type. |
| `site_profile` | `string` | Name of the selected [DAST site profile](../dast/profiles.md#site-profile). | The DAST site profile to execute the DAST scan. This field should only be set if `scan` type is `dast`. |
| `scanner_profile` | `string` or `null` | Name of the selected [DAST scanner profile](../dast/profiles.md#scanner-profile). | The DAST scanner profile to execute the DAST scan. This field should only be set if `scan` type is `dast`. |
| `variables` | `object` | | A set of CI/CD variables, supplied as an array of `key: value` pairs, to apply and enforce for the selected scan. The `key` is the variable name, with its `value` provided as a string. This parameter supports any variable that the GitLab CI/CD job supports for the specified scan. |
| `tags` | `array` of `string` | | A list of runner tags for the policy. The policy jobs are run by runners with the specified tags. |
| `template` | `string` | `default` or `latest` | CI/CD template version to enforce. The `latest` version may introduce breaking changes and supports only `pipeline_sources` related to merge requests. For details, see [customize security scanning](../../application_security/detect/security_configuration.md#customize-security-scanning). |
| `scan_settings` | `object` | | A set of scan settings, supplied as an array of `key: value` pairs, to apply and enforce for the selected scan. The `key` is the setting name, with its `value` provided as a boolean or string. This parameter supports the settings defined in [scan settings](#scan-settings). |

{{< alert type="note" >}}

If you have merge request pipelines enabled for your project, you must set the `AST_ENABLE_MR_PIPELINES` CI/CD variable to `"true"` in your policy for each enforced scan.
For more information on using security scanning tools with merge request pipelines, refer to the [security scanning documentation](../../application_security/detect/security_configuration.md#use-security-scanning-tools-with-merge-request-pipelines).

{{< /alert >}}

### Scanner behavior

Some scanners behave differently in a `scan` action than they do in a regular CI/CD pipeline scan:

- Static Application Security Testing (SAST): Runs only if the repository contains [files supported by SAST](../sast/_index.md#supported-languages-and-frameworks).
- Secret detection:
  - Only rules in the default ruleset are supported by default.
  - To customize a ruleset configuration, either:
    - Modify the default ruleset. Use a scan execution policy to specify the `SECRET_DETECTION_RULESET_GIT_REFERENCE` CI/CD variable. By default, this points to a [remote configuration file](../secret_detection/pipeline/configure.md#with-a-remote-ruleset) that only overrides or disables rules from the default ruleset. Using only this variable does not support extending or replacing the default set of rules.
    - [Extend](../secret_detection/pipeline/configure.md#extend-the-default-ruleset) or [replace](../secret_detection/pipeline/configure.md#replace-the-default-ruleset) the default ruleset. Use the scan execution policy to specify the `SECRET_DETECTION_RULESET_GIT_REFERENCE` CI/CD variable and a remote configuration file that uses [a Git passthrough](../secret_detection/pipeline/custom_rulesets_schema.md#passthrough-types) to extend or replace the default ruleset. For a detailed guide, see [How to set up a centrally managed pipeline secret detection configuration](https://support.gitlab.com/hc/en-us/articles/18863735262364-How-to-set-up-a-centrally-managed-pipeline-secret-detection-configuration-applied-via-Scan-Execution-Policy).
  - For `scheduled` scan execution policies, secret detection by default runs first in `historic` mode (`SECRET_DETECTION_HISTORIC_SCAN` = `true`).
    All subsequent scheduled scans run in default mode, with `SECRET_DETECTION_LOG_OPTIONS` set to the commit range between the last run and the current SHA. You can override this behavior by specifying CI/CD variables in the scan execution policy. For more information, see [Full history pipeline secret detection](../secret_detection/pipeline/_index.md#run-a-historic-scan).
  - For `triggered` scan execution policies, secret detection works just like a regular scan [configured manually in the `.gitlab-ci.yml`](../secret_detection/pipeline/_index.md#edit-the-gitlab-ciyml-file-manually).
- Container scanning: A scan that is configured for the `pipeline` rule type ignores the agent defined in the `agents` object. The `agents` object is only considered for `schedule` rule types. An agent with a name provided in the `agents` object must be created and configured for the project.

### DAST profiles

The following requirements apply when enforcing Dynamic Application Security Testing (DAST):

- For every project in the policy's scope, the specified [site profile](../dast/profiles.md#site-profile) and [scanner profile](../dast/profiles.md#scanner-profile) must exist. If these are not available, the policy is not applied and a job with an error message is created instead.
- When a DAST site profile or scanner profile is named in an enabled scan execution policy, the profile cannot be modified or deleted. To edit or delete the profile, you must first set the policy to **Disabled** in the policy editor or set `enabled: false` in YAML mode.
- When configuring policies with a scheduled DAST scan, the author of the commit in the security policy project's repository must have access to the scanner and site profiles. Otherwise, the scan is not scheduled successfully.
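Putting these requirements together, a `scan` action that enforces DAST might look like the following sketch. The profile names are placeholders: the named site and scanner profiles must exist in every project in the policy's scope.

```yaml
actions:
- scan: dast
  site_profile: Site Profile A       # placeholder; must exist in each project in scope
  scanner_profile: Scanner Profile B # placeholder; must exist in each project in scope
  template: default                  # enforce the stable CI/CD template version
```

If either profile is missing in a project, the policy is not applied there and a job with an error message is created instead, as described above.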
### Scan settings

The following settings are supported by the `scan_settings` parameter:

| Setting | Type | Required | Possible values | Default | Description |
|---------|------|----------|-----------------|---------|-------------|
| `ignore_default_before_after_script` | `boolean` | false | `true`, `false` | `false` | Specifies whether to exclude any default `before_script` and `after_script` definitions in the pipeline configuration from the scan job. |

## CI/CD variables

{{< alert type="warning" >}}

Don't store sensitive information or credentials in variables because they are stored as part of the plaintext policy configuration in a Git repository.

{{< /alert >}}

Variables defined in a scan execution policy follow the standard [CI/CD variable precedence](../../../ci/variables/_index.md#cicd-variable-precedence).

Preconfigured values are used for the following CI/CD variables in any project on which a scan execution policy is enforced. Their values can be overridden, but **only** if they are declared in a policy. They **cannot** be overridden by group or project CI/CD variables:

```plaintext
DS_EXCLUDED_PATHS: spec, test, tests, tmp
SAST_EXCLUDED_PATHS: spec, test, tests, tmp
SECRET_DETECTION_EXCLUDED_PATHS: ''
SECRET_DETECTION_HISTORIC_SCAN: false
SAST_EXCLUDED_ANALYZERS: ''
DEFAULT_SAST_EXCLUDED_PATHS: spec, test, tests, tmp
DS_EXCLUDED_ANALYZERS: ''
SECURE_ENABLE_LOCAL_CONFIGURATION: true
```

In GitLab 16.9 and earlier:

- If the CI/CD variables suffixed `_EXCLUDED_PATHS` were declared in a policy, their values _could_ be overridden by group or project CI/CD variables.
- If the CI/CD variables suffixed `_EXCLUDED_ANALYZERS` were declared in a policy, their values were ignored, regardless of where they were defined: policy, group, or project.

## Policy scope schema

To customize policy enforcement, you can define a policy's scope to either include or exclude specified projects, groups, or compliance framework labels.
For more details, see [Scope](_index.md#configure-the-policy-scope).

## Example security policy project

You can use this example in a `.gitlab/security-policies/policy.yml` file stored in a [security policy project](enforcement/security_policy_projects.md):

```yaml
---
scan_execution_policy:
- name: Enforce DAST in every release pipeline
  description: This policy enforces pipeline configuration to have a job with DAST scan for release branches
  enabled: true
  rules:
  - type: pipeline
    branches:
    - release/*
  actions:
  - scan: dast
    scanner_profile: Scanner Profile A
    site_profile: Site Profile B
- name: Enforce DAST and secret detection scans every 10 minutes
  description: This policy enforces DAST and secret detection scans to run every 10 minutes
  enabled: true
  rules:
  - type: schedule
    branches:
    - main
    cadence: "*/10 * * * *"
  actions:
  - scan: dast
    scanner_profile: Scanner Profile C
    site_profile: Site Profile D
  - scan: secret_detection
    scan_settings:
      ignore_default_before_after_script: true
- name: Enforce Secret Detection and Container Scanning in every default branch pipeline
  description: This policy enforces pipeline configuration to have a job with Secret Detection and Container Scanning scans for the default branch
  enabled: true
  rules:
  - type: pipeline
    branches:
    - main
  actions:
  - scan: secret_detection
  - scan: sast
    variables:
      SAST_EXCLUDED_ANALYZERS: brakeman
  - scan: container_scanning
```

In this example:

- For every pipeline executed on branches that match the `release/*` wildcard (for example, branch `release/v1.2.1`):
  - DAST scans run with `Scanner Profile A` and `Site Profile B`.
- DAST and secret detection scans run every 10 minutes. The DAST scan runs with `Scanner Profile C` and `Site Profile D`.
- Secret detection, container scanning, and SAST scans run for every pipeline executed on the `main` branch. The SAST scan runs with the `SAST_EXCLUDED_ANALYZERS` variable set to `"brakeman"`.
## Example for scan execution policy editor

You can use this example in the YAML mode of the [scan execution policy editor](#scan-execution-policy-editor). It corresponds to a single object from the previous example.

```yaml
name: Enforce Secret Detection and Container Scanning in every default branch pipeline
description: This policy enforces pipeline configuration to have a job with Secret Detection and Container Scanning scans for the default branch
enabled: true
rules:
- type: pipeline
  branches:
  - main
actions:
- scan: secret_detection
- scan: container_scanning
```

## Avoiding duplicate scans

Scan execution policies can cause the same type of scanner to run more than once if developers include scan jobs in the project's `.gitlab-ci.yml` file. This behavior is intentional, as scanners can run more than once with different variables and settings. For example, a developer may want to try running a SAST scan with different variables than the one enforced by the security and compliance team. In this case, two SAST jobs run in the pipeline:

- One with the developer's variables.
- One with the security and compliance team's variables.

To avoid running duplicate scans, you can either remove the scans from the project's `.gitlab-ci.yml` file or skip your local jobs with variables. Skipping jobs does not prevent any security jobs defined by scan execution policies from running.

To skip scan jobs with variables, you can use:

- `SAST_DISABLED: "true"` to skip SAST jobs.
- `DAST_DISABLED: "true"` to skip DAST jobs.
- `CONTAINER_SCANNING_DISABLED: "true"` to skip container scanning jobs.
- `SECRET_DETECTION_DISABLED: "true"` to skip secret detection jobs.
- `DEPENDENCY_SCANNING_DISABLED: "true"` to skip dependency scanning jobs.
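For example, these skip variables can be set globally in a project's `.gitlab-ci.yml` file. This is a sketch; set only the variables for the scanners your policy actually enforces:

```yaml
# .gitlab-ci.yml
variables:
  SAST_DISABLED: "true"              # skip the project's own SAST jobs
  SECRET_DETECTION_DISABLED: "true"  # skip the project's own secret detection jobs
```

The scans enforced by the scan execution policy still run; only the jobs defined locally in the project's CI/CD configuration are skipped.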
For an overview of all variables that can skip jobs, see the [CI/CD variables documentation](../../../topics/autodevops/cicd_variables.md#job-skipping-variables).

## Troubleshooting

### Scan execution policy pipelines are not created

If scan execution policies do not create the pipelines defined in `type: pipeline` as expected, you may have [`workflow:rules`](../../../ci/yaml/workflow.md) in the project's `.gitlab-ci.yml` file that prevent the policy from creating the pipeline.

Scan execution policies with `type: pipeline` rules rely on the merged CI/CD configuration to create pipelines. If the project's `workflow:rules` filter out the pipeline entirely, the scan execution policy cannot create a pipeline. For example, the following `workflow:rules` configuration prevents all pipelines from being created:

```yaml
# .gitlab-ci.yml
workflow:
  rules:
    - if: $CI_PIPELINE_SOURCE == "push"
      when: never
```

Resolution:

To resolve this issue, you can use any of these options:

- Modify the `workflow:rules` in your project's `.gitlab-ci.yml` file to allow scan execution policies to create pipelines. You can use the `$CI_PIPELINE_SOURCE` variable to identify pipelines that are triggered by policies:

  ```yaml
  workflow:
    rules:
      - if: $CI_PIPELINE_SOURCE == "security_orchestration_policy"
      - if: $CI_PIPELINE_SOURCE == "push"
        when: never
  ```

- Use `type: schedule` rules instead of `type: pipeline` rules. Scheduled scan execution policies are not affected by `workflow:rules` and create pipelines according to their defined schedule.
- Use [pipeline execution policies](pipeline_execution_policies.md) for more control over when and how security scans are executed in your CI/CD pipelines.
---
stage: Security Risk Management
group: Security Policies
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Scan execution policies
---

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- Support for custom CI/CD variables in the scan execution policies editor [introduced](https://gitlab.com/groups/gitlab-org/-/epics/9566) in GitLab 16.2.
- Enforcement of scan execution policies on projects with an existing GitLab CI/CD configuration [introduced](https://gitlab.com/groups/gitlab-org/-/epics/6880) in GitLab 16.2 [with a flag](../../../administration/feature_flags/_index.md) named `scan_execution_policy_pipelines`. Feature flag `scan_execution_policy_pipelines` removed in GitLab 16.5.
- Overriding predefined variables in scan execution policies [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/440855) in GitLab 16.10 [with a flag](../../../administration/feature_flags/_index.md) named `allow_restricted_variables_at_policy_level`. Enabled by default. Feature flag `allow_restricted_variables_at_policy_level` removed in GitLab 17.5.

{{< /history >}}

Scan execution policies enforce GitLab security scans based on the default or latest [security CI/CD templates](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gitlab/ci/templates/Jobs). You can deploy scan execution policies as part of the pipeline or on a specified schedule. Scan execution policies are enforced across all projects that are linked to the security policy project and are in the scope of the policy.

For projects without a `.gitlab-ci.yml` file, or where AutoDevOps is disabled, security policies create the `.gitlab-ci.yml` file implicitly.
The `.gitlab-ci.yml` file ensures policies that run secret detection, static analysis, or other scanners that do not require a build in the project can always run and be enforced.

Both scan execution policies and pipeline execution policies can configure GitLab security scans across multiple projects to manage security and compliance. Scan execution policies are faster to configure, but are not customizable. If any of the following cases are true, use [pipeline execution policies](pipeline_execution_policies.md) instead:

- You require advanced configuration settings.
- You want to enforce custom CI/CD jobs or scripts.
- You want to enable third-party security scans through an enforced CI/CD job.

<!-- Videos -->

- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For a video walkthrough, see [How to set up Security Scan Policies in GitLab](https://youtu.be/ZBcqGmEwORA?si=aeT4EXtmHjosgjBY).
- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> Learn more about [enforcing scan execution policies on projects with no GitLab CI/CD configuration](https://www.youtube.com/watch?v=sUfwQQ4-qHs).

## Restrictions

- You can assign a maximum of five rules to each policy.
- You can assign a maximum of five scan execution policies to each security policy project.
- Local project YAML files cannot override scan execution policies. These policies take precedence over any configurations defined for a pipeline, even if you use the same job name in your project's CI/CD configuration.
- Scan execution policies with `type: pipeline` rules do not create pipelines if the project's `.gitlab-ci.yml` file contains [`workflow:rules`](../../../ci/yaml/workflow.md) that prevent the creation of pipelines. This limitation does not apply to `type: schedule` rules.

## Jobs

Policy jobs for scans, other than DAST scans, are created in the `test` stage of the pipeline. If you remove the `test` stage from the default pipeline, jobs run in the `scan-policies` stage instead.
This stage is injected into the CI/CD pipeline at evaluation time if it doesn't exist. If the `build` stage exists, `scan-policies` is injected just after the `build` stage; otherwise, it is injected at the beginning of the pipeline. DAST scans always run in the `dast` stage. If the `dast` stage does not exist, then a `dast` stage is injected at the end of the pipeline.

To avoid job name conflicts, a hyphen and a number are appended to the job name. Each number is a unique value for each policy action. For example, `secret-detection` becomes `secret-detection-1`.

## Scan execution policy editor

{{< history >}}

- `Merge Request Security Template` [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/541689) in GitLab 18.2 [with a flag](../../../administration/feature_flags/_index.md) named `flexible_scan_execution`. Disabled by default.

{{< /history >}}

Use the scan execution policy editor to create or edit a scan execution policy.

Prerequisites:

- By default, only group, subgroup, or project Owners have the [permissions](../../permissions.md#application-security) required to create or assign a security policy project. Alternatively, you can create a custom role with the permission to [manage security policy links](../../custom_roles/abilities.md#security-policy-management).

When you create your first scan execution policies, we provide you with templates to get started quickly with some of the most common use cases:

- Merge Request Security Template
  - Use case: You want security scans to run only when merge requests are created, not on every commit.
  - When to use: For projects using merge request pipelines that need security scans to run on source branches targeting default or protected branches.
  - Best for: Teams that want to align with merge request approval policies and reduce infrastructure costs by avoiding scans on every branch.
  - Pipeline sources: Primarily merge request pipelines.
- Scheduled Scanning Template
  - Use case: You want security scans to run automatically on a schedule (like daily or weekly) regardless of code changes.
  - When to use: For security scanning on a regular cadence, independent of development activity.
  - Best for: Compliance requirements, baseline security monitoring, or projects with infrequent commits.
  - Pipeline sources: Scheduled pipelines.
- Merge Release Security Template
  - Use case: You want security scans to run on all changes to your `main` or release branches.
  - When to use: For projects that need comprehensive scanning before releases, or on protected branches.
  - Best for: Release-gated workflows, production deployments, or high-security environments.
  - Pipeline sources: Push pipelines to protected branches, release pipelines.

If the available templates do not meet your needs, or you require more customized scan execution policies, you can:

- Select the **Custom** option and create your own scan execution policy with custom requirements.
- Access more customizable options for security scan and CI enforcement using [pipeline execution policies](pipeline_execution_policies.md).

Once your policy is complete, save it by selecting **Configure with a merge request** at the bottom of the editor. You are redirected to the merge request on the project's configured security policy project. If one is not linked to your project, a security policy project is automatically created.

You can remove existing policies from the editor interface by selecting **Delete policy** at the bottom of the editor. This action creates a merge request to remove the policy from your `policy.yml` file.

Most policy changes take effect as soon as the merge request is merged. Any changes committed directly to the default branch, instead of through a merge request, require up to 10 minutes before the policy changes take effect.
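The Merge Request Security Template described earlier roughly corresponds to a `pipeline` rule that targets merge request pipelines. A hand-written equivalent might look like this sketch; it relies on the `target_default` branch type and `pipeline_sources` options, which require the `flexible_scan_execution` feature flag, and the scan choice is illustrative:

```yaml
- name: Scan merge requests targeting the default branch
  enabled: true
  rules:
  - type: pipeline
    branch_type: target_default     # source branches targeting the default branch
    pipeline_sources:
    - merge_request_event           # trigger only for merge request pipelines
  actions:
  - scan: secret_detection
```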
![Scan Execution Policy Editor Rule Mode](img/scan_execution_policy_rule_mode_v17_5.png)

{{< alert type="note" >}}

For DAST execution policies, the way you apply site and scanner profiles in the rule mode editor depends on where the policy is defined:

- For policies in projects, in the rule mode editor, choose from a list of profiles that are already defined in the project.
- For policies in groups, you must type in the names of the profiles to use. To prevent pipeline errors, profiles with matching names must exist in all of the group's projects.

{{< /alert >}}

## Scan execution policies schema

A YAML configuration with scan execution policies consists of an array of objects matching the scan execution policy schema. Objects are nested under the `scan_execution_policy` key. You can configure a maximum of five policies under the `scan_execution_policy` key. Any other policies configured after the first five are not applied.

When you save a new policy, GitLab validates the policy's contents against [this JSON schema](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/validators/json_schemas/security_orchestration_policy.json). If you're not familiar with [JSON schemas](https://json-schema.org/), the following sections and tables provide an alternative.

| Field | Type | Required | Possible values | Description |
|-------|------|----------|-----------------|-------------|
| `scan_execution_policy` | `array` of scan execution policy | true | | List of scan execution policies (maximum 5). |

## Scan execution policy schema

{{< history >}}

- Limit of actions per policy [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/472213) in GitLab 17.4 [with flags](../../../administration/feature_flags/_index.md) named `scan_execution_policy_action_limit` (for projects) and `scan_execution_policy_action_limit_group` (for groups). Disabled by default.
- Limit of actions per policy [generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/535605) in GitLab 18.0. Feature flags `scan_execution_policy_action_limit` (for projects) and `scan_execution_policy_action_limit_group` (for groups) removed.

{{< /history >}}

{{< alert type="flag" >}}

This feature is controlled by a feature flag. For more information, see the history.

{{< /alert >}}

| Field | Type | Required | Description |
|----------------|------|----------|-------------|
| `name` | `string` | true | Name of the policy. Maximum of 255 characters. |
| `description` | `string` | false | Description of the policy. |
| `enabled` | `boolean` | true | Flag to enable (`true`) or disable (`false`) the policy. |
| `rules` | `array` of rules | true | List of rules that the policy applies. |
| `actions` | `array` of actions | true | List of actions that the policy enforces. Limited to a maximum of 10 in GitLab 18.0 and later. |
| `policy_scope` | `object` of [`policy_scope`](_index.md#configure-the-policy-scope) | false | Defines the scope of the policy based on the projects, groups, or compliance framework labels you specify. |
| `skip_ci` | `object` of [`skip_ci`](#skip_ci-type) | false | Defines whether users can apply the `skip-ci` directive. |

### `skip_ci` type

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/482952) in GitLab 17.9.

{{< /history >}}

Scan execution policies offer control over who can use the `[skip ci]` directive. You can specify certain users or service accounts that are allowed to use `[skip ci]` while still ensuring that critical security and compliance checks are performed.

Use the `skip_ci` keyword to specify whether users are allowed to apply the `skip_ci` directive to skip the pipelines. When the keyword is not specified, the `skip_ci` directive is ignored, preventing all users from bypassing the pipeline execution policies.
| Field | Type | Possible values | Description |
|-------------------------|----------|--------------------------|-------------|
| `allowed` | `boolean` | `true`, `false` | Flag to allow (`true`) or prevent (`false`) the use of the `skip-ci` directive for pipelines with enforced pipeline execution policies. |
| `allowlist` | `object` | `users` | Specify users who are always allowed to use the `skip-ci` directive, regardless of the `allowed` flag. Use `users:` followed by an array of objects with `id` keys representing user IDs. |

{{< alert type="note" >}}

Scan execution policies that have the rule type `schedule` always ignore the `skip_ci` option. Scheduled scans run at their configured times regardless of whether `[skip ci]` (or any of its variations) appears in the last commit message. This ensures that security scans occur on a predictable schedule even when CI/CD pipelines are otherwise skipped.

{{< /alert >}}

## `pipeline` rule type

{{< history >}}

- The `branch_type` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/404774) in GitLab 16.1 [with a flag](../../../administration/feature_flags/_index.md) named `security_policies_branch_type`. Generally available in GitLab 16.2. Feature flag removed.
- The `branch_exceptions` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418741) in GitLab 16.3 [with a flag](../../../administration/feature_flags/_index.md) named `security_policies_branch_exceptions`. Generally available in GitLab 16.5. Feature flag removed.
- The `pipeline_sources` field and the `branch_type` options `target_default` and `target_protected` were [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/541689) in GitLab 18.2 [with a flag](../../../administration/feature_flags/_index.md) named `flexible_scan_execution`. Disabled by default.

{{< /history >}}

{{< alert type="flag" >}}

The availability of this feature is controlled by a feature flag. For more information, see the history.
{{< /alert >}}

This rule enforces the defined actions whenever the pipeline runs for a selected branch.

| Field | Type | Required | Possible values | Description |
|-------|------|----------|-----------------|-------------|
| `type` | `string` | true | `pipeline` | The rule's type. |
| `branches` <sup>1</sup> | `array` of `string` | true if the `branch_type` field does not exist | `*` or the branch's name | The branch the given policy applies to (supports wildcards). For compatibility with merge request approval policies, you should target all branches to include the scans in the feature branch and default branch. |
| `branch_type` <sup>1</sup> | `string` | true if the `branches` field does not exist | `default`, `protected`, `all`, `target_default` <sup>2</sup>, or `target_protected` <sup>2</sup> | The types of branches the given policy applies to. |
| `branch_exceptions` | `array` of `string` | false | Names of branches | Branches to exclude from this rule. |
| `pipeline_sources` <sup>2</sup> | `array` of `string` | false | `api`, `chat`, `external`, `external_pull_request_event`, `merge_request_event` <sup>3</sup>, `pipeline`, `push` <sup>3</sup>, `schedule`, `trigger`, `unknown`, `web` | The pipeline source that determines when the scan execution job triggers. See the [documentation](../../../ci/jobs/job_rules.md#ci_pipeline_source-predefined-variable) for more information. |

1. You must specify either `branches` or `branch_type`, but not both.
1. Some options are only available with the `flexible_scan_execution` feature flag enabled. See the history for details.
1. When the `branch_type` options `target_default` or `target_protected` are specified, the `pipeline_sources` field supports only the `merge_request_event` and `push` values.
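The rule fields above combine with the top-level `skip_ci` setting described earlier. A sketch of a policy that enforces SAST on default-branch pipelines while allowing a single user (the ID `112` is hypothetical) to use `[skip ci]`:

```yaml
scan_execution_policy:
- name: Enforce SAST on default branch pipelines
  enabled: true
  skip_ci:
    allowed: false          # [skip ci] is not honored for these pipelines...
    allowlist:
      users:
      - id: 112             # ...except for this hypothetical user ID
  rules:
  - type: pipeline
    branch_type: default
  actions:
  - scan: sast
```

The placement of `skip_ci` follows the scan execution policy schema table, and its field names come from the `skip_ci` type table.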
## `schedule` rule type

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/404774) the `branch_type` field in GitLab 16.1 [with a flag](../../../administration/feature_flags/_index.md) named `security_policies_branch_type`. Generally available in GitLab 16.2. Feature flag removed.
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418741) the `branch_exceptions` field in GitLab 16.3 [with a flag](../../../administration/feature_flags/_index.md) named `security_policies_branch_exceptions`. Generally available in GitLab 16.5. Feature flag removed.
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/147691) a new `scan_execution_pipeline_worker` worker for scheduled scans to create pipelines in GitLab 16.11 [with a flag](../../../administration/feature_flags/_index.md).
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/152855) a new application setting `security_policy_scheduled_scans_max_concurrency` in GitLab 17.1. The concurrency limit applies when both `scan_execution_pipeline_worker` and `scan_execution_pipeline_concurrency_control` are enabled.
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/158636) a concurrency limit for scan execution scheduled jobs in GitLab 17.3 [with a flag](../../../administration/feature_flags/_index.md) named `scan_execution_pipeline_concurrency_control`.
- [Enabled](https://gitlab.com/gitlab-org/gitlab/-/issues/451890) the `scan_execution_pipeline_worker` feature flag on GitLab.com in GitLab 17.5.
- [Feature flag](https://gitlab.com/gitlab-org/gitlab/-/issues/451890) `scan_execution_pipeline_worker` removed in GitLab 17.6.
- [Feature flag](https://gitlab.com/gitlab-org/gitlab/-/issues/463802) `scan_execution_pipeline_concurrency_control` removed in GitLab 17.9.
- [Removed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/178892) the application setting `security_policy_scheduled_scans_max_concurrency` in GitLab 17.11.

{{< /history >}}

{{< alert type="warning" >}}

In GitLab 16.1 and earlier, you should not use [direct transfer](../../../administration/settings/import_and_export_settings.md#enable-migration-of-groups-and-projects-by-direct-transfer) with scheduled scan execution policies. If you must use direct transfer, first upgrade to GitLab 16.2 and ensure security policy bots are enabled in the projects you are enforcing.

{{< /alert >}}

Use the `schedule` rule type to run security scanners on a schedule. A scheduled pipeline:

- Runs only the scanners defined in the policy, not the jobs defined in the project's `.gitlab-ci.yml` file.
- Runs according to the schedule defined in the `cadence` field.
- Runs under a `security_policy_bot` user account in the project, with the Guest role and permissions to create pipelines and read the repository's content from a CI/CD job. This account is created when the policy is linked to a group or project.
- On GitLab.com, only the first 10 `schedule` rules in a scan execution policy are enforced. Rules that exceed the limit have no effect.

| Field | Type | Required | Possible values | Description |
|------------|------|----------|-----------------|-------------|
| `type` | `string` | true | `schedule` | The rule's type. |
| `branches` <sup>1</sup> | `array` of `string` | true if neither the `branch_type` nor `agents` field exists | `*` or the branch's name | The branch the given policy applies to (supports wildcards). |
| `branch_type` <sup>1</sup> | `string` | true if neither the `branches` nor `agents` field exists | `default`, `protected`, or `all` | The types of branches the given policy applies to. |
| `branch_exceptions` | `array` of `string` | false | Names of branches | Branches to exclude from this rule. |
| | `cadence` | `string` | true | Cron expression with limited options. For example, `0 0 * * *` creates a schedule to run every day at midnight (12:00 AM). | A whitespace-separated string containing five fields that represents the scheduled time. | | `timezone` | `string` | false | Time zone identifier (for example, `America/New_York`) | Time zone to apply to the cadence. Value must be an IANA Time Zone Database identifier. | | `time_window` | `object` | false | | Distribution and duration settings for scheduled security scans. | | `agents` <sup>1</sup> | `object` | true if either `branch_type` or `branches` fields do not exists | | The name of the [GitLab agents for Kubernetes](../../clusters/agent/_index.md) where [Operational Container Scanning](../../clusters/agent/vulnerabilities.md) runs. The object key is the name of the Kubernetes agent configured for your project in GitLab. | 1. You must specify only one of `branches`, `branch_type`, or `agents`. ### Cadence Use the `cadence` field to schedule when you want the policy's actions to run. The `cadence` field uses [cron syntax](../../../topics/cron/_index.md), but with some restrictions: - Only the following types of cron syntax are supported: - A daily cadence of once per hour around specified time, for example: `0 18 * * *` - A weekly cadence of once per week on a specified day and around specified time, for example: `0 13 * * 0` - Use of the comma (,), hyphens (-), or step operators (/) are not supported for minutes and hours. Any scheduled pipeline using these characters is skipped. Consider the following when choosing a value for the `cadence` field: - Timing is based on UTC for GitLab.com and GitLab Dedicated, and on the GitLab host's system time for GitLab Self-Managed. When testing new policies, pipelines may appear to run at incorrect times because they are scheduled in your server's time zone, not your local time zone. 
- A scheduled pipeline doesn't start until the required resources become available to create it. In other words, the pipeline may not begin precisely at the timing specified in the policy.

When using the `schedule` rule type with the `agents` field:

- The GitLab agent for Kubernetes checks every 30 seconds to see if there is an applicable policy. When the agent finds a policy, the scans execute according to the defined `cadence`.
- The cron expression is evaluated using the system time of the Kubernetes agent pod.

When using the `schedule` rule type with the `branches` field:

- The cron worker runs on 15-minute intervals and starts any pipelines that were scheduled to run during the previous 15 minutes. Therefore, scheduled pipelines may run with an offset of up to 15 minutes.
- If a policy is enforced on a large number of projects or branches, the policy is processed in batches, and may take some time to create all pipelines.

![A diagram showing how scheduled security scans are processed and executed with potential delays.](img/scheduled_scan_execution_policies_diagram_v15_10.png)

### `agent` schema

Use this schema to define `agents` objects in the [`schedule` rule type](#schedule-rule-type).

| Field | Type | Required | Description |
|--------------|---------------------|----------|-------------|
| `namespaces` | `array` of `string` | true | The namespace that is scanned. If empty, all namespaces are scanned. |

#### `agent` example

```yaml
- name: Enforce Container Scanning in cluster connected through my-gitlab-agent for default and kube-system namespaces
  enabled: true
  rules:
  - type: schedule
    cadence: '0 10 * * *'
    agents:
      <agent-name>:
        namespaces:
        - 'default'
        - 'kube-system'
  actions:
  - scan: container_scanning
```

The keys for a schedule rule are:

- `cadence` (required): a [Cron expression](../../../topics/cron/_index.md) for when the scans are run.
- `agents:<agent-name>` (required): The name of the agent to use for scanning.
- `agents:<agent-name>:namespaces` (optional): The Kubernetes namespaces to scan. If omitted, all namespaces are scanned.

### `time_window` schema

Define how scheduled scans are distributed over time with the `time_window` object in the [`schedule` rule type](#schedule-rule-type). You can configure `time_window` only in YAML mode of the policy editor.

| Field | Type | Required | Description |
|----------------|-----------|----------|-------------|
| `distribution` | `string` | true | Distribution pattern for schedule scans. Supports only `random`, where scans are distributed randomly in the interval defined by the `value` key of the `time_window`. |
| `value` | `integer` | true | The time window in seconds the schedule scans should run. Enter a value between 3600 (1 hour) and 86400 (24 hours). |

#### `time_window` example

```yaml
- name: Enforce Container Scanning with a time window of 1 hour
  enabled: true
  rules:
  - type: schedule
    cadence: '0 10 * * *'
    time_window:
      value: 3600
      distribution: random
  actions:
  - scan: container_scanning
```

### Optimize scheduled pipelines for projects at scale

Consider performance when enabling scheduled scans across many projects. If the `scan_execution_pipeline_concurrency_control` feature flag is not enabled:

- Scheduled pipelines run simultaneously across all projects and branches enforced by the policy.
- The first scheduled pipeline execution in each project creates a security bot user responsible for executing the schedules for each project.

To optimize performance for projects at scale:

- Roll out scheduled scan execution policies gradually, starting with a subset of projects. You can leverage security policy scopes to target specific groups, projects, or projects containing a given compliance framework label.
- You can configure the policy to run the schedules on runners with a specified `tag`. Consider setting up a dedicated runner in each project to handle schedules enforced from a policy to reduce impact to other runners.
- Test your implementation in a staging or lower environment before deploying to production. Monitor performance and adjust your rollout plan based on results.

### Concurrency control

GitLab applies concurrency control when:

- The `scan_execution_pipeline_concurrency_control` feature flag is enabled
- You set the `time_window` property

The concurrency control distributes the scheduled pipelines according to the [`time_window` settings](#time_window-schema) defined in the policy.

## `scan` action type

{{< history >}}

- Scan Execution Policies variable precedence was [changed](https://gitlab.com/gitlab-org/gitlab/-/issues/424028) in GitLab 16.7 [with a flag](../../../administration/feature_flags/_index.md) named `security_policies_variables_precedence`. Enabled by default. [Feature flag removed in GitLab 16.8](https://gitlab.com/gitlab-org/gitlab/-/issues/435727).
- Selection of security templates for given action (for projects) was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/415427) in GitLab 17.1 [with feature flag](../../../administration/feature_flags/_index.md) named `scan_execution_policies_with_latest_templates`. Disabled by default.
- Selection of security templates for given action (for groups) was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/468981) in GitLab 17.2 [with feature flag](../../../administration/feature_flags/_index.md) named `scan_execution_policies_with_latest_templates_group`. Disabled by default.
- Selection of security templates for given action (for projects and groups) was enabled on GitLab Self-Managed, and GitLab Dedicated ([1](https://gitlab.com/gitlab-org/gitlab/-/issues/461474), [2](https://gitlab.com/gitlab-org/gitlab/-/issues/468981)) in GitLab 17.2.
- Selection of security templates for given action (for projects and groups) was generally available in GitLab 17.3. Feature flags `scan_execution_policies_with_latest_templates` and `scan_execution_policies_with_latest_templates_group` removed.

{{< /history >}}

This action executes the selected `scan` with additional parameters when conditions for at least one rule in the defined policy are met.

| Field | Type | Possible values | Description |
|-------|------|-----------------|-------------|
| `scan` | `string` | `sast`, `sast_iac`, `dast`, `secret_detection`, `container_scanning`, `dependency_scanning` | The action's type. |
| `site_profile` | `string` | Name of the selected [DAST site profile](../dast/profiles.md#site-profile). | The DAST site profile to execute the DAST scan. This field should only be set if `scan` type is `dast`. |
| `scanner_profile` | `string` or `null` | Name of the selected [DAST scanner profile](../dast/profiles.md#scanner-profile). | The DAST scanner profile to execute the DAST scan. This field should only be set if `scan` type is `dast`. |
| `variables` | `object` | | A set of CI/CD variables, supplied as an array of `key: value` pairs, to apply and enforce for the selected scan. The `key` is the variable name, with its `value` provided as a string. This parameter supports any variable that the GitLab CI/CD job supports for the specified scan. |
| `tags` | `array` of `string` | | A list of runner tags for the policy. The policy jobs are run by runners with the specified tags. |
| `template` | `string` | `default` or `latest` | CI/CD template version to enforce. The `latest` version may introduce breaking changes and supports only `pipeline_sources` related to merge requests. For details, see [customize security scanning](../../application_security/detect/security_configuration.md#customize-security-scanning). |
| `scan_settings` | `object` | | A set of scan settings, supplied as an array of `key: value` pairs, to apply and enforce for the selected scan. The `key` is the setting name, with its `value` provided as a boolean or string. This parameter supports the settings defined in [scan settings](#scan-settings). |

{{< alert type="note" >}}

If you have merge request pipelines enabled for your project, you must set the `AST_ENABLE_MR_PIPELINES` CI/CD variable to `"true"` in your policy for each enforced scan. For more information on using security scanning tools with merge request pipelines, refer to the [security scanning documentation](../../application_security/detect/security_configuration.md#use-security-scanning-tools-with-merge-request-pipelines).

{{< /alert >}}

### Scanner behavior

Some scanners behave differently in a `scan` action than they do in a regular CI/CD pipeline scan:

- Static Application Security Testing (SAST): Runs only if the repository contains [files supported by SAST](../sast/_index.md#supported-languages-and-frameworks).
- Secret detection:
  - Only rules in the default ruleset are supported by default.
  - To customize a ruleset configuration, either:
    - Modify the default ruleset. Use a scan execution policy to specify the `SECRET_DETECTION_RULESET_GIT_REFERENCE` CI/CD variable. By default, this points to a [remote configuration file](../secret_detection/pipeline/configure.md#with-a-remote-ruleset) that only overrides or disables rules from the default ruleset. Using only this variable does not support extending or replacing the default set of rules.
    - [Extend](../secret_detection/pipeline/configure.md#extend-the-default-ruleset) or [replace](../secret_detection/pipeline/configure.md#replace-the-default-ruleset) the default ruleset.
Use the scan execution policy to specify the `SECRET_DETECTION_RULESET_GIT_REFERENCE` CI/CD variable and a remote configuration file that uses [a Git passthrough](../secret_detection/pipeline/custom_rulesets_schema.md#passthrough-types) to extend or replace the default ruleset. For a detailed guide, see [How to set up a centrally managed pipeline secret detection configuration](https://support.gitlab.com/hc/en-us/articles/18863735262364-How-to-set-up-a-centrally-managed-pipeline-secret-detection-configuration-applied-via-Scan-Execution-Policy).

  - For `scheduled` scan execution policies, secret detection by default runs first in `historic` mode (`SECRET_DETECTION_HISTORIC_SCAN` = `true`). All subsequent scheduled scans run in default mode with `SECRET_DETECTION_LOG_OPTIONS` set to the commit range between last run and current SHA. You can override this behavior by specifying CI/CD variables in the scan execution policy. For more information, see [Full history pipeline secret detection](../secret_detection/pipeline/_index.md#run-a-historic-scan).
  - For `triggered` scan execution policies, secret detection works just like a regular scan [configured manually in the `.gitlab-ci.yml`](../secret_detection/pipeline/_index.md#edit-the-gitlab-ciyml-file-manually).
- Container scanning: A scan that is configured for the `pipeline` rule type ignores the agent defined in the `agents` object. The `agents` object is only considered for `schedule` rule types. An agent with a name provided in the `agents` object must be created and configured for the project.

### DAST profiles

The following requirements apply when enforcing Dynamic Application Security Testing (DAST):

- For every project in the policy's scope, the specified [site profile](../dast/profiles.md#site-profile) and [scanner profile](../dast/profiles.md#scanner-profile) must exist. If these are not available, the policy is not applied and a job with an error message is created instead.
- When a DAST site profile or scanner profile is named in an enabled scan execution policy, the profile cannot be modified or deleted. To edit or delete the profile, you must first set the policy to **Disabled** in the policy editor or set `enabled: false` in the YAML mode.
- When configuring policies with a scheduled DAST scan, the author of the commit in the security policy project's repository must have access to the scanner and site profiles. Otherwise, the scan is not scheduled successfully.

### Scan settings

The following settings are supported by the `scan_settings` parameter:

| Setting | Type | Required | Possible values | Default | Description |
|---------|------|----------|-----------------|---------|-------------|
| `ignore_default_before_after_script` | `boolean` | false | `true`, `false` | `false` | Specifies whether to exclude any default `before_script` and `after_script` definitions in the pipeline configuration from the scan job. |

## CI/CD variables

{{< alert type="warning" >}}

Don't store sensitive information or credentials in variables because they are stored as part of the plaintext policy configuration in a Git repository.

{{< /alert >}}

Variables defined in a scan execution policy follow the standard [CI/CD variable precedence](../../../ci/variables/_index.md#cicd-variable-precedence).

Preconfigured values are used for the following CI/CD variables in any project on which a scan execution policy is enforced. Their values can be overridden, but **only** if they are declared in a policy.
They **cannot** be overridden by group or project CI/CD variables:

```plaintext
DS_EXCLUDED_PATHS: spec, test, tests, tmp
SAST_EXCLUDED_PATHS: spec, test, tests, tmp
SECRET_DETECTION_EXCLUDED_PATHS: ''
SECRET_DETECTION_HISTORIC_SCAN: false
SAST_EXCLUDED_ANALYZERS: ''
DEFAULT_SAST_EXCLUDED_PATHS: spec, test, tests, tmp
DS_EXCLUDED_ANALYZERS: ''
SECURE_ENABLE_LOCAL_CONFIGURATION: true
```

In GitLab 16.9 and earlier:

- If the CI/CD variables suffixed `_EXCLUDED_PATHS` were declared in a policy, their values _could_ be overridden by group or project CI/CD variables.
- If the CI/CD variables suffixed `_EXCLUDED_ANALYZERS` were declared in a policy, their values were ignored, regardless of where they were defined: policy, group, or project.

## Policy scope schema

To customize policy enforcement, you can define a policy's scope to either include, or exclude, specified projects, groups, or compliance framework labels. For more details, see [Scope](_index.md#configure-the-policy-scope).

## Example security policy project

You can use this example in a `.gitlab/security-policies/policy.yml` file stored in a [security policy project](enforcement/security_policy_projects.md):

```yaml
---
scan_execution_policy:
- name: Enforce DAST in every release pipeline
  description: This policy enforces pipeline configuration to have a job with DAST scan for release branches
  enabled: true
  rules:
  - type: pipeline
    branches:
    - release/*
  actions:
  - scan: dast
    scanner_profile: Scanner Profile A
    site_profile: Site Profile B
- name: Enforce DAST and secret detection scans every 10 minutes
  description: This policy enforces DAST and secret detection scans to run every 10 minutes
  enabled: true
  rules:
  - type: schedule
    branches:
    - main
    cadence: "*/10 * * * *"
  actions:
  - scan: dast
    scanner_profile: Scanner Profile C
    site_profile: Site Profile D
  - scan: secret_detection
    scan_settings:
      ignore_default_before_after_script: true
- name: Enforce Secret Detection and Container Scanning in every default branch pipeline
  description: This policy enforces pipeline configuration to have a job with Secret Detection and Container Scanning scans for the default branch
  enabled: true
  rules:
  - type: pipeline
    branches:
    - main
  actions:
  - scan: secret_detection
  - scan: sast
    variables:
      SAST_EXCLUDED_ANALYZERS: brakeman
  - scan: container_scanning
```

In this example:

- For every pipeline executed on branches that match the `release/*` wildcard (for example, branch `release/v1.2.1`), DAST scans run with `Scanner Profile A` and `Site Profile B`.
- DAST and secret detection scans run every 10 minutes. The DAST scan runs with `Scanner Profile C` and `Site Profile D`.
- Secret detection, container scanning, and SAST scans run for every pipeline executed on the `main` branch. The SAST scan runs with the `SAST_EXCLUDED_ANALYZERS` variable set to `"brakeman"`.

## Example for scan execution policy editor

You can use this example in the YAML mode of the [scan execution policy editor](#scan-execution-policy-editor). It corresponds to a single object from the previous example.

```yaml
name: Enforce Secret Detection and Container Scanning in every default branch pipeline
description: This policy enforces pipeline configuration to have a job with Secret Detection and Container Scanning scans for the default branch
enabled: true
rules:
- type: pipeline
  branches:
  - main
actions:
- scan: secret_detection
- scan: container_scanning
```

## Avoiding duplicate scans

Scan execution policies can cause the same type of scanner to run more than once if developers include scan jobs in the project's `.gitlab-ci.yml` file. This behavior is intentional as scanners can run more than once with different variables and settings. For example, a developer may want to try running a SAST scan with different variables than the one enforced by the security and compliance team. In this case, two SAST jobs run in the pipeline:

- One with the developer's variables.
- One with the security and compliance team's variables.
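To keep only the policy-enforced job, a project can opt out of its own scan with a skip variable. A minimal sketch of the SAST case described above, set in the project's `.gitlab-ci.yml` file (the variable disables only the project's own SAST jobs, not the policy's):

```yaml
# .gitlab-ci.yml
variables:
  SAST_DISABLED: "true"  # skips the project's own SAST jobs; the policy-enforced SAST scan still runs
```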
To avoid running duplicate scans, you can either remove the scans from the project's `.gitlab-ci.yml` file or skip your local jobs with variables. Skipping jobs does not prevent any security jobs defined by scan execution policies from running.

To skip scan jobs with variables, you can use:

- `SAST_DISABLED: "true"` to skip SAST jobs.
- `DAST_DISABLED: "true"` to skip DAST jobs.
- `CONTAINER_SCANNING_DISABLED: "true"` to skip container scanning jobs.
- `SECRET_DETECTION_DISABLED: "true"` to skip secret detection jobs.
- `DEPENDENCY_SCANNING_DISABLED: "true"` to skip dependency scanning jobs.

For an overview of all variables that can skip jobs, see the [CI/CD variables documentation](../../../topics/autodevops/cicd_variables.md#job-skipping-variables).

## Troubleshooting

### Scan execution policy pipelines are not created

If scan execution policies do not create the pipelines defined in `type: pipeline` as expected, you may have [`workflow:rules`](../../../ci/yaml/workflow.md) in the project's `.gitlab-ci.yml` file that prevent the policy from creating the pipeline.

Scan execution policies with `type: pipeline` rules rely on the merged CI/CD configuration to create pipelines. If the project's `workflow:rules` filter out the pipeline entirely, the scan execution policy cannot create a pipeline. For example, the following `workflow:rules` configuration prevents all pipelines from being created:

```yaml
# .gitlab-ci.yml
workflow:
  rules:
    - if: $CI_PIPELINE_SOURCE == "push"
      when: never
```

To resolve this issue, you can use any of these options:

- Modify the `workflow:rules` in your project's `.gitlab-ci.yml` file to allow scan execution policies to create pipelines. You can use the `$CI_PIPELINE_SOURCE` variable to identify pipelines that are triggered by policies:

  ```yaml
  workflow:
    rules:
      - if: $CI_PIPELINE_SOURCE == "security_orchestration_policy"
      - if: $CI_PIPELINE_SOURCE == "push"
        when: never
  ```

- Use `type: schedule` rules instead of `type: pipeline` rules.
  Scheduled scan execution policies are not affected by `workflow:rules` and create pipelines according to their defined schedule.

- Use [pipeline execution policies](pipeline_execution_policies.md) for more control over when and how security scans are executed in your CI/CD pipelines.
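For example, a schedule-based policy avoids the `workflow:rules` interaction entirely, because its pipelines are created on a cadence rather than from the project's pipeline triggers. A minimal sketch (the policy name, cadence, and scans are illustrative):

```yaml
scan_execution_policy:
- name: Daily scans on the default branch
  description: Runs secret detection and SAST on a schedule instead of per pipeline
  enabled: true
  rules:
  - type: schedule
    branch_type: default
    cadence: '0 2 * * *'
  actions:
  - scan: secret_detection
  - scan: sast
```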
# Policies

Security policies, enforcement, compliance, approvals, and scans.
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Policies provide security and compliance teams with a way to enforce controls globally in their organization.

Security teams can ensure:

- Security scanners are enforced in development team pipelines with proper configuration.
- All scan jobs execute without any changes or alterations.
- Proper approvals are provided on merge requests, based on results from those findings.
- Vulnerabilities that are no longer detected are resolved automatically, reducing the workload of triaging vulnerabilities.

Compliance teams can enforce:

- Multiple approvers on all merge requests.
- Project settings based on organizational requirements, such as enabling or locking merge request settings or repository settings.

The following policy types are available:

- [Scan execution policy](scan_execution_policies.md). Enforce security scans, either as part of the pipeline or on a specified schedule.
- [Merge request approval policy](merge_request_approval_policies.md). Enforce project-level settings and approval rules based on scan results.
- [Pipeline execution policy](pipeline_execution_policies.md). Enforce CI/CD jobs as part of project pipelines.
- [Scheduled pipeline execution policy (experiment)](scheduled_pipeline_execution_policies.md). Enforce custom CI/CD jobs on a scheduled cadence across projects, independent of commit activity.
- [Vulnerability management policy](vulnerability_management_policy.md). Automatically resolve vulnerabilities that are no longer detected in the default branch.

## Configure the policy scope

### `policy_scope` keyword

Use the `policy_scope` keyword to enforce the policy on only those groups, projects, compliance frameworks, or a combination, that you specify.
| Field | Type | Possible values | Description |
|-------------------------|----------|--------------------------|-------------|
| `compliance_frameworks` | `array` | Not applicable | List of IDs of the compliance frameworks in scope for enforcement, in an array of objects with key `id`. |
| `projects` | `object` | `including`, `excluding` | Use `excluding:` or `including:` then list the IDs of the projects you wish to include or exclude, in an array of objects with key `id`. |
| `groups` | `object` | `including` | Use `including:` then list the IDs of the groups you wish to include, in an array of objects with key `id`. Only groups linked to the same security policy project can be listed in the policy. |

### Scope examples

In this example, the scan execution policy enforces a SAST scan in every release pipeline, on every project that has a compliance framework with ID `2` or `11` applied to it.

```yaml
---
scan_execution_policy:
- name: Enforce specified scans in every release pipeline
  description: This policy enforces a SAST scan for release branches
  enabled: true
  rules:
  - type: pipeline
    branches:
    - release/*
  actions:
  - scan: sast
  policy_scope:
    compliance_frameworks:
    - id: 2
    - id: 11
```

In this example, the scan execution policy enforces a secret detection and SAST scan on pipelines for the default branch, on all projects in the group with ID `203` (including all descendant subgroups and their projects), excluding the project with ID `64`.

```yaml
- name: Enforce specified scans in every default branch pipeline
  description: This policy enforces Secret Detection and SAST scans for the default branch
  enabled: true
  rules:
  - type: pipeline
    branches:
    - main
  actions:
  - scan: secret_detection
  - scan: sast
  policy_scope:
    groups:
      including:
      - id: 203
    projects:
      excluding:
      - id: 64
```

## Separation of duties

Separation of duties is vital to successfully implementing policies.
Implement policies that achieve the necessary compliance and security requirements, while allowing development teams to achieve their goals.

Security and compliance teams:

- Should be responsible for defining policies and working with development teams to ensure the policies meet their needs.

Development teams:

- Should not be able to disable, modify, or circumvent the policies in any way.

To enforce a security policy project on a group, subgroup, or project, you must have either:

- The Owner role in that group, subgroup, or project.
- A [custom role](../../custom_roles/_index.md) in that group, subgroup, or project with the `manage_security_policy_link` permission.

The Owner role and custom roles with the `manage_security_policy_link` permission follow the standard hierarchy rules across groups, subgroups, and projects:

| Organization unit | Group owner or group `manage_security_policy_link` permission | Subgroup owner or subgroup `manage_security_policy_link` permission | Project owner or project `manage_security_policy_link` permission |
|-------------------|---------------------------------------------------------------|---------------------------------------------------------------------|-------------------------------------------------------------------|
| Group | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Subgroup | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No |
| Project | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |

### Required permissions

To create and manage security policies:

- For policies enforced on groups: You must have at least the Maintainer role for the group.
- For policies enforced on projects:
  - You must be the project owner.
  - You must be a group member with permissions to create projects in the group.
{{< alert type="note" >}}

If you're not a group member, you may face limitations in adding or editing policies for your project. The ability to create and manage policies requires permissions to create projects in the group. Make sure you have the required permissions in the group, even when working with project-level policies.

{{< /alert >}}

## Policy recommendations

When implementing policies, consider the following recommendations.

### Branch names

When specifying branch names in a policy, use a generic category of protected branches, such as **default branch** or **all protected branches**, not individual branch names.

A policy is enforced on a project only if the specified branch exists in that project. For example, if your policy enforces rules on branch `main` but some projects in scope are using `production` as their default branch, the policy is not applied for the latter.

### Push rules

In GitLab 17.3 and earlier, if you use push rules to [validate branch names](../../project/repository/push_rules.md#validate-branch-names), ensure they allow creation of branches with the prefix `update-policy-`. This branch naming prefix is used when a security policy is created or amended. For example, `update-policy-1659094451`, where `1659094451` is the timestamp. If push rules block the creation of the branch, the following error occurs:

```plaintext
Branch name `update-policy-<timestamp>` does not follow the pattern `<branch_name_regex>`.
```

In [GitLab 17.4 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/463064), security policy projects are excluded from push rules that enforce branch name validation.

### Security policy projects

To prevent the exposure of sensitive information that was intended to remain private in your security policy project, when you link security policy projects to other projects:

- Don't include sensitive content in your security policy projects.
- Before linking a private security policy project, review the member list of the target project to ensure all members should have access to your policy content.
- Evaluate the visibility settings of target projects.
- Use [security policy management](../../compliance/audit_event_types.md#security-policy-management) audit logs to monitor project linking.

These recommendations prevent sensitive information exposure for the following reasons:

- Shared visibility: When a private security project is linked to another project, users with access to the **Security Policies** page of the linked project can view the contents of the `.gitlab/security-policies/policy.yml` file. This includes linking a private security policy project to a public project, which can expose the policy contents to anyone who can access the public project.
- Access control: All members of the project to which a private security project is linked can view the policy file on the **Policy** page, even if they don't have access to the original private repository.

### Security and compliance controls

Project maintainers can create policies for projects that interfere with the execution of policies for groups. To limit who can modify policies for groups and ensure that compliance requirements are being met, when you implement critical security or compliance controls:

- Use custom roles to restrict who can create or modify pipeline execution policies at the project level.
- Configure protected branches for the default branch in your security policy projects to prevent direct pushes.
- Set up merge request approval rules in your security policy projects that require review from designated approvers.
- Monitor and review all policy changes in policies for both groups and projects.

## Policy management

The Policies page displays deployed policies for all available environments. You can check a policy's information (for example, description or enforcement status), and create and edit deployed policies:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Secure > Policies**.

![Policies List Page](img/policies_list_v17_7.png)

A green checkmark in the first column indicates that the policy is enabled and enforced on all groups and projects within its scope. A gray checkmark indicates that the policy is currently not enabled.

## Policy editor

Use the policy editor to create, edit, and delete policies:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Secure > Policies**.

   - To create a new policy, select **New policy** which is located in the **Policies** page's header. You can then select which type of policy to create.
   - To edit an existing policy, select **Edit policy** in the selected policy drawer.

   The policy editor has two modes:

   - The visual **Rule mode** allows you to construct and preview policy rules using rule blocks and related controls.

     ![Policy Editor Rule Mode](img/policy_rule_mode_v15_9.png)

   - **YAML mode** allows you to enter a policy definition in `.yaml` format and is aimed at expert users and cases that the Rule mode doesn't support.

     ![Policy Editor YAML Mode](img/policy_yaml_mode_v15_9.png)

   You can use both modes interchangeably and switch between them at any time. If a YAML resource is incorrect or contains data not supported by the Rule mode, Rule mode is automatically disabled. If the YAML is incorrect, you must use YAML mode to fix your policy before Rule mode is available again.

1. Select **Configure with a merge request** to save and apply the changes. The policy's YAML is validated and any resulting errors are displayed.
1. Review and merge the resulting merge request. If you are a project owner and a security policy project is not associated with this project, a security policy project is created and linked to this project when the merge request is created.
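For example, a minimal scan execution policy as it could be entered in YAML mode (a sketch; the name, branch type, and scan are illustrative):

```yaml
name: Enforce secret detection in every default branch pipeline
description: This policy enforces a secret detection scan for the default branch
enabled: true
rules:
- type: pipeline
  branch_type: default
actions:
- scan: secret_detection
```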
### Annotate IDs in `policy.yml` {{< details >}} Status: Experiment {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/497774) as an [experiment](../../../policy/development_stages_support.md) in GitLab 18.1 with an `annotate_ids` option defined in the `policy.yml` file. {{< /history >}} To simplify your `policy.yml` file, GitLab can automatically add comments after IDs, such as project IDs, group IDs, user IDs, or compliance framework IDs. The annotations help users identify the meaning or origin of each ID, which makes the `policy.yml` file easier to understand and maintain. To enable this experimental feature, add an `annotate_ids` section to the `experiments` section in the `.gitlab/security-policies/policy.yml` file for your security policy project: ```yaml experiments: annotate_ids: enabled: true ``` After you enable the option, any change to the security policies made with the GitLab [policy editor](#policy-editor) creates annotation comments next to the IDs in the `policy.yml` file. {{< alert type="note" >}} To apply the annotations, you must use the policy editor. If you edit the `policy.yml` file manually (for example, with a Git commit), the annotations are not applied. {{< /alert>}} For example: ```yaml # Example policy.yml with annotated IDs approval_policy: - name: Your policy name # ... other policy fields ... policy_scope: projects: including: - id: 361 # my-group/my-project actions: - type: require_approval approvals_required: 1 user_approvers_ids: - 75 # jane.doe group_approvers_ids: - 203 # security-approvers ``` {{< alert type="note" >}} When you apply annotations for the first time, GitLab creates the annotations for all IDs in the `policy.yml` file, including those in policies that you aren't editing. 
{{< /alert >}} ## Troubleshooting When working with security policies, consider these troubleshooting tips: - You should not link a security policy project to both a development project and the group or subgroup the development project belongs to. Linking this way results in approval rules from the merge request approval policies not being applied to merge requests in the development project. - When creating a merge request approval policy, neither the array `severity_levels` nor the array `vulnerability_states` in the [`scan_finding` rule](merge_request_approval_policies.md#scan_finding-rule-type) can be left empty. For a working rule, at least one entry must exist for each array. - The owner of a project can enforce policies for that project, provided they also have permissions to create projects in the group. Project owners who are not group members may face limitations in adding or editing policies. If you're unable to manage policies for your project, contact your group administrator to ensure you have the necessary permissions in the group. If you are still experiencing issues, you can [view recent reported bugs](https://gitlab.com/gitlab-org/gitlab/-/issues/?sort=popularity&state=opened&label_name%5B%5D=group%3A%3Asecurity%20policies&label_name%5B%5D=type%3A%3Abug&first_page_size=20) and raise new unreported issues. ### Resynchronize policies with the GraphQL API If you notice inconsistencies in any of the policies, such as policies that aren't being enforced or approvals that are incorrect, you can manually force a resynchronization of the policies with the GraphQL `resyncSecurityPolicies` mutation: ```graphql mutation { resyncSecurityPolicies(input: { fullPath: "group-or-project-path" }) { errors } } ``` Set `fullPath` to the path of the project or group to which the security policy project is assigned.
--- stage: Security Risk Management group: Security Policies info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Policies description: Security policies, enforcement, compliance, approvals, and scans. breadcrumbs: - doc - user - application_security - policies --- {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Policies provide security and compliance teams with a way to enforce controls globally in their organization. Security teams can ensure: - Security scanners are enforced in development team pipelines with proper configuration. - All scan jobs execute without any changes or alterations. - Proper approvals are provided on merge requests, based on results from those findings. - Vulnerabilities that are no longer detected are resolved automatically, reducing the workload of triaging vulnerabilities. Compliance teams can enforce: - Multiple approvers on all merge requests - Projects settings based on organizational requirements, such as enabling or locking merge request settings or repository settings. The following policy types are available: - [Scan execution policy](scan_execution_policies.md). Enforce security scans, either as part of the pipeline or on a specified schedule. - [Merge request approval policy](merge_request_approval_policies.md). Enforce project-level settings and approval rules based on scan results. - [Pipeline execution policy](pipeline_execution_policies.md). Enforce CI/CD jobs as part of project pipelines. - [Scheduled pipeline execution policy (experiment)](scheduled_pipeline_execution_policies.md). Enforce custom CI/CD jobs on a scheduled cadence across projects, independent of commit activity. - [Vulnerability management policy](vulnerability_management_policy.md). 
Automatically resolve vulnerabilities that are no longer detected in the default branch.

## Configure the policy scope

### `policy_scope` keyword

Use the `policy_scope` keyword to enforce the policy on only those groups, projects, compliance frameworks, or a combination, that you specify.

| Field | Type | Possible values | Description |
|-------------------------|----------|--------------------------|-------------|
| `compliance_frameworks` | `array` | Not applicable | List of IDs of the compliance frameworks in scope for enforcement, in an array of objects with key `id`. |
| `projects` | `object` | `including`, `excluding` | Use `excluding:` or `including:` then list the IDs of the projects you wish to include or exclude, in an array of objects with key `id`. |
| `groups` | `object` | `including` | Use `including:` then list the IDs of the groups you wish to include, in an array of objects with key `id`. Only groups linked to the same security policy project can be listed in the policy. |

### Scope examples

In this example, the scan execution policy enforces a SAST scan in every release pipeline, on every project with a compliance framework with an ID of either `2` or `11` applied to it.

```yaml
---
scan_execution_policy:
- name: Enforce specified scans in every release pipeline
  description: This policy enforces a SAST scan for release branches
  enabled: true
  rules:
  - type: pipeline
    branches:
    - release/*
  actions:
  - scan: sast
  policy_scope:
    compliance_frameworks:
    - id: 2
    - id: 11
```

In this example, the scan execution policy enforces a secret detection and SAST scan on pipelines for the default branch, on all projects in the group with ID `203` (including all descendant subgroups and their projects), excluding the project with ID `64`.
```yaml - name: Enforce specified scans in every default branch pipeline description: This policy enforces Secret Detection and SAST scans for the default branch enabled: true rules: - type: pipeline branches: - main actions: - scan: secret_detection - scan: sast policy_scope: groups: including: - id: 203 projects: excluding: - id: 64 ``` ## Separation of duties Separation of duties is vital to successfully implementing policies. Implement policies that achieve the necessary compliance and security requirements, while allowing development teams to achieve their goals. Security and compliance teams: - Should be responsible for defining policies and working with development teams to ensure the policies meet their needs. Development teams: - Should not be able to disable, modify, or circumvent the policies in any way. To enforce a security policy project on a group, subgroup, or project, you must have either: - The Owner role in that group, subgroup, or project. - A [custom role](../../custom_roles/_index.md) in that group, subgroup, or project with the `manage_security_policy_link` permission. 
The Owner role and custom roles with the `manage_security_policy_link` permission follow the standard hierarchy rules across groups, subgroups, and projects: | Organization unit | Group owner or group `manage_security_policy_link` permission | Subgroup owner or subgroup `manage_security_policy_link` permission | Project owner or project `manage_security_policy_link` permission | |-------------------|---------------------------------------------------------------|---------------------------------------------------------------------|-------------------------------------------------------------------| | Group | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Subgroup | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | Project | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | ### Required permissions To create and manage security policies: - For policies enforced on groups: You must have at least the Maintainer role for the group. - For policies enforced on projects: - You must be the project owner. - You must be a group member with permissions to create projects in the group. {{< alert type="note" >}} If you're not a group member, you may face limitations in adding or editing policies for your project. The ability to create and manage policies requires permissions to create projects in the group. Make sure you have the required permissions in the group, even when working with project-level policies. {{< /alert >}} ## Policy recommendations When implementing policies, consider the following recommendations. ### Branch names When specifying branch names in a policy, use a generic category of protected branches, such as **default branch** or **all protected branches**, not individual branch names. 
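
For example, rather than hardcoding branch names, a scan execution policy can target each project's default branch generically. The following is an illustrative sketch; it assumes the `branch_type` field supported by scan execution policy rules (typical values are `default`, `protected`, and `all`):

```yaml
scan_execution_policy:
- name: Scan default branches
  description: Enforce secret detection on each project's default branch
  enabled: true
  rules:
  - type: pipeline
    branch_type: default  # matches each project's default branch, whatever its name
  actions:
  - scan: secret_detection
```

Because `branch_type: default` resolves per project, the policy applies whether a project's default branch is named `main`, `production`, or anything else.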
A policy is enforced on a project only if the specified branch exists in that project. For example, if your policy enforces rules on branch `main` but some projects in scope are using `production` as their default branch, the policy is not applied for the latter. ### Push rules In GitLab 17.3 and earlier, if you use push rules to [validate branch names](../../project/repository/push_rules.md#validate-branch-names) ensure they allow creation of branches with the prefix `update-policy-`. This branch naming prefix is used when a security policy is created or amended. For example, `update-policy-1659094451`, where `1659094451` is the timestamp. If push rules block the creation of the branch the following error occurs: ```plaintext Branch name `update-policy-<timestamp>` does not follow the pattern `<branch_name_regex>`. ``` In [GitLab 17.4 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/463064), security policy projects are excluded from push rules that enforce branch name validation. ### Security policy projects To prevent the exposure of sensitive information that was intended to remain private in your security policy project, when you link security policy projects to other projects: - Don't include sensitive content in your security policy projects. - Before linking a private security policy project, review the member list of the target project to ensure all members should have access to your policy content. - Evaluate the visibility settings of target projects. - Use [security policy management](../../compliance/audit_event_types.md#security-policy-management) audit logs to monitor project linking. These recommendations prevent sensitive information exposure for the following reasons: - Shared visibility: When a private security project is linked to another project, users with access to the **Security Policies** page of the linked project can view the contents of the `.gitlab/security-policies/policy.yml` file. 
This includes linking a private security policy project to a public project, which can expose the policy contents to anyone who can access the public project. - Access control: All members of the project to which a private security project is linked can view the policy file on the **Policy** page, even if they don't have access to the original private repository. ### Security and compliance controls Project maintainers can create policies for projects that interfere with the execution of policies for groups. To limit who can modify policies for groups and ensure that compliance requirements are being met, when you implement critical security or compliance controls: - Use custom roles to restrict who can create or modify pipeline execution policies at the project level. - Configure protected branches for the default branch in your security policy projects to prevent direct pushes. - Set up merge request approval rules in your security policy projects that require review from designated approvers. - Monitor and review all policy changes in policies for both groups and projects. ## Policy management The Policies page displays deployed policies for all available environments. You can check a policy's information (for example, description or enforcement status), and create and edit deployed policies: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Secure > Policies**. ![Policies List Page](img/policies_list_v17_7.png) A green checkmark in the first column indicates that the policy is enabled and enforced on all groups and projects within its scope. A gray checkmark indicates that the policy is currently not enabled. ## Policy editor Use the policy editor to create, edit, and delete policies: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Secure > Policies**. - To create a new policy, select **New policy** which is located in the **Policies** page's header. 
You can then select which type of policy to create. - To edit an existing policy, select **Edit policy** in the selected policy drawer. The policy editor has two modes: - The visual **Rule mode** allows you to construct and preview policy rules using rule blocks and related controls. ![Policy Editor Rule Mode](img/policy_rule_mode_v15_9.png) - **YAML mode** allows you to enter a policy definition in `.yaml` format and is aimed at expert users and cases that the Rule mode doesn't support. ![Policy Editor YAML Mode](img/policy_yaml_mode_v15_9.png) You can use both modes interchangeably and switch between them at any time. If a YAML resource is incorrect or contains data not supported by the Rule mode, Rule mode is automatically disabled. If the YAML is incorrect, you must use YAML mode to fix your policy before Rule mode is available again. 1. Select **Configure with a merge request** to save and apply the changes. The policy's YAML is validated and any resulting errors are displayed. 1. Review and merge the resulting merge request. If you are a project owner and a security policy project is not associated with this project, a security policy project is created and linked to this project when the merge request is created. ### Annotate IDs in `policy.yml` {{< details >}} Status: Experiment {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/497774) as an [experiment](../../../policy/development_stages_support.md) in GitLab 18.1 with an `annotate_ids` option defined in the `policy.yml` file. {{< /history >}} To simplify your `policy.yml` file, GitLab can automatically add comments after IDs, such as project IDs, group IDs, user IDs, or compliance framework IDs. The annotations help users identify the meaning or origin of each ID, which makes the `policy.yml` file easier to understand and maintain. 

To enable this experimental feature, add an `annotate_ids` section to the `experiments` section in the `.gitlab/security-policies/policy.yml` file for your security policy project:

```yaml
experiments:
  annotate_ids:
    enabled: true
```

After you enable the option, any change to the security policies made with the GitLab [policy editor](#policy-editor) creates annotation comments next to the IDs in the `policy.yml` file.

{{< alert type="note" >}}

To apply the annotations, you must use the policy editor. If you edit the `policy.yml` file manually (for example, with a Git commit), the annotations are not applied.

{{< /alert >}}

For example:

```yaml
# Example policy.yml with annotated IDs
approval_policy:
- name: Your policy name
  # ... other policy fields ...
  policy_scope:
    projects:
      including:
      - id: 361 # my-group/my-project
  actions:
  - type: require_approval
    approvals_required: 1
    user_approvers_ids:
    - 75 # jane.doe
    group_approvers_ids:
    - 203 # security-approvers
```

{{< alert type="note" >}}

When you apply annotations for the first time, GitLab creates the annotations for all IDs in the `policy.yml` file, including those in policies that you aren't editing.

{{< /alert >}}

## Troubleshooting

When working with security policies, consider these troubleshooting tips:

- You should not link a security policy project to both a development project and the group or subgroup the development project belongs to. Linking this way results in approval rules from the merge request approval policies not being applied to merge requests in the development project.
- When creating a merge request approval policy, neither the array `severity_levels` nor the array `vulnerability_states` in the [`scan_finding` rule](merge_request_approval_policies.md#scan_finding-rule-type) can be left empty. For a working rule, at least one entry must exist for each array.
- The owner of a project can enforce policies for that project, provided they also have permissions to create projects in the group.
Project owners who are not group members may face limitations in adding or editing policies. If you're unable to manage policies for your project, contact your group administrator to ensure you have the necessary permissions in the group. If you are still experiencing issues, you can [view recent reported bugs](https://gitlab.com/gitlab-org/gitlab/-/issues/?sort=popularity&state=opened&label_name%5B%5D=group%3A%3Asecurity%20policies&label_name%5B%5D=type%3A%3Abug&first_page_size=20) and raise new unreported issues. ### Resynchronize policies with the GraphQL API If you notice inconsistencies in any of the policies, such as policies that aren't being enforced or approvals that are incorrect, you can manually force a resynchronization of the policies with the GraphQL `resyncSecurityPolicies` mutation: ```graphql mutation { resyncSecurityPolicies(input: { fullPath: "group-or-project-path" }) { errors } } ``` Set `fullPath` to the path of the project or group to which the security policy project is assigned.
--- stage: Security Risk Management group: Security Insights info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Vulnerability management policy breadcrumbs: - doc - user - application_security - policies --- {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/5708) support for enforcing policies on projects in GitLab 17.7 [with a flag](../../../administration/feature_flags/_index.md) named `vulnerability_management_policy_type`. Enabled by default. - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/15697) support for enforcing policies on groups in GitLab 17.8 for the group-level [with a flag](../../../administration/feature_flags/_index.md) named `vulnerability_management_policy_type_group`. Enabled by default. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/178031) in GitLab 17.9. Feature flags `vulnerability_management_policy_type` and `vulnerability_management_policy_type_group` removed. {{< /history >}} Use a vulnerability management policy to automatically resolve vulnerabilities that are no longer detected. This can help reduce the workload of triaging vulnerabilities. When a scanner detects a vulnerability on the default branch, the scanner creates a vulnerability record with the status **Needs triage**. After the vulnerability has been remediated and the next security scan runs, the scan adds **No longer detected** to the record's activity log but the record's status does not change. You can change the status to **Resolved** either [manually](../vulnerabilities/_index.md#change-the-status-of-a-vulnerability) or by using a vulnerability management policy. Using a vulnerability management policy ensures rules are applied consistently. 
For example, you can create a policy that marks as resolved those vulnerabilities that are no longer detected on the default branch, but only those created by SAST and are of low risk. The vulnerability management policy only affects vulnerabilities with the status **Needs triage** or **Confirmed**. The vulnerability management policy is applied when a pipeline runs against the default branch. For each vulnerability that is no longer detected by the same scanner and matches the policy's rules: - The vulnerability record's status is set to **Resolved** by the **GitLab Security Policy Bot** user. - A note about the status change is added to the vulnerability's record. To limit the pipeline load and duration, a maximum of 1,000 vulnerabilities per pipeline are set to status **Resolved**. This repeats in each pipeline until all vulnerabilities that are no longer detected are marked **Resolved**. ## Restrictions - You can assign a maximum of five rules to each policy. - You can assign a maximum of five vulnerability management policies to each security policy project. - When a secret detection scan finds that a previously detected secret key is no longer detected, the vulnerability is not auto-resolved. Instead, it remains in **Needs Triage** because the removed secret key has already been exposed. The vulnerability status should be manually resolved only after the secret key is revoked or rotated. ## Create a vulnerability management policy Create a vulnerability management policy to automatically resolve vulnerabilities matching specific criteria. Prerequisites: - By default, only group, subgroup, or project Owners have the permissions required to create or assign a security policy project. This can be changed using [custom roles](../../custom_roles/_index.md). To create a vulnerability management policy: 1. On the left sidebar, select **Search or go to** and find your project. 1. Go to **Secure > Policies**. 1. Select **New policy**. 1. 
In **Vulnerability management policy**, select **Select policy**. 1. Complete the fields and set the policy's status to **Enabled**. 1. Select **Create policy**. 1. Review and merge the merge request. After the vulnerability management policy has been created, the policy rules are applied to pipelines on the default branch. ## Edit a vulnerability management policy Edit a vulnerability management policy to change its rules. 1. On the left sidebar, select **Search or go to** and find your project. 1. Go to **Secure > Policies**. 1. In the policy's row, select **Edit**. 1. Edit the policy's details. 1. Select **Save changes**. 1. Review and merge the merge request. The vulnerability management policy has been updated. When a pipeline next runs against the default branch, the policy's rules are applied. ### Schema When a vulnerability management policy is created or edited, it's checked against the [vulnerability management policy schema](vulnerability_management_policy_schema.md) to confirm it's valid.
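
As an illustration of the SAST example described above, a policy definition might look like the following sketch. The field names (`no_longer_detected`, `auto_resolve`, and so on) are assumptions; confirm them against the [vulnerability management policy schema](vulnerability_management_policy_schema.md) before use:

```yaml
vulnerability_management_policy:
- name: Auto-resolve low-severity SAST findings
  description: Resolve SAST vulnerabilities no longer detected on the default branch
  enabled: true
  rules:
  - type: no_longer_detected
    scanners:
    - sast
    severity_levels:
    - low
  actions:
  - type: auto_resolve
```

When a default-branch pipeline completes, findings that match this rule and are no longer detected are set to **Resolved** by the **GitLab Security Policy Bot** user.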

---
stage: Security Risk Management
group: Security Policies
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Merge request approval policies
description: Learn how to enforce security rules in GitLab using merge request approval policies to automate scans, approvals, and compliance across your projects.
breadcrumbs:
- doc
- user
- application_security
- policies
---
{{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - Group-level scan result policies [introduced](https://gitlab.com/groups/gitlab-org/-/epics/7622) in GitLab 15.6. - Scan result policies feature was renamed to merge request approval policies in GitLab 16.9. {{< /history >}} {{< alert type="note" >}} Scan result policies feature was renamed to merge request approval policies in GitLab 16.9. {{< /alert >}} You can use merge request approval policies for multiple purposes, including: - Detect results from security and license scanners to enforce approval rules. For example, one type of merge request policy is a security approval policy that allows approval to be required based on the findings of one or more security scan jobs. Merge request approval policies are evaluated after a CI scanning job is fully executed and both vulnerability and license type policies are evaluated based on the job artifact reports that are published in the completed pipeline. - Enforce approval rules on all merge requests that meet certain conditions. For example, enforce that MRs are reviewed by multiple users with Developer and Maintainer roles for all MRs that target default branches. - Enforce settings for security and compliance on a project. For example, prevent users who have authored or committed changes to an MR from approving the MR. Or prevent users from pushing or force pushing to the default branch to ensure all changes go through an MR. {{< alert type="note" >}} When a protected branch is created or deleted, the policy approval rules synchronize, with a delay of 1 minute. {{< /alert >}} The following video gives you an overview of GitLab merge request approval policies (previously scan result policies): <div class="video-fallback"> See the video: <a href="https://youtu.be/w5I9gcUgr9U">Overview of GitLab Scan Result Policies</a>. 
</div> <figure class="video-container"> <iframe src="https://www.youtube-nocookie.com/embed/w5I9gcUgr9U" frameborder="0" allowfullscreen> </iframe> </figure> ## Restrictions - You can enforce merge request approval policies only on [protected](../../project/repository/branches/protected.md) target branches. - You can assign a maximum of five rules to each policy. - You can assign a maximum of five merge request approval policies to each security policy project. - Policies created for a group or subgroup can take some time to apply to all the merge requests in the group. The time it takes is determined by the number of projects and the number of merge requests in those projects. Typically, the time taken is a matter of seconds. For groups with many thousands of projects and merge requests, this could take several minutes, based on what we've previously observed. - Merge request approval policies do not check the integrity or authenticity of the scan results generated in the artifact reports. - A merge request approval policy is evaluated according to its rules. By default, if the rules are invalid, or can't be evaluated, approval is required. You can change this behavior with the [`fallback_behavior` field](#fallback_behavior). ## Pipeline requirements A merge request approval policy is enforced according to the outcome of the pipeline. Consider the following when implementing a merge request approval policy: - A merge request approval policy evaluates completed pipeline jobs, ignoring manual jobs. When the manual jobs are run, the policy re-evaluates the merge request's jobs. - For a merge request approval policy that evaluates the results of security scanners, all specified scanners must have output a security report. If not, approvals are enforced to minimize the risk of vulnerabilities being introduced. This behavior can affect: - New projects where security scans are not yet established. - Branches created before the security scans were configured. 
- Projects with inconsistent scanner configurations between branches. - The pipeline must produce artifacts for all enabled scanners, for both the source and target branches. If not, there's no basis for comparison and so the policy can't be evaluated. You should use a scan execution policy to enforce this requirement. - Policy evaluation depends on a successful and completed merge base pipeline. If the merge base pipeline is skipped, merge requests with the merge base pipeline are blocked. - Security scanners specified in a policy must be configured and enabled in the projects on which the policy is enforced. If not, the merge request approval policy cannot be evaluated and the corresponding approvals are required. ## Best practices for using security scanners with merge request approval policies When you create a new project, you can enforce both merge request approval policies and security scans on that project. However, incorrectly configured security scanners can affect the merge request approval policies. There are multiple ways to configure security scans in new projects: - In the project's CI/CD configuration by adding the scanners to the initial `.gitlab-ci.yml` configuration file. - In a scan execution policy to enforce that pipelines run specific security scanners. - In a pipeline execution policy to control which jobs must run in pipelines. For simple use cases, you can use the project's CI/CD configuration. For a comprehensive security strategy, consider combining merge request approval policies with the other policy types. To minimize unnecessary approval requirements and ensure accurate security evaluations: - **Run security scans on your default branch first**: Before creating feature branches, ensure security scans have run successfully on your default branch. - **Use consistent scanner configuration**: Run the same scanners in both source and target branches, preferably in a single pipeline. 
- **Verify that scans produce artifacts**: Ensure that scans complete successfully and produce artifacts for comparison.
- **Keep branches synchronized**: Regularly merge changes from the default branch into feature branches.
- **Consider pipeline configurations**: For new projects, include security scanners in your initial `.gitlab-ci.yml` configuration.

### Verify security scanners before you apply merge request approval policies

Implement security scans in a new project before you apply a merge request approval policy. This ensures the scanners run consistently before the policy relies on their results, and helps avoid situations where merge requests are blocked due to missing security scans.

To create and verify your security scanners and merge request approval policies together, use this recommended workflow:

1. Create the project.
1. Configure security scanners using the `.gitlab-ci.yml` configuration, a scan execution policy, or a pipeline execution policy.
1. Wait for the initial pipeline to complete on the default branch. Resolve any issues and rerun the pipeline to ensure it completes successfully before you continue.
1. Create merge requests using feature branches with the same security scanners configured. Again, ensure that the security scanners complete successfully.
1. Apply your merge request approval policies.

## Merge request with multiple pipelines

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/379108) in GitLab 16.2 [with a flag](../../../administration/feature_flags/_index.md) named `multi_pipeline_scan_result_policies`. Disabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/409482) in GitLab 16.3. Feature flag `multi_pipeline_scan_result_policies` removed.
- Support for parent-child pipelines [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/428591) in GitLab 16.11 [with a flag](../../../administration/feature_flags/_index.md) named `approval_policy_parent_child_pipeline`. Disabled by default. - [Enabled on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues/451597) in GitLab 17.0. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/428591) in GitLab 17.1. Feature flag `approval_policy_parent_child_pipeline` removed. {{< /history >}} A project can have multiple pipeline types configured. A single commit can initiate multiple pipelines, each of which may contain a security scan. - In GitLab 16.3 and later, the results of all completed pipelines for the latest commit in the merge request's source and target branch are evaluated and used to enforce the merge request approval policy. On-demand DAST pipelines are not considered. - In GitLab 16.2 and earlier, only the results of the latest completed pipeline were evaluated when enforcing merge request approval policies. If a project uses [merge request pipelines](../../../ci/pipelines/merge_request_pipelines.md), you must set the CI/CD variable `AST_ENABLE_MR_PIPELINES` to `"true"` for the security scanning jobs to be present in the pipeline. For more information see [Use security scanning tools with merge request pipelines](../detect/security_configuration.md#use-security-scanning-tools-with-merge-request-pipelines). For projects where many pipelines have run on the latest commit (for example, dormant projects), policy evaluation considers a maximum of 1,000 pipelines from both the source and target branches of the merge request. For parent-child pipelines, policy evaluation considers a maximum of 1,000 child pipelines. ## Merge request approval policy editor {{< history >}} - [Enabled by default](https://gitlab.com/gitlab-org/gitlab/-/issues/369473) in GitLab 15.6. 
{{< /history >}}

{{< alert type="note" >}}

Only project Owners have the [permissions](../../permissions.md#project-members-permissions) to select a security policy project.

{{< /alert >}}

Once your policy is complete, save it by selecting **Configure with a merge request** at the bottom of the editor. This redirects you to the merge request on the project's configured security policy project. If a security policy project is not yet linked to your project, GitLab creates one for you. Existing policies can also be removed from the editor interface by selecting **Delete policy** at the bottom of the editor.

Most policy changes take effect as soon as the merge request is merged. Any changes that do not go through a merge request and are committed directly to the default branch may require up to 10 minutes before the policy changes take effect.

The [policy editor](_index.md#policy-editor) supports YAML mode and rule mode.

{{< alert type="note" >}}

Propagating merge request approval policies created for groups with a large number of projects takes a while to complete.

{{< /alert >}}

## Merge request approval policies schema

The YAML file with merge request approval policies consists of an array of objects matching the merge request approval policy schema, nested under the `approval_policy` key. You can configure a maximum of five policies under the `approval_policy` key.

{{< alert type="note" >}}

Merge request approval policies were previously defined under the `scan_result_policy` key. Until GitLab 17.0, policies could be defined under both keys. From GitLab 17.0, only the `approval_policy` key is supported.

{{< /alert >}}

When you save a new policy, GitLab validates its contents against [this JSON schema](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/validators/json_schemas/security_orchestration_policy.json). If you're not familiar with how to read [JSON schemas](https://json-schema.org/), the following sections and tables provide an alternative.
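For orientation, the following is a minimal policy file matching this schema. The policy name, severity threshold, and approver role are illustrative; the individual `rules` and `actions` fields are described in the sections that follow.

```yaml
approval_policy:
  - name: Require approval on critical findings
    description: Require one approval when new critical vulnerabilities are detected.
    enabled: true
    rules:
      - type: scan_finding
        branch_type: protected
        scanners: []
        vulnerabilities_allowed: 0
        severity_levels:
          - critical
        vulnerability_states:
          - new_needs_triage
    actions:
      - type: require_approval
        approvals_required: 1
        role_approvers:
          - maintainer
```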
| Field | Type | Required | Description | |-------------------|------------------------------------------|----------|------------------------------------------------------| | `approval_policy` | `array` of merge request approval policy objects | true | List of merge request approval policies (maximum 5). | ## Merge request approval policy schema {{< history >}} - The `approval_settings` fields were [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418752) in GitLab 16.4 [with flags](../../../administration/feature_flags/_index.md) named `scan_result_policies_block_unprotecting_branches`, `scan_result_any_merge_request`, or `scan_result_policies_block_force_push`. See the `approval_settings` section below for more information. {{< /history >}} | Field | Type | Required | Possible values | Description | |---------------------|--------------------|----------|-----------------|----------------------------------------------------------| | `name` | `string` | true | | Name of the policy. Maximum of 255 characters. | | `description` | `string` | false | | Description of the policy. | | `enabled` | `boolean` | true | `true`, `false` | Flag to enable (`true`) or disable (`false`) the policy. | | `rules` | `array` of rules | true | | List of rules that the policy applies. | | `actions` | `array` of actions | false | | List of actions that the policy enforces. | | `approval_settings` | `object` | false | | Project settings that the policy overrides. | | `fallback_behavior` | `object` | false | | Settings that affect invalid or unenforceable rules. | | `policy_scope` | `object` of [`policy_scope`](_index.md#configure-the-policy-scope) | false | | Defines the scope of the policy based on the projects, groups, or compliance framework labels you specify. | | `policy_tuning` | `object` | false | | (Experimental) Settings that affect policy comparison logic. 
| | `bypass_settings` | `object` | false | | Settings that affect when certain branches, tokens, or accounts can bypass a policy. |

## `scan_finding` rule type

{{< history >}}

- The merge request approval policy field `vulnerability_attributes` was [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/123052) in GitLab 16.2 [with a flag](../../../administration/feature_flags/_index.md) named `enforce_vulnerability_attributes_rules`. [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/418784) in GitLab 16.3. Feature flag removed.
- The merge request approval policy field `vulnerability_age` was [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/123956) in GitLab 16.2.
- The `branch_exceptions` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418741) in GitLab 16.3 [with a flag](../../../administration/feature_flags/_index.md) named `security_policies_branch_exceptions`. [Generally available](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/133753) in GitLab 16.5. Feature flag removed.
- The `vulnerability_states` option `newly_detected` was [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/422414) in GitLab 17.0 and the options `new_needs_triage` and `new_dismissed` were added to replace it.

{{< /history >}}

This rule enforces the defined actions based on security scan findings.

| Field | Type | Required | Possible values | Description |
|-------|------|----------|-----------------|-------------|
| `type` | `string` | true | `scan_finding` | The rule's type. |
| `branches` | `array` of `string` | true if `branch_type` field does not exist | `[]` or the branch's name | Applicable only to protected target branches. An empty array, `[]`, applies the rule to all protected target branches.
Cannot be used with the `branch_type` field. | | `branch_type` | `string` | true if `branches` field does not exist | `default` or `protected` | The types of protected branches the given policy applies to. Cannot be used with the `branches` field. Default branches must also be `protected`. | | `branch_exceptions` | `array` of `string` | false | Names of branches | Target branches to exclude from this rule. | | `scanners` | `array` of `string` | true | `[]` or `sast`, `secret_detection`, `dependency_scanning`, `container_scanning`, `dast`, `coverage_fuzzing`, `api_fuzzing` | The security scanners for this rule to consider. `sast` includes results from both SAST and SAST IaC scanners. An empty array, `[]`, applies the rule to all security scanners.| | `vulnerabilities_allowed` | `integer` | true | Greater than or equal to zero | Number of vulnerabilities allowed before this rule is considered. | | `severity_levels` | `array` of `string` | true | `info`, `unknown`, `low`, `medium`, `high`, `critical` | The severity levels for this rule to consider. | | `vulnerability_states` | `array` of `string` | true | `[]` or `detected`, `confirmed`, `resolved`, `dismissed`, `new_needs_triage`, `new_dismissed` | All vulnerabilities fall into two categories:<br><br>**Newly Detected Vulnerabilities** - Vulnerabilities identified in the merge request branch itself but that do not currently exist on the MR's target branch. This policy option requires a pipeline to complete before the rule is evaluated so that it knows whether vulnerabilities are newly detected or not. Merge requests are blocked until the pipeline and necessary security scans are complete. 
The `new_needs_triage` option considers the status<br><br> • Detected<br><br> The `new_dismissed` option considers the status<br><br> • Dismissed<br><br>**Pre-Existing Vulnerabilities** - these policy options are evaluated immediately and do not require a pipeline complete as they consider only vulnerabilities previously detected in the default branch.<br><br> • `Detected` - the policy looks for vulnerabilities in the detected state.<br> • `Confirmed` - the policy looks for vulnerabilities in the confirmed state.<br> • `Dismissed` - the policy looks for vulnerabilities in the dismissed state.<br> • `Resolved` - the policy looks for vulnerabilities in the resolved state. <br><br>An empty array, `[]`, covers the same statuses as `['new_needs_triage', 'new_dismissed']`. | | `vulnerability_attributes` | `object` | false | `{false_positive: boolean, fix_available: boolean}` | All vulnerability findings are considered by default. But filters can be applied for attributes to consider only vulnerability findings: <br><br> • With a fix available (`fix_available: true`)<br><br> • With no fix available (`fix_available: false`)<br> • That are false positive (`false_positive: true`)<br> • That are not false positive (`false_positive: false`)<br> • Or a combination of both. For example (`fix_available: true, false_positive: false`) | | `vulnerability_age` | `object` | false | N/A | Filter pre-existing vulnerability findings by age. A vulnerability's age is calculated as the time since it was detected in the project. The criteria are `operator`, `value`, and `interval`.<br>- The `operator` criterion specifies if the age comparison used is older than (`greater_than`) or younger than (`less_than`).<br>- The `value` criterion specifies the numeric value representing the vulnerability's age.<br>- The `interval` criterion specifies the unit of measure of the vulnerability's age: `day`, `week`, `month`, or `year`.<br><br>Example: `operator: greater_than`, `value: 30`, `interval: day`. 
| ## `license_finding` rule type {{< history >}} - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/8092) in GitLab 15.9 [with a flag](../../../administration/feature_flags/_index.md) named `license_scanning_policies`. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/397644) in GitLab 15.11. Feature flag `license_scanning_policies` removed. - The `branch_exceptions` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418741) in GitLab 16.3 [with a flag](../../../administration/feature_flags/_index.md) named `security_policies_branch_exceptions`. Enabled by default. [Generally available](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/133753) in GitLab 16.5. Feature flag removed. - The `licenses` field was [introduced](https://gitlab.com/groups/gitlab-org/-/epics/10203) in GitLab 17.11 [with a flag](../../../administration/feature_flags/_index.md) named `exclude_license_packages`. Feature flag removed. {{< /history >}} This rule enforces the defined actions based on license findings. | Field | Type | Required | Possible values | Description | |----------------|----------|-----------------------------------------------|------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | `type` | `string` | true | `license_finding` | The rule's type. | | `branches` | `array` of `string` | true if `branch_type` field does not exist | `[]` or the branch's name | Applicable only to protected target branches. An empty array, `[]`, applies the rule to all protected target branches. Cannot be used with the `branch_type` field. | | `branch_type` | `string` | true if `branches` field does not exist | `default` or `protected` | The types of protected branches the given policy applies to. Cannot be used with the `branches` field. 
Default branches must also be `protected`. |
| `branch_exceptions` | `array` of `string` | false | Names of branches | Target branches to exclude from this rule. |
| `match_on_inclusion_license` | `boolean` | true if `licenses` field does not exist | `true`, `false` | Whether the rule matches inclusion or exclusion of licenses listed in `license_types`. |
| `license_types` | `array` of `string` | true if `licenses` field does not exist | license types | [SPDX license names](https://spdx.org/licenses) to match on, for example `Affero General Public License v1.0` or `MIT License`. |
| `license_states` | `array` of `string` | true | `newly_detected`, `detected` | Whether to match newly detected and/or previously detected licenses. The `newly_detected` state triggers approval when either a new package is introduced or when a new license for an existing package is detected. |
| `licenses` | `object` | true if `license_types` field does not exist | `licenses` object | [SPDX license names](https://spdx.org/licenses) to match on, including package exceptions. |

### `licenses` object

| Field | Type | Required | Possible values | Description |
|-----------|----------|----------------------------------------|------------------------------------------------------|-------------|
| `denied` | `object` | true if `allowed` field does not exist | `array` of `licenses_with_package_exclusion` objects | The list of denied licenses, including package exceptions. |
| `allowed` | `object` | true if `denied` field does not exist | `array` of `licenses_with_package_exclusion` objects | The list of allowed licenses, including package exceptions.
| ### `licenses_with_package_exclusion` object | Field | Type | Required | Possible values | Description | |--------|----------|----------|-------------------|----------------------------------------------------| | `name` | `string` | true | SPDX license name | [SPDX license name](https://spdx.org/licenses). | | `packages` | `object` | false | `packages` object | List of packages exceptions for the given license. | ### `packages` object | Field | Type | Required | Possible values | Description | |--------|----------|----------|-------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | `excluding` | `object` | true | {purls: `array` of `strings` using the `uri` format} | List of package exceptions for the given license. Define the list of packages exceptions using the [`purl`](https://github.com/package-url/purl-spec?tab=readme-ov-file#purl) components `scheme:type/name@version`. The `scheme:type/name` components are required. The `@` and `version` are optional. If a version is specified, only that version is considered an exception. 
If no version is specified and the `@` character is added at the end of the `purl`, only packages with that exact name are considered a match. If the `@` character is not added to the package name, all packages with the same name prefix are considered matches for the given license. For example, a purl `pkg:gem/bundler` matches the `bundler` and `bundler-stats` packages because both packages use the same license. Defining a `purl` `pkg:gem/bundler@` matches only the `bundler` package. |

## `any_merge_request` rule type

{{< history >}}

- The `branch_exceptions` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418741) in GitLab 16.3 [with a flag](../../../administration/feature_flags/_index.md) named `security_policies_branch_exceptions`. Enabled by default. [Generally available](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/133753) in GitLab 16.5. Feature flag removed.
- The `any_merge_request` rule type was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418752) in GitLab 16.4. Enabled by default. [Generally available](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/136298) in GitLab 16.6. Feature flag [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/432127).

{{< /history >}}

This rule enforces the defined actions for any merge request, optionally based on whether the merge request contains unsigned commits.

| Field | Type | Required | Possible values | Description |
|---------------------|---------------------|--------------------------------------------|---------------------------|-------------|
| `type` | `string` | true | `any_merge_request` | The rule's type. |
| `branches` | `array` of `string` | true if `branch_type` field does not exist | `[]` or the branch's name | Applicable only to protected target branches. An empty array, `[]`, applies the rule to all protected target branches. Cannot be used with the `branch_type` field.
| | `branch_type` | `string` | true if `branches` field does not exist | `default` or `protected` | The types of protected branches the given policy applies to. Cannot be used with the `branches` field. Default branches must also be `protected`. | | `branch_exceptions` | `array` of `string` | false | Names of branches | Target branches to exclude from this rule. | | `commits` | `string` | true | `any`, `unsigned` | Whether the rule matches for any commits, or only if unsigned commits are detected in the merge request. | ## `require_approval` action type {{< history >}} - [Added](https://gitlab.com/groups/gitlab-org/-/epics/12319) support for up to five separate `require_approval` actions in GitLab 17.7 [with a flag](../../../administration/feature_flags/_index.md) named `multiple_approval_actions`. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/505374) in GitLab 17.8. Feature flag `multiple_approval_actions` removed. - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/13550) support to specify custom roles as `role_approvers` in GitLab 17.9 [with a flag](../../../administration/feature_flags/_index.md) named `security_policy_custom_roles`. Enabled by default. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/505742) in GitLab 17.10. Feature flag `security_policy_custom_roles` removed. {{< /history >}} This action makes an approval rule required when the conditions are met for at least one rule in the defined policy. If you specify multiple approvers in the same `require_approval` block, any of the eligible approvers can satisfy the approval requirement. For example, if you specify two `group_approvers` and `approvals_required` as `2`, both of the approvals can come from the same group. To require multiple approvals from unique approver types, use multiple `require_approval` actions. 
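For example, to require one approval from a security group and a separate approval from a user with at least the Maintainer role, you could define two actions like the following (the group path `security/appsec-team` is illustrative):

```yaml
actions:
  - type: require_approval
    approvals_required: 1
    group_approvers:
      - security/appsec-team
  - type: require_approval
    approvals_required: 1
    role_approvers:
      - maintainer
```

Each `require_approval` action creates its own approval rule, so the required approvals must come from the respective approver types.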
| Field | Type | Required | Possible values | Description |
|-------|------|----------|-----------------|-------------|
| `type` | `string` | true | `require_approval` | The action's type. |
| `approvals_required` | `integer` | true | Greater than or equal to zero | The number of MR approvals required. |
| `user_approvers` | `array` of `string` | false | Username of one or more users | The users to consider as approvers. Users must have access to the project to be eligible to approve. |
| `user_approvers_ids` | `array` of `integer` | false | ID of one or more users | The IDs of users to consider as approvers. Users must have access to the project to be eligible to approve. |
| `group_approvers` | `array` of `string` | false | Path of one or more groups | The groups to consider as approvers. Users with [direct membership in the group](../../project/merge_requests/approvals/rules.md#group-approvers) are eligible to approve. |
| `group_approvers_ids` | `array` of `integer` | false | ID of one or more groups | The IDs of groups to consider as approvers. Users with [direct membership in the group](../../project/merge_requests/approvals/rules.md#group-approvers) are eligible to approve. |
| `role_approvers` | `array` of `string` | false | One or more [roles](../../permissions.md#roles) (for example: `owner`, `maintainer`). You can also specify custom roles (or custom role identifiers in YAML mode) as `role_approvers` if the custom roles have the permission to approve merge requests. The custom roles can be selected along with user and group approvers. | The roles that are eligible to approve. |

## `send_bot_message` action type

{{< history >}}

- The `send_bot_message` action type was [introduced for projects](https://gitlab.com/gitlab-org/gitlab/-/issues/438269) in GitLab 16.11 [with a flag](../../../administration/feature_flags/_index.md) named `approval_policy_disable_bot_comment`. Disabled by default.
- [Enabled on GitLab Self-Managed and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/issues/454852) in GitLab 17.0.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/454852) in GitLab 17.3. Feature flag `approval_policy_disable_bot_comment` removed.
- The `send_bot_message` action type was [introduced for groups](https://gitlab.com/gitlab-org/gitlab/-/issues/469449) in GitLab 17.2 [with a flag](../../../administration/feature_flags/_index.md) named `approval_policy_disable_bot_comment_group`. Disabled by default.
- [Enabled on GitLab Self-Managed and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/issues/469449) in GitLab 17.2.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/469449) in GitLab 17.3. Feature flag `approval_policy_disable_bot_comment_group` removed.

{{< /history >}}

This action enables configuration of the bot message that is added to merge requests when policy violations are detected. If the action is not specified, the bot message is enabled by default. If multiple policies are defined, the bot message is sent as long as at least one of those policies has the `send_bot_message` action enabled.

| Field | Type | Required | Possible values | Description |
|-------|------|----------|-----------------|-------------|
| `type` | `string` | true | `send_bot_message` | The action's type. |
| `enabled` | `boolean` | true | `true`, `false` | Whether a bot message should be created when policy violations are detected. Default: `true` |

### Example bot messages

![scan_results_example_bot_message_v17_0](img/scan_result_policy_example_bot_message_vulnerabilities_v17_0.png)

![scan_results_example_bot_message_v17_0](img/scan_result_policy_example_bot_message_artifacts_v17_0.png)

## Warn mode

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/15552) in GitLab 17.8 [with a flag](../../../administration/feature_flags/_index.md) named `security_policy_approval_warn_mode`.
Disabled by default.

{{< /history >}}

When warn mode is enabled and a merge request triggers a security policy that doesn't require any additional approvers, a bot comment is added to the merge request. The comment directs users to the policy for more information.

## `approval_settings`

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/420724) the `block_group_branch_modification` field in GitLab 16.8 [with flag](../../../administration/feature_flags/_index.md) named `scan_result_policy_block_group_branch_modification`.
- [Enabled on GitLab.com and GitLab Self-Managed](https://gitlab.com/gitlab-org/gitlab/-/issues/437306) in GitLab 17.6.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/503930) in GitLab 17.7. Feature flag `scan_result_policy_block_group_branch_modification` removed.
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/423101) the `block_unprotecting_branches` field in GitLab 16.4 [with flag](../../../administration/feature_flags/_index.md) named `scan_result_policy_settings`. Disabled by default.
- The `scan_result_policy_settings` feature flag was replaced by the `scan_result_policies_block_unprotecting_branches` feature flag in 16.4.
- The `block_unprotecting_branches` field was [replaced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/137153) by the `block_branch_modification` field in GitLab 16.7.
- [Enabled on GitLab.com and GitLab Self-Managed](https://gitlab.com/gitlab-org/gitlab/-/issues/423901) in GitLab 16.7.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/433415) in GitLab 16.11. Feature flag `scan_result_policies_block_unprotecting_branches` removed.
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418752) the `prevent_approval_by_author`, `prevent_approval_by_commit_author`, `remove_approvals_with_new_commit`, and `require_password_to_approve` fields in GitLab 16.4 [with flag](../../../administration/feature_flags/_index.md) named `scan_result_any_merge_request`. Disabled by default.
- [Enabled on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues/423988) in GitLab 16.6.
- [Enabled on GitLab Self-Managed](https://gitlab.com/gitlab-org/gitlab/-/issues/423988) in GitLab 16.7.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/432127) in GitLab 16.8. Feature flag `scan_result_any_merge_request` removed.
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/420629) the `prevent_pushing_and_force_pushing` field in GitLab 16.4 [with flag](../../../administration/feature_flags/_index.md) named `scan_result_policies_block_force_push`. Disabled by default.
- [Enabled on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues/427260) in GitLab 16.6.
- [Enabled on GitLab Self-Managed](https://gitlab.com/gitlab-org/gitlab/-/issues/427260) in GitLab 16.7.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/432123) in GitLab 16.9. Feature flag `scan_result_policies_block_force_push` removed.

{{< /history >}}

The settings defined in the policy overwrite the corresponding settings in the project.

| Field | Type | Required | Possible values | Applicable rule type | Description |
|-------------------------------------|-----------------------|----------|---------------------------------------------------------------|----------------------|-------------|
| `block_branch_modification` | `boolean` | false | `true`, `false` | All | When enabled, prevents a user from removing a branch from the protected branches list, deleting a protected branch, or changing the default branch if that branch is included in the security policy. This ensures users cannot remove protection status from a branch to merge vulnerable code. Enforced based on `branches`, `branch_type`, and `policy_scope`, regardless of detected vulnerabilities. |
| `block_group_branch_modification` | `boolean` or `object` | false | `true`, `false`, `{ enabled: boolean, exceptions: [{ id: Integer }] }` | All | When enabled, prevents a user from removing group-level protected branches on every group the policy applies to. If `block_branch_modification` is `true`, implicitly defaults to `true`. Add top-level groups that support [group-level protected branches](../../project/repository/branches/protected.md#in-a-group) as `exceptions`. |
| `prevent_approval_by_author` | `boolean` | false | `true`, `false` | `Any merge request` | When enabled, merge request authors cannot approve their own MRs. This ensures code authors cannot introduce vulnerabilities and approve code to merge. |
| `prevent_approval_by_commit_author` | `boolean` | false | `true`, `false` | `Any merge request` | When enabled, users who have contributed code to the MR are ineligible for approval. This ensures code committers cannot introduce vulnerabilities and approve code to merge. |
| `remove_approvals_with_new_commit` | `boolean` | false | `true`, `false` | `Any merge request` | When enabled, if an MR receives all necessary approvals to merge, but then a new commit is added, new approvals are required. This ensures new commits that may include vulnerabilities cannot be introduced. |
| `require_password_to_approve` | `boolean` | false | `true`, `false` | `Any merge request` | When enabled, approvals require password confirmation. Password confirmation adds an extra layer of security. |
| `prevent_pushing_and_force_pushing` | `boolean` | false | `true`, `false` | All | When enabled, prevents users from pushing and force pushing to a protected branch if that branch is included in the security policy. This ensures users do not bypass the merge request process to add vulnerable code to a branch. |

## `fallback_behavior`

{{< history >}}

- The `fallback_behavior` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/451784) in GitLab 17.0 [with a flag](../../../administration/feature_flags/_index.md) named `security_scan_result_policies_unblock_fail_open_approval_rules`. Disabled by default.
- The `fallback_behavior` field was [enabled on GitLab.com, GitLab Self-Managed, and GitLab Dedicated](https://gitlab.com/groups/gitlab-org/-/epics/10816) in GitLab 17.0.

{{< /history >}}

{{< alert type="flag" >}}

On GitLab Self-Managed, by default the `fallback_behavior` field is available. To hide the feature, an administrator can [disable the feature flag](../../../administration/feature_flags/_index.md) named `security_scan_result_policies_unblock_fail_open_approval_rules`. On GitLab.com and GitLab Dedicated, this feature is available.

{{< /alert >}}

| Field | Type | Required | Possible values | Description |
|--------|----------|----------|--------------------|-------------|
| `fail` | `string` | false | `open` or `closed` | `closed` (default): Invalid or unenforceable rules of a policy require approval. `open`: Invalid or unenforceable rules of a policy do not require approval. |

## `policy_tuning`

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/498624) support for use in pipeline execution policies in GitLab 17.10 [with a flag](../../../administration/feature_flags/_index.md) named `unblock_rules_using_pipeline_execution_policies`. Enabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/525270) in GitLab 18.3. Feature flag `unblock_rules_using_pipeline_execution_policies` removed.
{{< /history >}}

| Field | Type | Required | Possible values | Description |
|--------|----------|----------|--------------------|-------------|
| `unblock_rules_using_execution_policies` | `boolean` | false | `true`, `false` | When enabled, approval rules do not block merge requests when a scan is required by a scan execution policy or a pipeline execution policy but a required scan artifact is missing from the target branch. This option only works when the project or group has an existing scan execution policy or pipeline execution policy with matching scanners. |

### Examples

#### Example of `policy_tuning` with a scan execution policy

You can use this example in a `.gitlab/security-policies/policy.yml` file stored in a [security policy project](enforcement/security_policy_projects.md):

```yaml
scan_execution_policy:
- name: Enforce dependency scanning
  description: ''
  enabled: true
  policy_scope:
    projects:
      excluding: []
  rules:
  - type: pipeline
    branch_type: all
  actions:
  - scan: dependency_scanning
approval_policy:
- name: Dependency scanning approvals
  description: ''
  enabled: true
  policy_scope:
    projects:
      excluding: []
  rules:
  - type: scan_finding
    scanners:
    - dependency_scanning
    vulnerabilities_allowed: 0
    severity_levels: []
    vulnerability_states: []
    branch_type: protected
  actions:
  - type: require_approval
    approvals_required: 1
    role_approvers:
    - developer
  - type: send_bot_message
    enabled: true
  fallback_behavior:
    fail: closed
  policy_tuning:
    unblock_rules_using_execution_policies: true
```

#### Example of `policy_tuning` with a pipeline execution policy

{{< alert type="warning" >}}

This feature does not work with pipeline execution policies created before GitLab 17.10. To use this feature with older pipeline execution policies, copy, delete, and recreate the policies. For more information, see [Recreate pipeline execution policies created before GitLab 17.10](#recreate-pipeline-execution-policies-created-before-gitlab-1710).

{{< /alert >}}

You can use this example in a `.gitlab/security-policies/policy.yml` file stored in a [security policy project](enforcement/security_policy_projects.md):

```yaml
---
pipeline_execution_policy:
- name: Enforce dependency scanning
  description: ''
  enabled: true
  pipeline_config_strategy: inject_policy
  content:
    include:
    - project: my-group/pipeline-execution-ci-project
      file: policy-ci.yml
      ref: main # optional
```

The linked pipeline execution policy CI/CD configuration in `policy-ci.yml`:

```yaml
include:
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml
```

##### Recreate pipeline execution policies created before GitLab 17.10

Pipeline execution policies created before GitLab 17.10 do not contain the data required to use the `policy_tuning` feature. To use this feature with older pipeline execution policies, copy and delete the old policies, then recreate them.

<i class="fa-youtube-play" aria-hidden="true"></i>
For a video walkthrough, see [Security policies: Recreate a pipeline execution policy for use with `policy_tuning`](https://youtu.be/XN0jCQWlk1A).
<!-- Video published on 2025-03-07 -->

To recreate a pipeline execution policy:

1. On the left sidebar, select **Search or go to** and find your group.
1. Select **Secure > Policies**.
1. Select the pipeline execution policy you want to recreate.
1. On the right sidebar, select the **YAML** tab and copy the contents of the entire policy file.
1. Next to the policies table, select the vertical ellipsis ({{< icon name="ellipsis_v" >}}), and select **Delete**.
1. Merge the generated merge request.
1. Go back to **Secure > Policies** and select **New policy**.
1. In the **Pipeline execution policy** section, select **Select policy**.
1. In the **YAML mode**, paste the contents of the old policy.
1. Select **Update via merge request** and merge the generated merge request.

## Policy scope schema

To customize policy enforcement, you can define a policy's scope to either include or exclude specified projects, groups, or compliance framework labels. For more details, see [Scope](_index.md#configure-the-policy-scope).

## `bypass_settings`

The `bypass_settings` field allows you to specify exceptions to the policy for certain branches, access tokens, or service accounts. When a bypass condition is met, the policy is not enforced for the matching merge request or branch.

| Field | Type | Required | Description |
|-------------------|---------|----------|---------------------------------------------------------------------------------|
| `branches` | array | false | List of source and target branches (by name or pattern) that bypass the policy. |
| `access_tokens` | array | false | List of access token IDs that bypass the policy. |
| `service_accounts`| array | false | List of service account IDs that bypass the policy. |

### Source branch exceptions

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/18113) in GitLab 18.2 [with a flag](../../../administration/feature_flags/_index.md) named `approval_policy_branch_exceptions`. Enabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/543778) in GitLab 18.3. Feature flag `approval_policy_branch_exceptions` removed.

{{< /history >}}

With branch-based exceptions, you can configure merge request approval policies to automatically waive approval requirements for specific source and target branch combinations. This enables you to preserve security governance and maintain strict approval rules for certain types of merges, such as feature-to-main, while allowing more flexibility for others, such as release-to-main.
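For example, a `bypass_settings` entry that waives the policy when a release branch is merged into the default branch might look like the following sketch. The branch names are illustrative; substitute your own:

```yaml
bypass_settings:
  branches:
    # Illustrative values: bypass the policy for merges
    # from any release branch into main
    - source:
        pattern: release/*
      target:
        name: main
```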
| Field | Type | Required | Possible values | Description |
|---------|--------|----------|-----------------|-------------|
| `source`| object | false | `name` (string) or `pattern` (string) | Source branch exception. Specify either an exact name or a pattern. |
| `target`| object | false | `name` (string) or `pattern` (string) | Target branch exception. Specify either an exact name or a pattern. |

### Access token and service account exceptions

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/18112) in GitLab 18.2 [with a flag](../../../administration/feature_flags/_index.md) named `security_policies_bypass_options_tokens_accounts`. Enabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/551129) in GitLab 18.3. Feature flag `security_policies_bypass_options_tokens_accounts` removed.

{{< /history >}}

With access token and service account exceptions, you can designate specific service accounts and access tokens that can bypass merge request approval policies when necessary. This approach enables automations that you trust to operate without manual approval while maintaining restrictions for human users. For example, trusted automations might include CI/CD pipelines, repository mirroring, and automated updates. Bypass events are fully audited to allow you to support your compliance and emergency access needs.

| Field | Type | Required | Description |
|-------|---------|----------|------------------------------------------------|
| `id` | integer | true | The ID of the access token or service account. |

#### Example YAML

```yaml
bypass_settings:
  access_tokens:
    - id: 123
    - id: 456
  service_accounts:
    - id: 789
    - id: 1011
```

## Example `policy.yml` in a security policy project

You can use this example in a `.gitlab/security-policies/policy.yml` file stored in a [security policy project](enforcement/security_policy_projects.md):

```yaml
---
approval_policy:
- name: critical vulnerability CS approvals
  description: critical severity level only for container scanning
  enabled: true
  rules:
  - type: scan_finding
    branches:
    - main
    scanners:
    - container_scanning
    vulnerabilities_allowed: 0
    severity_levels:
    - critical
    vulnerability_states: []
    vulnerability_attributes:
      false_positive: true
      fix_available: true
  actions:
  - type: require_approval
    approvals_required: 1
    user_approvers:
    - adalberto.dare
- name: secondary CS approvals
  description: secondary only for container scanning
  enabled: true
  rules:
  - type: scan_finding
    branches:
    - main
    scanners:
    - container_scanning
    vulnerabilities_allowed: 1
    severity_levels:
    - low
    - unknown
    vulnerability_states:
    - detected
    vulnerability_age:
      operator: greater_than
      value: 30
      interval: day
  actions:
  - type: require_approval
    approvals_required: 1
    role_approvers:
    - owner
    - 1002816 # Example custom role identifier called "AppSec Engineer"
```

In this example:

- Every MR that contains new `critical` vulnerabilities identified by container scanning requires one approval from `adalberto.dare`.
- Every MR that contains more than one preexisting `low` or `unknown` vulnerability older than 30 days identified by container scanning requires one approval from either a project member with the Owner role or a user with the custom role `AppSec Engineer`.

## Example for Merge Request Approval Policy editor

You can use this example in the YAML mode of the [Merge Request Approval Policy editor](#merge-request-approval-policy-editor). It corresponds to a single object from the previous example:

```yaml
type: approval_policy
name: critical vulnerability CS approvals
description: critical severity level only for container scanning
enabled: true
rules:
- type: scan_finding
  branches:
  - main
  scanners:
  - container_scanning
  vulnerabilities_allowed: 1
  severity_levels:
  - critical
  vulnerability_states: []
actions:
- type: require_approval
  approvals_required: 1
  user_approvers:
  - adalberto.dare
```

## Understanding merge request approval policy approvals

{{< history >}}

- The branch comparison logic for `scan_finding` was [changed](https://gitlab.com/gitlab-org/gitlab/-/issues/428518) in GitLab 16.8 [with a flag](../../../administration/feature_flags/_index.md) named `scan_result_policy_merge_base_pipeline`. Disabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/435297) in GitLab 16.9. Feature flag `scan_result_policy_merge_base_pipeline` removed.

{{< /history >}}

### Scope of merge request approval policy comparison

- To determine when approval is required on a merge request, GitLab compares completed pipelines for each supported pipeline source for the source and target branch (for example, `feature`/`main`). This ensures the most comprehensive evaluation of scan results.
- For the source branch, the comparison pipelines are all completed pipelines for each supported pipeline source for the latest commit in the source branch.
- If the merge request approval policy looks only for the newly detected states (`new_needs_triage` and `new_dismissed`), the comparison is performed against all the supported pipeline sources in the common ancestor between the source and the target branch. An exception is when using Merged Results pipelines, in which case the comparison is done against the tip of the MR's target branch.
- If the merge request approval policy looks for pre-existing states (`detected`, `confirmed`, `resolved`, `dismissed`), the comparison is always done against the tip of the default branch (for example, `main`).
- If the merge request approval policy looks for a combination of new and pre-existing vulnerability states, the comparison is done against the common ancestor of the source and target branches.
- Merge request approval policies consider all supported pipeline sources (based on the [`CI_PIPELINE_SOURCE` variable](../../../ci/variables/predefined_variables.md)) when comparing results from both the source and target branches to determine if a merge request requires approval. Pipelines with source `webide` are not supported.
- In GitLab 16.11 and later, the child pipelines of each of the selected pipelines are also considered for comparison.

### Accepting risk and ignoring vulnerabilities in future merge requests

For merge request approval policies that are scoped to newly detected findings (`new_needs_triage` or `new_dismissed` statuses), it's important to understand the implications of this vulnerability state. A finding is considered newly detected if it exists on the merge request's branch but not on the target branch. When a merge request with a branch that contains newly detected findings is approved and merged, approvers are "accepting the risk" of those vulnerabilities. If one or more of the same vulnerabilities is detected after this time, the status would be `detected` and thus ignored by a policy configured to consider `new_needs_triage` or `new_dismissed` findings. For example:

- A merge request approval policy is created to block critical SAST findings. If a SAST finding for CVE-1234 is approved, future merge requests with the same violation will not require approval in the project.
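A policy scoped this way lists only the newly detected states in `vulnerability_states`. The following rule is an illustrative sketch; the scanner and severity values are assumptions:

```yaml
rules:
- type: scan_finding
  branch_type: protected
  scanners:
  - sast               # illustrative scanner
  vulnerabilities_allowed: 0
  severity_levels:
  - critical
  vulnerability_states:
  - new_needs_triage
  - new_dismissed
```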
When using `new_needs_triage` and `new_dismissed` vulnerability states, the policy blocks MRs for any findings matching policy rules if they are new and not yet triaged, even if they have been dismissed. If you want to ignore vulnerabilities newly detected and then dismissed within the merge request, use only the `new_needs_triage` status.

When using license approval policies, the combination of project, component (dependency), and license is considered in the evaluation. If a license is approved as an exception, future merge requests don't require approval for the same combination of project, component (dependency), and license. The component's version is not considered in this case. If a previously approved package is updated to a new version, approvers do not need to re-approve. For example:

- A license approval policy is created to block merge requests with newly detected licenses matching `AGPL-1.0`. A change is made in project `demo` for component `osframework` that violates the policy. If approved and merged, future merge requests for `osframework` in project `demo` with the license `AGPL-1.0` don't require approval.

### Additional approvals

Merge request approval policies require an additional approval step in some situations. For example:

- The number of security jobs is reduced in the working branch and no longer matches the number of security jobs in the target branch. Users can't skip merge request approval policies by removing scanning jobs from the CI/CD configuration. Only the security scans that are configured in the merge request approval policy rules are checked for removal. For example, consider a situation where the default branch pipeline has four security scans: `sast`, `secret_detection`, `container_scanning`, and `dependency_scanning`. A merge request approval policy enforces two scanners: `container_scanning` and `dependency_scanning`. If an MR removes a scan that is configured in the merge request approval policy, `container_scanning` for example, an additional approval is required.
- Someone stops a pipeline security job, and users can't skip the security scan.
- A job in a merge request fails and is configured with `allow_failure: false`. As a result, the pipeline is in a blocked state.
- A pipeline has a manual job that must run successfully for the entire pipeline to pass.

### Managing scan findings used to evaluate approval requirements

Merge request approval policies evaluate the artifact reports generated by scanners in your pipelines after the pipeline has completed. Merge request approval policies focus on evaluating the results and determining approvals based on the scan result findings to identify potential risks, block merge requests, and require approval. Merge request approval policies do not extend beyond that scope to reach into artifact files or scanners. Instead, the results from artifact reports are trusted. This gives teams flexibility in managing their scan execution and supply chain, and in customizing scan results generated in artifact reports (for example, to filter out false positives) if needed.

Lock file tampering, for example, is outside of the scope of security policy management, but may be mitigated through use of [Code owners](../../project/codeowners/_index.md#codeowners-file) or [external status checks](../../project/merge_requests/status_checks.md). For more information, see [issue 433029](https://gitlab.com/gitlab-org/gitlab/-/issues/433029).

![Evaluating scan result findings](img/scan_results_evaluation_white-bg_v16_8.png)

### Filter out policy violations with the attributes "Fix Available" or "False Positive"

To avoid unnecessary approval requirements, these additional filters help ensure you only block MRs on the most actionable findings.
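For example, a rule that uses these attribute filters to ignore likely false positives and findings with an available fix might look like the following sketch; the scanner and severity values are illustrative:

```yaml
rules:
- type: scan_finding
  branch_type: protected
  scanners:
  - container_scanning   # illustrative scanner
  vulnerabilities_allowed: 0
  severity_levels:
  - critical
  vulnerability_states: []
  vulnerability_attributes:
    false_positive: false  # ignore findings flagged as false positives
    fix_available: false   # ignore findings with a solution or remediation available
```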
By setting `fix_available` to `false` in YAML, or **is not** and **Fix Available** in the policy editor, the finding is not considered a policy violation when the finding has a solution or remediation available. Solutions appear at the bottom of the vulnerability object under the heading **Solution**. Remediations appear as a **Resolve with Merge Request** button within the vulnerability object.

The **Resolve with Merge Request** button only appears when one of the following criteria is met:

1. A SAST vulnerability is found in a project that is on the Ultimate tier with GitLab Duo Enterprise.
1. A container scanning vulnerability is found in a project that is on the Ultimate tier, in a job where `GIT_STRATEGY: fetch` has been set. Additionally, the vulnerability must have a package containing a fix that is available for the repositories enabled for the container image.
1. A dependency scanning vulnerability is found in a Node.js project that is managed by Yarn and a fix is available. Additionally, the project must be on the Ultimate tier and FIPS mode must be disabled for the instance.

**Fix Available** only applies to dependency scanning and container scanning.

Similarly, by using the **False Positive** attribute, you can ignore findings detected by a policy by setting `false_positive` to `false` (or setting the attribute to **is not** and **False Positive** in the policy editor). The **False Positive** attribute only applies to findings detected by our Vulnerability Extraction Tool for SAST results.

### Policy evaluation and vulnerability state changes

When a user changes the status of a vulnerability (for example, dismisses the vulnerability in the vulnerability details page), GitLab does not automatically reevaluate merge request approval policies, for performance reasons. To retrieve updated data from vulnerability reports, update your merge request or rerun the related pipelines.

This behavior ensures optimal system performance and maintains security policy enforcement. The policy evaluation occurs during the next pipeline run or when the merge request is updated, but not immediately when the vulnerability state changes. To reflect vulnerability state changes in the policies immediately, manually run the pipeline or push a new commit to the merge request.

## Troubleshooting

### Merge request rules widget shows a merge request approval policy is invalid or duplicated

{{< details >}}

- Tier: Ultimate
- Offering: GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

On GitLab Self-Managed from 15.0 to 16.4, the most likely cause is that the project was exported from a group and imported into another, and had merge request approval policy rules. These rules are stored in a project separate from the one that was exported. As a result, the project contains policy rules that reference entities that don't exist in the imported project's group. The result is policy rules that are invalid, duplicated, or both.

To remove all invalid merge request approval policy rules from a GitLab instance, an administrator can run the following script in the [Rails console](../../../administration/operations/rails_console.md).
```ruby
Project.joins(:approval_rules).where(approval_rules: { report_type: %i[scan_finding license_scanning] }).where.not(approval_rules: { security_orchestration_policy_configuration_id: nil }).find_in_batches.flat_map do |batch|
  batch.map do |project|
    # Get projects and their configuration_ids for applicable project rules
    [project, project.approval_rules.where(report_type: %i[scan_finding license_scanning]).pluck(:security_orchestration_policy_configuration_id).uniq]
  end.uniq.map do |project, configuration_ids|
    # We take only unique combinations of project + configuration_ids
    # If we find more configurations than what is available for the project, we take records with the extra configurations
    [project, configuration_ids - project.all_security_orchestration_policy_configurations.pluck(:id)]
  end.select { |_project, configuration_ids| configuration_ids.any? }
end.each do |project, configuration_ids|
  # For each found pair project + ghost configuration, we remove these rules for a given project
  Security::OrchestrationPolicyConfiguration.where(id: configuration_ids).each do |configuration|
    configuration.delete_scan_finding_rules_for_project(project.id)
  end
  # Ensure we sync any potential rules from new group's policy
  Security::ScanResultPolicies::SyncProjectWorker.perform_async(project.id)
end
```

### Newly detected CVEs

When using `new_needs_triage` and `new_dismissed`, some findings may require approval when they are not introduced by the merge request (such as a new CVE on a related dependency). These findings will not be present within the MR widget, but will be highlighted in the policy bot comment and pipeline report.

### Policies still have effect after `policy.yml` was manually invalidated

In GitLab 17.2 and earlier, you may find that policies defined in a `policy.yml` file are enforced, even though the file was manually edited and no longer validates against the [policy schema](#merge-request-approval-policies-schema). This issue occurs because of a bug in the policy synchronization logic. Potential symptoms include:

- `approval_settings` still block the removal of branch protections, block force-pushes, or otherwise affect open merge requests.
- `any_merge_request` policies still apply to open merge requests.

To resolve this, you can:

- Manually edit the `policy.yml` file that defines the policy so that it becomes valid again.
- Unassign and re-assign the security policy projects where the `policy.yml` file is stored.

### Missing security scans

When using merge request approval policies, you may encounter situations where merge requests are blocked, including in new projects or when certain security scans are not executed. This behavior is by design to reduce the risk of introducing vulnerabilities into your system.

Example scenarios:

- Missing scans on source or target branches

  If security scans are missing on either the source or target branch, GitLab cannot effectively evaluate whether the merge request is introducing new vulnerabilities. In such cases, approval is required as a precautionary measure.

- New projects

  For new projects where security scans have not yet been set up or executed on the target branch, all merge requests require approval. This ensures that security checks are active from the project's inception.

- Projects with no files to scan

  Even in projects that contain no files relevant to the selected security scans, the approval requirement is still enforced. This maintains consistent security practices across all projects.

- First merge request

  The very first merge request in a new project may be blocked if the default branch doesn't have a security scan, even if the source branch has no vulnerabilities.

To resolve these issues:

- Ensure that all required security scans are configured and running successfully on both source and target branches.
- For new projects, set up and run the necessary security scans on the default branch before creating merge requests.
- Consider using scan execution policies or pipeline execution policies to ensure consistent execution of security scans across all branches.
- Consider using [`fallback_behavior`](#fallback_behavior) with `open` to prevent invalid or unenforceable rules in a policy from requiring approval.
- Consider using the [`policy_tuning`](#policy_tuning) setting `unblock_rules_using_execution_policies` to address scenarios where security scan artifacts are missing and scan execution policies are enforced. When enabled, this setting makes approval rules optional when scan artifacts are missing from the target branch and a scan is required by a scan execution policy. This feature only works with an existing scan execution policy that has matching scanners. It offers flexibility in the merge request process when certain security scans cannot be performed due to missing artifacts.

### `Target: none` in security bot comments

If you see `Target: none` in security bot comments, it means GitLab couldn't find a security report for the target branch. To resolve this:

1. Run a pipeline on the target branch that includes the required security scanners.
1. Ensure the pipeline completes successfully and produces security reports.
1. Re-run the pipeline on the source branch. Creating a new commit also triggers the pipeline to re-run.

#### Security bot messages

When the target branch has no security scans:

- The security bot may list all vulnerabilities found in the source branch.
- Some of the vulnerabilities might already exist in the target branch, but without a target branch scan, GitLab cannot determine which ones are new.

Potential solutions:

1. **Manual approvals**: Temporarily approve merge requests manually for new projects until security scans are established.
1. **Targeted policies**: Create separate policies for new projects with different approval requirements.
1. **Fallback behavior**: Consider using `fail: open` for policies on new projects, but be aware this may allow users to merge vulnerabilities even if scans fail.

### Support request for debugging of merge request approval policy

GitLab.com users may submit a [support ticket](https://about.gitlab.com/support/) titled "Merge request approval policy debugging". Provide the following details:

- Group path, project path, and optionally the merge request ID
- Severity
- Current behavior
- Expected behavior

#### GitLab.com

Support teams will investigate [logs](https://log.gprd.gitlab.net/) (`pubsub-sidekiq-inf-gprd*`) to identify the failure `reason`. Below is an example response snippet from the logs. You can use this query to find logs related to approvals: `json.event.keyword: "update_approvals"` and `json.project_path: "group-path/project-path"`. Optionally, you can further filter by the merge request identifier using `json.merge_request_iid`:

```json
"json": {
  "project_path": "group-path/project-path",
  "merge_request_iid": 2,
  "missing_scans": [
    "api_fuzzing"
  ],
  "reason": "Scanner removed by MR",
  "event": "update_approvals",
}
```

#### GitLab Self-Managed

Search for keywords such as the `project-path`, `api_fuzzing`, and `merge_request`. For example, `grep group-path/project-path` and `grep merge_request`. If you know the correlation ID, you can search by it. For example, if the value of `correlation_id` is `01HWN2NFABCEDFG`, search for `01HWN2NFABCEDFG`.

Search in the following files:

- `/gitlab/gitlab-rails/production_json.log`
- `/gitlab/sidekiq/current`

Common failure reasons:

- Scanner removed by MR: Merge request approval policies expect that the scanners defined in the policy are present and that they successfully produce an artifact for comparison.
### Inconsistent approvals from merge request approval policies

If you notice any inconsistencies in your merge request approval rules, you can take any of the following steps to resynchronize your policies:

- Use the [`resyncSecurityPolicies` GraphQL mutation](_index.md#resynchronize-policies-with-the-graphql-api) to resynchronize the policies.
- Unassign and then reassign the security policy project to the affected group or project.
- Alternatively, you can update a policy to trigger that policy to resynchronize for the affected group or project.
- Confirm that the syntax of the YAML file in the security policy project is valid.

These actions help ensure that your merge request approval policies are correctly applied and consistent across all merge requests. If you continue to experience issues with merge request approval policies after taking these steps, contact GitLab support for assistance.

### Merge requests that fix a detected vulnerability require approval

If your policy configuration includes the `detected` state, merge requests that fix previously detected vulnerabilities still require approval. The merge request approval policy evaluates based on vulnerabilities that existed before the changes in the merge request, which adds an additional layer of review for any changes that affect known vulnerabilities.

If you want to allow merge requests that fix vulnerabilities to proceed without any additional approvals due to a detected vulnerability, consider removing the `detected` state from your policy configuration.
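For example, a policy that requires approval only for findings newly introduced by the merge request omits `detected` and uses only the newly detected states. A minimal sketch, assuming the policy name, severity levels, and approver role are illustrative:

```yaml
approval_policy:
  - name: Require approval for new findings only
    description: Approvals apply only to vulnerabilities introduced by the merge request
    enabled: true
    rules:
      - type: scan_finding
        branch_type: protected
        scanners: []              # all security scanners
        vulnerabilities_allowed: 0
        severity_levels:
          - critical
          - high
        vulnerability_states:     # omit `detected` so fixes are not blocked
          - new_needs_triage
          - new_dismissed
    actions:
      - type: require_approval
        approvals_required: 1
        role_approvers:
          - maintainer
```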
--- stage: Security Risk Management group: Security Policies info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments description: Learn how to enforce security rules in GitLab using merge request approval policies to automate scans, approvals, and compliance across your projects. title: Merge request approval policies breadcrumbs: - doc - user - application_security - policies --- {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - Group-level scan result policies [introduced](https://gitlab.com/groups/gitlab-org/-/epics/7622) in GitLab 15.6. - Scan result policies feature was renamed to merge request approval policies in GitLab 16.9. {{< /history >}} {{< alert type="note" >}} Scan result policies feature was renamed to merge request approval policies in GitLab 16.9. {{< /alert >}} You can use merge request approval policies for multiple purposes, including: - Detect results from security and license scanners to enforce approval rules. For example, one type of merge request policy is a security approval policy that allows approval to be required based on the findings of one or more security scan jobs. Merge request approval policies are evaluated after a CI scanning job is fully executed and both vulnerability and license type policies are evaluated based on the job artifact reports that are published in the completed pipeline. - Enforce approval rules on all merge requests that meet certain conditions. For example, enforce that MRs are reviewed by multiple users with Developer and Maintainer roles for all MRs that target default branches. - Enforce settings for security and compliance on a project. For example, prevent users who have authored or committed changes to an MR from approving the MR. 
Or prevent users from pushing or force pushing to the default branch to ensure all changes go through an MR. {{< alert type="note" >}} When a protected branch is created or deleted, the policy approval rules synchronize, with a delay of 1 minute. {{< /alert >}} The following video gives you an overview of GitLab merge request approval policies (previously scan result policies): <div class="video-fallback"> See the video: <a href="https://youtu.be/w5I9gcUgr9U">Overview of GitLab Scan Result Policies</a>. </div> <figure class="video-container"> <iframe src="https://www.youtube-nocookie.com/embed/w5I9gcUgr9U" frameborder="0" allowfullscreen> </iframe> </figure> ## Restrictions - You can enforce merge request approval policies only on [protected](../../project/repository/branches/protected.md) target branches. - You can assign a maximum of five rules to each policy. - You can assign a maximum of five merge request approval policies to each security policy project. - Policies created for a group or subgroup can take some time to apply to all the merge requests in the group. The time it takes is determined by the number of projects and the number of merge requests in those projects. Typically, the time taken is a matter of seconds. For groups with many thousands of projects and merge requests, this could take several minutes, based on what we've previously observed. - Merge request approval policies do not check the integrity or authenticity of the scan results generated in the artifact reports. - A merge request approval policy is evaluated according to its rules. By default, if the rules are invalid, or can't be evaluated, approval is required. You can change this behavior with the [`fallback_behavior` field](#fallback_behavior). ## Pipeline requirements A merge request approval policy is enforced according to the outcome of the pipeline. 
Consider the following when implementing a merge request approval policy: - A merge request approval policy evaluates completed pipeline jobs, ignoring manual jobs. When the manual jobs are run, the policy re-evaluates the merge request's jobs. - For a merge request approval policy that evaluates the results of security scanners, all specified scanners must have output a security report. If not, approvals are enforced to minimize the risk of vulnerabilities being introduced. This behavior can affect: - New projects where security scans are not yet established. - Branches created before the security scans were configured. - Projects with inconsistent scanner configurations between branches. - The pipeline must produce artifacts for all enabled scanners, for both the source and target branches. If not, there's no basis for comparison and so the policy can't be evaluated. You should use a scan execution policy to enforce this requirement. - Policy evaluation depends on a successful and completed merge base pipeline. If the merge base pipeline is skipped, merge requests with the merge base pipeline are blocked. - Security scanners specified in a policy must be configured and enabled in the projects on which the policy is enforced. If not, the merge request approval policy cannot be evaluated and the corresponding approvals are required. ## Best practices for using security scanners with merge request approval policies When you create a new project, you can enforce both merge request approval policies and security scans on that project. However, incorrectly configured security scanners can affect the merge request approval policies. There are multiple ways to configure security scans in new projects: - In the project's CI/CD configuration by adding the scanners to the initial `.gitlab-ci.yml` configuration file. - In a scan execution policy to enforce that pipelines run specific security scanners. 
- In a pipeline execution policy to control which jobs must run in pipelines.

For simple use cases, you can use the project's CI/CD configuration. For a comprehensive security strategy, consider combining merge request approval policies with the other policy types.

To minimize unnecessary approval requirements and ensure accurate security evaluations:

- **Run security scans on your default branch first**: Before creating feature branches, ensure security scans have run successfully on your default branch.
- **Use consistent scanner configuration**: Run the same scanners in both source and target branches, preferably in a single pipeline.
- **Verify that scans produce artifacts**: Ensure that scans complete successfully and produce artifacts for comparison.
- **Keep branches synchronized**: Regularly merge changes from the default branch into feature branches.
- **Consider pipeline configurations**: For new projects, include security scanners in your initial `.gitlab-ci.yml` configuration.

### Verify security scanners before you apply merge request approval policies

Implement security scans in a new project before you apply a merge request approval policy. This ensures security scanners run consistently before the policy relies on them, and helps avoid situations where merge requests are blocked due to missing security scans.

To create and verify your security scanners and merge request approval policies together, use this recommended workflow:

1. Create the project.
1. Configure security scanners using the `.gitlab-ci.yml` configuration, a scan execution policy, or a pipeline execution policy.
1. Wait for the initial pipeline to complete on the default branch. Resolve any issues and rerun the pipeline to ensure it completes successfully before you continue.
1. Create merge requests using feature branches with the same security scanners configured. Again, ensure that the security scanners complete successfully.
1. 
Apply your merge request approval policies. ## Merge request with multiple pipelines {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/379108) in GitLab 16.2 [with a flag](../../../administration/feature_flags/_index.md) named `multi_pipeline_scan_result_policies`. Disabled by default. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/409482) in GitLab 16.3. Feature flag `multi_pipeline_scan_result_policies` removed. - Support for parent-child pipelines [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/428591) in GitLab 16.11 [with a flag](../../../administration/feature_flags/_index.md) named `approval_policy_parent_child_pipeline`. Disabled by default. - [Enabled on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues/451597) in GitLab 17.0. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/428591) in GitLab 17.1. Feature flag `approval_policy_parent_child_pipeline` removed. {{< /history >}} A project can have multiple pipeline types configured. A single commit can initiate multiple pipelines, each of which may contain a security scan. - In GitLab 16.3 and later, the results of all completed pipelines for the latest commit in the merge request's source and target branch are evaluated and used to enforce the merge request approval policy. On-demand DAST pipelines are not considered. - In GitLab 16.2 and earlier, only the results of the latest completed pipeline were evaluated when enforcing merge request approval policies. If a project uses [merge request pipelines](../../../ci/pipelines/merge_request_pipelines.md), you must set the CI/CD variable `AST_ENABLE_MR_PIPELINES` to `"true"` for the security scanning jobs to be present in the pipeline. For more information see [Use security scanning tools with merge request pipelines](../detect/security_configuration.md#use-security-scanning-tools-with-merge-request-pipelines). 
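The variable can be set in the project's `.gitlab-ci.yml` alongside the scanner templates. A minimal sketch, assuming the included templates are illustrative; include the scanners your policy requires:

```yaml
include:
  - template: Jobs/SAST.gitlab-ci.yml
  - template: Jobs/Secret-Detection.gitlab-ci.yml

variables:
  # Run the security scanning jobs in merge request pipelines
  AST_ENABLE_MR_PIPELINES: "true"
```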
For projects where many pipelines have run on the latest commit (for example, dormant projects), policy evaluation considers a maximum of 1,000 pipelines from both the source and target branches of the merge request. For parent-child pipelines, policy evaluation considers a maximum of 1,000 child pipelines.

## Merge request approval policy editor

{{< history >}}

- [Enabled by default](https://gitlab.com/gitlab-org/gitlab/-/issues/369473) in GitLab 15.6.

{{< /history >}}

{{< alert type="note" >}}

Only project Owners have the [permissions](../../permissions.md#project-members-permissions) to select Security Policy Project.

{{< /alert >}}

Once your policy is complete, save it by selecting **Configure with a merge request** at the bottom of the editor. This redirects you to the merge request on the project's configured security policy project. If a security policy project doesn't link to your project, GitLab creates such a project for you. Existing policies can also be removed from the editor interface by selecting **Delete policy** at the bottom of the editor.

Most policy changes take effect as soon as the merge request is merged. Any changes that do not go through a merge request and are committed directly to the default branch may require up to 10 minutes before the policy changes take effect.

The [policy editor](_index.md#policy-editor) supports YAML mode and rule mode.

{{< alert type="note" >}}

Propagating merge request approval policies created for groups with a large number of projects takes a while to complete.

{{< /alert >}}

## Merge request approval policies schema

The YAML file with merge request approval policies consists of an array of objects matching the merge request approval policy schema nested under the `approval_policy` key. You can configure a maximum of five policies under the `approval_policy` key.

{{< alert type="note" >}}

Merge request approval policies were previously defined under the `scan_result_policy` key.
Until GitLab 17.0, policies could be defined under both keys. Starting from GitLab 17.0, only the `approval_policy` key is supported.

{{< /alert >}}

When you save a new policy, GitLab validates its contents against [this JSON schema](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/validators/json_schemas/security_orchestration_policy.json). If you're not familiar with how to read [JSON schemas](https://json-schema.org/), the following sections and tables provide an alternative.

| Field | Type | Required | Description |
|-------------------|------------------------------------------|----------|------------------------------------------------------|
| `approval_policy` | `array` of merge request approval policy objects | true | List of merge request approval policies (maximum 5). |

## Merge request approval policy schema

{{< history >}}

- The `approval_settings` fields were [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418752) in GitLab 16.4 [with flags](../../../administration/feature_flags/_index.md) named `scan_result_policies_block_unprotecting_branches`, `scan_result_any_merge_request`, or `scan_result_policies_block_force_push`. See the `approval_settings` section below for more information.

{{< /history >}}

| Field | Type | Required | Possible values | Description |
|---------------------|--------------------|----------|-----------------|----------------------------------------------------------|
| `name` | `string` | true | | Name of the policy. Maximum of 255 characters. |
| `description` | `string` | false | | Description of the policy. |
| `enabled` | `boolean` | true | `true`, `false` | Flag to enable (`true`) or disable (`false`) the policy. |
| `rules` | `array` of rules | true | | List of rules that the policy applies. |
| `actions` | `array` of actions | false | | List of actions that the policy enforces. |
| `approval_settings` | `object` | false | | Project settings that the policy overrides.
|
| `fallback_behavior` | `object` | false | | Settings that affect invalid or unenforceable rules. |
| `policy_scope` | `object` of [`policy_scope`](_index.md#configure-the-policy-scope) | false | | Defines the scope of the policy based on the projects, groups, or compliance framework labels you specify. |
| `policy_tuning` | `object` | false | | (Experimental) Settings that affect policy comparison logic. |
| `bypass_settings` | `object` | false | | Settings that affect when certain branches, tokens, or accounts can bypass a policy. |

## `scan_finding` rule type

{{< history >}}

- The merge request approval policy field `vulnerability_attributes` was [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/123052) in GitLab 16.2 [with a flag](../../../administration/feature_flags/_index.md) named `enforce_vulnerability_attributes_rules`. [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/418784) in GitLab 16.3. Feature flag removed.
- The merge request approval policy field `vulnerability_age` was [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/123956) in GitLab 16.2.
- The `branch_exceptions` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418741) in GitLab 16.3 [with a flag](../../../administration/feature_flags/_index.md) named `security_policies_branch_exceptions`. [Generally available](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/133753) in GitLab 16.5. Feature flag removed.
- The `vulnerability_states` option `newly_detected` was [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/422414) in GitLab 17.0 and the options `new_needs_triage` and `new_dismissed` were added to replace it.

{{< /history >}}

This rule enforces the defined actions based on security scan findings.
| Field | Type | Required | Possible values | Description | |----------------------------|---------------------|--------------------------------------------|--------------------------------------------------------------------------------------------------------------------|-------------| | `type` | `string` | true | `scan_finding` | The rule's type. | | `branches` | `array` of `string` | true if `branch_type` field does not exist | `[]` or the branch's name | Applicable only to protected target branches. An empty array, `[]`, applies the rule to all protected target branches. Cannot be used with the `branch_type` field. | | `branch_type` | `string` | true if `branches` field does not exist | `default` or `protected` | The types of protected branches the given policy applies to. Cannot be used with the `branches` field. Default branches must also be `protected`. | | `branch_exceptions` | `array` of `string` | false | Names of branches | Target branches to exclude from this rule. | | `scanners` | `array` of `string` | true | `[]` or `sast`, `secret_detection`, `dependency_scanning`, `container_scanning`, `dast`, `coverage_fuzzing`, `api_fuzzing` | The security scanners for this rule to consider. `sast` includes results from both SAST and SAST IaC scanners. An empty array, `[]`, applies the rule to all security scanners.| | `vulnerabilities_allowed` | `integer` | true | Greater than or equal to zero | Number of vulnerabilities allowed before this rule is considered. | | `severity_levels` | `array` of `string` | true | `info`, `unknown`, `low`, `medium`, `high`, `critical` | The severity levels for this rule to consider. 
| `vulnerability_states` | `array` of `string` | true | `[]` or `detected`, `confirmed`, `resolved`, `dismissed`, `new_needs_triage`, `new_dismissed` | All vulnerabilities fall into two categories:<br><br>**Newly Detected Vulnerabilities** - Vulnerabilities identified in the merge request branch itself but that do not currently exist on the MR's target branch. This policy option requires a pipeline to complete before the rule is evaluated so that it knows whether vulnerabilities are newly detected or not. Merge requests are blocked until the pipeline and necessary security scans are complete. The `new_needs_triage` option considers the status<br><br> • Detected<br><br> The `new_dismissed` option considers the status<br><br> • Dismissed<br><br>**Pre-Existing Vulnerabilities** - these policy options are evaluated immediately and do not require a pipeline to complete, as they consider only vulnerabilities previously detected in the default branch.<br><br> • `Detected` - the policy looks for vulnerabilities in the detected state.<br> • `Confirmed` - the policy looks for vulnerabilities in the confirmed state.<br> • `Dismissed` - the policy looks for vulnerabilities in the dismissed state.<br> • `Resolved` - the policy looks for vulnerabilities in the resolved state.<br><br>An empty array, `[]`, covers the same statuses as `['new_needs_triage', 'new_dismissed']`. |
| `vulnerability_attributes` | `object` | false | `{false_positive: boolean, fix_available: boolean}` | All vulnerability findings are considered by default, but filters can be applied to consider only vulnerability findings:<br><br> • With a fix available (`fix_available: true`)<br> • With no fix available (`fix_available: false`)<br> • That are false positive (`false_positive: true`)<br> • That are not false positive (`false_positive: false`)<br> • Or a combination of these.
For example (`fix_available: true, false_positive: false`) | | `vulnerability_age` | `object` | false | N/A | Filter pre-existing vulnerability findings by age. A vulnerability's age is calculated as the time since it was detected in the project. The criteria are `operator`, `value`, and `interval`.<br>- The `operator` criterion specifies if the age comparison used is older than (`greater_than`) or younger than (`less_than`).<br>- The `value` criterion specifies the numeric value representing the vulnerability's age.<br>- The `interval` criterion specifies the unit of measure of the vulnerability's age: `day`, `week`, `month`, or `year`.<br><br>Example: `operator: greater_than`, `value: 30`, `interval: day`. | ## `license_finding` rule type {{< history >}} - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/8092) in GitLab 15.9 [with a flag](../../../administration/feature_flags/_index.md) named `license_scanning_policies`. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/397644) in GitLab 15.11. Feature flag `license_scanning_policies` removed. - The `branch_exceptions` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418741) in GitLab 16.3 [with a flag](../../../administration/feature_flags/_index.md) named `security_policies_branch_exceptions`. Enabled by default. [Generally available](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/133753) in GitLab 16.5. Feature flag removed. - The `licenses` field was [introduced](https://gitlab.com/groups/gitlab-org/-/epics/10203) in GitLab 17.11 [with a flag](../../../administration/feature_flags/_index.md) named `exclude_license_packages`. Feature flag removed. {{< /history >}} This rule enforces the defined actions based on license findings. 
| Field | Type | Required | Possible values | Description |
|----------------|----------|-----------------------------------------------|------------------------------|-------------|
| `type` | `string` | true | `license_finding` | The rule's type. |
| `branches` | `array` of `string` | true if `branch_type` field does not exist | `[]` or the branch's name | Applicable only to protected target branches. An empty array, `[]`, applies the rule to all protected target branches. Cannot be used with the `branch_type` field. |
| `branch_type` | `string` | true if `branches` field does not exist | `default` or `protected` | The types of protected branches the given policy applies to. Cannot be used with the `branches` field. Default branches must also be `protected`. |
| `branch_exceptions` | `array` of `string` | false | Names of branches | Target branches to exclude from this rule. |
| `match_on_inclusion_license` | `boolean` | true if `licenses` field does not exist | `true`, `false` | Whether the rule matches inclusion or exclusion of licenses listed in `license_types`. |
| `license_types` | `array` of `string` | true if `licenses` field does not exist | license types | [SPDX license names](https://spdx.org/licenses) to match on, for example `Affero General Public License v1.0` or `MIT License`. |
| `license_states` | `array` of `string` | true | `newly_detected`, `detected` | Whether to match newly detected and/or previously detected licenses. The `newly_detected` state triggers approval when either a new package is introduced or when a new license for an existing package is detected. |
| `licenses` | `object` | true if `license_types` field does not exist | `licenses` object | [SPDX license names](https://spdx.org/licenses) to match on including package exceptions.
|

### `licenses` object

| Field | Type | Required | Possible values | Description |
|-----------|----------|-----------------------------------------|------------------------------------------------------|------------------------------------------------------------|
| `denied` | `object` | true if `allowed` field does not exist | `array` of `licenses_with_package_exclusion` objects | The list of denied licenses including package exceptions. |
| `allowed` | `object` | true if `denied` field does not exist | `array` of `licenses_with_package_exclusion` objects | The list of allowed licenses including package exceptions. |

### `licenses_with_package_exclusion` object

| Field | Type | Required | Possible values | Description |
|--------|----------|----------|-------------------|----------------------------------------------------|
| `name` | `string` | true | SPDX license name | [SPDX license name](https://spdx.org/licenses). |
| `packages` | `object` | false | `packages` object | List of package exceptions for the given license.
|

### `packages` object

| Field | Type | Required | Possible values | Description |
|--------|----------|----------|-------------------------------------------------------|-------------|
| `excluding` | `object` | true | {purls: `array` of `strings` using the `uri` format} | List of package exceptions for the given license. Define the list of package exceptions using the [`purl`](https://github.com/package-url/purl-spec?tab=readme-ov-file#purl) components `scheme:type/name@version`. The `scheme:type/name` components are required. The `@` and `version` are optional. If a version is specified, only that version is considered an exception. If no version is specified and the `@` character is added at the end of the `purl`, only packages with the exact name are considered a match. If the `@` character is not added to the package name, all packages with the same prefix for the given license are considered matches. For example, a purl `pkg:gem/bundler` matches the `bundler` and `bundler-stats` packages because both packages use the same license. Defining a `purl` `pkg:gem/bundler@` matches only the `bundler` package.
|

## `any_merge_request` rule type

{{< history >}}

- The `branch_exceptions` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418741) in GitLab 16.3 [with a flag](../../../administration/feature_flags/_index.md) named `security_policies_branch_exceptions`. Enabled by default. [Generally available](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/133753) in GitLab 16.5. Feature flag removed.
- The `any_merge_request` rule type was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418752) in GitLab 16.4. Enabled by default. [Generally available](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/136298) in GitLab 16.6. Feature flag [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/432127).

{{< /history >}}

This rule enforces the defined actions for any merge request based on the commit signatures.

| Field | Type | Required | Possible values | Description |
|---------------------|---------------------|--------------------------------------------|---------------------------|-------------|
| `type` | `string` | true | `any_merge_request` | The rule's type. |
| `branches` | `array` of `string` | true if `branch_type` field does not exist | `[]` or the branch's name | Applicable only to protected target branches. An empty array, `[]`, applies the rule to all protected target branches. Cannot be used with the `branch_type` field. |
| `branch_type` | `string` | true if `branches` field does not exist | `default` or `protected` | The types of protected branches the given policy applies to. Cannot be used with the `branches` field. Default branches must also be `protected`. |
| `branch_exceptions` | `array` of `string` | false | Names of branches | Target branches to exclude from this rule. |
| `commits` | `string` | true | `any`, `unsigned` | Whether the rule matches for any commits, or only if unsigned commits are detected in the merge request.
|

## `require_approval` action type

{{< history >}}

- [Added](https://gitlab.com/groups/gitlab-org/-/epics/12319) support for up to five separate `require_approval` actions in GitLab 17.7 [with a flag](../../../administration/feature_flags/_index.md) named `multiple_approval_actions`.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/505374) in GitLab 17.8. Feature flag `multiple_approval_actions` removed.
- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/13550) support to specify custom roles as `role_approvers` in GitLab 17.9 [with a flag](../../../administration/feature_flags/_index.md) named `security_policy_custom_roles`. Enabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/505742) in GitLab 17.10. Feature flag `security_policy_custom_roles` removed.

{{< /history >}}

This action makes an approval rule required when the conditions are met for at least one rule in the defined policy.

If you specify multiple approvers in the same `require_approval` block, any of the eligible approvers can satisfy the approval requirement. For example, if you specify two `group_approvers` and `approvals_required` as `2`, both of the approvals can come from the same group. To require multiple approvals from unique approver types, use multiple `require_approval` actions.

| Field | Type | Required | Possible values | Description |
|-------|------|----------|-----------------|-------------|
| `type` | `string` | true | `require_approval` | The action's type. |
| `approvals_required` | `integer` | true | Greater than or equal to zero | The number of MR approvals required. |
| `user_approvers` | `array` of `string` | false | Usernames of one or more users | The users to consider as approvers. Users must have access to the project to be eligible to approve. |
| `user_approvers_ids` | `array` of `integer` | false | IDs of one or more users | The IDs of users to consider as approvers.
Users must have access to the project to be eligible to approve. |
| `group_approvers` | `array` of `string` | false | Paths of one or more groups | The groups to consider as approvers. Users with [direct membership in the group](../../project/merge_requests/approvals/rules.md#group-approvers) are eligible to approve. |
| `group_approvers_ids` | `array` of `integer` | false | IDs of one or more groups | The IDs of groups to consider as approvers. Users with [direct membership in the group](../../project/merge_requests/approvals/rules.md#group-approvers) are eligible to approve. |
| `role_approvers` | `array` of `string` | false | One or more [roles](../../permissions.md#roles) (for example: `owner`, `maintainer`). You can also specify custom roles (or custom role identifiers in YAML mode) as `role_approvers` if the custom roles have the permission to approve merge requests. The custom roles can be selected along with user and group approvers. | The roles that are eligible to approve. |

## `send_bot_message` action type

{{< history >}}

- The `send_bot_message` action type was [introduced for projects](https://gitlab.com/gitlab-org/gitlab/-/issues/438269) in GitLab 16.11 [with a flag](../../../administration/feature_flags/_index.md) named `approval_policy_disable_bot_comment`. Disabled by default.
- [Enabled on GitLab Self-Managed, and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/issues/454852) in GitLab 17.0.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/454852) in GitLab 17.3. Feature flag `approval_policy_disable_bot_comment` removed.
- The `send_bot_message` action type was [introduced for groups](https://gitlab.com/gitlab-org/gitlab/-/issues/469449) in GitLab 17.2 [with a flag](../../../administration/feature_flags/_index.md) named `approval_policy_disable_bot_comment_group`. Disabled by default.
- [Enabled on GitLab Self-Managed, and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/issues/469449) in GitLab 17.2.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/469449) in GitLab 17.3. Feature flag `approval_policy_disable_bot_comment_group` removed.

{{< /history >}}

This action enables configuration of the bot message in merge requests when policy violations are detected. If the action is not specified, the bot message is enabled by default. If multiple policies are defined, the bot message is sent as long as at least one of those policies has the `send_bot_message` action enabled.

| Field | Type | Required | Possible values | Description |
|-------|------|----------|-----------------|-------------|
| `type` | `string` | true | `send_bot_message` | The action's type. |
| `enabled` | `boolean` | true | `true`, `false` | Whether a bot message should be created when policy violations are detected. Default: `true` |

### Example bot messages

![Example bot message listing policy violations from detected vulnerabilities](img/scan_result_policy_example_bot_message_vulnerabilities_v17_0.png)

![Example bot message listing policy violations from missing scan artifacts](img/scan_result_policy_example_bot_message_artifacts_v17_0.png)

## Warn mode

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/15552) in GitLab 17.8 [with a flag](../../../administration/feature_flags/_index.md) named `security_policy_approval_warn_mode`. Disabled by default.

{{< /history >}}

When warn mode is enabled and a merge request triggers a security policy that doesn't require any additional approvers, a bot comment is added to the merge request. The comment directs users to the policy for more information.

## `approval_settings`

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/420724) the `block_group_branch_modification` field in GitLab 16.8 [with flag](../../../administration/feature_flags/_index.md) named `scan_result_policy_block_group_branch_modification`.
- [Enabled on GitLab.com and GitLab Self-Managed](https://gitlab.com/gitlab-org/gitlab/-/issues/437306) in GitLab 17.6.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/503930) in GitLab 17.7. Feature flag `scan_result_policy_block_group_branch_modification` removed. - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/423101) the `block_unprotecting_branches` field in GitLab 16.4 [with flag](../../../administration/feature_flags/_index.md) named `scan_result_policy_settings`. Disabled by default. - The `scan_result_policy_settings` feature flag was replaced by the `scan_result_policies_block_unprotecting_branches` feature flag in 16.4. - The `block_unprotecting_branches` field was [replaced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/137153) by `block_branch_modification` field in GitLab 16.7. - [Enabled on GitLab.com and GitLab Self-Managed](https://gitlab.com/gitlab-org/gitlab/-/issues/423901) in GitLab 16.7. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/433415) in GitLab 16.11. Feature flag `scan_result_policies_block_unprotecting_branches` removed. - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418752) the `prevent_approval_by_author`, `prevent_approval_by_commit_author`, `remove_approvals_with_new_commit`, and `require_password_to_approve` fields in GitLab 16.4 [with flag](../../../administration/feature_flags/_index.md) named `scan_result_any_merge_request`. Disabled by default. - [Enabled on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues/423988) in GitLab 16.6. - [Enabled on GitLab Self-Managed](https://gitlab.com/gitlab-org/gitlab/-/issues/423988) in GitLab 16.7. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/432127) in GitLab 16.8. Feature flag `scan_result_any_merge_request` removed. - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/420629) the `prevent_pushing_and_force_pushing` field in GitLab 16.4 [with flag](../../../administration/feature_flags/_index.md) named `scan_result_policies_block_force_push`. Disabled by default. 
- [Enabled on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues/427260) in GitLab 16.6. - [Enabled on GitLab Self-Managed](https://gitlab.com/gitlab-org/gitlab/-/issues/427260) in GitLab 16.7. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/432123) in GitLab 16.9. Feature flag `scan_result_policies_block_force_push` removed. {{< /history >}} The settings set in the policy overwrite settings in the project. | Field | Type | Required | Possible values | Applicable rule type | Description | |-------------------------------------|-----------------------|----------|---------------------------------------------------------------|----------------------|-------------| | `block_branch_modification` | `boolean` | false | `true`, `false` | All | When enabled, prevents a user from removing a branch from the protected branches list, deleting a protected branch, or changing the default branch if that branch is included in the security policy. This ensures users cannot remove protection status from a branch to merge vulnerable code. Enforced based on `branches`, `branch_type` and `policy_scope` and regardless of detected vulnerabilities. | | `block_group_branch_modification` | `boolean` or `object` | false | `true`, `false`, `{ enabled: boolean, exceptions: [{ id: Integer}] }` | All | When enabled, prevents a user from removing group-level protected branches on every group the policy applies to. If `block_branch_modification` is `true`, implicitly defaults to `true`. Add top-level groups that support [group-level protected branches](../../project/repository/branches/protected.md#in-a-group) as `exceptions` | | `prevent_approval_by_author` | `boolean` | false | `true`, `false` | `Any merge request` | When enabled, merge request authors cannot approve their own MRs. This ensures code authors cannot introduce vulnerabilities and approve code to merge. 
|
| `prevent_approval_by_commit_author` | `boolean` | false | `true`, `false` | `Any merge request` | When enabled, users who have contributed code to the MR are ineligible for approval. This ensures code committers cannot introduce vulnerabilities and approve code to merge. |
| `remove_approvals_with_new_commit` | `boolean` | false | `true`, `false` | `Any merge request` | When enabled, if an MR receives all necessary approvals to merge, but then a new commit is added, new approvals are required. This ensures new commits that may include vulnerabilities cannot be introduced. |
| `require_password_to_approve` | `boolean` | false | `true`, `false` | `Any merge request` | When enabled, approvers must confirm their password before their approval is recorded. Password confirmation adds an extra layer of security. |
| `prevent_pushing_and_force_pushing` | `boolean` | false | `true`, `false` | All | When enabled, prevents users from pushing and force pushing to a protected branch if that branch is included in the security policy. This ensures users do not bypass the merge request process to add vulnerable code to a branch. |

## `fallback_behavior`

{{< history >}}

- The `fallback_behavior` field was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/451784) in GitLab 17.0 [with a flag](../../../administration/feature_flags/_index.md) named `security_scan_result_policies_unblock_fail_open_approval_rules`. Disabled by default.
- The `fallback_behavior` field was [enabled on GitLab.com, GitLab Self-Managed, and GitLab Dedicated](https://gitlab.com/groups/gitlab-org/-/epics/10816) in GitLab 17.0.

{{< /history >}}

{{< alert type="flag" >}}

On GitLab Self-Managed, by default the `fallback_behavior` field is available. To hide the feature, an administrator can [disable the feature flag](../../../administration/feature_flags/_index.md) named `security_scan_result_policies_unblock_fail_open_approval_rules`. On GitLab.com and GitLab Dedicated, this feature is available.
{{< /alert >}}

| Field | Type | Required | Possible values | Description |
|--------|----------|----------|--------------------|-------------|
| `fail` | `string` | false | `open` or `closed` | `closed` (default): Invalid or unenforceable rules of a policy require approval. `open`: Invalid or unenforceable rules of a policy do not require approval. |

## `policy_tuning`

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/498624) support for use in pipeline execution policies in GitLab 17.10 [with a flag](../../../administration/feature_flags/_index.md) named `unblock_rules_using_pipeline_execution_policies`. Enabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/525270) in GitLab 18.3. Feature flag `unblock_rules_using_pipeline_execution_policies` removed.

{{< /history >}}

| Field | Type | Required | Possible values | Description |
|--------|----------|----------|--------------------|-------------|
| `unblock_rules_using_execution_policies` | `boolean` | false | `true`, `false` | When enabled, approval rules do not block merge requests when a scan is required by a scan execution policy or a pipeline execution policy but a required scan artifact is missing from the target branch. This option only works when the project or group has an existing scan execution policy or pipeline execution policy with matching scanners.
| ### Examples #### Example of `policy_tuning` with a scan execution policy You can use this example in a `.gitlab/security-policies/policy.yml` file stored in a [security policy project](enforcement/security_policy_projects.md): ```yaml scan_execution_policy: - name: Enforce dependency scanning description: '' enabled: true policy_scope: projects: excluding: [] rules: - type: pipeline branch_type: all actions: - scan: dependency_scanning approval_policy: - name: Dependency scanning approvals description: '' enabled: true policy_scope: projects: excluding: [] rules: - type: scan_finding scanners: - dependency_scanning vulnerabilities_allowed: 0 severity_levels: [] vulnerability_states: [] branch_type: protected actions: - type: require_approval approvals_required: 1 role_approvers: - developer - type: send_bot_message enabled: true fallback_behavior: fail: closed policy_tuning: unblock_rules_using_execution_policies: true ``` #### Example of `policy_tuning` with a pipeline execution policy {{< alert type="warning" >}} This feature does not work with pipeline execution policies created before GitLab 17.10. To use this feature with older pipeline execution policies, copy, delete, and recreate the policies. For more information, see [Recreate pipeline execution policies created before GitLab 17.10](#recreate-pipeline-execution-policies-created-before-gitlab-1710). 
{{< /alert >}}

You can use this example in a `.gitlab/security-policies/policy.yml` file stored in a [security policy project](enforcement/security_policy_projects.md):

```yaml
---
pipeline_execution_policy:
- name: Enforce dependency scanning
  description: ''
  enabled: true
  pipeline_config_strategy: inject_policy
  content:
    include:
    - project: my-group/pipeline-execution-ci-project
      file: policy-ci.yml
      ref: main # optional
```

The linked pipeline execution policy CI/CD configuration in `policy-ci.yml`:

```yaml
include:
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml
```

##### Recreate pipeline execution policies created before GitLab 17.10

Pipeline execution policies created before GitLab 17.10 do not contain the data required to use the `policy_tuning` feature. To use this feature with older pipeline execution policies, copy and delete the old policies, then recreate them.

<i class="fa-youtube-play" aria-hidden="true"></i> For a video walkthrough, see [Security policies: Recreate a pipeline execution policy for use with `policy_tuning`](https://youtu.be/XN0jCQWlk1A).
<!-- Video published on 2025-03-07 -->

To recreate a pipeline execution policy:

1. On the left sidebar, select **Search or go to** and find your group.
1. Select **Secure > Policies**.
1. Select the pipeline execution policy you want to recreate.
1. On the right sidebar, select the **YAML** tab and copy the contents of the entire policy file.
1. Next to the policies table, select the vertical ellipsis ({{< icon name="ellipsis_v" >}}), and select **Delete**.
1. Merge the generated merge request.
1. Go back to **Secure > Policies** and select **New policy**.
1. In the **Pipeline execution policy** section, select **Select policy**.
1. In **YAML mode**, paste the contents of the old policy.
1. Select **Update via merge request** and merge the generated merge request.
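The pipeline execution policy example above shows only the scanning side. A companion merge request approval policy can opt in to the tuning behavior with `policy_tuning`, mirroring the scan execution policy example earlier in this section. The following is a minimal sketch; the policy name, rule, and approver role are illustrative:

```yaml
approval_policy:
- name: Dependency scanning approvals
  description: ''
  enabled: true
  rules:
  - type: scan_finding
    scanners:
    - dependency_scanning
    vulnerabilities_allowed: 0
    severity_levels: []
    vulnerability_states: []
    branch_type: protected
  actions:
  - type: require_approval
    approvals_required: 1
    role_approvers:
    - developer
  # Do not block the MR when the target branch is missing a scan artifact
  # that a pipeline execution policy with matching scanners requires.
  policy_tuning:
    unblock_rules_using_execution_policies: true
```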
## Policy scope schema To customize policy enforcement, you can define a policy's scope to either include or exclude specified projects, groups, or compliance framework labels. For more details, see [Scope](_index.md#configure-the-policy-scope). ## `bypass_settings` The `bypass_settings` field allows you to specify exceptions to the policy for certain branches, access tokens, or service accounts. When a bypass condition is met, the policy is not enforced for the matching merge request or branch. | Field | Type | Required | Description | |-------------------|---------|----------|---------------------------------------------------------------------------------| | `branches` | array | false | List of source and target branches (by name or pattern) that bypass the policy. | | `access_tokens` | array | false | List of access token IDs that bypass the policy. | | `service_accounts`| array | false | List of service account IDs that bypass the policy. | ### Source branch exceptions {{< history >}} - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/18113) in GitLab 18.2 [with a flag](../../../administration/feature_flags/_index.md) named `approval_policy_branch_exceptions`. Enabled by default - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/543778) in GitLab 18.3. Feature flag `approval_policy_branch_exceptions` removed. {{< /history >}} With branch-based exceptions, you can configure merge request approval policies to automatically waive approval requirements for specific source and target branch combinations. This enables you to preserve security governance and maintain strict approval rules for certain types of merges, such as feature-to-main, while allowing more flexibility for others, such as release-to-main. | Field | Type | Required | Possible values | Description | |---------|--------|----------|-----------------|-------------| | `source`| object | false | `name` (string) or `pattern` (string) | Source branch exception. 
Specify either an exact name or a pattern. | | `target`| object | false | `name` (string) or `pattern` (string) | Target branch exception. Specify either an exact name or a pattern. | ### Access token and service account exceptions {{< history >}} - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/18112) in GitLab 18.2 [with a flag](../../../administration/feature_flags/_index.md) named `security_policies_bypass_options_tokens_accounts`. Enabled by default - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/551129) in GitLab 18.3. Feature flag `security_policies_bypass_options_tokens_accounts` removed. {{< /history >}} With access token and service account exceptions, you can designate specific service accounts and access tokens that can bypass merge request approval policies when necessary. This approach enables automations that you trust to operate without manual approval while maintaining restrictions for human users. For example, trusted automations might include CI/CD pipelines, repository mirroring, and automated updates. Bypass events are fully audited to allow you to support your compliance and emergency access needs. | Field | Type | Required | Description | |-------|---------|----------|------------------------------------------------| | `id` | integer | true | The ID of the access token or service account. 
|

#### Example YAML

```yaml
bypass_settings:
  access_tokens:
    - id: 123
    - id: 456
  service_accounts:
    - id: 789
    - id: 1011
```

## Example `policy.yml` in a security policy project

You can use this example in a `.gitlab/security-policies/policy.yml` file stored in a [security policy project](enforcement/security_policy_projects.md):

```yaml
---
approval_policy:
- name: critical vulnerability CS approvals
  description: critical severity level only for container scanning
  enabled: true
  rules:
  - type: scan_finding
    branches:
    - main
    scanners:
    - container_scanning
    vulnerabilities_allowed: 0
    severity_levels:
    - critical
    vulnerability_states: []
    vulnerability_attributes:
      false_positive: true
      fix_available: true
  actions:
  - type: require_approval
    approvals_required: 1
    user_approvers:
    - adalberto.dare
- name: secondary CS approvals
  description: secondary only for container scanning
  enabled: true
  rules:
  - type: scan_finding
    branches:
    - main
    scanners:
    - container_scanning
    vulnerabilities_allowed: 1
    severity_levels:
    - low
    - unknown
    vulnerability_states:
    - detected
    vulnerability_age:
      operator: greater_than
      value: 30
      interval: day
  actions:
  - type: require_approval
    approvals_required: 1
    role_approvers:
    - owner
    - 1002816 # Example custom role identifier called "AppSec Engineer"
```

In this example:

- Every MR that contains new `critical` vulnerabilities identified by container scanning requires one approval from `adalberto.dare`.
- Every MR that contains more than one preexisting `low` or `unknown` vulnerability older than 30 days identified by container scanning requires one approval from either a project member with the Owner role or a user with the custom role `AppSec Engineer`.

## Example for Merge Request Approval Policy editor

You can use this example in the YAML mode of the [Merge Request Approval Policy editor](#merge-request-approval-policy-editor).
It corresponds to a single object from the previous example: ```yaml type: approval_policy name: critical vulnerability CS approvals description: critical severity level only for container scanning enabled: true rules: - type: scan_finding branches: - main scanners: - container_scanning vulnerabilities_allowed: 1 severity_levels: - critical vulnerability_states: [] actions: - type: require_approval approvals_required: 1 user_approvers: - adalberto.dare ``` ## Understanding merge request approval policy approvals {{< history >}} - The branch comparison logic for `scan_finding` was [changed](https://gitlab.com/gitlab-org/gitlab/-/issues/428518) in GitLab 16.8 [with a flag](../../../administration/feature_flags/_index.md) named `scan_result_policy_merge_base_pipeline`. Disabled by default. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/435297) in GitLab 16.9. Feature flag `scan_result_policy_merge_base_pipeline` removed. {{< /history >}} ### Scope of merge request approval policy comparison - To determine when approval is required on a merge request, we compare completed pipelines for each supported pipeline source for the source and target branch (for example, `feature`/`main`). This ensures the most comprehensive evaluation of scan results. - For the source branch, the comparison pipelines are all completed pipelines for each supported pipeline source for the latest commit in the source branch. - If the merge request approval policy looks only for the newly detected states (`new_needs_triage` & `new_dismissed`), the comparison is performed against all the supported pipeline sources in the common ancestor between the source and the target branch. An exception is when using Merged Results pipelines, in which case the comparison is done against the tip of the MR's target branch. 
- If the merge request approval policy looks for pre-existing states (`detected`, `confirmed`, `resolved`, `dismissed`), the comparison is always done against the tip of the default branch (for example, `main`).
- If the merge request approval policy looks for a combination of new and pre-existing vulnerability states, the comparison is done against the common ancestor of the source and target branches.
- Merge request approval policies consider all supported pipeline sources (based on the [`CI_PIPELINE_SOURCE` variable](../../../ci/variables/predefined_variables.md)) when comparing results from the source and target branches to determine whether a merge request requires approval. Pipelines with source `webide` are not supported.
- In GitLab 16.11 and later, the child pipelines of each of the selected pipelines are also considered for comparison.

### Accepting risk and ignoring vulnerabilities in future merge requests

For merge request approval policies that are scoped to newly detected findings (`new_needs_triage` or `new_dismissed` statuses), it's important to understand the implications of this vulnerability state. A finding is considered newly detected if it exists on the merge request's branch but not on the target branch. When a merge request with a branch that contains newly detected findings is approved and merged, approvers are "accepting the risk" of those vulnerabilities. If one or more of the same vulnerabilities is detected after this time, the status would be `detected` and thus ignored by a policy configured to consider `new_needs_triage` or `new_dismissed` findings.

For example:

- A merge request approval policy is created to block critical SAST findings. If a SAST finding for CVE-1234 is approved, future merge requests with the same violation will not require approval in the project.
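A policy like the one in this example can be sketched as follows. The `new_needs_triage` and `new_dismissed` states scope the rule to newly detected findings only; the policy name, branch type, and approver role are illustrative:

```yaml
approval_policy:
- name: Block new critical SAST findings
  enabled: true
  rules:
  - type: scan_finding
    branch_type: protected
    scanners:
    - sast
    vulnerabilities_allowed: 0
    severity_levels:
    - critical
    # Only findings that are new relative to the target branch
    # trigger the approval requirement.
    vulnerability_states:
    - new_needs_triage
    - new_dismissed
  actions:
  - type: require_approval
    approvals_required: 1
    role_approvers:
    - maintainer
```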
When using `new_needs_triage` and `new_dismissed` vulnerability states, the policy blocks MRs for any findings matching policy rules if they are new and not yet triaged, even if they have been dismissed. If you want to ignore vulnerabilities newly detected and then dismissed within the merge request, use only the `new_needs_triage` status.

When using license approval policies, the combination of project, component (dependency), and license are considered in the evaluation. If a license is approved as an exception, future merge requests don't require approval for the same combination of project, component (dependency), and license. The component's version is not considered in this case. If a previously approved package is updated to a new version, approvers don't need to re-approve.

For example:

- A license approval policy is created to block merge requests with newly detected licenses matching `AGPL-1.0`. A change is made in project `demo` for component `osframework` that violates the policy. If approved and merged, future merge requests to `osframework` in project `demo` with the license `AGPL-1.0` don't require approval.

### Additional approvals

Merge request approval policies require an additional approval step in some situations. For example:

- The number of security jobs is reduced in the working branch and no longer matches the number of security jobs in the target branch. Users can't skip merge request approval policies by removing scanning jobs from the CI/CD configuration. Only the security scans that are configured in the merge request approval policy rules are checked for removal. For example, consider a situation where the default branch pipeline has four security scans: `sast`, `secret_detection`, `container_scanning`, and `dependency_scanning`. A merge request approval policy enforces two scanners: `container_scanning` and `dependency_scanning`.
If an MR removes a scan that is configured in merge request approval policy, `container_scanning` for example, an additional approval is required. - Someone stops a pipeline security job, and users can't skip the security scan. - A job in a merge request fails and is configured with `allow_failure: false`. As a result, the pipeline is in a blocked state. - A pipeline has a manual job that must run successfully for the entire pipeline to pass. ### Managing scan findings used to evaluate approval requirements Merge request approval policies evaluate the artifact reports generated by scanners in your pipelines after the pipeline has completed. Merge request approval policies focus on evaluating the results and determining approvals based on the scan result findings to identify potential risks, block merge requests, and require approval. Merge request approval policies do not extend beyond that scope to reach into artifact files or scanners. Instead, we trust the results from artifact reports. This gives teams flexibility in managing their scan execution and supply chain, and customizing scan results generated in artifact reports (for example, to filter out false positives) if needed. Lock file tampering, for example, is outside of the scope of security policy management, but may be mitigated through use of [Code owners](../../project/codeowners/_index.md#codeowners-file) or [external status checks](../../project/merge_requests/status_checks.md). For more information, see [issue 433029](https://gitlab.com/gitlab-org/gitlab/-/issues/433029). ![Evaluating scan result findings](img/scan_results_evaluation_white-bg_v16_8.png) ### Filter out policy violations with the attributes "Fix Available" or "False Positive" To avoid unnecessary approval requirements, these additional filters help ensure you only block MRs on the most actionable findings. 
By setting `fix_available` to `false` in YAML, or **is not** and **Fix Available** in the policy editor, the finding is not considered a policy violation when the finding has a solution or remediation available. Solutions appear at the bottom of the vulnerability object under the heading **Solution**. Remediations appear as a **Resolve with Merge Request** button within the vulnerability object. The **Resolve with Merge Request** button only appears when one of the following criteria is met: 1. A SAST vulnerability is found in a project that is on the Ultimate Tier with GitLab Duo Enterprise. 1. A container scanning vulnerability is found in a project that is on the Ultimate Tier in a job where `GIT_STRATEGY: fetch` has been set. Additionally, the vulnerability must have a package containing a fix that is available for the repositories enabled for the container image. 1. A dependency scanning vulnerability is found in a Node.js project that is managed by yarn and a fix is available. Additionally, the project must be on the Ultimate Tier and FIPS mode must be disabled for the instance. **Fix Available** only applies to dependency scanning and container scanning. By using the **False Positive** attribute, similarly, you can ignore findings detected by a policy by setting `false_positive` to `false` (or set attribute to **Is not** and **False Positive** in the policy editor). The **False Positive** attribute only applies to findings detected by our Vulnerability Extraction Tool for SAST results. ### Policy evaluation and vulnerability state changes When a user changes the status of a vulnerability (for example, dismisses the vulnerability in the vulnerability details page), GitLab does not automatically reevaluate merge request approval policies due to performance reasons. To retrieve updated data from vulnerability reports, update your merge request or rerun the related pipelines. 
This behavior ensures optimal system performance and maintains security policy enforcement. The policy evaluation occurs during the next pipeline run or when the merge request is updated, but not immediately when the vulnerability state changes. To reflect vulnerability state changes in policies immediately, manually run the pipeline or push a new commit to the merge request.

## Troubleshooting

### Merge request rules widget shows a merge request approval policy is invalid or duplicated

{{< details >}}

- Tier: Ultimate
- Offering: GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

On GitLab Self-Managed from 15.0 to 16.4, the most likely cause is that the project was exported from a group and imported into another, and had merge request approval policy rules. These rules are stored in a separate project to the one that was exported. As a result, the project contains policy rules that reference entities that don't exist in the imported project's group. The result is policy rules that are invalid, duplicated, or both.

To remove all invalid merge request approval policy rules from a GitLab instance, an administrator can run the following script in the [Rails console](../../../administration/operations/rails_console.md).
```ruby Project.joins(:approval_rules).where(approval_rules: { report_type: %i[scan_finding license_scanning] }).where.not(approval_rules: { security_orchestration_policy_configuration_id: nil }).find_in_batches.flat_map do |batch| batch.map do |project| # Get projects and their configuration_ids for applicable project rules [project, project.approval_rules.where(report_type: %i[scan_finding license_scanning]).pluck(:security_orchestration_policy_configuration_id).uniq] end.uniq.map do |project, configuration_ids| # We take only unique combinations of project + configuration_ids # If we find more configurations than what is available for the project, we take records with the extra configurations [project, configuration_ids - project.all_security_orchestration_policy_configurations.pluck(:id)] end.select { |_project, configuration_ids| configuration_ids.any? } end.each do |project, configuration_ids| # For each found pair project + ghost configuration, we remove these rules for a given project Security::OrchestrationPolicyConfiguration.where(id: configuration_ids).each do |configuration| configuration.delete_scan_finding_rules_for_project(project.id) end # Ensure we sync any potential rules from new group's policy Security::ScanResultPolicies::SyncProjectWorker.perform_async(project.id) end ``` ### Newly detected CVEs When using `new_needs_triage` and `new_dismissed`, some findings may require approval when they are not introduced by the merge request (such as a new CVE on a related dependency). These findings will not be present within the MR widget, but will be highlighted in the policy bot comment and pipeline report. ### Policies still have effect after `policy.yml` was manually invalidated In GitLab 17.2 and earlier, you may find that policies defined in a `policy.yml` file are enforced, even though the file was manually edited and no longer validates against the [policy schema](#merge-request-approval-policies-schema). 
This issue occurs because of a bug in the policy synchronization logic. Potential symptoms include:

- `approval_settings` still block the removal of branch protections, block force-pushes, or otherwise affect open merge requests.
- `any_merge_request` policies still apply to open merge requests.

To resolve this you can:

- Manually edit the `policy.yml` file that defines the policy so that it becomes valid again.
- Unassign and re-assign the security policy projects where the `policy.yml` file is stored.

### Missing security scans

When using merge request approval policies, you may encounter situations where merge requests are blocked, including in new projects or when certain security scans are not executed. This behavior is by design to reduce the risk of introducing vulnerabilities into your system.

Example scenarios:

- Missing scans on source or target branches

  If security scans are missing on either the source or target branch, GitLab cannot effectively evaluate whether the merge request is introducing new vulnerabilities. In such cases, approval is required as a precautionary measure.

- New projects

  For new projects where security scans have not yet been set up or executed on the target branch, all merge requests require approval. This ensures that security checks are active from the project's inception.

- Projects with no files to scan

  Even in projects that contain no files relevant to the selected security scans, the approval requirement is still enforced. This maintains consistent security practices across all projects.

- First merge request

  The very first merge request in a new project may be blocked if the default branch doesn't have a security scan, even if the source branch has no vulnerabilities.

To resolve these issues:

- Ensure that all required security scans are configured and running successfully on both source and target branches.
- For new projects, set up and run the necessary security scans on the default branch before creating merge requests.
- Consider using scan execution policies or pipeline execution policies to ensure consistent execution of security scans across all branches.
- Consider using [`fallback_behavior`](#fallback_behavior) with `open` to prevent invalid or unenforceable rules in a policy from requiring approval.
- Consider using the [`policy tuning`](#policy_tuning) setting `unblock_rules_using_execution_policies` to address scenarios where security scan artifacts are missing and scan execution policies are enforced. When enabled, this setting makes approval rules optional when scan artifacts are missing from the target branch and a scan is required by a scan execution policy. This feature only works with an existing scan execution policy that has matching scanners. It offers flexibility in the merge request process when certain security scans cannot be performed due to missing artifacts.

### `Target: none` in security bot comments

If you see `Target: none` in security bot comments, it means GitLab couldn't find a security report for the target branch. To resolve this:

1. Run a pipeline on the target branch that includes the required security scanners.
1. Ensure the pipeline completes successfully and produces security reports.
1. Re-run the pipeline on the source branch. Creating a new commit also triggers the pipeline to re-run.

#### Security bot messages

When the target branch has no security scans:

- The security bot may list all vulnerabilities found in the source branch.
- Some of the vulnerabilities might already exist in the target branch, but without a target branch scan, GitLab cannot determine which ones are new.

Potential solutions:

1. **Manual approvals**: Temporarily approve merge requests manually for new projects until security scans are established.
1. **Targeted policies**: Create separate policies for new projects with different approval requirements.
1. **Fallback behavior**: Consider using `fail: open` for policies on new projects, but be aware this may allow users to merge vulnerabilities even if scans fail.

### Support request for debugging of merge request approval policy

GitLab.com users may submit a [support ticket](https://about.gitlab.com/support/) titled "Merge request approval policy debugging". Provide the following details:

- Group path, project path, and optionally the merge request ID
- Severity
- Current behavior
- Expected behavior

#### GitLab.com

Support teams will investigate [logs](https://log.gprd.gitlab.net/) (`pubsub-sidekiq-inf-gprd*`) to identify the failure `reason`. Below is an example response snippet from logs. You can use this query to find logs related to approvals: `json.event.keyword: "update_approvals"` and `json.project_path: "group-path/project-path"`. Optionally, you can further filter by the merge request identifier using `json.merge_request_iid`:

```json
"json": {
  "project_path": "group-path/project-path",
  "merge_request_iid": 2,
  "missing_scans": [
    "api_fuzzing"
  ],
  "reason": "Scanner removed by MR",
  "event": "update_approvals"
}
```

#### GitLab Self-Managed

Search for keywords such as the `project-path`, `api_fuzzing`, and `merge_request`. For example: `grep group-path/project-path` and `grep merge_request`. If you know the correlation ID, you can search by it. For example, if the value of `correlation_id` is 01HWN2NFABCEDFG, search for `01HWN2NFABCEDFG`.

Search in the following files:

- `/gitlab/gitlab-rails/production_json.log`
- `/gitlab/sidekiq/current`

Common failure reasons:

- Scanner removed by MR: Merge request approval policy expects that the scanners defined in the policy are present and that they successfully produce an artifact for comparison.
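The `fallback_behavior` and `policy_tuning` options discussed in this troubleshooting section are set per policy in `policy.yml`. The following sketch is illustrative only; the policy name, scanner list, and approver role are placeholders, so adjust them to your environment:

```yaml
approval_policy:
  - name: Example - require approval on new critical findings
    enabled: true
    rules:
      - type: scan_finding
        scanners: [dependency_scanning]      # placeholder scanner
        vulnerabilities_allowed: 0
        severity_levels: [critical]
        vulnerability_states: [new_needs_triage, new_dismissed]
        branch_type: protected
    actions:
      - type: require_approval
        approvals_required: 1
        role_approvers: [maintainer]         # placeholder approver role
    # Do not require approval when the policy is invalid or unenforceable.
    fallback_behavior:
      fail: open
    # Make rules optional when scan artifacts are missing from the target
    # branch and a matching scan execution policy enforces the scan.
    policy_tuning:
      unblock_rules_using_execution_policies: true
```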
### Inconsistent approvals from merge request approval policies

If you notice any inconsistencies in your merge request approval rules, you can take any of the following steps to resynchronize your policies:

- Use the [`resyncSecurityPolicies` GraphQL mutation](_index.md#resynchronize-policies-with-the-graphql-api) to resynchronize the policies.
- Unassign and then reassign the security policy project to the affected group or project.
- Alternatively, you can update a policy to trigger that policy to resynchronize for the affected group or project.
- Confirm that the syntax of the YAML file in the security policy project is valid.

These actions help ensure that your merge request approval policies are correctly applied and consistent across all merge requests. If you continue to experience issues with merge request approval policies after taking these steps, contact GitLab Support for assistance.

### Merge requests that fix a detected vulnerability require approval

If your policy configuration includes the `detected` state, merge requests that fix previously detected vulnerabilities still require approval. The merge request approval policy evaluates based on vulnerabilities that existed before the changes in the merge request, which adds an additional layer of review for any changes that affect known vulnerabilities.

If you want to allow merge requests that fix vulnerabilities to proceed without any additional approvals due to a detected vulnerability, consider removing the `detected` state from your policy configuration.
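To avoid the behavior described above, a rule can be limited to vulnerability states that represent findings newly introduced by the merge request. A sketch, with placeholder scanners and severities:

```yaml
approval_policy:
  - name: Example - block only newly introduced findings
    enabled: true
    rules:
      - type: scan_finding
        scanners: [sast, secret_detection]   # placeholders
        vulnerabilities_allowed: 0
        severity_levels: [critical, high]
        # `detected` is omitted, so merge requests that fix pre-existing
        # vulnerabilities do not trigger additional approvals.
        vulnerability_states: [new_needs_triage, new_dismissed]
        branch_type: protected
    actions:
      - type: require_approval
        approvals_required: 1
        role_approvers: [maintainer]
```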
---
stage: Security Risk Management
group: Security Policies
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Policy enforcement
description: Learn how to apply security policies across multiple groups and projects from a single, centralized location.
source: https://docs.gitlab.com/user/application_security/policies/enforcement
date_extracted: 2025-08-13
---
You can create a new security policy for each project or group, but duplicating the same policy settings across multiple top-level groups can be time-consuming and present compliance challenges. Before you create a policy, you should know whether the policy should be:

- Enforced on a specific project or group.
- Enforced on multiple projects.
- Enforced across an entire instance or top-level group.

You can enforce policies in multiple ways:

- To enforce a policy in a single project or all of the projects in a group, create the policy in that project or group.
- To enforce a policy across multiple projects, use [security policy projects](security_policy_projects.md). A security policy project is a special type of project used only to contain policies. To enforce the policies from a security policy project in other groups and projects, link to the security policy project from groups or other projects.
- To enforce policies and compliance frameworks together across a GitLab Self-Managed instance, instance administrators can use [compliance and security policy management groups](compliance_and_security_policy_groups.md).

## Policy design guidelines

When designing your policies, your goals should be to:

- Design policy enforcement strategies for minimum overhead but maximum coverage.
- Ensure separation of duties.

### Enforcement

To enforce policies to meet your requirements, consider the following factors:

- **Inheritance**: By default, a policy is enforced on the organizational units it's linked to, and all their descendant subgroups and their projects.
- **Scope**: To customize policy enforcement, you can define a policy's scope to match your needs.

#### Inheritance

To maximize policy coverage, link a security policy project to the highest organizational units that achieve your objectives: groups, subgroups, or projects. A policy is enforced on the organizational units it's linked to, and all their descendant subgroups and their projects.
Enforcement at the highest point minimizes the number of security policies required, minimizing the management overhead.

You can use policy inheritance to incrementally roll out policies. For example, when rolling out a new policy, you can enforce it on a single project, then conduct testing. If the tests pass, you can then remove it from the project and enforce it on a group, moving up the hierarchy until the policy is enforced on all applicable projects.

Policies enforced on an existing group or subgroup are automatically enforced in any new subgroups and projects created under them, provided that:

- The new subgroups and projects are included in the scope definition of the policy (for example, the scope includes all projects in this group).
- The existing group or subgroup is already linked to the security policy project.

{{< alert type="note" >}}

GitLab.com users can enforce policies against their top-level group or across subgroups, but cannot enforce policies across GitLab.com top-level groups. GitLab Self-Managed administrators can enforce policies across multiple top-level groups in their instance.

{{< /alert >}}

The following example illustrates two groups and their structure:

- Alpha group contains two subgroups, each of which contains multiple projects.
- Security and compliance group contains two policies.

**Alpha** group (contains code projects)

- **Finance** (subgroup)
  - Project A
  - Accounts receiving (subgroup)
    - Project B
    - Project C
- **Engineering** (subgroup)
  - Project K
  - Project L
  - Project M

**Security and compliance** group (contains security policy projects)

- Security Policy Management
  - Security Policy Management - security policy project
    - SAST policy
    - Secret Detection policy

Assuming no policies are enforced, consider the following examples:

- If the "SAST" policy is enforced at group Alpha, it applies to its subgroups, Finance and Engineering, and all their projects and subgroups. If the "Secret Detection" policy is also enforced at subgroup "Accounts receiving", both policies apply to projects B and C. However, only the "SAST" policy applies to project A.
- If the "SAST" policy is enforced at subgroup "Accounts receiving", it applies only to projects B and C. No policy applies to project A.
- If the "Secret Detection" policy is enforced at project K, it applies only to project K. No other subgroups or projects have a policy applied to them.

#### Scope

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/135398) in GitLab 16.7 [with a flag](../../../../administration/feature_flags/_index.md) named `security_policies_policy_scope`. Enabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/443594) in GitLab 16.11. Feature flag `security_policies_policy_scope` removed.
- Scoping by group [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/468384) in GitLab 17.4.

{{< /history >}}

You can refine a policy's scope by:

- Compliance frameworks: Enforce a policy on projects with selected compliance frameworks.
- Group:
  - All projects in a group, including all its descendant subgroups and their projects. Optionally exclude specific projects.
  - All projects in multiple groups, including their descendant subgroups and their projects. Only groups linked to the same security policy project can be listed in the policy. Optionally exclude specific projects.
- Projects: Include or exclude specific projects. Only projects linked to the same security policy project can be listed in the policy.

These options can be used together in the same policy. However, exclusion takes precedence over inclusion.

## Separation of duties

Separation of duties is vital to successfully implementing policies. Implement policies that achieve the necessary compliance and security requirements, while allowing development teams to achieve their goals.
Security and compliance teams:

- Should be responsible for defining policies and working with development teams to ensure the policies meet their needs.

Development teams:

- Should not be able to disable, modify, or circumvent the policies in any way.

To enforce a security policy project on a group, subgroup, or project, you must have either:

- The Owner role in that group, subgroup, or project.
- A [custom role](../../../custom_roles/_index.md) in that group, subgroup, or project with the `manage_security_policy_link` permission.

The Owner role and custom roles with the `manage_security_policy_link` permission follow the standard hierarchy rules across groups, subgroups, and projects:

| Organization unit | Group owner or group `manage_security_policy_link` permission | Subgroup owner or subgroup `manage_security_policy_link` permission | Project owner or project `manage_security_policy_link` permission |
|-------------------|---------------------------------------------------------------|---------------------------------------------------------------------|-------------------------------------------------------------------|
| Group             | {{< icon name="check-circle" >}} Yes                          | {{< icon name="dotted-circle" >}} No                                | {{< icon name="dotted-circle" >}} No                              |
| Subgroup          | {{< icon name="check-circle" >}} Yes                          | {{< icon name="check-circle" >}} Yes                                | {{< icon name="dotted-circle" >}} No                              |
| Project           | {{< icon name="check-circle" >}} Yes                          | {{< icon name="check-circle" >}} Yes                                | {{< icon name="check-circle" >}} Yes                              |
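The scope options described in this page can be expressed in a policy's `policy_scope` block. The following sketch is illustrative; the framework, group, and project IDs are placeholders:

```yaml
policy_scope:
  compliance_frameworks:      # enforce on projects with these frameworks
    - id: 2
  groups:
    including:
      - id: 15                # all projects in this group and its subgroups
  projects:
    excluding:
      - id: 42                # except this specific project
```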
---
stage: Security Risk Management
group: Security Policies
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Security policy projects
description: Learn how to enforce security rules in GitLab using merge request approval policies to automate scans, approvals, and compliance across your projects.
source: https://docs.gitlab.com/user/application_security/policies/security_policy_projects
date_extracted: 2025-08-13
---
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Security policy projects enforce policies across multiple projects. A security policy project is a special type of project used only to contain policies. To enforce the policies contained in a security policy project, link the security policy project to the projects, subgroups, or groups you want to enforce the policies on. A security policy project can contain multiple policies, but they are enforced together. A security policy project enforced on a group or subgroup applies to everything below it in the hierarchy, including all subgroups and their projects.

Policy changes made in a merge request take effect as soon as the merge request is merged. Those that do not go through a merge request, but instead are committed directly to the default branch, may require up to 10 minutes before the policy changes take effect.

Policies are stored in the `.gitlab/security-policies/policy.yml` YAML file.

## Security policy project implementation

Implementation options for security policy projects differ slightly between GitLab.com, GitLab Dedicated, and GitLab Self-Managed. The main difference is that on GitLab.com it's only possible to create subgroups. Ensuring separation of duties requires more granular permission configuration.

### Enforce policies globally in your GitLab.com namespace

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com

{{< /details >}}

Prerequisites:

- You must have the Owner role or a [custom role](../../../custom_roles/_index.md) with the `manage_security_policy_link` permission to link to the security policy project. For more information, see [separation of duties](_index.md#separation-of-duties).

The high-level workflow for enforcing policies globally across all subgroups and projects in your GitLab.com namespace:

1. Visit the **Policies** tab from your top-level group.
1. In the subgroup, go to the **Policies** tab and create a test policy. You can create a policy as disabled for testing. Creating the policy automatically creates a new security policy project under your top-level group. This project is used to store your `policy.yml` or policy-as-code.
1. Check and set permissions in the newly created project as desired. By default, Owners and Maintainers are able to create, edit, and delete policies. Developers can propose policy changes but cannot merge them.
1. In the security policy project created within your subgroup, create the policies required. You can use the policy editor in the `Security Policy Management` project you created, under the **Policies** tab. Or you can directly update the policies in the `policy.yml` file stored in the newly created security policy project `Security Policy Management - security policy project`.
1. Link up groups, subgroups, or projects to the security policy project. As a subgroup owner, or project owner with proper permissions, you can visit the **Policies** page and create a link to the security policy project. Include the full path; the project's name should end with "- security policy project". All linked groups, subgroups, and projects become "enforceable" by any policies created in the security policy project. For details, see [Link to a security policy project](#link-to-a-security-policy-project).
1. By default, when a policy is enabled, it is enforced on all projects in linked groups, subgroups, and projects. For more granular enforcement, add a policy scope. A policy scope allows you to enforce policies against a specific set of projects or against projects containing a set of compliance framework labels.
1. If you need additional restrictions, for example to block inherited permissions or require additional review or approval of policy changes, you can create an additional policy scoped only to your security policy project and enforce additional approvals.
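The `policy.yml` created by the workflow above might start out like the following sketch: a single scan execution policy, kept disabled while you test. The name, description, cadence, and scan type are placeholders:

```yaml
scan_execution_policy:
  - name: Example - nightly secret detection
    description: Run secret detection against default branches every night.
    enabled: false            # keep disabled while testing
    rules:
      - type: schedule
        cadence: "0 2 * * *"  # placeholder cron cadence
        branch_type: default
    actions:
      - scan: secret_detection
```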
### Enforce policies globally in GitLab Dedicated or GitLab Self-Managed

{{< details >}}

- Tier: Ultimate
- Offering: GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< alert type="note" >}}

In GitLab Self-Managed, you can also use [compliance and security policy groups](compliance_and_security_policy_groups.md) to enforce security policies across your instance.

{{< /alert >}}

Prerequisites:

- You must have the Owner role or a [custom role](../../../custom_roles/_index.md) with the `manage_security_policy_link` permission to link to the security policy project. For more information, see [separation of duties](_index.md#separation-of-duties).
- To support approval groups globally across your instance, enable `security_policy_global_group_approvers_enabled` in your [GitLab instance application settings](../../../../api/settings.md).

The high-level workflow for enforcing policies across multiple groups:

1. Create a separate group to contain your policies and ensure separation of duties. By creating a separate standalone group, you can minimize the number of users who inherit permissions.
1. In the new group, visit the **Policies** tab. This serves as the primary location of the policy editor, allowing you to create and manage policies in the UI.
1. Create a test policy (you can create a policy as disabled for testing). Creating the policy automatically creates a new security policy project under your group. This project is used to store your `policy.yml` or policy-as-code.
1. Check and set permissions in the newly created project as desired. By default, Owners and Maintainers are able to create, edit, and delete policies. Developers can propose policy changes but cannot merge them.
1. In the security policy project created in your subgroup, create the policies required. You can use the policy editor in the `Security Policy Management` project you created, under the **Policies** tab. Or you can directly update the policies in the `policy.yml` file stored in the newly created security policy project `Security Policy Management - security policy project`.
1. Link up groups, subgroups, or projects to the security policy project. As a subgroup owner, or project owner with proper permissions, you can visit the **Policies** page and create a link to the security policy project. Include the full path; the project's name should end with "- security policy project". All linked groups, subgroups, and projects become "enforceable" by any policies created in the security policy project. For more information, see [link to a security policy project](#link-to-a-security-policy-project).
1. By default, when a policy is enabled, it is enforced on all projects in linked groups, subgroups, and projects. For more granular enforcement, add a policy scope. A policy scope allows you to enforce policies against a specific set of projects or against projects that contain a set of compliance framework labels.
1. If you need additional restrictions, for example to block inherited permissions or require additional review or approval of policy changes, you can create an additional policy scoped only to your security policy project and enforce additional approvals.

## Link to a security policy project

To enforce the policies contained in a security policy project against a group, subgroup, or project, you link them. By default, all linked entities are enforced. To enforce policies granularly per policy, you can set a policy scope in each policy.

Prerequisites:

- You must have the Owner role or a [custom role](../../../custom_roles/_index.md) with the `manage_security_policy_link` permission to link to the security policy project. For more information, see [separation of duties](../_index.md#separation-of-duties).

To link a group, subgroup, or project to a security policy project:

1. On the left sidebar, select **Search or go to** and find your project, subgroup, or group.
1. Select **Secure > Policies**.
1. Select **Edit Policy Project**, then search for and select the project you would like to link from the dropdown list.
1. Select **Save**.

To unlink a security policy project, follow the same steps but instead select the trash can icon in the dialog.

You can link to a security policy project from a different subgroup in the same top-level group, or from an entirely different top-level group. However, when you enforce a [pipeline execution policy](../pipeline_execution_policies.md#schema), users must have at least read-only access to the project that contains the CI/CD configuration referenced in the policy to trigger the pipeline.

### Viewing the linked security policy project

Users who have access to the project's policy page but aren't project owners instead see a button linking to the associated security policy project.

You can link a security policy project to more than one group or project. Anyone with permission to view the security policies in one linked group or project can determine which security policies are enforced in other linked groups and projects.

## Changing policy limits

{{< details >}}

- Offering: GitLab Self-Managed

{{< /details >}}

{{< history >}}

- [Configurable limits introduced](https://gitlab.com/groups/gitlab-org/-/epics/8084) in GitLab 18.3.

{{< /history >}}

For performance reasons, GitLab limits the number of policies that can be configured in a security policy project.

{{< alert type="warning" >}}

If you reduce the limit below the number of policies currently stored in a security policy project, GitLab does not enforce any policies after the limit. To re-enable the policies, increase the limit to match the number of policies in the largest security policy project.

{{< /alert >}}

The default limits are:

| Policy type                       | Default policy limit |
|-----------------------------------|----------------------|
| Merge request approval policies   | 5                    |
| Scan execution policies           | 5                    |
| Pipeline execution policies       | 5                    |
| Vulnerability management policies | 5                    |

On GitLab Self-Managed instances, instance administrators can adjust the limits for the entire instance, up to a maximum of 20 of each type of policy. Administrators can also change the limits for a specific top-level group.

### Change the policy limits for an instance

To change the maximum number of policies your organization can store in a security policy project:

1. Go to **Admin Area** > **Settings** > **Security and compliance**.
1. Expand the **Security policies** section.
1. For each type of policy you want to change, set a new value for **Maximum number of {policy type} allowed per security policy configuration**.
1. Select **Save changes**.

#### Change the policy limits for a top-level group

Group limits can exceed the configured or default instance limits.

{{< alert type="note" >}}

Increasing these limits can affect system performance, especially if you apply a large number of complex policies.

{{< /alert >}}

To adjust the limit for a top-level group:

1. Go to **Admin Area** > **Overview** > **Groups**.
1. In the row of the top-level group you want to modify, select **Edit**.
1. For each type of policy you want to change, set a new value for **Maximum number of {policy type} allowed per security policy configuration**.
1. Select **Save changes**.

If you set the limit for an individual group to `0`, the system uses the instance-wide default value. This ensures that groups with a zero limit can still create policies according to the default instance configuration.
## Delete a security policy project {{< history >}} - Deletion protection for security policy projects was introduced in GitLab 17.8 with a flag named `reject_security_policy_project_deletion`. Enabled by default. - Deletion protection for groups that contain security policy projects was introduced in GitLab 17.9 with a flag named `reject_security_policy_project_deletion_groups`. Enabled by default. - Deletion protection for security policy projects and groups that contain security policy projects is generally available in GitLab 17.10. Feature flags `reject_security_policy_project_deletion` and `reject_security_policy_project_deletion_groups` removed. {{< /history >}} To delete a security policy project or one of its parent groups, you must remove the link to it from all other projects or groups. Otherwise, an error message is displayed when you attempt to delete a linked security policy project or a parent group.
--- stage: Security Risk Management group: Security Policies info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments description: Learn how to enforce security rules in GitLab using merge request approval policies to automate scans, approvals, and compliance across your projects. title: Security policy projects breadcrumbs: - doc - user - application_security - policies - enforcement --- {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Security policy projects enforce policies across multiple projects. A security policy project is a special type of project used only to contain policies. To enforce the policies contained in a security policy project, link the security policy project to the projects, subgroups, or groups you want to enforce the policies on. A security policy project can contain multiple policies but they are enforced together. A security policy project enforced on a group or subgroup applies to everything below in the hierarchy, including all subgroups and their projects. Policy changes made in a merge request take effect as soon as the merge request is merged. Those that do not go through a merge request, but instead are committed directly to the default branch, may require up to 10 minutes before the policy changes take effect. Policies are stored in the `.gitlab/security-policies/policy.yml` YAML file. ## Security policy project implementation Implementation options for security policy projects differ slightly between GitLab.com, GitLab Dedicated, and GitLab Self-Managed. The main difference is that on GitLab.com it's only possible to create subgroups. Ensuring separation of duties requires more granular permission configuration. 
### Enforce policies globally in your GitLab.com namespace {{< details >}} - Tier: Ultimate - Offering: GitLab.com {{< /details >}} Prerequisites: - You must have the Owner role or a [custom role](../../../custom_roles/_index.md) with the `manage_security_policy_link` permission to link to the security policy project. For more information, see [separation of duties](_index.md#separation-of-duties). The high-level workflow for enforcing policies globally across all subgroups and projects in your GitLab.com namespace: 1. Visit the **Policies** tab from your top-level group. 1. In the subgroup, go to the **Policies** tab and create a test policy. You can create a policy as disabled for testing. Creating the policy automatically creates a new security policy project under your top-level group. This project is used to store your `policy.yml` or policy-as-code. 1. Check and set permissions in the newly created project as desired. By default, Owners and Maintainers are able to create, edit, and delete policies. Developers can propose policy changes but cannot merge them. 1. In the security policy project created within your subgroup, create the policies required. You can use the policy editor in the `Security Policy Management` project you created, under the **Policies** tab. Or you can directly update the policies in the `policy.yml` file stored in the newly-created security policy project `Security Policy Management - security policy project`. 1. Link up groups, subgroups, or projects to the security policy project. As a subgroup owner, or project owner with proper permissions, you can visit the **Policies** page and create a link to the security policy project. Include the full path. The project's name should end with "- security policy project". All linked groups, subgroups, and projects become "enforceable" by any policies created in the security policy project. For details, see [Link to a security policy project](#link-to-a-security-policy-project). 1. 
By default, when a policy is enabled, it is enforced on all projects in linked groups, subgroups, and projects. For more granular enforcement, add a policy scope. A policy scope allows you to enforce policies against a specific set of projects or against projects containing a set of compliance framework labels. 1. If you need additional restrictions, for example to block inherited permissions or require additional review or approval of policy changes, you can create an additional policy scoped only to your security policy project and enforce additional approvals. ### Enforce policies globally in GitLab Dedicated or GitLab Self-Managed {{< details >}} - Tier: Ultimate - Offering: GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< alert type="note" >}} In GitLab Self-Managed, you can also use [compliance and security policy groups](compliance_and_security_policy_groups.md) to enforce security policies across your instance. {{< /alert >}} Prerequisites: - You must have the Owner role or a [custom role](../../../custom_roles/_index.md) with the `manage_security_policy_link` permission to link to the security policy project. For more information, see [separation of duties](_index.md#separation-of-duties). - To support approval groups globally across your instance, enable `security_policy_global_group_approvers_enabled` in your [GitLab instance application settings](../../../../api/settings.md). The high-level workflow for enforcing policies across multiple groups: 1. Create a separate group to contain your policies and ensure separation of duties. By creating a separate standalone group, you can minimize the number of users who inherit permissions. 1. In the new group, visit the **Policies** tab. This serves as the primary location of the policy editor, allowing you to create and manage policies in the UI. 1. Create a test policy (you can create a policy as disabled for testing). 
Creating the policy automatically creates a new security policy project under your group. This project is used to store your `policy.yml` or policy-as-code. 1. Check and set permissions in the newly created project as desired. By default, Owners and Maintainers are able to create, edit, and delete policies. Developers can propose policy changes but cannot merge them. 1. In the security policy project created in your group, create the policies required. You can use the policy editor in the `Security Policy Management` project you created, under the **Policies** tab. Or you can directly update the policies in the `policy.yml` file stored in the newly-created security policy project `Security Policy Management - security policy project`. 1. Link up groups, subgroups, or projects to the security policy project. As a subgroup owner, or project owner with proper permissions, you can visit the **Policies** page and create a link to the security policy project. Include the full path. The project's name should end with "- security policy project". All linked groups, subgroups, and projects become "enforceable" by any policies created in the security policy project. For more information, see [link to a security policy project](#link-to-a-security-policy-project). 1. By default, when a policy is enabled, it is enforced on all projects in linked groups, subgroups, and projects. For more granular enforcement, add a policy scope. A policy scope allows you to enforce policies against a specific set of projects or against projects that contain a set of compliance framework labels. 1. If you need additional restrictions, for example to block inherited permissions or require additional review or approval of policy changes, you can create an additional policy scoped only to your security policy project and enforce additional approvals. 
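The test policy created by the workflow above is stored as policy-as-code in the security policy project's `.gitlab/security-policies/policy.yml`. A minimal sketch of such a file, assuming a scan execution policy that runs secret detection on default branches (the policy name and rule values are illustrative, not taken from this page):

```yaml
# .gitlab/security-policies/policy.yml — illustrative sketch
scan_execution_policy:
  - name: Test secret detection policy   # illustrative name
    description: Run secret detection on default branches.
    enabled: false                       # created as disabled for testing
    rules:
      - type: pipeline
        branch_type: default
    actions:
      - scan: secret_detection
```

Once the policy behaves as expected on the linked test projects, setting `enabled: true` turns on enforcement.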
## Link to a security policy project To enforce the policies contained in a security policy project against a group, subgroup, or project, you link them. By default, all linked entities are enforced. To enforce policies granularly per policy, you can set a policy scope in each policy. Prerequisites: - You must have the Owner role or a [custom role](../../../custom_roles/_index.md) with the `manage_security_policy_link` permission to link to the security policy project. For more information, see [separation of duties](../_index.md#separation-of-duties). To link a group, subgroup, or project to a security policy project: 1. On the left sidebar, select **Search or go to** and find your project, subgroup, or group. 1. Select **Secure > Policies**. 1. Select **Edit Policy Project**, then search for and select the project you would like to link from the dropdown list. 1. Select **Save**. To unlink a security policy project, follow the same steps but instead select the trash can icon in the dialog. You can link to a security policy project from a different subgroup in the same top-level group, or from an entirely different top-level group. However, when you enforce a [pipeline execution policy](../pipeline_execution_policies.md#schema), users must have at least read-only access to the project that contains the CI/CD configuration referenced in the policy to trigger the pipeline. ### Viewing the linked security policy project Users who have access to the project policy page but aren't project owners instead see a button linking to the associated security policy project. You can link a security policy project to more than one group or project. Anyone with permission to view the security policies in one linked group or project can determine which security policies are enforced in other linked groups and projects. 
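To enforce a policy on only some of the linked entities, each policy entry in `policy.yml` can carry a `policy_scope` section. A hedged fragment, assuming the scope keys shown here and illustrative project IDs:

```yaml
# Fragment of one policy entry in policy.yml — illustrative
policy_scope:
  projects:
    including:
      - id: 361      # illustrative project IDs
      - id: 12415
```

Scoping by compliance framework labels instead of explicit project IDs follows the same pattern with a `compliance_frameworks` key.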
## Changing policy limits {{< details >}} - Offering: GitLab Self-Managed {{< /details >}} {{< history >}} - [Configurable limits introduced](https://gitlab.com/groups/gitlab-org/-/epics/8084) in GitLab 18.3. {{< /history >}} For performance reasons, GitLab limits the number of policies that can be configured in a security policy project. {{< alert type="warning" >}} If you reduce the limit below the number of policies currently stored in a security policy project, GitLab does not enforce any policies after the limit. To re-enable the policies, increase the limit to match the number of policies in the largest security policy project. {{< /alert >}} The default limits are: | Policy type | Default policy limit | | --------------------------------- | ---------------------- | | Merge request approval policies | 5 | | Scan execution policies | 5 | | Pipeline execution policies | 5 | | Vulnerability management policies | 5 | On GitLab Self-Managed instances, instance administrators can adjust the limits for the entire instance, up to a maximum of 20 of each type of policy. Administrators can also change the limits for a specific top-level group. ### Change the policy limits for an instance To change the maximum number of policies your organization can store in a security policy project: 1. Go to **Admin Area** > **Settings** > **Security and compliance**. 1. Expand the **Security policies** section. 1. For each type of policy you want to change, set a new value for **Maximum number of {policy type} allowed per security policy configuration**. 1. Select **Save changes**. ### Change the policy limits for a top-level group Group limits can exceed the configured or default instance limits. {{< alert type="note" >}} Increasing these limits can affect system performance, especially if you apply a large number of complex policies. 
{{< /alert >}} To adjust the limit for a top-level group: 1. Go to **Admin Area** > **Overview** > **Groups**. 1. In the row of the top-level group you want to modify, select **Edit**. 1. For each type of policy you want to change, set a new value for **Maximum number of {policy type} allowed per security policy configuration**. 1. Select **Save changes**. If you set the limit for an individual group to `0`, the system uses the instance-wide default value. This ensures that groups with a zero limit can still create policies according to the default instance configuration. ## Delete a security policy project {{< history >}} - Deletion protection for security policy projects was introduced in GitLab 17.8 with a flag named `reject_security_policy_project_deletion`. Enabled by default. - Deletion protection for groups that contain security policy projects was introduced in GitLab 17.9 with a flag named `reject_security_policy_project_deletion_groups`. Enabled by default. - Deletion protection for security policy projects and groups that contain security policy projects is generally available in GitLab 17.10. Feature flags `reject_security_policy_project_deletion` and `reject_security_policy_project_deletion_groups` removed. {{< /history >}} To delete a security policy project or one of its parent groups, you must remove the link to it from all other projects or groups. Otherwise, an error message is displayed when you attempt to delete a linked security policy project or a parent group.
https://docs.gitlab.com/user/application_security/policies/compliance_and_security_policy_groups
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/policies/compliance_and_security_policy_groups.md
2025-08-13
doc/user/application_security/policies/enforcement
[ "doc", "user", "application_security", "policies", "enforcement" ]
compliance_and_security_policy_groups.md
Security Risk Management
Security Policies
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Compliance and security policy groups
Learn how to apply security policies across multiple groups and projects from a single, centralized location.
{{< details >}} - Tier: Ultimate - Offering: GitLab Self-Managed - Status: Beta {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/7622) in GitLab 18.2 [with a feature flag](../../../../administration/feature_flags/_index.md) named `security_policies_csp`. Disabled by default. {{< /history >}} {{< alert type="flag" >}} The availability of this feature is controlled by a feature flag. For more information, see the history. This feature is subject to change and may not be ready for production use. {{< /alert >}} Centralized security policy management allows instance administrators to designate a compliance and security policy group to apply security policies across multiple groups and projects from a single, centralized location. When you create or edit a security policy in the compliance and security policy group, you can scope the policy to enforce it on: - **Specific groups and subgroups**: Apply the policy only to selected groups and their subgroups. - **Specific projects**: Apply the policy to individual projects. - **All projects in the instance**: Apply the policy across your entire GitLab instance. - **All projects with exceptions**: Apply to all projects except those you specify. When you designate a compliance and security policy group to serve as your centralized policy management hub, you can: - Create and configure security policies that automatically apply across your instance. - Scope policies to specific groups, projects, or your entire instance. - View comprehensive policy coverage to understand which policies are active and where they're active. - Maintain centralized control while allowing teams to create their own additional policies. ## Prerequisites - GitLab Self-Managed. - GitLab 18.2 or later. - You must be an instance administrator. - You must have an existing top-level group to serve as the compliance and security policy group. 
- To use the REST API (optional), you must have a token with administrator access. ## Set up centralized security policy management To set up centralized security policy management, you designate a compliance and security policy group and then create policies in the group. For more information, see [instance-wide compliance and security policy management](../../../../security/compliance_security_policy_management.md). ### Enable global approval groups To support approval groups globally across your instance, you must: - Enable `security_policy_global_group_approvers_enabled` in your [GitLab instance application settings](../../../../api/settings.md). ### Create security policies in the compliance and security policy group To create the policies: 1. Go to your designated compliance and security policy group. 1. Go to **Secure** > **Policies**. 1. Create one or more security policies as you typically would. Before you save each policy: - In the **Policy scope** section, select a scope to apply the policy to: - **Groups**: Apply the policy to specific groups and subgroups. - **Projects**: Apply the policy to individual projects. - **All projects**: Apply to the entire instance. - **All projects except**: Apply to all projects with specified exceptions. 1. Save your policy configuration. ## Policy storage and configuration Policies in a compliance and security policy group are stored in a `policy.yml` file in the designated compliance and security policy group, similar to how group policies are managed. Policies created in a compliance and security policy group use the same configuration format as security policies in other groups and projects. ## Policy synchronization - Depending on the number of groups and projects in scope, policy changes may take some time to apply across your instance. - The synchronization process uses background jobs that are automatically queued when you designate a compliance and security policy group, create policies, or update policies. 
- Instance administrators can monitor background job processing in **Admin Area** > **Monitoring** > **Background jobs**. - To verify that policies are successfully applied in a target group or project, go to **Secure** > **Policies** in the group or project. ### Managing performance To prevent performance issues, plan your policy management strategy to minimize the number of modifications to your configuration: - Plan changes carefully: Avoid making multiple compliance and security policy group changes in quick succession. - Schedule changes during maintenance windows: Make changes during low-usage periods to minimize the impact on users. - Monitor system performance: Be prepared for potential performance degradation during synchronization. - Allow extra time: The synchronization process completion time depends on your instance size. ## Troubleshooting **Policy does not appear in the target group or project** - Verify that the policy scope includes the target group or project. - Verify that the compliance and security policy group is properly designated in the admin settings. - Verify that the policy is enabled in the compliance and security policy group. - Policy changes may take time to be applied. See [policy synchronization](#policy-synchronization) for more information. **Performance concerns** - Monitor policy propagation times, especially with large scope configurations. - Consider scoping policies to specific groups or projects instead of applying the policies to all projects. - To reduce performance impacts when modifying compliance security policy groups, see [managing performance](#managing-performance). ## Feedback and support As this is a Beta release, we actively seek feedback from users. Share your experience, suggestions, and any issues through: - [GitLab Issues](https://gitlab.com/gitlab-org/gitlab/-/issues). - Your regular GitLab support channels.
--- stage: Security Risk Management group: Security Policies info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments description: Learn how to apply security policies across multiple groups and projects from a single, centralized location. title: Compliance and security policy groups breadcrumbs: - doc - user - application_security - policies - enforcement --- {{< details >}} - Tier: Ultimate - Offering: GitLab Self-Managed - Status: Beta {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/7622) in GitLab 18.2 [with a feature flag](../../../../administration/feature_flags/_index.md) named `security_policies_csp`. Disabled by default. {{< /history >}} {{< alert type="flag" >}} The availability of this feature is controlled by a feature flag. For more information, see the history. This feature is subject to change and may not be ready for production use. {{< /alert >}} Centralized security policy management allows instance administrators to designate a compliance and security policy group to apply security policies across multiple groups and projects from a single, centralized location. When you create or edit a security policy in the compliance and security policy group, you can scope the policy to enforce it on: - **Specific groups and subgroups**: Apply the policy only to selected groups and their subgroups. - **Specific projects**: Apply the policy to individual projects. - **All projects in the instance**: Apply the policy across your entire GitLab instance. - **All projects with exceptions**: Apply to all projects except those you specify. When you designate a compliance and security policy group to serve as your centralized policy management hub, you can: - Create and configure security policies that automatically apply across your instance. 
- Scope policies to specific groups, projects, or your entire instance. - View comprehensive policy coverage to understand which policies are active and where they're active. - Maintain centralized control while allowing teams to create their own additional policies. ## Prerequisites - GitLab Self-Managed. - GitLab 18.2 or later. - You must be an instance administrator. - You must have an existing top-level group to serve as the compliance and security policy group. - To use the REST API (optional), you must have a token with administrator access. ## Set up centralized security policy management To set up centralized security policy management, you designate a compliance and security policy group and then create policies in the group. For more information, see [instance-wide compliance and security policy management](../../../../security/compliance_security_policy_management.md). ### Enable global approval groups To support approval groups globally across your instance, you must: - Enable `security_policy_global_group_approvers_enabled` in your [GitLab instance application settings](../../../../api/settings.md). ### Create security policies in the compliance and security policy group To create the policies: 1. Go to your designated compliance and security policy group. 1. Go to **Secure** > **Policies**. 1. Create one or more security policies as you typically would. Before you save each policy: - In the **Policy scope** section, select a scope to apply the policy to: - **Groups**: Apply the policy to specific groups and subgroups. - **Projects**: Apply the policy to individual projects. - **All projects**: Apply to the entire instance. - **All projects except**: Apply to all projects with specified exceptions. 1. Save your policy configuration. ## Policy storage and configuration Policies in a compliance and security policy group are stored in a `policy.yml` file in the designated compliance and security policy group, similar to how group policies are managed. 
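Because the file uses the same policy-as-code format, an entry scoped to all projects with exceptions might look like the following hedged sketch (the `policy_scope` keys, policy name, and project ID are illustrative assumptions, not taken from this page):

```yaml
# Illustrative entry in the compliance and security policy group's policy.yml
scan_execution_policy:
  - name: Instance-wide secret detection   # illustrative name
    enabled: true
    policy_scope:
      projects:
        excluding:
          - id: 42                         # illustrative exception
    rules:
      - type: pipeline
        branch_type: default
    actions:
      - scan: secret_detection
```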
Policies created in a compliance and security policy group use the same configuration format as security policies in other groups and projects. ## Policy synchronization - Depending on the number of groups and projects in scope, policy changes may take some time to apply across your instance. - The synchronization process uses background jobs that are automatically queued when you designate a compliance and security policy group, create policies, or update policies. - Instance administrators can monitor background job processing in **Admin Area** > **Monitoring** > **Background jobs**. - To verify that policies are successfully applied in a target group or project, go to **Secure** > **Policies** in the group or project. ### Managing performance To prevent performance issues, plan your policy management strategy to minimize the number of modifications to your configuration: - Plan changes carefully: Avoid making multiple compliance and security policy group changes in quick succession. - Schedule changes during maintenance windows: Make changes during low-usage periods to minimize the impact on users. - Monitor system performance: Be prepared for potential performance degradation during synchronization. - Allow extra time: The synchronization process completion time depends on your instance size. ## Troubleshooting **Policy does not appear in the target group or project** - Verify that the policy scope includes the target group or project. - Verify that the compliance and security policy group is properly designated in the admin settings. - Verify that the policy is enabled in the compliance and security policy group. - Policy changes may take time to be applied. See [policy synchronization](#policy-synchronization) for more information. **Performance concerns** - Monitor policy propagation times, especially with large scope configurations. - Consider scoping policies to specific groups or projects instead of applying the policies to all projects. 
- To reduce performance impacts when modifying compliance security policy groups, see [managing performance](#managing-performance). ## Feedback and support As this is a Beta release, we actively seek feedback from users. Share your experience, suggestions, and any issues through: - [GitLab Issues](https://gitlab.com/gitlab-org/gitlab/-/issues). - Your regular GitLab support channels.
https://docs.gitlab.com/user/application_security/configuration
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/_index.md
2025-08-13
doc/user/application_security/configuration
[ "doc", "user", "application_security", "configuration" ]
_index.md
null
null
null
null
null
<!-- markdownlint-disable --> This document was moved to [another location](../detect/security_configuration.md). <!-- This redirect file can be deleted after <2025-08-13>. --> <!-- Redirects that point to other docs in the same project expire in three months. --> <!-- Redirects that point to docs in a different project or site (link is not relative and starts with `https:`) expire in one year. --> <!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html -->
--- redirect_to: ../detect/security_configuration.md remove_date: '2025-08-13' breadcrumbs: - doc - user - application_security - configuration --- <!-- markdownlint-disable --> This document was moved to [another location](../detect/security_configuration.md). <!-- This redirect file can be deleted after <2025-08-13>. --> <!-- Redirects that point to other docs in the same project expire in three months. --> <!-- Redirects that point to docs in a different project or site (link is not relative and starts with `https:`) expire in one year. --> <!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html -->
https://docs.gitlab.com/user/application_security/security_report_validation
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/security_report_validation.md
2025-08-13
doc/user/application_security/detect
[ "doc", "user", "application_security", "detect" ]
security_report_validation.md
Application Security Testing
Static Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Security report validation
null
{{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Security reports are validated before their content is added to the database. This prevents ingestion of broken vulnerability data into the database. Reports that fail validation are listed in the pipeline's **Security** tab with the validation error message. Validation is done against the [report schemas](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/tree/master/dist), according to the schema version declared in the report: - If the security report specifies a supported schema version, GitLab uses this version to validate. - If the security report uses a deprecated version, GitLab attempts validation against that version and adds a deprecation warning to the validation result. - If the security report uses a supported MAJOR-MINOR version of the report schema but the PATCH version doesn't match any vendored versions, GitLab attempts to validate it against the latest vendored PATCH version of the schema. - Example: security report uses version 14.1.1 but the latest vendored version is 14.1.0. GitLab would validate against schema version 14.1.0. - If the security report uses a version that is not supported, GitLab attempts to validate it against the earliest schema version available in your installation but doesn't ingest the report. - If the security report does not specify a schema version, GitLab attempts to validate it against the earliest schema version available in GitLab. Because the `version` property is required, validation always fails in this case, but other validation errors may also be present. For details of the supported and deprecated schema versions, view the [schema validator source code](https://gitlab.com/gitlab-org/ruby/gems/gitlab-security_report_schemas/-/blob/main/supported_versions).
--- stage: Application Security Testing group: Static Analysis info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Security report validation breadcrumbs: - doc - user - application_security - detect --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Security reports are validated before their content is added to the database. This prevents ingestion of broken vulnerability data into the database. Reports that fail validation are listed in the pipeline's **Security** tab with the validation error message. Validation is done against the [report schemas](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/tree/master/dist), according to the schema version declared in the report: - If the security report specifies a supported schema version, GitLab uses this version to validate. - If the security report uses a deprecated version, GitLab attempts validation against that version and adds a deprecation warning to the validation result. - If the security report uses a supported MAJOR-MINOR version of the report schema but the PATCH version doesn't match any vendored versions, GitLab attempts to validate it against the latest vendored PATCH version of the schema. - Example: security report uses version 14.1.1 but the latest vendored version is 14.1.0. GitLab would validate against schema version 14.1.0. - If the security report uses a version that is not supported, GitLab attempts to validate it against the earliest schema version available in your installation but doesn't ingest the report. - If the security report does not specify a schema version, GitLab attempts to validate it against the earliest schema version available in GitLab. 
Because the `version` property is required, validation always fails in this case, but other validation errors may also be present. For details of the supported and deprecated schema versions, view the [schema validator source code](https://gitlab.com/gitlab-org/ruby/gems/gitlab-security_report_schemas/-/blob/main/supported_versions).
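The version-selection rules above amount to a small decision procedure. The following Python sketch models it under stated assumptions — the version sets and function names are invented for illustration and are not GitLab's actual implementation:

```python
# Illustrative model of the schema-version selection rules described above.
# The version sets below are hypothetical; GitLab's real lists live in the
# gitlab-security_report_schemas gem.
SUPPORTED = {"14.1.0", "15.0.0", "15.0.4"}
DEPRECATED = {"14.0.0"}

def parse(version):
    return tuple(int(part) for part in version.split("."))

def select_validation_version(report_version):
    """Return (schema_version_to_validate_against, warnings)."""
    earliest = min(SUPPORTED, key=parse)
    if report_version is None:
        # `version` is required, so validation always fails here, but the
        # earliest schema is still used to surface other validation errors.
        return earliest, ["missing required `version` property"]
    if report_version in SUPPORTED:
        return report_version, []
    if report_version in DEPRECATED:
        return report_version, [f"schema version {report_version} is deprecated"]
    # Supported MAJOR-MINOR but unknown PATCH: use the latest vendored PATCH.
    major_minor = report_version.rsplit(".", 1)[0]
    patches = [v for v in SUPPORTED if v.rsplit(".", 1)[0] == major_minor]
    if patches:
        return max(patches, key=parse), []
    # Unsupported version: validate against the earliest schema; don't ingest.
    return earliest, [f"version {report_version} is not supported"]
```

In this model, a report declaring `14.1.1` is validated against `14.1.0`, matching the documented PATCH-fallback behavior.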
https://docs.gitlab.com/user/application_security/vulnerability_scanner_maintenance
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/vulnerability_scanner_maintenance.md
2025-08-13
doc/user/application_security/detect
[ "doc", "user", "application_security", "detect" ]
vulnerability_scanner_maintenance.md
Application Security Testing
Static Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Vulnerability scanner maintenance
null
{{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} The following vulnerability scanners and their databases are regularly updated: | Secure scanning tool | Vulnerabilities database updates | |:-------------------------------------------------------------------------|:---------------------------------| | [Container Scanning](../container_scanning/_index.md) | A job runs on a daily basis to build new images with the latest vulnerability database updates from the upstream scanner. GitLab monitors this job through an internal alert that tells the engineering team when the database becomes more than 48 hours old. For more information, see the [Vulnerabilities database update](../container_scanning/_index.md#vulnerabilities-database). | | [Dependency Scanning - Gemnasium](../dependency_scanning/_index.md) | Relies on the [GitLab Advisory Database](../gitlab_advisory_database/_index.md) which is updated on a daily basis using data from the National Vulnerability Database (NVD) and the GitHub Advisory Database. | | [Dynamic Application Security Testing (DAST)](../dast/_index.md) | [DAST](../dast/browser/_index.md) analyzer is updated on a periodic basis. | | [Secret Detection](../secret_detection/pipeline/_index.md#detected-secrets) | GitLab maintains the [detection rules](../secret_detection/pipeline/_index.md#detected-secrets) and [accepts community contributions](../secret_detection/pipeline/configure.md#add-new-patterns). The scanning engine is updated at least once per month if a relevant update is available. | | [Static Application Security Testing (SAST)](../sast/_index.md) | The source of scan rules depends on which [analyzer](../sast/analyzers.md) is used for each [supported programming language](../sast/_index.md#supported-languages-and-frameworks). GitLab maintains a ruleset for the Semgrep-based analyzer and updates it regularly based on internal research and user feedback. 
For other analyzers, the ruleset is sourced from the upstream open-source scanner. Each analyzer is updated at least once per month if a relevant update is available. | In versions of GitLab that use the same major version of the analyzer, you do not have to update them to benefit from the latest vulnerability definitions. The security tools are released as Docker images. The vendored job definitions that enable them use major release tags according to [semantic versioning](https://semver.org/). Each new release of the tools overrides these tags. Although you automatically get the latest versions of the scanning tools within a major analyzer version, there are some [known issues](https://gitlab.com/gitlab-org/gitlab/-/issues/9725) with this approach. {{< alert type="note" >}} To get the most up-to-date information on existing vulnerabilities, you may need to re-run the default branch's pipeline. {{< /alert >}}
--- stage: Application Security Testing group: Static Analysis info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Vulnerability scanner maintenance breadcrumbs: - doc - user - application_security - detect --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} The following vulnerability scanners and their databases are regularly updated: | Secure scanning tool | Vulnerabilities database updates | |:-------------------------------------------------------------------------|:---------------------------------| | [Container Scanning](../container_scanning/_index.md) | A job runs on a daily basis to build new images with the latest vulnerability database updates from the upstream scanner. GitLab monitors this job through an internal alert that tells the engineering team when the database becomes more than 48 hours old. For more information, see the [Vulnerabilities database update](../container_scanning/_index.md#vulnerabilities-database). | | [Dependency Scanning - Gemnasium](../dependency_scanning/_index.md) | Relies on the [GitLab Advisory Database](../gitlab_advisory_database/_index.md) which is updated on a daily basis using data from the National Vulnerability Database (NVD) and the GitHub Advisory Database. | | [Dynamic Application Security Testing (DAST)](../dast/_index.md) | [DAST](../dast/browser/_index.md) analyzer is updated on a periodic basis. | | [Secret Detection](../secret_detection/pipeline/_index.md#detected-secrets) | GitLab maintains the [detection rules](../secret_detection/pipeline/_index.md#detected-secrets) and [accepts community contributions](../secret_detection/pipeline/configure.md#add-new-patterns). The scanning engine is updated at least once per month if a relevant update is available. 
| | [Static Application Security Testing (SAST)](../sast/_index.md) | The source of scan rules depends on which [analyzer](../sast/analyzers.md) is used for each [supported programming language](../sast/_index.md#supported-languages-and-frameworks). GitLab maintains a ruleset for the Semgrep-based analyzer and updates it regularly based on internal research and user feedback. For other analyzers, the ruleset is sourced from the upstream open-source scanner. Each analyzer is updated at least once per month if a relevant update is available. | In versions of GitLab that use the same major version of the analyzer, you do not have to update them to benefit from the latest vulnerability definitions. The security tools are released as Docker images. The vendored job definitions that enable them use major release tags according to [semantic versioning](https://semver.org/). Each new release of the tools overrides these tags. Although you automatically get the latest versions of the scanning tools within a major analyzer version, there are some [known issues](https://gitlab.com/gitlab-org/gitlab/-/issues/9725) with this approach. {{< alert type="note" >}} To get the most up-to-date information on existing vulnerabilities, you may need to re-run the default branch's pipeline. {{< /alert >}}
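The floating major-tag behavior described above — a tag such as `4` always pointing at the newest `4.x.y` release because each new release re-points it — can be sketched in a few lines of Python. The published version list is invented for illustration:

```python
# Sketch of floating major-release tags under semantic versioning: pulling a
# major tag resolves to the newest published release with that major version.
# The version tuples below are made up for illustration.
def resolve_major_tag(major, published):
    candidates = [v for v in published if v[0] == major]
    return max(candidates) if candidates else None

PUBLISHED = [(4, 0, 0), (4, 1, 2), (4, 2, 0), (5, 0, 1)]
```

For example, in this model a job pinned to major tag `4` picks up `4.2.0` automatically, without any change to the vendored job definition.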
https://docs.gitlab.com/user/application_security/roll_out_security_scanning
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/roll_out_security_scanning.md
2025-08-13
doc/user/application_security/detect
[ "doc", "user", "application_security", "detect" ]
roll_out_security_scanning.md
Secure
Static Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Roll out application security testing
null
Plan your application security testing implementation in phases to ensure a smooth transition to a more secure development practice. This guide helps you implement GitLab application security testing across your organization in phases. By starting with a pilot group and gradually expanding coverage, you can minimize disruption while maximizing security benefits. The phased approach allows your team to become familiar with application security testing tools and workflows before scaling to all projects. Prerequisites: - GitLab Ultimate. - Familiarity with GitLab CI/CD pipelines. The following GitLab self-paced courses provide a good introduction: - [Introduction to CI/CD](https://university.gitlab.com/courses/introduction-to-cicd-s2) - [Hands-on Labs: CI Fundamentals](https://university.gitlab.com/courses/hands-on-labs-ci-fundamentals) - Understanding of your organization's security requirements and risk tolerance. ## Scope This guide covers how to plan and execute a phased implementation of GitLab application security testing features, including configuration, vulnerability management, and prevention strategies. It assumes you want to gradually introduce application security testing to minimize disruption to existing workflows while securing your codebase. ## Phases The implementation consists of two main phases: 1. **Pilot phase**: Implement application security testing for a limited set of projects to validate configurations and train teams. 1. **Rollout phase**: Expand application security testing to all target projects using the knowledge gained during the pilot. ## Pilot phase The pilot phase allows you to apply application security testing with minimal risk before a wider rollout. Consider the following guidance before starting on the pilot phase: - Identify key stakeholders including security team members, developers, and project managers. - Select pilot projects that are representative of your codebase but not critical to daily operations. 
- Schedule training sessions for developers and security team members. - Document current security practices to measure improvements. ### Pilot goals The pilot phase helps you achieve several key objectives: - Implement application security testing without slowing development During the pilot, application security testing results are available to developers in the UI, without blocking merge requests. This approach minimizes risk to projects outside the pilot's scope while collecting valuable data on your current security posture. In the rollout phase you should use a [merge request approval policy](#merge-request-approval-policy) to add an additional approval gate when vulnerabilities are detected in merge requests. - Establish scalable detection methods Implement application security testing on pilot projects in a way that can be expanded to include all projects in the wider rollout scope. Focus on configurations that scale well and can be standardized across projects. - Test scan times Test scan times on representative codebases and applications. - Simulate the vulnerability remediation workflow Simulate detecting, triaging, analyzing, and remediating vulnerabilities in the developer workflows. Verify that engineers can act on findings. - Compare maintenance costs Compare the maintenance of a single solution versus integrating multiple endpoint solutions. How well does this integrate into the IDE, merge request, and pipeline? #### Benefits for developers Developers in the pilot group will gain: - Familiarity with application security testing methods and how to interpret results. - Experience preventing vulnerabilities from being merged into the default branch. - Understanding of the vulnerability management workflow that begins when a vulnerability is detected in the default branch. #### Benefits for security management Security team members participating in the pilot will gain: - Experience with vulnerability tracking and management in GitLab. 
- Data to establish security baselines and set realistic remediation goals. - Insights to refine the security policy before wider rollout. ### Pilot plan Proper planning ensures an effective pilot phase. #### Roles and responsibilities Define who is responsible for: - Configuring application security testing - Reviewing scan results - Triaging vulnerabilities - Managing remediation - Training team members - Measuring the pilot's success ### Pilot scope Carefully select which projects to include in the pilot phase. Consider these factors when selecting pilot projects: - Include projects with different technology stacks to test application security testing effectiveness. - Choose projects with active development to see real-time results. - Select projects with teams open to learning new security practices. - Avoid starting with mission-critical applications. ### Application security testing order Introduce application security testing tools in the following order. This order balances value and ease of deployment. - Dependency scanning - SAST - Advanced SAST - Pipeline secret detection - Secret push protection - Container scanning - DAST - API security testing - IaC scanning - Operational container scanning ## Test pilot projects With planning complete, begin implementing application security testing of your pilot projects. ### Set up testing of pilot projects Prerequisites: - You must have the Maintainer role for the projects in which application security testing is to be enabled. For each project in scope: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Secure > Security configuration**. 1. Expand **Security configuration**. 1. Enable the appropriate application security testing based on your project's stack. For more details, see [Security configuration](../configuration/_index.md). ### For developers Introduce developers to the tools that provide visibility into security findings.
#### Pipeline results Developers can view security findings directly in pipeline results: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Build > Pipelines**. 1. Select the pipeline to review. 1. In the pipeline details, select the **Security** tab to view detected vulnerabilities. For more details, see [View security scan results in pipelines](../vulnerability_report/pipeline.md). #### Merge request security widget The security widget provides visibility into vulnerabilities detected in merge request pipelines: 1. Open a merge request. 1. Review the security widget to see detected vulnerabilities. 1. Select **Expand** to see detailed findings. For more details, see [View security scan results in merge requests](security_scan_results.md). #### VS Code integration with GitLab Workflow extension Developers can view security findings directly in their IDE: 1. Install the GitLab Workflow extension for VS Code. 1. Connect the extension to your GitLab instance. 1. Use the extension to view security findings without leaving your development environment. For more details, see [GitLab Workflow extension for VS Code](../../../editor_extensions/visual_studio_code/_index.md). ## Vulnerability management workflow Establish a structured workflow for handling detected vulnerabilities. The vulnerability management workflow consists of four key stages: 1. **Detect**: Find vulnerabilities through automated application security testing in pipelines. 1. **Triage**: Assess the severity and impact of detected vulnerabilities. 1. **Analyze**: Investigate the root cause and determine the best approach for remediation. 1. **Remediate**: Implement fixes to resolve the vulnerabilities. ### Efficient triage GitLab provides several features to streamline vulnerability triage: - Vulnerability filters to focus on high-impact issues first. - Severity and confidence ratings to prioritize efforts. 
- Vulnerability tracking to maintain visibility of outstanding issues. - Risk assessment data. For more details, see [Triage](../triage/_index.md). Triage should include regular reviews of the vulnerability report with security stakeholders. ### Efficient remediation Streamline the remediation process with these GitLab features: - Automated remediation suggestions for certain vulnerability types. - Merge request creation directly from vulnerability details. - Vulnerability history tracking to monitor progress. - Automatic resolution of vulnerabilities that are no longer detected. For more details, see [Remediate](../remediate/_index.md). #### Integrate with ticketing systems You can use a GitLab issue to track the remediation work required for a vulnerability. Alternatively, you can use a Jira issue if that is your primary ticketing system. For more details, see [Linking a vulnerability to GitLab and Jira issues](../vulnerabilities/_index.md#linking-a-vulnerability-to-gitlab-and-jira-issues). ## Vulnerability prevention Implement features to prevent vulnerabilities from being introduced in the first place. ### Merge request approval policy Use a merge request approval policy to add an extra approval requirement if the number and severity of vulnerabilities in a merge request exceed a specific threshold. This allows an additional review from a member of the application security team, providing an extra level of scrutiny. Configure approval policies to require security reviews: 1. On the left sidebar, select **Search or go to** and find your group. 1. Select **Secure > Policies**. 1. Select **New policy**. 1. In the **Merge request approval policy** pane, select **Select policy**. 1. Add a merge request approval policy requiring approval from security team members. For more details, see [Security approvals in merge requests](../policies/merge_request_approval_policies.md). ## Rollout phase After a successful pilot, expand application security testing to all target projects.
Before starting on the rollout phase consider the following: - Evaluate the results of the pilot phase. - Document lessons learned and best practices. - Prepare training materials based on pilot experiences. - Update implementation plans based on pilot feedback. ### Define access to team members Application security testing tasks require specific roles or permissions. For each person taking part in the rollout phases, define their access according to the tasks they'll be performing. - Users with the Developer role can view vulnerabilities on their projects and merge requests. - Users with the Maintainer role can configure security configurations for projects. - Users assigned a Custom Role with `admin_vulnerability` permission can manage and triage vulnerabilities. - Users assigned a Custom Role with `manage_security_policy_link` permission can enforce policies on groups and projects. For more details, see [Roles and permissions](../../permissions.md#application-security-group-permissions). ### Rollout goals The rollout phase aims to implement application security testing across all projects in scope, using the knowledge and experience gained during the pilot. ### Rollout plan Review and update roles and responsibilities established during the pilot. The same team structure should work for the rollout, but you may need to add more team members as the scope expands. ## Implement application security testing at scale Use policy features to efficiently scale your security implementation. ### Use policy inheritance Use policy inheritance to maximize effectiveness while also minimizing the number of policies to be managed. Consider the scenario in which you have a top-level group named Finance which contains subgroups A, B, and C. You want to run dependency scanning and secret detection on all projects in the Finance group. For each subgroup you want to run different sets of application security testing tools. 
To achieve this goal, you could define three policies for the Finance group: - Policy 1: - Includes dependency scanning and secret detection. - Applies to the Finance group, all its subgroups, and their projects. - Policy 2: - Includes DAST and API security testing. - Scoped to only subgroups A and B. - Policy 3: - Includes SAST. - Scoped to only subgroup C. Only a single set of policies needs to be maintained, yet it still provides the flexibility to suit the needs of different projects. For more details, see [Enforcement](../policies/enforcement/_index.md#enforcement). ### Configure scan execution policies Implement consistent application security testing across multiple projects by using scan execution policies. Prerequisites: - You must have the Owner role, or a custom role with `manage_security_policy_link` permission, for the groups in which application security testing is to be enabled. 1. On the left sidebar, select **Search or go to** and find your project or group. 1. Select **Secure > Policies**. 1. Create scan execution policies based on the application security testing configuration used during the pilot phase. For more details, see [Security policies](../policies/_index.md). ### Scale gradually Scale the rollout gradually: first to the pilot projects, then incrementally to all target projects. When applying policies to all groups and projects, make all project stakeholders aware, because policies can change pipeline and merge request workflows. For example, notify stakeholders before a new policy takes effect. Implement your security policies in phases: 1. Start by applying policies to the projects from the pilot phase. 1. Monitor for any issues or disruptions. 1. Gradually expand the policies' scope to include more projects. 1. Continue until all target projects are covered. For more details, see the [policy design guidelines](../policies/enforcement/_index.md#policy-design-guidelines).
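The merge request approval gate described earlier — require an extra approval when the findings in a merge request exceed a threshold — can be sketched as a simple predicate. The severity ordering, function name, and default values are assumptions for illustration, not the actual policy schema:

```python
# Sketch of the merge request approval gate described above: require an extra
# security approval when a merge request introduces more findings at or above
# a severity floor than the policy allows.
# Severity names, ranks, and defaults are illustrative assumptions.
SEVERITY_RANK = {"info": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def requires_security_approval(finding_severities, min_severity="high", allowed=0):
    floor = SEVERITY_RANK[min_severity]
    over_floor = [s for s in finding_severities if SEVERITY_RANK[s] >= floor]
    return len(over_floor) > allowed
```

With these defaults, a single new high or critical finding triggers the extra approval requirement, while low and medium findings do not.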
--- stage: Secure group: Static Analysis info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Roll out application security testing breadcrumbs: - doc - user - application_security - detect --- Plan your application security testing implementation in phases to ensure a smooth transition to a more secure development practice. This guide helps you implement GitLab application security testing across your organization in phases. By starting with a pilot group and gradually expanding coverage, you can minimize disruption while maximizing security benefits. The phased approach allows your team to become familiar with application security testing tools and workflows before scaling to all projects. Prerequisites: - GitLab Ultimate. - Familiarity with GitLab CI/CD pipelines. The following GitLab self-paced courses provide a good introduction: - [Introduction to CI/CD](https://university.gitlab.com/courses/introduction-to-cicd-s2) - [Hands-on Labs: CI Fundamentals](https://university.gitlab.com/courses/hands-on-labs-ci-fundamentals) - Understanding of your organization's security requirements and risk tolerance. ## Scope This guide covers how to plan and execute a phased implementation of GitLab application security testing features, including configuration, vulnerability management, and prevention strategies. It assumes you want to gradually introduce application security testing to minimize disruption to existing workflows while securing your codebase. ## Phases The implementation consists of two main phases: 1. **Pilot phase**: Implement application security testing for a limited set of projects to validate configurations and train teams. 1. **Rollout phase**: Expand application security testing to all target projects using the knowledge gained during the pilot. 
## Pilot phase The pilot phase allows you to apply application security testing with minimal risk before a wider rollout. Consider the following guidance before starting on the pilot phase: - Identify key stakeholders including security team members, developers, and project managers. - Select pilot projects that are representative of your codebase but not critical to daily operations. - Schedule training sessions for developers and security team members. - Document current security practices to measure improvements. ### Pilot goals The pilot phase helps you achieve several key objectives: - Implement application security testing without slowing development During the pilot, application security testing results are available to developers in the UI, without blocking merge requests. This approach minimizes risk to projects outside the pilot's scope while collecting valuable data on your current security posture. In the rollout phase you should use a [merge request approval policy](#merge-request-approval-policy) to add an additional approval gate when vulnerabilities are detected in merge requests. - Establish scalable detection methods Implement application security testing on pilot projects in a way that can be expanded to include all projects in the wider rollout scope. Focus on configurations that scale well and can be standardized across projects. - Test scan times Test scan times on representative codebases and applications. - Simulate the vulnerability remediation workflow Simulate detecting, triaging, analyzing, and remediating vulnerabilities in the developer workflows. Verify that engineers can act on findings. - Compare maintenance costs Compare the maintenance of a single solution versus integrating multiple endpoint solutions. How well does this integrate into the IDE, merge request, and pipeline? #### Benefits for developers Developers in the pilot group will gain: - Familiarity with application security testing methods and how to interpret results. 
- Experience preventing vulnerabilities from being merged into the default branch. - Understanding of the vulnerability management workflow that begins when a vulnerability is detected in the default branch. #### Benefits for security management Security team members participating in the pilot will gain: - Experience with vulnerability tracking and management in GitLab. - Data to establish security baselines and set realistic remediation goals. - Insights to refine the security policy before wider rollout. ### Pilot plan Proper planning ensures an effective pilot phase. #### Roles and responsibilities Define who is responsible for: - Configuring application security testing - Reviewing scan results - Triaging vulnerabilities - Managing remediation - Training team members - Measuring the pilot's success ### Pilot scope Carefully select which projects to include in the pilot phase. Consider these factors when selecting pilot projects: - Include projects with different technology stacks to test application security testing effectiveness. - Choose projects with active development to see real-time results. - Select projects with teams open to learning new security practices. - Avoid starting with mission-critical applications. ### Application security testing order Introduce application security testing tools in the following order. This order balances value and ease of deployment. - Dependency scanning - SAST - Advanced SAST - Pipeline secret detection - Secret push protection - Container scanning - DAST - API security testing - IaC scanning - Operational container scanning ## Test pilot projects With planning complete, begin implementing application security testing of your pilot projects. ### Set up testing of pilot projects Prerequisites: - You must have the Maintainer role for the projects in which application security testing is to be enabled. For each project in scope: 1. On the left sidebar, select **Search or go to** and find your project. 1.
Select **Secure > Security configuration**. 1. Expand **Security configuration**. 1. Enable the appropriate application security testing based on your project's stack. For more details, see [Security configuration](../configuration/_index.md). ### For developers Introduce developers to the tools that provide visibility into security findings. #### Pipeline results Developers can view security findings directly in pipeline results: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Build > Pipelines**. 1. Select the pipeline to review. 1. In the pipeline details, select the **Security** tab to view detected vulnerabilities. For more details, see [View security scan results in pipelines](../vulnerability_report/pipeline.md). #### Merge request security widget The security widget provides visibility into vulnerabilities detected in merge request pipelines: 1. Open a merge request. 1. Review the security widget to see detected vulnerabilities. 1. Select **Expand** to see detailed findings. For more details, see [View security scan results in merge requests](security_scan_results.md). #### VS Code integration with GitLab Workflow extension Developers can view security findings directly in their IDE: 1. Install the GitLab Workflow extension for VS Code. 1. Connect the extension to your GitLab instance. 1. Use the extension to view security findings without leaving your development environment. For more details, see [GitLab Workflow extension for VS Code](../../../editor_extensions/visual_studio_code/_index.md). ## Vulnerability management workflow Establish a structured workflow for handling detected vulnerabilities. The vulnerability management workflow consists of four key stages: 1. **Detect**: Find vulnerabilities through automated application security testing in pipelines. 1. **Triage**: Assess the severity and impact of detected vulnerabilities. 1. 
**Analyze**: Investigate the root cause and determine the best approach for remediation. 1. **Remediate**: Implement fixes to resolve the vulnerabilities. ### Efficient triage GitLab provides several features to streamline vulnerability triage: - Vulnerability filters to focus on high-impact issues first. - Severity and confidence ratings to prioritize efforts. - Vulnerability tracking to maintain visibility of outstanding issues. - Risk assessment data. For more details, see [Triage](../triage/_index.md). Triage should include regular reviews of the vulnerability report with security stakeholders. ### Efficient remediation Streamline the remediation process with these GitLab features: - Automated remediation suggestions for certain vulnerability types. - Merge request creation directly from vulnerability details. - Vulnerability history tracking to monitor progress. - Automatic resolution of vulnerabilities that are no longer detected. For more details, see [Remediate](../remediate/_index.md). #### Integrate with ticketing systems You can use a GitLab issue to track the remediation work required for a vulnerability. Alternatively, you can use a Jira issue if that is your primary ticketing system. For more details, see [Linking a vulnerability to GitLab and Jira issues](../vulnerabilities/_index.md#linking-a-vulnerability-to-gitlab-and-jira-issues). ## Vulnerability prevention Implement features to prevent vulnerabilities from being introduced in the first place. ### Merge request approval policy Use a merge request approval policy to add an extra approval requirement if the number and severity of vulnerabilities in a merge request exceed a specific threshold. This allows an additional review from a member of the application security team, providing an extra level of scrutiny. Configure approval policies to require security reviews: 1. On the left sidebar, select **Search or go to** and find your group. 1. Select **Secure > Policies**. 1. Select **New policy**. 1.
In the **Merge request approval policy** pane, select **Select policy**. 1. Add a merge request approval policy requiring approval from security team members. For more details, see [Security approvals in merge requests](../policies/merge_request_approval_policies.md). ## Rollout phase After a successful pilot, expand application security testing to all target projects. Before starting on the rollout phase consider the following: - Evaluate the results of the pilot phase. - Document lessons learned and best practices. - Prepare training materials based on pilot experiences. - Update implementation plans based on pilot feedback. ### Define access to team members Application security testing tasks require specific roles or permissions. For each person taking part in the rollout phases, define their access according to the tasks they'll be performing. - Users with the Developer role can view vulnerabilities on their projects and merge requests. - Users with the Maintainer role can configure security configurations for projects. - Users assigned a Custom Role with `admin_vulnerability` permission can manage and triage vulnerabilities. - Users assigned a Custom Role with `manage_security_policy_link` permission can enforce policies on groups and projects. For more details, see [Roles and permissions](../../permissions.md#application-security-group-permissions). ### Rollout goals The rollout phase aims to implement application security testing across all projects in scope, using the knowledge and experience gained during the pilot. ### Rollout plan Review and update roles and responsibilities established during the pilot. The same team structure should work for the rollout, but you may need to add more team members as the scope expands. ## Implement application security testing at scale Use policy features to efficiently scale your security implementation. 
### Use policy inheritance

Use policy inheritance to maximize effectiveness while also minimizing the number of policies to be managed.

Consider the scenario in which you have a top-level group named Finance which contains subgroups A, B, and C. You want to run dependency scanning and secret detection on all projects in the Finance group. For each subgroup you want to run different sets of application security testing tools. To achieve this goal, you could define 3 policies for the Finance group:

- Policy 1:
  - Includes dependency scanning and secret detection.
  - Applies to the Finance group, all its subgroups, and their projects.
- Policy 2:
  - Includes DAST and API security testing.
  - Scoped to only subgroups A and B.
- Policy 3:
  - Includes SAST.
  - Scoped to only subgroup C.

Only a single set of policies needs to be maintained, but it still provides the flexibility to suit the needs of different projects.

For more details, see [Enforcement](../policies/enforcement/_index.md#enforcement).

### Configure scan execution policies

Implement consistent application security testing across multiple projects by using scan execution policies.

Prerequisites:

- You must have the Owner role, or a custom role with the `manage_security_policy_link` permission, for the groups in which application security testing is to be enabled.

1. On the left sidebar, select **Search or go to** and find your project or group.
1. Select **Secure > Policies**.
1. Create scan execution policies based on the application security testing configuration used during the pilot phase.

For more details, see [Security policies](../policies/_index.md).

### Scale gradually

Scale the rollout gradually, first to the pilot projects and then incrementally to all target projects. When applying policies to all groups and projects, create awareness among all project stakeholders because this can cause changes in pipelines and merge request workflows. For example, notify stakeholders before a policy takes effect on their projects.

Implement your security policies in phases:
1. Start by applying policies to the projects from the pilot phase.
1. Monitor for any issues or disruptions.
1. Gradually expand the policies' scope to include more projects.
1. Continue until all target projects are covered.

For more details, see the [policy design guidelines](../policies/enforcement/_index.md#policy-design-guidelines).
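As a sketch of how Policy 1 from a scenario like the Finance example might look, the following scan execution policy enforces dependency scanning and secret detection on every pipeline. The policy name, description, and branch list are illustrative only; adapt them to your own groups:

```yaml
# A sketch of a scan execution policy, stored in the linked security
# policy project. Name, description, and branches are hypothetical.
scan_execution_policy:
  - name: Enforce dependency scanning and secret detection
    description: Applies to the Finance group, its subgroups, and their projects.
    enabled: true
    rules:
      - type: pipeline
        branches:
          - main
    actions:
      - scan: dependency_scanning
      - scan: secret_detection
```

Because the policy is linked at the Finance group level, it applies to all subgroups and projects without any per-project configuration.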
# Vulnerability deduplication process

Deduplication of security scanning results.
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

When a pipeline contains jobs that produce multiple security reports of the same type, it is possible that the same vulnerability finding is present in multiple reports. This duplication is common when different scanners are used to increase coverage, but can also exist in a single report. The deduplication process allows you to maximize the vulnerability scanning coverage while reducing the number of findings you need to manage.

A finding is considered a duplicate of another finding when their [scan type](../terminology/_index.md#scan-type-report-type), [location](../terminology/_index.md#location-fingerprint), and one or more of its [identifiers](../../../development/integrations/secure.md#identifiers) are the same.

The scan type must match because each can have its own definition for the location of a vulnerability. For example, static analyzers are able to locate a file path and line number, whereas a container scanning analyzer uses the image name instead.

When comparing identifiers, GitLab does not compare `CWE` and `WASC` during deduplication because they are "type identifiers" and are used to classify groups of vulnerabilities. Including these identifiers would result in many findings being incorrectly considered duplicates. Two findings are considered unique if none of their identifiers match.

In a set of duplicated findings, the first occurrence of a finding is kept and the remaining are skipped. Security reports are processed in alphabetical file path order, and findings are processed sequentially in the order they appear in a report.

## Deduplication examples

- Example 1: matching identifiers and location, mismatching scan type.
  - Finding
    - Scan type: `dependency_scanning`
    - Location fingerprint: `adc83b19e793491b1c6ea0fd8b46cd9f32e592fc`
    - Identifiers: CVE-2022-25510
  - Other Finding
    - Scan type: `container_scanning`
    - Location fingerprint: `adc83b19e793491b1c6ea0fd8b46cd9f32e592fc`
    - Identifiers: CVE-2022-25510
  - Deduplication result: no deduplication occurs because the scan type is different.
- Example 2: matching location and scan type, mismatching type identifiers.
  - Finding
    - Scan type: `sast`
    - Location fingerprint: `adc83b19e793491b1c6ea0fd8b46cd9f32e592fc`
    - Identifiers: CWE-259
  - Other Finding
    - Scan type: `sast`
    - Location fingerprint: `adc83b19e793491b1c6ea0fd8b46cd9f32e592fc`
    - Identifiers: CWE-798
  - Deduplication result: no deduplication occurs because `CWE` identifiers are ignored.
- Example 3: matching scan type, location, and an identifier.
  - Finding
    - Scan type: `container_scanning`
    - Location fingerprint: `adc83b19e793491b1c6ea0fd8b46cd9f32e592fc`
    - Identifiers: CVE-2019-12345, CVE-2022-25510, CWE-259
  - Other Finding
    - Scan type: `container_scanning`
    - Location fingerprint: `adc83b19e793491b1c6ea0fd8b46cd9f32e592fc`
    - Identifiers: CVE-2022-25510, CWE-798
  - Deduplication result: deduplication occurs because all criteria match, and type identifiers (CWE) are ignored. Only one identifier needs to match, in this case CVE-2022-25510.

You can find definitions for each scan type in [`gitlab/lib/gitlab/ci/reports/security/locations`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gitlab/ci/reports/security/locations) and [`gitlab/ee/lib/gitlab/ci/reports/security/locations`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/ee/lib/gitlab/ci/reports/security/locations).

For instance, for the `container_scanning` scan type the location is defined by the Docker image name without the tag. However, if the image tag matches semver syntax and doesn't look like a Git commit hash, it isn't considered a duplicate.
For example, the following locations are treated as duplicates:

- `registry.gitlab.com/group-name/project-name/image1:12345019:libcrypto3`
- `registry.gitlab.com/group-name/project-name/image1:libcrypto3`

However, the following locations are considered different:

- `registry.gitlab.com/group-name/project-name/image1:v19202021:libcrypto3`
- `registry.gitlab.com/group-name/project-name/image1:libcrypto3`
# Security configuration

Configuration, testing, compliance, scanning, and enablement.
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

You can configure security scanners for projects individually or create a scanner configuration shared by multiple projects. Configuring each project manually gives you maximum flexibility but becomes difficult to maintain at scale. For multiple projects or groups, shared scanner configuration provides easier management while still allowing some customization where needed.

For example, if you have 10 projects with the same security scanning configuration applied manually, a single change must be made 10 times. If instead you create a shared CI/CD configuration, the single change only needs to be made once.

## Configure an individual project

To configure security scanning in an individual project, either:

- Edit the CI/CD configuration file.
- Edit the CI/CD configuration in the UI.

### With a CI/CD file

To manually enable security scanning of individual projects, either:

- Enable individual security scanners.
- Enable all security scanners by using Auto DevOps.

Auto DevOps provides a least-effort path to enabling most of the security scanners. However, customization options are limited compared with enabling individual security scanners.

#### Enable individual security scanners

To enable individual security scanning tools, with the option of customizing settings, include the security scanner's templates in your `.gitlab-ci.yml` file. For instructions on how to enable individual security scanners, see their documentation.
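For example, including the stable SAST and Secret Detection templates (the same templates used in the examples later on this page) might look like the following:

```yaml
# Enable two individual scanners by including their stable templates.
# Values defined in the templates can be overridden in this file as needed.
include:
  - template: Jobs/SAST.gitlab-ci.yml
  - template: Jobs/Secret-Detection.gitlab-ci.yml
```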
#### Enable security scanning by using Auto DevOps

To enable the following security scanning tools, with default settings, enable [Auto DevOps](../../../topics/autodevops/_index.md):

- [Auto SAST](../../../topics/autodevops/stages.md#auto-sast)
- [Auto Secret Detection](../../../topics/autodevops/stages.md#auto-secret-detection)
- [Auto DAST](../../../topics/autodevops/stages.md#auto-dast)
- [Auto Dependency Scanning](../../../topics/autodevops/stages.md#auto-dependency-scanning)
- [Auto Container Scanning](../../../topics/autodevops/stages.md#auto-container-scanning)

While you cannot directly customize Auto DevOps, you can [include the Auto DevOps template in your project's `.gitlab-ci.yml` file](../../../topics/autodevops/customize.md#customize-gitlab-ciyml) and override its settings as required.

### With the UI

Use the **Security configuration** page to view and configure the security testing and vulnerability management settings of a project.

The **Security testing** tab reflects the status of each of the security tools by checking the CI/CD pipeline in the most recent commit on the default branch.

Enabled
: The security testing tool's artifact was found in the pipeline's output.

Not enabled
: Either no CI/CD pipeline exists or the security testing tool's artifact was not found in the pipeline's output.

#### View security configuration page

To view a project's security configuration:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Secure > Security configuration**.

To see a historic view of changes to the CI/CD configuration file, select **Configuration history**.

#### Edit a project's security configuration

To edit a project's security configuration:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Secure > Security configuration**.
1. Select the security scanner you want to enable or configure and follow the instructions.
For more details on how to enable and configure individual security scanners, see their documentation.

## Create a shared configuration

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

To apply the same security scanning configuration to multiple projects, use one of the following methods:

- [Scan execution policy](../policies/scan_execution_policies.md)
- [Pipeline execution policy](../policies/pipeline_execution_policies.md)
- [Compliance framework](../../compliance/compliance_pipelines.md)

Each of these methods allows a CI/CD configuration, including security scanning, to be defined once and applied to multiple projects and groups. These methods have several advantages over configuring each project individually, including:

- Configuration changes only have to be made once instead of for each project.
- Permission to make configuration changes is restricted, providing separation of duties.

### Scan execution policy compared to compliance framework

Consider the following when deciding between using a scan execution policy or a compliance framework.

- Use a [compliance framework pipeline](../../compliance/compliance_pipelines.md) when:
  - Scan execution enforcement is required for any scanner that uses a GitLab template, such as SAST IaC, DAST, Dependency Scanning, API Fuzzing, or Coverage-guided Fuzzing.
  - Scan execution enforcement is required for scanners external to GitLab.
  - Scan execution enforcement is required for custom jobs other than security scans.
- Use a [scan execution policy](../policies/scan_execution_policies.md) when:
  - Scan execution enforcement is required for DAST which uses a DAST site or scan profile.
  - Scan execution enforcement is required for SAST, SAST IaC, Secret Detection, Dependency Scanning, or Container Scanning with project-specific variable customizations. To accomplish this, users must create a separate security policy per project.
  - Scans are required to run on a regular, scheduled cadence.
- Either solution can be used equally well when:
  - Scan execution enforcement is required for Container Scanning with no project-specific variable customizations.

Additional details about the differences between these solutions are outlined below:

| | Compliance Framework Pipelines | Scan Execution Policies |
| ------ | ------ | ------ |
| **Flexibility** | Supports anything that can be done in a CI/CD file. | Limited to only the items for which GitLab has explicitly added support. DAST, SAST, SAST IaC, Secret Detection, Dependency Scanning, and Container Scanning scans are supported. |
| **Usability** | Requires knowledge of CI YAML. | Follows a `rules` and `actions`-based YAML structure. |
| **Inclusion in CI pipeline** | The compliance pipeline is executed instead of the project's `.gitlab-ci.yml` file. To include the project's `.gitlab-ci.yml` file, use an `include` statement. Defined variables aren't allowed to be overwritten by the included project's YAML file. | Forced inclusion of a new job into the CI pipeline. DAST jobs that must be customized on a per-project basis can have project-level Site Profiles and Scan Profiles defined. To ensure separation of duties, these profiles are immutable when referenced in a scan execution policy. All jobs can be customized as part of the security policy itself with the same variables that are usually available to the CI job. |
| **Schedulable** | Has to be scheduled through a scheduled pipeline on each project. | Can be scheduled natively through the policy configuration itself. |
| **Separation of Duties** | Only group owners can create compliance framework labels. Only project owners can apply compliance framework labels to projects. The ability to make or approve changes to the compliance pipeline definition is limited to individuals who are explicitly given access to the project that contains the compliance pipeline. | Only project owners can define a linked security policy project. The ability to make or approve changes to security policies is limited to individuals who are explicitly given access to the security policy project. |
| **Ability to apply one standard to multiple projects** | The same compliance framework label can be applied to multiple projects inside a group. | The same security policy project can be used for multiple projects across GitLab with no requirement of being located in the same group. |

Feedback is welcome on our vision for [unifying the user experience for these two features](https://gitlab.com/groups/gitlab-org/-/epics/7312).

## Customize security scanning

You can customize security scanning to suit your requirements and environment. For details of how to customize individual security scanners, refer to their documentation.

### Best practices

When customizing the security scanning configuration:

- Test all customization of security scanning tools by using a merge request before merging changes to the default branch. Failure to do so can give unexpected results, including a large number of false positives.
- [Include](../../../ci/yaml/_index.md#include) the scanning tool's CI/CD template. Don't copy the content of the template.
- Override values in the template only as needed. All other values are inherited from the template.
- Use the stable edition of each template for production workflows. The stable edition changes less often, and breaking changes are only made between major GitLab versions. The latest version contains the most recent changes, but may have significant changes between minor GitLab versions.

### Template editions

GitLab application security tools have up to two template editions:

- **Stable**: The stable template is the default. It offers a reliable and consistent application security experience. You should use the stable template for most users and projects that require stability and predictable behavior in their CI/CD pipelines.
- **Latest**: The latest template is for those who want to access and test cutting-edge features. It is identified by the word `latest` in the template's name. It is not considered stable and may include breaking changes that are planned for the next major release. This template allows you to try new features and updates before they become part of the stable release.

{{< alert type="note" >}}

Don't mix security templates in the same project. Mixing different security template editions can cause both merge request and branch pipelines to run.

{{< /alert >}}

### Override the default registry base address

By default, GitLab security scanners use `registry.gitlab.com/security-products` as the base address for Docker images. You can override this for most scanners by setting the CI/CD variable `SECURE_ANALYZERS_PREFIX` to another location. This affects all scanners at once.

The [Container Scanning](../container_scanning/_index.md) analyzer is an exception, and it does not use the `SECURE_ANALYZERS_PREFIX` variable. To override its Docker image, see the instructions for [Running container scanning in an offline environment](../container_scanning/_index.md#running-container-scanning-in-an-offline-environment).

### Use security scanning tools with merge request pipelines

By default, the application security jobs are configured to run for branch pipelines only. To use them with [merge request pipelines](../../../ci/pipelines/merge_request_pipelines.md), either:

- Set the CI/CD variable `AST_ENABLE_MR_PIPELINES` to `"true"` ([introduced in 18.0](https://gitlab.com/gitlab-org/gitlab/-/issues/410880)) (Recommended)
- Use the [`latest` edition template](#template-editions), which enables merge request pipelines by default.
For example, to run both SAST and Dependency Scanning with merge request pipelines enabled, use the following configuration:

```yaml
include:
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml
  - template: Jobs/SAST.gitlab-ci.yml

variables:
  AST_ENABLE_MR_PIPELINES: "true"
```

### Use a custom scanning stage

Security scanner templates use the predefined `test` stage by default. To have them instead run in a different stage, add the custom stage's name to the `stages:` section of the `.gitlab-ci.yml` file.

For more information about overriding security jobs, see:

- [Overriding SAST jobs](../sast/_index.md#overriding-sast-jobs).
- [Overriding Dependency Scanning jobs](../dependency_scanning/_index.md#overriding-dependency-scanning-jobs).
- [Overriding Container Scanning jobs](../container_scanning/_index.md#overriding-the-container-scanning-template).
- [Overriding Secret Detection jobs](../secret_detection/pipeline/configure.md).
- [Overriding DAST jobs](../dast/browser/_index.md).

## Troubleshooting

When configuring security scanning you might encounter the following issues.

### Error: `chosen stage test does not exist`

When running a pipeline you might get an error that states `chosen stage test does not exist`. This issue occurs when the stage used by the security scanning jobs isn't declared in the `.gitlab-ci.yml` file. To resolve this, either:

- Add a `test` stage in your `.gitlab-ci.yml` file:

  ```yaml
  include:
    - template: Jobs/Dependency-Scanning.gitlab-ci.yml
    - template: Jobs/SAST.gitlab-ci.yml
    - template: Jobs/Secret-Detection.gitlab-ci.yml

  stages:
    - test
    - unit-tests

  custom job:
    stage: unit-tests
    script:
      - echo "custom job"
  ```

- Override the default stage of each security job.
  For example, to use a pre-defined stage named `unit-tests`:

  ```yaml
  include:
    - template: Jobs/Dependency-Scanning.gitlab-ci.yml
    - template: Jobs/SAST.gitlab-ci.yml
    - template: Jobs/Secret-Detection.gitlab-ci.yml

  stages:
    - unit-tests

  dependency_scanning:
    stage: unit-tests

  sast:
    stage: unit-tests

  .secret-analyzer:
    stage: unit-tests

  custom job:
    stage: unit-tests
    script:
      - echo "custom job"
  ```
--- stage: Security Risk Management group: Security Platform Management info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Security configuration description: Configuration, testing, compliance, scanning, and enablement. breadcrumbs: - doc - user - application_security - detect --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} You can configure security scanners for projects individually or create a scanner configuration shared by multiple projects. Configuring each project manually gives you maximum flexibility but becomes difficult to maintain at scale. For multiple projects or groups, shared scanner configuration provides easier management while still allowing some customization where needed. For example, if you have 10 projects with the same security scanning configuration applied manually, a single change must be made 10 times. If instead you create a shared CI/CD configuration, the single change only needs to be made once. ## Configure an individual project To configure security scanning in an individual project, either: - Edit the CI/CD configuration file. - Edit the CI/CD configuration in the UI. ### With a CI/CD file To manually enable security scanning of individual projects, either: - Enable individual security scanners. - Enable all security scanners by using AutoDevOps. AutoDevOps provides a least-effort path to enabling most of the security scanners. However, customization options are limited, compared with enabling individual security scanners. #### Enable individual security scanners To enable individual security scanning tools with the option of customizing settings, include the security scanner's templates to your `.gitlab-ci.yml` file. For instructions on how to enable individual security scanners, see their documentation. 
#### Enable security scanning by using Auto DevOps To enable the following security scanning tools, with default settings, enable [Auto DevOps](../../../topics/autodevops/_index.md): - [Auto SAST](../../../topics/autodevops/stages.md#auto-sast) - [Auto Secret Detection](../../../topics/autodevops/stages.md#auto-secret-detection) - [Auto DAST](../../../topics/autodevops/stages.md#auto-dast) - [Auto Dependency Scanning](../../../topics/autodevops/stages.md#auto-dependency-scanning) - [Auto Container Scanning](../../../topics/autodevops/stages.md#auto-container-scanning) While you cannot directly customize Auto DevOps, you can [include the Auto DevOps template in your project's `.gitlab-ci.yml` file](../../../topics/autodevops/customize.md#customize-gitlab-ciyml) and override its settings as required. ### With the UI Use the **Security configuration** page to view and configure the security testing and vulnerability management settings of a project. The **Security testing** tab reflects the status of each of the security tools by checking the CI/CD pipeline in the most recent commit on the default branch. Enabled : The security testing tool's artifact was found in the pipeline's output. Not enabled : Either no CI/CD pipeline exists or the security testing tool's artifact was not found in the pipeline's output. #### View security configuration page To view a project's security configuration: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Secure > Security configuration**. To see a historic view of changes to the CI/CD configuration file, select **Configuration history**. #### Edit a project's security configuration To edit a project's security configuration: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Secure > Security configuration**. 1. Select the security scanner you want to enable or configure and follow the instructions. 
For more details on how to enable and configure individual security scanners, see their documentation. ## Create a shared configuration {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} To apply the same security scanning configuration to multiple projects, use one of the following methods: - [Scan execution policy](../policies/scan_execution_policies.md) - [Pipeline execution policy](../policies/pipeline_execution_policies.md) - [Compliance framework](../../compliance/compliance_pipelines.md) Each of these methods allow a CI/CD configuration, including security scanning, to be defined once and applied to multiple projects and groups. These methods have several advantages over configuring each project individually, including: - Configuration changes only have to be made once instead of for each project. - Permission to make configuration changes is restricted, providing separation of duties. ### Scan execution policy compared to compliance framework Consider the following when deciding between using a scan execution policy or compliance framework. - Use a [compliance framework pipeline](../../compliance/compliance_pipelines.md) when: - Scan execution enforcement is required for any scanner that uses a GitLab template, such as SAST IaC, DAST, Dependency Scanning, API Fuzzing, or Coverage-guided Fuzzing. - Scan execution enforcement is required for scanners external to GitLab. - Scan execution enforcement is required for custom jobs other than security scans. - Use a [scan execution policy](../policies/scan_execution_policies.md) when: - Scan execution enforcement is required for DAST which uses a DAST site or scan profile. - Scan execution enforcement is required for SAST, SAST IaC, Secret Detection, Dependency Scanning, or Container Scanning with project-specific variable customizations. To accomplish this, users must create a separate security policy per project. 
  - Scans are required to run on a regular, scheduled cadence.
- Either solution can be used equally well when:
  - Scan execution enforcement is required for Container Scanning with no project-specific variable customizations.

Additional details about the differences between these solutions are outlined below:

|  | Compliance Framework Pipelines | Scan Execution Policies |
| ------ | ------ | ------ |
| **Flexibility** | Supports anything that can be done in a CI/CD file. | Limited to only the items for which GitLab has explicitly added support. DAST, SAST, SAST IaC, Secret Detection, Dependency Scanning, and Container Scanning scans are supported. |
| **Usability** | Requires knowledge of CI YAML. | Follows a `rules` and `actions`-based YAML structure. |
| **Inclusion in CI pipeline** | The compliance pipeline is executed instead of the project's `.gitlab-ci.yml` file. To include the project's `.gitlab-ci.yml` file, use an `include` statement. Defined variables aren't allowed to be overwritten by the included project's YAML file. | Forced inclusion of a new job into the CI pipeline. DAST jobs that must be customized on a per-project basis can have project-level Site Profiles and Scan Profiles defined. To ensure separation of duties, these profiles are immutable when referenced in a scan execution policy. All jobs can be customized as part of the security policy itself with the same variables that are usually available to the CI job. |
| **Schedulable** | Has to be scheduled through a scheduled pipeline on each project. | Can be scheduled natively through the policy configuration itself. |
| **Separation of Duties** | Only group owners can create compliance framework labels. Only project owners can apply compliance framework labels to projects. The ability to make or approve changes to the compliance pipeline definition is limited to individuals who are explicitly given access to the project that contains the compliance pipeline. | Only project owners can define a linked security policy project. The ability to make or approve changes to security policies is limited to individuals who are explicitly given access to the security policy project. |
| **Ability to apply one standard to multiple projects** | The same compliance framework label can be applied to multiple projects inside a group. | The same security policy project can be used for multiple projects across GitLab with no requirement of being located in the same group. |

Feedback is welcome on our vision for [unifying the user experience for these two features](https://gitlab.com/groups/gitlab-org/-/epics/7312).

## Customize security scanning

You can customize security scanning to suit your requirements and environment. For details of how to customize individual security scanners, refer to their documentation.

### Best practices

When customizing the security scanning configuration:

- Test all customization of security scanning tools by using a merge request before merging changes to the default branch. Failure to do so can give unexpected results, including a large number of false positives.
- [Include](../../../ci/yaml/_index.md#include) the scanning tool's CI/CD template. Don't copy the content of the template.
- Override values in the template only as needed. All other values are inherited from the template.
- Use the stable edition of each template for production workflows. The stable edition changes less often, and breaking changes are only made between major GitLab versions. The latest edition contains the most recent changes, but may have significant changes between minor GitLab versions.

### Template editions

GitLab application security tools have up to two template editions:

- **Stable**: The stable template is the default. It offers a reliable and consistent application security experience. You should use the stable template for most users and projects that require stability and predictable behavior in their CI/CD pipelines.
- **Latest**: The latest template is for those who want to access and test cutting-edge features. It is identified by the word `latest` in the template's name. It is not considered stable and may include breaking changes that are planned for the next major release. This template allows you to try new features and updates before they become part of the stable release.

{{< alert type="note" >}}

Don't mix security templates in the same project. Mixing different security template editions can cause both merge request and branch pipelines to run.

{{< /alert >}}

### Override the default registry base address

By default, GitLab security scanners use `registry.gitlab.com/security-products` as the base address for Docker images. You can override this for most scanners by setting the CI/CD variable `SECURE_ANALYZERS_PREFIX` to another location. This affects all scanners at once.

The [Container Scanning](../container_scanning/_index.md) analyzer is an exception, and it does not use the `SECURE_ANALYZERS_PREFIX` variable. To override its Docker image, see the instructions for [Running container scanning in an offline environment](../container_scanning/_index.md#running-container-scanning-in-an-offline-environment).

### Use security scanning tools with merge request pipelines

By default, the application security jobs are configured to run for branch pipelines only. To use them with [merge request pipelines](../../../ci/pipelines/merge_request_pipelines.md), either:

- Set the CI/CD variable `AST_ENABLE_MR_PIPELINES` to `"true"` ([introduced in 18.0](https://gitlab.com/gitlab-org/gitlab/-/issues/410880)) (recommended).
- Use the [`latest` edition template](#template-editions), which enables merge request pipelines by default.
For example, to run both SAST and Dependency Scanning with merge request pipelines enabled, use the following configuration:

```yaml
include:
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml
  - template: Jobs/SAST.gitlab-ci.yml

variables:
  AST_ENABLE_MR_PIPELINES: "true"
```

### Use a custom scanning stage

Security scanner templates use the predefined `test` stage by default. To have them instead run in a different stage, add the custom stage's name to the `stages:` section of the `.gitlab-ci.yml` file.

For more information about overriding security jobs, see:

- [Overriding SAST jobs](../sast/_index.md#overriding-sast-jobs).
- [Overriding Dependency Scanning jobs](../dependency_scanning/_index.md#overriding-dependency-scanning-jobs).
- [Overriding Container Scanning jobs](../container_scanning/_index.md#overriding-the-container-scanning-template).
- [Overriding Secret Detection jobs](../secret_detection/pipeline/configure.md).
- [Overriding DAST jobs](../dast/browser/_index.md).

## Troubleshooting

When configuring security scanning you might encounter the following issues.

### Error: `chosen stage test does not exist`

When running a pipeline you might get an error that states `chosen stage test does not exist`. This issue occurs when the stage used by the security scanning jobs isn't declared in the `.gitlab-ci.yml` file.

To resolve this, either:

- Add a `test` stage in your `.gitlab-ci.yml` file:

  ```yaml
  include:
    - template: Jobs/Dependency-Scanning.gitlab-ci.yml
    - template: Jobs/SAST.gitlab-ci.yml
    - template: Jobs/Secret-Detection.gitlab-ci.yml

  stages:
    - test
    - unit-tests

  custom job:
    stage: unit-tests
    script:
      - echo "custom job"
  ```

- Override the default stage of each security job.
  For example, to use a pre-defined stage named `unit-tests`:

  ```yaml
  include:
    - template: Jobs/Dependency-Scanning.gitlab-ci.yml
    - template: Jobs/SAST.gitlab-ci.yml
    - template: Jobs/Secret-Detection.gitlab-ci.yml

  stages:
    - unit-tests

  dependency_scanning:
    stage: unit-tests

  sast:
    stage: unit-tests

  .secret-analyzer:
    stage: unit-tests

  custom job:
    stage: unit-tests
    script:
      - echo "custom job"
  ```
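The registry override described in "Override the default registry base address" can be sketched as a single top-level variable. This is a minimal, hedged sketch: `registry.example.com/security-products` is a hypothetical internal mirror address, not part of the original document.

```yaml
include:
  - template: Jobs/SAST.gitlab-ci.yml
  - template: Jobs/Secret-Detection.gitlab-ci.yml

variables:
  # Hypothetical internal mirror; Container Scanning does not use this variable.
  SECURE_ANALYZERS_PREFIX: "registry.example.com/security-products"
```

Because the variable applies to all supported scanners at once, it only needs to be set once at the top level of `.gitlab-ci.yml` rather than per job.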
<!-- Source: https://docs.gitlab.com/user/application_security/security_scan_results
     Repository: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/security_scan_results.md
     File: doc/user/application_security/detect/security_scan_results.md
     Extracted: 2025-08-13 -->
<!-- markdownlint-disable -->

This document was moved to [another location](security_scanning_results.md).

<!-- This redirect file can be deleted after <2025-09-11>. -->
<!-- Redirects that point to other docs in the same project expire in three months. -->
<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
<!-- Before deletion, see: https://docs.gitlab.com/development/documentation/redirects -->
<!-- Source: https://docs.gitlab.com/user/application_security/security_scanning_results
     Repository: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/security_scanning_results.md
     File: doc/user/application_security/detect/security_scanning_results.md
     Extracted: 2025-08-13 -->

---
stage: Security Risk Management
group: Security Insights
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Security scanning results
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/490334) in GitLab 17.9 [with a flag](../../../administration/feature_flags/_index.md) named `dependency_scanning_for_pipelines_with_cyclonedx_reports`. Disabled by default.
- [Enabled on GitLab.com and GitLab Self-Managed](https://gitlab.com/gitlab-org/gitlab/-/issues/490332) in GitLab 17.9.
- Feature flag `dependency_scanning_for_pipelines_with_cyclonedx_reports` removed in 17.10.

{{< /history >}}

View and act on the results of pipeline security scanning in GitLab. Select security scanners run in a pipeline and output security reports. The contents of these reports are processed and presented in GitLab.

Key terminology for understanding security scan results:

Finding
: - A finding is a potential vulnerability identified in a development branch. A finding becomes a vulnerability when the branch is merged into the default branch.
: - Findings expire, either when the related CI/CD job artifact expires, or 90 days after the pipeline is created, even if the related job artifacts are locked.

Vulnerability
: - A vulnerability is a software security weakness identified in the default branch.
: - Vulnerability records persist until they are [archived](../vulnerability_archival/_index.md), even if the vulnerability is no longer detected in the default branch.

The presentation of security scanning results differs depending on the [pipeline type](../../../ci/pipelines/pipeline_types.md): branch pipeline or merge request pipeline.

Vulnerabilities identified in the default branch are listed in the [vulnerability report](../vulnerability_report/_index.md).
| Vulnerability information | Branch<br />pipeline | Merge request<br />pipeline |
|----------------------------------------------------|-------------------------------------------------------------------|-----------------------------|
| Security reports | {{< icon name="check-circle-filled" >}} Yes | {{< icon name="check-circle-filled" >}} Yes |
| Pipeline security report<br />(Ultimate only) | {{< icon name="check-circle-filled" >}} Yes | {{< icon name="check-circle-filled" >}} Yes |
| Merge request security widget<br />(Ultimate only) | {{< icon name="dash-circle" >}} No | {{< icon name="check-circle-filled" >}} Yes |
| Vulnerability report | {{< icon name="check-circle-filled" >}} Yes - Default branch only | {{< icon name="dash-circle" >}} No |

## Security report artifacts

Security scanners run in branch pipelines and, if enabled, merge request pipelines. Each security scanner outputs a security report artifact containing details of all findings or vulnerabilities detected by that scanner. You can download these reports for analysis outside GitLab.

In a development (non-default) branch, findings include any vulnerabilities present in the target branch when the development branch was created. Expired findings are not shown in the pipeline's **Security** tab. To reproduce them, re-run the pipeline.

### Download a security report

{{< details >}}

- Tier: Ultimate

{{< /details >}}

You can download a security report, for example to analyze outside GitLab or for archival purposes. A security report is a JSON file.

To download a security report:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Build > Pipelines**.
1. Select the pipeline.
1. Select the **Security** tab.
1. Select **Download results**, then the desired security report.

The selected security report is downloaded to your device.
![List of security reports](img/security_report_v18_1.png)

## Pipeline security report

{{< details >}}

- Tier: Ultimate

{{< /details >}}

The pipeline security report contains details of all findings or vulnerabilities detected in the branch. For a pipeline run against the default branch, all vulnerabilities in the pipeline security report are also in the vulnerability report.

For each finding or vulnerability you can:

- View further details by selecting its description.
- Change its status or severity.
- Create a GitLab issue to track any action taken to resolve or mitigate it.

![List of findings in the branch](img/pipeline_security_report_v18_1.png)

### View pipeline security report

View the pipeline security report to see details of all findings or vulnerabilities detected in the branch.

To view a pipeline security report:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Build > Pipelines**.
1. Select the latest pipeline.
1. Select the **Security** tab.

To see details of a finding or vulnerability, select its description.

### Change status or severity

You can change the status, severity, or both of a finding or vulnerability in the pipeline's **Security** tab. Any changes made to a finding persist when the branch is merged into the default branch.

Prerequisites:

- You must have at least the Maintainer role for the project or the `admin_vulnerability` custom permission.

To change the status and severity of findings or vulnerabilities:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Build > Pipelines**.
1. Select the latest pipeline.
1. Select the **Security** tab.
1. In the finding report:
   1. Select the findings or vulnerabilities you want to change.
      - To select individual findings or vulnerabilities, select the checkbox beside each.
      - To select all findings or vulnerabilities on the page, select the checkbox in the table header.
   1. In the **Select action** dropdown list, select either **Change status** or **Change severity**.
### Create an issue

Create an issue to track, document, and manage the remediation work for a finding or vulnerability.

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Build > Pipelines**.
1. Select the pipeline.
1. Select the **Security** tab.
1. Select a finding's description.
1. Select **Create issue**.

An issue is created in the project, with the description copied from the finding or vulnerability's description.

## Merge request security widget

{{< details >}}

- Tier: Ultimate

{{< /details >}}

The merge request displays a security widget that summarizes the difference the changes would make to findings. It takes some time after the CI/CD pipeline has run to process the security reports, so there may be a delay until the security widget is shown.

For example, consider two pipelines with these scan results:

- The source branch pipeline detects two vulnerabilities identified as `V1` and `V2`.
- The target branch pipeline detects two vulnerabilities identified as `V1` and `V3`.

The widget then shows:

- `V2` as "added", because it appears only on the source branch.
- `V3` as "fixed", because it appears only on the target branch.
- Nothing for `V1`, because it exists on both branches.

To show the differences between the source branch and the target branch, security reports from both are required. The 10 most recent pipelines for the commit when the feature branch was created from the target branch are checked for a security report. If one can't be found in the 10 most recent pipelines, all findings are listed as new. Before enabling security scanning in merge requests, ensure that security scanning is enabled for the default branch.

### View security widget

View the merge request security widget to see the difference in findings the changes would make.

To view the security widget:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Merge requests**.
1. Select a merge request.
To see the details for each security report type, select **Show details** ({{< icon name="chevron-down" >}}).

For each security report type, the widget displays the first 25 added and 25 fixed findings, sorted by severity. To see all findings on the source branch of the merge request, select **View all pipeline findings**.

![Security scanning results in a merge request](img/mr_security_widget_v18_1.png)

## Troubleshooting

When working with security scanning, you might encounter the following issues.

### Dismissed vulnerabilities are visible in MR security widget

When viewing the security widget in a merge request, you might see that dismissed vulnerabilities are still listed. No solution is yet available for this issue. For details, see [issue 411235](https://gitlab.com/gitlab-org/gitlab/-/issues/411235).

### Report parsing and scan ingestion errors

{{< alert type="note" >}}

These steps are to be used by GitLab Support to reproduce such errors.

{{< /alert >}}

Some security scans may result in errors in the **Security** tab of the pipeline related to report parsing or scan ingestion. If it is not possible to get a copy of the project from the user, you can reproduce the error using the report generated from the scan.

To recreate the error:

1. Obtain a copy of the report from the user. In this example, `gl-sast-report.json`.
1. Create a project.
1. Commit the report to the repository.
1. Add your `.gitlab-ci.yml` file and declare the report as an artifact in a job. For example, to reproduce an error caused by a SAST job:

   ```yaml
   sample-job:
     script:
       - echo "Testing report"
     artifacts:
       reports:
         sast: gl-sast-report.json
   ```

1. After the pipeline completes, check the content of the pipeline's **Security** tab for errors.

You can replace `sast: gl-sast-report.json` with the respective [`artifacts:reports`](../../../ci/yaml/_index.md#artifactsreports) type and the correct JSON report filename, depending on the scan that generated the report.
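The same reproduction technique extends to other scan types by swapping the `artifacts:reports` key. A sketch for a dependency scanning report; the filename `gl-dependency-scanning-report.json` is illustrative and should match whatever report the user provides:

```yaml
sample-job:
  script:
    - echo "Testing report"
  artifacts:
    reports:
      # Illustrative filename; use the report file obtained from the user.
      dependency_scanning: gl-dependency-scanning-report.json
```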
<!-- Source: https://docs.gitlab.com/user/application_security/detect
     Repository: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/_index.md
     File: doc/user/application_security/detect/_index.md
     Extracted: 2025-08-13 -->

---
stage: Application Security Testing
group: Static Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Detect
description: Vulnerability detection and result evaluation.
---
Detect vulnerabilities in your project's repository and application's behavior throughout the software development lifecycle.

To help you manage the risk of vulnerabilities during development:

- Security scanners run when you push code changes to a branch.
- You can view details of vulnerabilities detected in the branch. Developers can remediate vulnerabilities at this point, fixing them before they reach production.
- Optionally, you can enforce additional approval on merge requests containing vulnerabilities. For details, see [merge request approval policies](../policies/merge_request_approval_policies.md).

To help manage vulnerabilities outside development:

- Security scanning can be scheduled or run manually.
- Vulnerabilities detected in the default branch appear in a vulnerability report. Use this report to triage, analyze, and remediate vulnerabilities.

## Security scanning

To get the most from security scanning, it's important to understand:

- How to trigger security scanning.
- What aspects of your application or repository are scanned.
- What determines which scanners run.
- How security scanning occurs.

### Triggers

Security scanning in a CI/CD pipeline is triggered by default when changes are pushed to a project's repository. You can also run security scanning by:

- Running a CI/CD pipeline manually.
- Scheduling security scanning by using a scan execution policy.
- For DAST only, running an on-demand DAST scan manually or on a schedule.
- For SAST only, running a scan by using the GitLab Workflow extension for VS Code.

### Detection coverage

Scan your project's repository and test your application's behavior for vulnerabilities:

- Repository scanning can detect vulnerabilities in your project's repository. Coverage includes your application's source code, and also the libraries and container images it depends on.
- Behavioral testing of your application and its API can detect vulnerabilities that occur only at runtime.
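The scheduled trigger described under Triggers can be expressed with a scan execution policy. A minimal sketch, in which the policy name, cadence, and branch name are illustrative rather than taken from this page:

```yaml
scan_execution_policy:
  - name: Nightly secret detection        # illustrative policy name
    description: Run secret detection every night on the default branch.
    enabled: true
    rules:
      - type: schedule
        cadence: '0 2 * * *'              # daily at 02:00, standard cron syntax
        branches:
          - main                          # assumed default branch name
    actions:
      - scan: secret_detection
```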
#### Repository scanning

Your project's repository may contain source code, dependency declarations, and infrastructure definitions. Repository scanning can detect vulnerabilities in each of these. Repository scanning tools include:

- Static Application Security Testing (SAST): Analyze source code for vulnerabilities.
- Infrastructure as Code (IaC) scanning: Detect vulnerabilities in your application's infrastructure definitions.
- Secret detection: Detect and block secrets from being committed to the repository.
- Dependency scanning: Detect vulnerabilities in your application's dependencies and container images.

#### Behavioral testing

Behavioral testing requires a deployable application to test for known vulnerabilities and unexpected behavior. Behavioral testing tools include:

- Dynamic Application Security Testing (DAST): Test your application for known attack vectors.
- API security testing: Test your application's API for known attacks and vulnerabilities to input.
- Coverage-guided fuzz testing: Test your application for unexpected behavior.

### Scanner selection

Security scanners are enabled for a project by either:

- Adding the scanner's CI/CD template to the `.gitlab-ci.yml` file, either directly or by using Auto DevOps.
- Enforcing the scanner by using a scan execution policy, pipeline execution policy, or compliance framework. This enforcement can be applied directly to the project or inherited from the project's parent group.

For more details, see [Security configuration](security_configuration.md).

### Security scanning process

The security scanning process is:

1. According to the CI/CD job criteria, those scanners that are enabled and intended to run in a pipeline run as separate jobs. Each successful job outputs one or more security reports as job artifacts. These reports contain details of all vulnerabilities detected in the branch, regardless of whether they were previously found, dismissed, or new.
1. Each security report is processed, including [validation](security_report_validation.md) and [deduplication](vulnerability_deduplication.md).
1. When all jobs finish, including manual jobs, you can download or view the results.

For more details on the output of security scanning, see [Security scanning results](security_scanning_results.md).

#### CI/CD security job criteria

Security scanning jobs in a CI/CD pipeline are determined by the following criteria:

1. Inclusion of security scanning templates

   The selection of security scanning jobs is first determined by which templates are included or enforced by a policy or compliance framework. Security scanning runs by default in branch pipelines. To run security scanning in merge request pipelines you must specifically [enable it](security_configuration.md#use-security-scanning-tools-with-merge-request-pipelines).

1. Evaluation of rules

   Each template has defined [rules](../../../ci/yaml/_index.md#rules) which determine if the analyzer is run. For example, some analyzers run only if files of a specific type are detected in the repository.

1. Analyzer logic

   If the template's rules dictate that the job is to be run, a job is created in the pipeline stage specified in the template. However, each analyzer has its own logic which determines if the analyzer itself is to be run. For example, if dependency scanning doesn't detect supported files at the default depth, the analyzer is not run and no artifacts are output.

Jobs pass if they complete a scan, even if they don't find vulnerabilities. The only exception is coverage-guided fuzzing, which fails if it identifies findings. All jobs are permitted to fail so that they don't fail the entire pipeline. Don't change the job [`allow_failure` setting](../../../ci/yaml/_index.md#allow_failure) because that fails the entire pipeline.

## Data privacy

GitLab processes the source code and performs analysis locally on the GitLab Runner.
No data is transmitted outside GitLab infrastructure (server and runners). Security analyzers access the internet only to download the latest sets of signatures, rules, and patches. If you prefer the scanners do not access the internet, consider using an [offline environment](../offline_deployments/_index.md).
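The scanner-selection options above can be combined in a single pipeline definition. As an
illustrative sketch, a `.gitlab-ci.yml` that enables several repository scanners by including their
templates might look like the following (the `Security/` template paths are the long-standing ones;
confirm the names available for your GitLab version):

```yaml
# Illustrative sketch — confirm template names for your GitLab version.
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml
```

Each included template contributes its own jobs and rules, so no further configuration is required
for a default setup.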
https://docs.gitlab.com/user/application_security/detect/security_report_validation
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/detect/security_report_validation.md
<!-- markdownlint-disable -->

This document was moved to [another location](../security_report_validation.md).

<!-- This redirect file can be deleted after <2025-09-11>. -->

<!-- Redirects that point to other docs in the same project expire in three months. -->
<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->

<!-- Before deletion, see: https://docs.gitlab.com/development/documentation/redirects -->
https://docs.gitlab.com/user/application_security/coverage_fuzzing
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/_index.md
Coverage-guided fuzz testing (deprecated)
Coverage-guided fuzzing, random inputs, and unexpected behavior.
<!--- start_remove
The following content will be removed on remove_date: '2026-08-15' -->

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< alert type="warning" >}}

This feature was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/517841) in GitLab 18.0
and is planned for removal in 19.0. This is a breaking change.

{{< /alert >}}

## Getting started

Coverage-guided fuzz testing sends random inputs to an instrumented version of your application in
an effort to cause unexpected behavior. Such behavior indicates a bug that you should address.

GitLab allows you to add coverage-guided fuzz testing to your pipelines. This helps you discover
bugs and potential security issues that other QA processes may miss.

You should use fuzz testing in addition to the other security scanners in
[GitLab Secure](../_index.md) and your own test processes. If you're using
[GitLab CI/CD](../../../ci/_index.md), you can run your coverage-guided fuzz testing as part of
your CI/CD workflow.

<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an overview, see [Coverage-guided Fuzzing - Advanced Security Testing](https://www.youtube.com/watch?v=bbIenVVcjW0).

### Confirm status of coverage-guided fuzz testing

To confirm the status of coverage-guided fuzz testing:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Secure > Security configuration**.
1. In the **Coverage Fuzzing** section the status is:
   - **Not configured**
   - **Enabled**
   - A prompt to upgrade to GitLab Ultimate.

### Enable coverage-guided fuzz testing

To enable coverage-guided fuzz testing, edit `.gitlab-ci.yml`:

1. Add the `fuzz` stage to the list of stages.
1. If your application is not written in Go, [provide a Docker image](../../../ci/yaml/_index.md#image)
   using the matching fuzzing engine. For example:

   ```yaml
   image: python:latest
   ```

1. [Include](../../../ci/yaml/_index.md#includetemplate) the
   [`Coverage-Fuzzing.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Security/Coverage-Fuzzing.gitlab-ci.yml)
   provided as part of your GitLab installation.
1. Customize the `my_fuzz_target` job to meet your requirements.

### Example extract of coverage-guided fuzzing configuration

```yaml
stages:
  - fuzz

include:
  - template: Coverage-Fuzzing.gitlab-ci.yml

my_fuzz_target:
  extends: .fuzz_base
  script:
    # Build your fuzz target binary in these steps, then run it with gitlab-cov-fuzz
    # See our example repos for how you could do this with any of our supported languages
    - ./gitlab-cov-fuzz run --regression=$REGRESSION -- <your fuzz target>
```

The `Coverage-Fuzzing` template includes the [hidden job](../../../ci/jobs/_index.md#hide-a-job)
`.fuzz_base`, which you must [extend](../../../ci/yaml/_index.md#extends) for each of your fuzzing
targets. Each fuzzing target **must** have a separate job. For example, the
[go-fuzzing-example project](https://gitlab.com/gitlab-org/security-products/demos/go-fuzzing-example)
contains one job that extends `.fuzz_base` for its single fuzzing target.

The hidden job `.fuzz_base` uses several YAML keys that you must not override in your own job. If
you include these keys in your own job, you must copy their original content:

- `before_script`
- `artifacts`
- `rules`

## Understanding the results

### Output

Each fuzzing step outputs these artifacts:

- `gl-coverage-fuzzing-report.json`: A report containing details of the coverage-guided fuzz
  testing and its results.
- `artifacts.zip`: This file contains two directories:
  - `corpus`: Contains all test cases generated by the current and all previous jobs.
  - `crashes`: Contains all crash events the current job found and those not fixed in previous jobs.

You can download the JSON report file from the CI/CD pipelines page.
For more information, see [Downloading artifacts](../../../ci/jobs/job_artifacts.md#download-job-artifacts).

### Corpus registry

The corpus registry is a library of corpora. Corpora in a project's registry are available to all
jobs in that project. A project-wide registry is a more efficient way to manage corpora than the
default option of one corpus per job.

The corpus registry uses the package registry to store the project's corpora. Corpora stored in the
registry are hidden to ensure data integrity. When you download a corpus, the file is named
`artifacts.zip`, regardless of the filename used when the corpus was initially uploaded. This file
contains only the corpus, which is different to the artifacts files you can download from the CI/CD
pipeline. Also, a project member with a Reporter or above privilege can download the corpus using
the direct download link.

#### View details of the corpus registry

To view details of the corpus registry:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Secure > Security configuration**.
1. In the **Coverage Fuzzing** section, select **Manage corpus**.

#### Create a corpus in the corpus registry

To create a corpus in the corpus registry, either:

- Create a corpus in a pipeline
- Upload an existing corpus file

##### Create a corpus in a pipeline

To create a corpus in a pipeline:

1. In the `.gitlab-ci.yml` file, edit the `my_fuzz_target` job.
1. Set the following variables:
   - Set `COVFUZZ_USE_REGISTRY` to `true`.
   - Set `COVFUZZ_CORPUS_NAME` to name the corpus.
   - Set `COVFUZZ_GITLAB_TOKEN` to the value of the personal access token.

After the `my_fuzz_target` job runs, the corpus is stored in the corpus registry, with the name
provided by the `COVFUZZ_CORPUS_NAME` variable. The corpus is updated on every pipeline run.

##### Upload a corpus file

To upload an existing corpus file:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Secure > Security configuration**.
1. In the **Coverage Fuzzing** section, select **Manage corpus**.
1. Select **New corpus**.
1. Complete the fields.
1. Select **Upload file**.
1. Select **Add**.

You can now reference the corpus in the `.gitlab-ci.yml` file. Ensure the value used in the
`COVFUZZ_CORPUS_NAME` variable exactly matches the name given to the uploaded corpus file.

### Use a corpus stored in the corpus registry

To use a corpus stored in the corpus registry, you must reference it by its name. To confirm the
name of the relevant corpus, view details of the corpus registry.

Prerequisites:

- [Enable coverage-guided fuzz testing](#enable-coverage-guided-fuzz-testing) in the project.

1. Set the following variables in the `.gitlab-ci.yml` file:
   - Set `COVFUZZ_USE_REGISTRY` to `true`.
   - Set `COVFUZZ_CORPUS_NAME` to the name of the corpus.
   - Set `COVFUZZ_GITLAB_TOKEN` to the value of the personal access token.

### Coverage-guided fuzz testing report

For detailed information about the `gl-coverage-fuzzing-report.json` file's format, read the
[schema](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/coverage-fuzzing-report-format.json).
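The schema linked above is the authoritative definition of the report format. As a quick
illustration of its shape, a small script can summarize a report by severity. This is a
hypothetical helper (not part of `gitlab-cov-fuzz`); the field names `vulnerabilities` and
`severity` are taken from the example report that follows:

```python
import json
from collections import Counter

def summarize_report(report_text: str) -> Counter:
    """Count coverage-fuzzing findings by severity in a gl-coverage-fuzzing-report.json payload."""
    report = json.loads(report_text)
    return Counter(v.get("severity", "Unknown") for v in report.get("vulnerabilities", []))

if __name__ == "__main__":
    # Minimal payload mirroring the report structure shown in the example below.
    example = json.dumps({
        "version": "v1.0.8",
        "vulnerabilities": [
            {"category": "coverage_fuzzing", "severity": "Critical"},
        ],
    })
    print(summarize_report(example))  # Counter({'Critical': 1})
```

A summary like this can be handy in a follow-up CI job that consumes the report artifact.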
Example coverage-guided fuzzing report: ```json { "version": "v1.0.8", "regression": false, "exit_code": -1, "vulnerabilities": [ { "category": "coverage_fuzzing", "message": "Heap-buffer-overflow\nREAD 1", "description": "Heap-buffer-overflow\nREAD 1", "severity": "Critical", "stacktrace_snippet": "INFO: Seed: 3415817494\nINFO: Loaded 1 modules (7 inline 8-bit counters): 7 [0x10eee2470, 0x10eee2477), \nINFO: Loaded 1 PC tables (7 PCs): 7 [0x10eee2478,0x10eee24e8), \nINFO: 5 files found in corpus\nINFO: -max_len is not provided; libFuzzer will not generate inputs larger than 4096 bytes\nINFO: seed corpus: files: 5 min: 1b max: 4b total: 14b rss: 26Mb\n#6\tINITED cov: 7 ft: 7 corp: 5/14b exec/s: 0 rss: 26Mb\n=================================================================\n==43405==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x602000001573 at pc 0x00010eea205a bp 0x7ffee0d5e090 sp 0x7ffee0d5e088\nREAD of size 1 at 0x602000001573 thread T0\n #0 0x10eea2059 in FuzzMe(unsigned char const*, unsigned long) fuzz_me.cc:9\n #1 0x10eea20ba in LLVMFuzzerTestOneInput fuzz_me.cc:13\n #2 0x10eebe020 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) FuzzerLoop.cpp:556\n #3 0x10eebd765 in fuzzer::Fuzzer::RunOne(unsigned char const*, unsigned long, bool, fuzzer::InputInfo*, bool*) FuzzerLoop.cpp:470\n #4 0x10eebf966 in fuzzer::Fuzzer::MutateAndTestOne() FuzzerLoop.cpp:698\n #5 0x10eec0665 in fuzzer::Fuzzer::Loop(std::__1::vector\u003cfuzzer::SizedFile, fuzzer::fuzzer_allocator\u003cfuzzer::SizedFile\u003e \u003e\u0026) FuzzerLoop.cpp:830\n #6 0x10eead0cd in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) FuzzerDriver.cpp:829\n #7 0x10eedaf82 in main FuzzerMain.cpp:19\n #8 0x7fff684fecc8 in start+0x0 (libdyld.dylib:x86_64+0x1acc8)\n\n0x602000001573 is located 0 bytes to the right of 3-byte region [0x602000001570,0x602000001573)\nallocated by thread T0 here:\n #0 0x10ef92cfd in wrap__Znam+0x7d 
(libclang_rt.asan_osx_dynamic.dylib:x86_64+0x50cfd)\n #1 0x10eebdf31 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) FuzzerLoop.cpp:541\n #2 0x10eebd765 in fuzzer::Fuzzer::RunOne(unsigned char const*, unsigned long, bool, fuzzer::InputInfo*, bool*) FuzzerLoop.cpp:470\n #3 0x10eebf966 in fuzzer::Fuzzer::MutateAndTestOne() FuzzerLoop.cpp:698\n #4 0x10eec0665 in fuzzer::Fuzzer::Loop(std::__1::vector\u003cfuzzer::SizedFile, fuzzer::fuzzer_allocator\u003cfuzzer::SizedFile\u003e \u003e\u0026) FuzzerLoop.cpp:830\n #5 0x10eead0cd in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) FuzzerDriver.cpp:829\n #6 0x10eedaf82 in main FuzzerMain.cpp:19\n #7 0x7fff684fecc8 in start+0x0 (libdyld.dylib:x86_64+0x1acc8)\n\nSUMMARY: AddressSanitizer: heap-buffer-overflow fuzz_me.cc:9 in FuzzMe(unsigned char const*, unsigned long)\nShadow bytes around the buggy address:\n 0x1c0400000250: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa\n 0x1c0400000260: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa\n 0x1c0400000270: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa\n 0x1c0400000280: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa\n 0x1c0400000290: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa\n=\u003e0x1c04000002a0: fa fa fd fa fa fa fd fa fa fa fd fa fa fa[03]fa\n 0x1c04000002b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa\n 0x1c04000002c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa\n 0x1c04000002d0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa\n 0x1c04000002e0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa\n 0x1c04000002f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa\nShadow byte legend (one shadow byte represents 8 application bytes):\n Addressable: 00\n Partially addressable: 01 02 03 04 05 06 07 \n Heap left redzone: fa\n Freed heap region: fd\n Stack left redzone: f1\n Stack mid redzone: f2\n Stack right redzone: f3\n Stack after return: f5\n Stack use after scope: f8\n Global redzone: f9\n Global init 
order: f6\n Poisoned by user: f7\n Container overflow: fc\n Array cookie: ac\n Intra object redzone: bb\n ASan internal: fe\n Left alloca redzone: ca\n Right alloca redzone: cb\n Shadow gap: cc\n==43405==ABORTING\nMS: 1 EraseBytes-; base unit: de3a753d4f1def197604865d76dba888d6aefc71\n0x46,0x55,0x5a,\nFUZ\nartifact_prefix='./crashes/'; Test unit written to ./crashes/crash-0eb8e4ed029b774d80f2b66408203801cb982a60\nBase64: RlVa\nstat::number_of_executed_units: 122\nstat::average_exec_per_sec: 0\nstat::new_units_added: 0\nstat::slowest_unit_time_sec: 0\nstat::peak_rss_mb: 28", "scanner": { "id": "libFuzzer", "name": "libFuzzer" }, "location": { "crash_address": "0x602000001573", "crash_state": "FuzzMe\nstart\nstart+0x0\n\n", "crash_type": "Heap-buffer-overflow\nREAD 1" }, "tool": "libFuzzer" } ] } ``` ### Interacting with the vulnerabilities After a vulnerability is found, you can [address it](../vulnerabilities/_index.md). The merge request widget lists the vulnerability and contains a button for downloading the fuzzing artifacts. By selecting one of the detected vulnerabilities, you can see its details. ![Coverage Fuzzing Security Report](img/coverage_fuzzing_report_v13_6.png) You can also view the vulnerability from the [Security Dashboard](../security_dashboard/_index.md), which shows an overview of all the security vulnerabilities in your groups, projects, and pipelines. Selecting the vulnerability opens a modal that provides additional information about the vulnerability: - Status: The vulnerability's status. As with any type of vulnerability, a coverage fuzzing vulnerability can be Detected, Confirmed, Dismissed, or Resolved. - Project: The project in which the vulnerability exists. - Crash type: The type of crash or weakness in the code. This typically maps to a [CWE](https://cwe.mitre.org/). - Crash state: A normalized version of the stack trace, containing the last three functions of the crash (without random addresses). 
- Stack trace snippet: The last few lines of the stack trace, which shows details about the crash.
- Identifier: The vulnerability's identifier. This maps to either a [CVE](https://cve.mitre.org/)
  or [CWE](https://cwe.mitre.org/).
- Severity: The vulnerability's severity. This can be Critical, High, Medium, Low, Info, or Unknown.
- Scanner: The scanner that detected the vulnerability (for example, Coverage Fuzzing).
- Scanner Provider: The engine that did the scan. For Coverage Fuzzing, this can be any of the
  engines listed in [Supported fuzzing engines and languages](#supported-fuzzing-engines-and-languages).

## Optimization

Use the following customization options to optimize coverage-guided fuzz testing for your project.

### Available CI/CD variables

Use the following variables to configure coverage-guided fuzz testing in your CI/CD pipeline.

{{< alert type="warning" >}}

All customization of GitLab security scanning tools should be tested in a merge request before
merging these changes to the default branch. Failure to do so can give unexpected results,
including a large number of false positives.

{{< /alert >}}

| CI/CD variable | Description |
|----------------|-------------|
| `COVFUZZ_ADDITIONAL_ARGS` | Arguments passed to `gitlab-cov-fuzz`. Used to customize the behavior of the underlying fuzzing engine. Read the fuzzing engine's documentation for a complete list of arguments. |
| `COVFUZZ_BRANCH` | The branch on which long-running fuzzing jobs are to be run. On all other branches, only fuzzing regression tests are run. Default: Repository's default branch. |
| `COVFUZZ_SEED_CORPUS` | Path to a seed corpus directory. Default: empty. |
| `COVFUZZ_URL_PREFIX` | Path to the `gitlab-cov-fuzz` repository cloned for use with an offline environment. You should only change this value when using an offline environment. Default: `https://gitlab.com/gitlab-org/security-products/analyzers/gitlab-cov-fuzz/-/raw`. |
| `COVFUZZ_USE_REGISTRY` | Set to `true` to have the corpus stored in the GitLab corpus registry. The variables `COVFUZZ_CORPUS_NAME` and `COVFUZZ_GITLAB_TOKEN` are required if this variable is set to `true`. Default: `false`. |
| `COVFUZZ_CORPUS_NAME` | Name of the corpus to be used in the job. |
| `COVFUZZ_GITLAB_TOKEN` | Environment variable configured with [personal access token](../../profile/personal_access_tokens.md#create-a-personal-access-token) or [project access token](../../project/settings/project_access_tokens.md#create-a-project-access-token) with API read/write access. |

#### Seed corpus

Files in the [seed corpus](../terminology/_index.md#seed-corpus) must be updated manually. They are
not updated or overwritten by the coverage-guided fuzz testing job.

### Coverage-guided fuzz testing process

The fuzz testing process:

1. Compiles the target application.
1. Runs the instrumented application, using the `gitlab-cov-fuzz` tool.
1. Parses and analyzes the exception information output by the fuzzer.
1. Downloads the [corpus](../terminology/_index.md#corpus) from either:
   - The previous pipelines.
   - If `COVFUZZ_USE_REGISTRY` is set to `true`, the [corpus registry](#corpus-registry).
1. Downloads crash events from the previous pipeline.
1. Outputs the parsed crash events and data to the `gl-coverage-fuzzing-report.json` file.
1. Updates the corpus, either:
   - In the job's pipeline.
   - If `COVFUZZ_USE_REGISTRY` is set to `true`, in the corpus registry.

The results of the coverage-guided fuzz testing are available in the CI/CD pipeline.

## Roll out

After you're comfortable using coverage-guided fuzz testing in a single project, you can take
advantage of the following advanced features, including enabling testing in offline environments.

### Supported fuzzing engines and languages

You can use the following fuzzing engines to test the specified languages.
| Language | Fuzzing Engine | Example |
|----------|----------------|---------|
| C/C++ | [libFuzzer](https://llvm.org/docs/LibFuzzer.html) | [c-cpp-example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/c-cpp-fuzzing-example) |
| Go | [go-fuzz (libFuzzer support)](https://github.com/dvyukov/go-fuzz) | [go-fuzzing-example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/go-fuzzing-example) |
| Swift | [libFuzzer](https://github.com/apple/swift/blob/master/docs/libFuzzerIntegration.md) | [swift-fuzzing-example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/swift-fuzzing-example) |
| Rust | [cargo-fuzz (libFuzzer support)](https://github.com/rust-fuzz/cargo-fuzz) | [rust-fuzzing-example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/rust-fuzzing-example) |
| Java (Maven only)<sup>1</sup> | [Javafuzz](https://gitlab.com/gitlab-org/security-products/analyzers/fuzzers/javafuzz) (recommended) | [javafuzz-fuzzing-example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/javafuzz-fuzzing-example) |
| Java | [JQF](https://github.com/rohanpadhye/JQF) (not preferred) | [jqf-fuzzing-example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/java-fuzzing-example) |
| JavaScript | [`jsfuzz`](https://gitlab.com/gitlab-org/security-products/analyzers/fuzzers/jsfuzz) | [jsfuzz-fuzzing-example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/jsfuzz-fuzzing-example) |
| Python | [`pythonfuzz`](https://gitlab.com/gitlab-org/security-products/analyzers/fuzzers/pythonfuzz) | [pythonfuzz-fuzzing-example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/pythonfuzz-fuzzing-example) |
| AFL (any language that works on top of AFL) | [AFL](https://lcamtuf.coredump.cx/afl/) | [afl-fuzzing-example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/afl-fuzzing-example) |

1. Support for Gradle is planned in [issue 409764](https://gitlab.com/gitlab-org/gitlab/-/issues/409764).

### Duration of coverage-guided fuzz testing

The available durations for coverage-guided fuzz testing are:

- 10-minute duration (default): Recommended for the default branch.
- 60-minute duration: Recommended for the development branch and merge requests. The longer
  duration provides greater coverage. In the `COVFUZZ_ADDITIONAL_ARGS` variable set the value
  `--regression=true`.

For a complete example, read the
[Go coverage-guided fuzzing example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/go-fuzzing-example/-/blob/master/.gitlab-ci.yml).

#### Continuous coverage-guided fuzz testing

It's also possible to run the coverage-guided fuzzing jobs longer and without blocking your main
pipeline. This configuration uses the GitLab
[parent-child pipelines](../../../ci/pipelines/downstream_pipelines.md#parent-child-pipelines).

The suggested workflow in this scenario is to have long-running, asynchronous fuzzing jobs on the
main or development branch, and short synchronous fuzzing jobs on all other branches and MRs. This
balances the need for the per-commit pipeline to complete quickly against the need to give the
fuzzer a large amount of time to fully explore and test the app. Long-running fuzzing jobs are
usually necessary for the coverage-guided fuzzer to find deeper bugs in your codebase.

The following is an extract of the `.gitlab-ci.yml` file for this workflow. For the full example,
see the [Go fuzzing example's repository](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/go-fuzzing-example/-/tree/continuous_fuzzing):

```yaml
sync_fuzzing:
  variables:
    COVFUZZ_ADDITIONAL_ARGS: '-max_total_time=300'
  trigger:
    include: .covfuzz-ci.yml
    strategy: depend
  rules:
    - if: $CI_COMMIT_BRANCH != 'continuous_fuzzing' && $CI_PIPELINE_SOURCE != 'merge_request_event'

async_fuzzing:
  variables:
    COVFUZZ_ADDITIONAL_ARGS: '-max_total_time=3600'
  trigger:
    include: .covfuzz-ci.yml
  rules:
    - if: $CI_COMMIT_BRANCH == 'continuous_fuzzing' && $CI_PIPELINE_SOURCE != 'merge_request_event'
```

This creates two jobs:

1. `sync_fuzzing`: Runs all your fuzz targets for a short period of time in a blocking
   configuration. This finds simple bugs and allows you to be confident that your MRs aren't
   introducing new bugs or causing old bugs to reappear.
1. `async_fuzzing`: Runs on your branch and finds deep bugs in your code without blocking your
   development cycle and MRs.

The `covfuzz-ci.yml` is the same as that in the
[original synchronous example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/go-fuzzing-example#running-go-fuzz-from-ci).

### FIPS-enabled binary

[Starting in GitLab 15.0](https://gitlab.com/gitlab-org/gitlab/-/issues/352549) the coverage
fuzzing binary is compiled with `golang-fips` on Linux x86 and uses OpenSSL as the cryptographic
backend. For more details, see FIPS compliance at GitLab with Go.

### Offline environment

To use coverage fuzzing in an offline environment:

1. Clone [`gitlab-cov-fuzz`](https://gitlab.com/gitlab-org/security-products/analyzers/gitlab-cov-fuzz)
   to a private repository that your offline GitLab instance can access.
1. For each fuzzing step, set `COVFUZZ_URL_PREFIX` to `${NEW_URL_GITLAB_COV_FUZ}/-/raw`, where
   `NEW_URL_GITLAB_COV_FUZ` is the URL of the private `gitlab-cov-fuzz` clone that you set up in
   the first step.
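Putting the two offline steps together, the job-level configuration might look like the following
sketch. The mirror host `gitlab.example.com` and the project path are placeholders you must replace
with your private clone's location:

```yaml
# Sketch only: the URL below is a placeholder for your private
# mirror of the gitlab-cov-fuzz repository.
my_fuzz_target:
  extends: .fuzz_base
  variables:
    COVFUZZ_URL_PREFIX: "https://gitlab.example.com/security/gitlab-cov-fuzz/-/raw"
  script:
    - ./gitlab-cov-fuzz run --regression=$REGRESSION -- <your fuzz target>
```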
## Troubleshooting

### Error `Unable to extract corpus folder from artifacts zip file`

If you see this error message, and `COVFUZZ_USE_REGISTRY` is set to `true`, ensure that the
uploaded corpus file extracts into a folder named `corpus`.

### Error `400 Bad request - Duplicate package is not allowed`

If you see this error message when running the fuzzing job with `COVFUZZ_USE_REGISTRY` set to
`true`, ensure that duplicates are allowed. For more details, see
[duplicate Generic packages](../../packages/generic_packages/_index.md#disable-publishing-duplicate-package-names).

<!--- end_remove -->
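For the Python engine listed in the supported-engines table, a fuzz target is a function that takes
a byte buffer and exercises the code under test; uncaught exceptions are what the engine reports as
crashes. The following is a minimal sketch: `parse_record` is a hypothetical function under test,
and the commented `pythonfuzz` wiring follows the usage shown in that project's examples:

```python
def parse_record(data: bytes) -> dict:
    # Hypothetical code under test: expects "key=value" encoded as UTF-8.
    text = data.decode("utf-8")          # raises UnicodeDecodeError on invalid bytes
    key, _, value = text.partition("=")
    return {key: value}

def fuzz(buf: bytes) -> None:
    # The fuzz target: swallow *expected* errors so only genuine bugs crash.
    try:
        parse_record(buf)
    except UnicodeDecodeError:
        pass

if __name__ == "__main__":
    # With pythonfuzz installed, the target would instead be wrapped as:
    #   from pythonfuzz.main import PythonFuzz
    #   fuzz = PythonFuzz(fuzz); fuzz()
    fuzz(b"name=value")
```

Any exception type not caught in `fuzz` (for example, an unexpected `IndexError` deeper in the
parser) surfaces as a crash event in `gl-coverage-fuzzing-report.json`.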
--- stage: Application Security Testing group: Dynamic Analysis info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Coverage-guided fuzz testing (deprecated) description: Coverage-guided fuzzing, random inputs, and unexpected behavior. breadcrumbs: - doc - user - application_security - coverage_fuzzing --- <!--- start_remove The following content will be removed on remove_date: '2026-08-15' --> {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< alert type="warning" >}} This feature was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/517841) in GitLab 18.0 and is planned for removal in 19.0. This is a breaking change. {{< /alert >}} ## Getting started Coverage-guided fuzz testing sends random inputs to an instrumented version of your application in an effort to cause unexpected behavior. Such behavior indicates a bug that you should address. GitLab allows you to add coverage-guided fuzz testing to your pipelines. This helps you discover bugs and potential security issues that other QA processes may miss. You should use fuzz testing in addition to the other security scanners in [GitLab Secure](../_index.md) and your own test processes. If you're using [GitLab CI/CD](../../../ci/_index.md), you can run your coverage-guided fuzz testing as part of your CI/CD workflow. <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For an overview, see [Coverage-guided Fuzzing - Advanced Security Testing](https://www.youtube.com/watch?v=bbIenVVcjW0). ### Confirm status of coverage-guided fuzz testing To confirm the status of coverage-guided fuzz testing: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Secure > Security configuration**. 1. 
In the **Coverage Fuzzing** section the status is: - **Not configured** - **Enabled** - A prompt to upgrade to GitLab Ultimate. ### Enable coverage-guided fuzz testing To enable coverage-guided fuzz testing, edit `.gitlab-ci.yml`: 1. Add the `fuzz` stage to the list of stages. 1. If your application is not written in Go, [provide a Docker image](../../../ci/yaml/_index.md#image) using the matching fuzzing engine. For example: ```yaml image: python:latest ``` 1. [Include](../../../ci/yaml/_index.md#includetemplate) the [`Coverage-Fuzzing.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Security/Coverage-Fuzzing.gitlab-ci.yml) provided as part of your GitLab installation. 1. Customize the `my_fuzz_target` job to meet your requirements. ### Example extract of coverage-guided fuzzing configuration ```yaml stages: - fuzz include: - template: Coverage-Fuzzing.gitlab-ci.yml my_fuzz_target: extends: .fuzz_base script: # Build your fuzz target binary in these steps, then run it with gitlab-cov-fuzz # See our example repos for how you could do this with any of our supported languages - ./gitlab-cov-fuzz run --regression=$REGRESSION -- <your fuzz target> ``` The `Coverage-Fuzzing` template includes the [hidden job](../../../ci/jobs/_index.md#hide-a-job) `.fuzz_base`, which you must [extend](../../../ci/yaml/_index.md#extends) for each of your fuzzing targets. Each fuzzing target **must** have a separate job. For example, the [go-fuzzing-example project](https://gitlab.com/gitlab-org/security-products/demos/go-fuzzing-example) contains one job that extends `.fuzz_base` for its single fuzzing target. The hidden job `.fuzz_base` uses several YAML keys that you must not override in your own job. 
If you include these keys in your own job, you must copy their original content: - `before_script` - `artifacts` - `rules` ## Understanding the results ### Output Each fuzzing step outputs these artifacts: - `gl-coverage-fuzzing-report.json`: A report containing details of the coverage-guided fuzz testing and its results. - `artifacts.zip`: This file contains two directories: - `corpus`: Contains all test cases generated by the current and all previous jobs. - `crashes`: Contains all crash events the current job found and those not fixed in previous jobs. You can download the JSON report file from the CI/CD pipelines page. For more information, see [Downloading artifacts](../../../ci/jobs/job_artifacts.md#download-job-artifacts). ### Corpus registry The corpus registry is a library of corpora. Corpora in a project's registry are available to all jobs in that project. A project-wide registry is a more efficient way to manage corpora than the default option of one corpus per job. The corpus registry uses the package registry to store the project's corpora. Corpora stored in the registry are hidden to ensure data integrity. When you download a corpus, the file is named `artifacts.zip`, regardless of the filename used when the corpus was initially uploaded. This file contains only the corpus, which is different from the artifact files you can download from the CI/CD pipeline. Also, a project member with at least the Reporter role can download the corpus using the direct download link. #### View details of the corpus registry To view details of the corpus registry: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Secure > Security configuration**. 1. In the **Coverage Fuzzing** section, select **Manage corpus**. 
#### Create a corpus in the corpus registry To create a corpus in the corpus registry, either: - Create a corpus in a pipeline - Upload an existing corpus file ##### Create a corpus in a pipeline To create a corpus in a pipeline: 1. In the `.gitlab-ci.yml` file, edit the `my_fuzz_target` job. 1. Set the following variables: - Set `COVFUZZ_USE_REGISTRY` to `true`. - Set `COVFUZZ_CORPUS_NAME` to name the corpus. - Set `COVFUZZ_GITLAB_TOKEN` to the value of the personal access token. After the `my_fuzz_target` job runs, the corpus is stored in the corpus registry, with the name provided by the `COVFUZZ_CORPUS_NAME` variable. The corpus is updated on every pipeline run. ##### Upload a corpus file To upload an existing corpus file: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Secure > Security configuration**. 1. In the **Coverage Fuzzing** section, select **Manage corpus**. 1. Select **New corpus**. 1. Complete the fields. 1. Select **Upload file**. 1. Select **Add**. You can now reference the corpus in the `.gitlab-ci.yml` file. Ensure the value used in the `COVFUZZ_CORPUS_NAME` variable matches exactly the name given to the uploaded corpus file. ### Use a corpus stored in the corpus registry To use a corpus stored in the corpus registry, you must reference it by its name. To confirm the name of the relevant corpus, view details of the corpus registry. Prerequisites: - [Enable coverage-guided fuzz testing](#enable-coverage-guided-fuzz-testing) in the project. 1. Set the following variables in the `.gitlab-ci.yml` file: - Set `COVFUZZ_USE_REGISTRY` to `true`. - Set `COVFUZZ_CORPUS_NAME` to the name of the corpus. - Set `COVFUZZ_GITLAB_TOKEN` to the value of the personal access token. 
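Putting the variables above together, a minimal sketch of a job that reads its corpus from the registry might look like the following. The corpus name and the `COVFUZZ_TOKEN` CI/CD variable are hypothetical placeholders:

```yaml
include:
  - template: Coverage-Fuzzing.gitlab-ci.yml

my_fuzz_target:
  extends: .fuzz_base
  variables:
    COVFUZZ_USE_REGISTRY: "true"
    COVFUZZ_CORPUS_NAME: "my-corpus"        # must match the corpus name in the registry
    COVFUZZ_GITLAB_TOKEN: "$COVFUZZ_TOKEN"  # masked CI/CD variable holding an access token with API access
  script:
    - ./gitlab-cov-fuzz run --regression=$REGRESSION -- ./my_fuzz_target
```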
### Coverage-guided fuzz testing report For detailed information about the `gl-coverage-fuzzing-report.json` file's format, read the [schema](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/coverage-fuzzing-report-format.json). Example coverage-guided fuzzing report: ```json { "version": "v1.0.8", "regression": false, "exit_code": -1, "vulnerabilities": [ { "category": "coverage_fuzzing", "message": "Heap-buffer-overflow\nREAD 1", "description": "Heap-buffer-overflow\nREAD 1", "severity": "Critical", "stacktrace_snippet": "INFO: Seed: 3415817494\nINFO: Loaded 1 modules (7 inline 8-bit counters): 7 [0x10eee2470, 0x10eee2477), \nINFO: Loaded 1 PC tables (7 PCs): 7 [0x10eee2478,0x10eee24e8), \nINFO: 5 files found in corpus\nINFO: -max_len is not provided; libFuzzer will not generate inputs larger than 4096 bytes\nINFO: seed corpus: files: 5 min: 1b max: 4b total: 14b rss: 26Mb\n#6\tINITED cov: 7 ft: 7 corp: 5/14b exec/s: 0 rss: 26Mb\n=================================================================\n==43405==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x602000001573 at pc 0x00010eea205a bp 0x7ffee0d5e090 sp 0x7ffee0d5e088\nREAD of size 1 at 0x602000001573 thread T0\n #0 0x10eea2059 in FuzzMe(unsigned char const*, unsigned long) fuzz_me.cc:9\n #1 0x10eea20ba in LLVMFuzzerTestOneInput fuzz_me.cc:13\n #2 0x10eebe020 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) FuzzerLoop.cpp:556\n #3 0x10eebd765 in fuzzer::Fuzzer::RunOne(unsigned char const*, unsigned long, bool, fuzzer::InputInfo*, bool*) FuzzerLoop.cpp:470\n #4 0x10eebf966 in fuzzer::Fuzzer::MutateAndTestOne() FuzzerLoop.cpp:698\n #5 0x10eec0665 in fuzzer::Fuzzer::Loop(std::__1::vector\u003cfuzzer::SizedFile, fuzzer::fuzzer_allocator\u003cfuzzer::SizedFile\u003e \u003e\u0026) FuzzerLoop.cpp:830\n #6 0x10eead0cd in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) FuzzerDriver.cpp:829\n #7 0x10eedaf82 in 
main FuzzerMain.cpp:19\n #8 0x7fff684fecc8 in start+0x0 (libdyld.dylib:x86_64+0x1acc8)\n\n0x602000001573 is located 0 bytes to the right of 3-byte region [0x602000001570,0x602000001573)\nallocated by thread T0 here:\n #0 0x10ef92cfd in wrap__Znam+0x7d (libclang_rt.asan_osx_dynamic.dylib:x86_64+0x50cfd)\n #1 0x10eebdf31 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) FuzzerLoop.cpp:541\n #2 0x10eebd765 in fuzzer::Fuzzer::RunOne(unsigned char const*, unsigned long, bool, fuzzer::InputInfo*, bool*) FuzzerLoop.cpp:470\n #3 0x10eebf966 in fuzzer::Fuzzer::MutateAndTestOne() FuzzerLoop.cpp:698\n #4 0x10eec0665 in fuzzer::Fuzzer::Loop(std::__1::vector\u003cfuzzer::SizedFile, fuzzer::fuzzer_allocator\u003cfuzzer::SizedFile\u003e \u003e\u0026) FuzzerLoop.cpp:830\n #5 0x10eead0cd in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) FuzzerDriver.cpp:829\n #6 0x10eedaf82 in main FuzzerMain.cpp:19\n #7 0x7fff684fecc8 in start+0x0 (libdyld.dylib:x86_64+0x1acc8)\n\nSUMMARY: AddressSanitizer: heap-buffer-overflow fuzz_me.cc:9 in FuzzMe(unsigned char const*, unsigned long)\nShadow bytes around the buggy address:\n 0x1c0400000250: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa\n 0x1c0400000260: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa\n 0x1c0400000270: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa\n 0x1c0400000280: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa\n 0x1c0400000290: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa\n=\u003e0x1c04000002a0: fa fa fd fa fa fa fd fa fa fa fd fa fa fa[03]fa\n 0x1c04000002b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa\n 0x1c04000002c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa\n 0x1c04000002d0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa\n 0x1c04000002e0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa\n 0x1c04000002f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa\nShadow byte legend (one shadow byte represents 8 application bytes):\n Addressable: 00\n Partially 
addressable: 01 02 03 04 05 06 07 \n Heap left redzone: fa\n Freed heap region: fd\n Stack left redzone: f1\n Stack mid redzone: f2\n Stack right redzone: f3\n Stack after return: f5\n Stack use after scope: f8\n Global redzone: f9\n Global init order: f6\n Poisoned by user: f7\n Container overflow: fc\n Array cookie: ac\n Intra object redzone: bb\n ASan internal: fe\n Left alloca redzone: ca\n Right alloca redzone: cb\n Shadow gap: cc\n==43405==ABORTING\nMS: 1 EraseBytes-; base unit: de3a753d4f1def197604865d76dba888d6aefc71\n0x46,0x55,0x5a,\nFUZ\nartifact_prefix='./crashes/'; Test unit written to ./crashes/crash-0eb8e4ed029b774d80f2b66408203801cb982a60\nBase64: RlVa\nstat::number_of_executed_units: 122\nstat::average_exec_per_sec: 0\nstat::new_units_added: 0\nstat::slowest_unit_time_sec: 0\nstat::peak_rss_mb: 28", "scanner": { "id": "libFuzzer", "name": "libFuzzer" }, "location": { "crash_address": "0x602000001573", "crash_state": "FuzzMe\nstart\nstart+0x0\n\n", "crash_type": "Heap-buffer-overflow\nREAD 1" }, "tool": "libFuzzer" } ] } ``` ### Interacting with the vulnerabilities After a vulnerability is found, you can [address it](../vulnerabilities/_index.md). The merge request widget lists the vulnerability and contains a button for downloading the fuzzing artifacts. By selecting one of the detected vulnerabilities, you can see its details. ![Coverage Fuzzing Security Report](img/coverage_fuzzing_report_v13_6.png) You can also view the vulnerability from the [Security Dashboard](../security_dashboard/_index.md), which shows an overview of all the security vulnerabilities in your groups, projects, and pipelines. Selecting the vulnerability opens a modal that provides additional information about the vulnerability: - Status: The vulnerability's status. As with any type of vulnerability, a coverage fuzzing vulnerability can be Detected, Confirmed, Dismissed, or Resolved. - Project: The project in which the vulnerability exists. 
- Crash type: The type of crash or weakness in the code. This typically maps to a [CWE](https://cwe.mitre.org/). - Crash state: A normalized version of the stack trace, containing the last three functions of the crash (without random addresses). - Stack trace snippet: The last few lines of the stack trace, which shows details about the crash. - Identifier: The vulnerability's identifier. This maps to either a [CVE](https://cve.mitre.org/) or [CWE](https://cwe.mitre.org/). - Severity: The vulnerability's severity. This can be Critical, High, Medium, Low, Info, or Unknown. - Scanner: The scanner that detected the vulnerability (for example, Coverage Fuzzing). - Scanner Provider: The engine that did the scan. For Coverage Fuzzing, this can be any of the engines listed in [Supported fuzzing engines and languages](#supported-fuzzing-engines-and-languages). ## Optimization Use the following customization options to optimize coverage-guided fuzz testing to your project. ### Available CI/CD variables Use the following variables to configure coverage-guided fuzz testing in your CI/CD pipeline. {{< alert type="warning" >}} All customization of GitLab security scanning tools should be tested in a merge request before merging these changes to the default branch. Failure to do so can give unexpected results, including a large number of false positives. {{< /alert >}} | CI/CD variable | Description | |---------------------------|---------------------------------------------------------------------------------| | `COVFUZZ_ADDITIONAL_ARGS` | Arguments passed to `gitlab-cov-fuzz`. Used to customize the behavior of the underlying fuzzing engine. Read the fuzzing engine's documentation for a complete list of arguments. | | `COVFUZZ_BRANCH` | The branch on which long-running fuzzing jobs are to be run. On all other branches, only fuzzing regression tests are run. Default: Repository's default branch. | | `COVFUZZ_SEED_CORPUS` | Path to a seed corpus directory. Default: empty. 
| | `COVFUZZ_URL_PREFIX` | Path to the `gitlab-cov-fuzz` repository cloned for use with an offline environment. You should only change this value when using an offline environment. Default: `https://gitlab.com/gitlab-org/security-products/analyzers/gitlab-cov-fuzz/-/raw`. | | `COVFUZZ_USE_REGISTRY` | Set to `true` to have the corpus stored in the GitLab corpus registry. The variables `COVFUZZ_CORPUS_NAME` and `COVFUZZ_GITLAB_TOKEN` are required if this variable is set to `true`. Default: `false`. | | `COVFUZZ_CORPUS_NAME` | Name of the corpus to be used in the job. | | `COVFUZZ_GITLAB_TOKEN` | Environment variable configured with [personal access token](../../profile/personal_access_tokens.md#create-a-personal-access-token) or [project access token](../../project/settings/project_access_tokens.md#create-a-project-access-token) with API read/write access. | #### Seed corpus Files in the [seed corpus](../terminology/_index.md#seed-corpus) must be updated manually. They are not updated or overwritten by the coverage-guided fuzz testing job. ### Coverage-guided fuzz testing process The fuzz testing process: 1. Compiles the target application. 1. Runs the instrumented application, using the `gitlab-cov-fuzz` tool. 1. Parses and analyzes the exception information output by the fuzzer. 1. Downloads the [corpus](../terminology/_index.md#corpus) from either: - The previous pipelines. - If `COVFUZZ_USE_REGISTRY` is set to `true`, the [corpus registry](#corpus-registry). 1. Downloads crash events from the previous pipeline. 1. Outputs the parsed crash events and data to the `gl-coverage-fuzzing-report.json` file. 1. Updates the corpus, either: - In the job's pipeline. - If `COVFUZZ_USE_REGISTRY` is set to `true`, in the corpus registry. The results of the coverage-guided fuzz testing are available in the CI/CD pipeline. 
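As an illustration of how these variables fit into the process described above, the extract below points the job at a seed corpus directory. The directory path and the engine argument are hypothetical examples (libFuzzer syntax shown):

```yaml
my_fuzz_target:
  extends: .fuzz_base
  variables:
    # Hypothetical directory of hand-picked seed inputs, maintained manually.
    COVFUZZ_SEED_CORPUS: "$CI_PROJECT_DIR/fuzz/seed_corpus"
    # Passed through to the underlying fuzzing engine.
    COVFUZZ_ADDITIONAL_ARGS: "-max_total_time=600"
  script:
    - ./gitlab-cov-fuzz run --regression=$REGRESSION -- ./my_fuzz_target
```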
## Roll out After you're comfortable using coverage-guided fuzz testing in a single project, you can take advantage of the following advanced features, including enabling testing in offline environments. ### Supported fuzzing engines and languages You can use the following fuzzing engines to test the specified languages. | Language | Fuzzing Engine | Example | |---------------------------------------------|------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------| | C/C++ | [libFuzzer](https://llvm.org/docs/LibFuzzer.html) | [c-cpp-example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/c-cpp-fuzzing-example) | | Go | [go-fuzz (libFuzzer support)](https://github.com/dvyukov/go-fuzz) | [go-fuzzing-example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/go-fuzzing-example) | | Swift | [libFuzzer](https://github.com/apple/swift/blob/master/docs/libFuzzerIntegration.md) | [swift-fuzzing-example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/swift-fuzzing-example) | | Rust | [cargo-fuzz (libFuzzer support)](https://github.com/rust-fuzz/cargo-fuzz) | [rust-fuzzing-example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/rust-fuzzing-example) | | Java (Maven only)<sup>1</sup> | [Javafuzz](https://gitlab.com/gitlab-org/security-products/analyzers/fuzzers/javafuzz) (recommended) | [javafuzz-fuzzing-example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/javafuzz-fuzzing-example) | | Java | [JQF](https://github.com/rohanpadhye/JQF) (not preferred) | [jqf-fuzzing-example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/java-fuzzing-example) | | JavaScript | [`jsfuzz`](https://gitlab.com/gitlab-org/security-products/analyzers/fuzzers/jsfuzz) | 
[jsfuzz-fuzzing-example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/jsfuzz-fuzzing-example) | | Python | [`pythonfuzz`](https://gitlab.com/gitlab-org/security-products/analyzers/fuzzers/pythonfuzz) | [pythonfuzz-fuzzing-example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/pythonfuzz-fuzzing-example) | | AFL (any language that works on top of AFL) | [AFL](https://lcamtuf.coredump.cx/afl/) | [afl-fuzzing-example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/afl-fuzzing-example) | 1. Support for Gradle is planned in [issue 409764](https://gitlab.com/gitlab-org/gitlab/-/issues/409764). ### Duration of coverage-guided fuzz testing The available durations for coverage-guided fuzz testing are: - 10-minute duration (default): Recommended for the default branch. - 60-minute duration: Recommended for the development branch and merge requests. The longer duration provides greater coverage. In the `COVFUZZ_ADDITIONAL_ARGS` variable, set the value `--regression=true`. For a complete example, read the [Go coverage-guided fuzzing example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/go-fuzzing-example/-/blob/master/.gitlab-ci.yml). #### Continuous coverage-guided fuzz testing It's also possible to run the coverage-guided fuzzing jobs longer and without blocking your main pipeline. This configuration uses the GitLab [parent-child pipelines](../../../ci/pipelines/downstream_pipelines.md#parent-child-pipelines). The suggested workflow in this scenario is to have long-running, asynchronous fuzzing jobs on the main or development branch, and short synchronous fuzzing jobs on all other branches and MRs. This balances the need to complete the per-commit pipeline quickly, while also giving the fuzzer a large amount of time to fully explore and test the app. Long-running fuzzing jobs are usually necessary for the coverage-guided fuzzer to find deeper bugs in your codebase. 
The following is an extract of the `.gitlab-ci.yml` file for this workflow. For the full example, see the [Go fuzzing example's repository](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/go-fuzzing-example/-/tree/continuous_fuzzing): ```yaml sync_fuzzing: variables: COVFUZZ_ADDITIONAL_ARGS: '-max_total_time=300' trigger: include: .covfuzz-ci.yml strategy: depend rules: - if: $CI_COMMIT_BRANCH != 'continuous_fuzzing' && $CI_PIPELINE_SOURCE != 'merge_request_event' async_fuzzing: variables: COVFUZZ_ADDITIONAL_ARGS: '-max_total_time=3600' trigger: include: .covfuzz-ci.yml rules: - if: $CI_COMMIT_BRANCH == 'continuous_fuzzing' && $CI_PIPELINE_SOURCE != 'merge_request_event' ``` This creates two jobs: 1. `sync_fuzzing`: Runs all your fuzz targets for a short period of time in a blocking configuration. This finds simple bugs and allows you to be confident that your MRs aren't introducing new bugs or causing old bugs to reappear. 1. `async_fuzzing`: Runs on your branch and finds deep bugs in your code without blocking your development cycle and MRs. The `covfuzz-ci.yml` is the same as that in the [original synchronous example](https://gitlab.com/gitlab-org/security-products/demos/coverage-fuzzing/go-fuzzing-example#running-go-fuzz-from-ci). ### FIPS-enabled binary [Starting in GitLab 15.0](https://gitlab.com/gitlab-org/gitlab/-/issues/352549) the coverage fuzzing binary is compiled with `golang-fips` on Linux x86 and uses OpenSSL as the cryptographic backend. For more details, see FIPS compliance at GitLab with Go. ### Offline environment To use coverage fuzzing in an offline environment: 1. Clone [`gitlab-cov-fuzz`](https://gitlab.com/gitlab-org/security-products/analyzers/gitlab-cov-fuzz) to a private repository that your offline GitLab instance can access. 1. 
For each fuzzing step, set `COVFUZZ_URL_PREFIX` to `${NEW_URL_GITLAB_COV_FUZ}/-/raw`, where `NEW_URL_GITLAB_COV_FUZ` is the URL of the private `gitlab-cov-fuzz` clone that you set up in the first step. ## Troubleshooting ### Error `Unable to extract corpus folder from artifacts zip file` If you see this error message, and `COVFUZZ_USE_REGISTRY` is set to `true`, ensure that the uploaded corpus file extracts into a folder named `corpus`. ### Error `400 Bad request - Duplicate package is not allowed` If you see this error message when running the fuzzing job with `COVFUZZ_USE_REGISTRY` set to `true`, ensure that duplicates are allowed. For more details, see [duplicate Generic packages](../../packages/generic_packages/_index.md#disable-publishing-duplicate-package-names). <!--- end_remove -->
https://docs.gitlab.com/user/application_security/troubleshooting
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/troubleshooting.md
2025-08-13
doc/user/application_security/api_fuzzing
[ "doc", "user", "application_security", "api_fuzzing" ]
troubleshooting.md
Application Security Testing
Dynamic Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Troubleshooting API Fuzzing jobs
null
## API Fuzzing job times out after N hours For larger repositories, the API Fuzzing job could time out on the [small hosted runner on Linux](../../../ci/runners/hosted_runners/linux.md#machine-types-available-for-linux---x86-64), which is used by default. If this happens in your jobs, you should scale up to a [larger runner](performance.md#using-a-larger-runner). See the following documentation sections for assistance: - [Performance tuning and testing speed](performance.md) - [Using a larger Runner](performance.md#using-a-larger-runner) - [Excluding operations by path](configuration/customizing_analyzer_settings.md#exclude-paths) - [Excluding slow operations](performance.md#excluding-slow-operations) ## API Fuzzing job takes too long to complete See [Performance Tuning and Testing Speed](performance.md). ## Error: `Error waiting for API Fuzzing 'http://127.0.0.1:5000' to become available` A bug exists in versions of the API Fuzzing analyzer prior to v1.6.196 that can cause a background process to fail under certain conditions. The solution is to update to a newer version of the API Fuzzing analyzer. The version information can be found in the job details for the `apifuzzer_fuzz` job. If the issue is occurring with versions v1.6.196 or greater, contact Support and provide the following information: 1. Reference this troubleshooting section and ask for the issue to be escalated to the Dynamic Analysis Team. 1. The full console output of the job. 1. The `gl-api-security-scanner.log` file available as a job artifact. In the right-hand panel of the job details page, select the **Browse** button. 1. The `apifuzzer_fuzz` job definition from your `.gitlab-ci.yml` file. **Error message** - In [GitLab 15.6 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/376078), `Error waiting for API Fuzzing 'http://127.0.0.1:5000' to become available` - In GitLab 15.5 and earlier, `Error waiting for API Security 'http://127.0.0.1:5000' to become available`. 
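Returning to the timeout scenario above: on GitLab.com, one way to scale up is to add a runner tag to the fuzzing job. The tag below is assumed to be the large Linux hosted runner's tag; verify it against the hosted runners documentation for your instance:

```yaml
apifuzzer_fuzz:
  tags:
    - saas-linux-large-amd64  # larger hosted runner; confirm the tag name for your instance
```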
### `Failed to start session with scanner. Please retry, and if the problem persists reach out to support.` The API Fuzzing engine outputs an error message when it cannot establish a connection with the scanner application component. The error message is shown in the job output window of the `apifuzzer_fuzz` job. A common cause for this issue is that the background component cannot use the selected port as it's already in use. This error can occur intermittently if timing plays a part (race condition). This issue occurs most often with Kubernetes environments when other services are mapped into the container causing port conflicts. Before proceeding with a solution, it is important to confirm that the error message was produced because the port was already taken. To confirm this was the cause: 1. Go to the job console. 1. Look for the artifact `gl-api-security-scanner.log`. You can either download all artifacts by selecting **Download** and then search for the file, or directly start searching by selecting **Browse**. 1. Open the file `gl-api-security-scanner.log` in a text editor. 1. If the error message was produced because the port was already taken, you should see in the file a message like the following: - In [GitLab 15.5 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/367734): ```log Failed to bind to address http://127.0.0.1:5500: address already in use. ``` - In GitLab 15.4 and earlier: ```log Failed to bind to address http://[::]:5000: address already in use. ``` The text `http://[::]:5000` in the previous message could be different in your case, for instance it could be `http://[::]:5500` or `http://127.0.0.1:5500`. As long as the remaining parts of the error message are the same, it is safe to assume the port was already taken. If you did not find evidence that the port was already taken, check other troubleshooting sections which also address the same error message shown in the job console output. 
If there are no more options, feel free to [get support or request an improvement](_index.md#get-support-or-request-an-improvement) through the proper channels. Once you have confirmed that the issue occurred because the port was already taken, you can set the port explicitly: [GitLab 15.5 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/367734) introduced the configuration variable `FUZZAPI_API_PORT`, which allows setting a fixed port number for the scanner background component. **Solution** 1. Ensure your `.gitlab-ci.yml` file defines the configuration variable `FUZZAPI_API_PORT`. 1. Update the value of `FUZZAPI_API_PORT` to any available port number greater than 1024. We recommend checking that the new value is not in use by GitLab. See the full list of ports used by GitLab in [Package defaults](../../../administration/package_information/defaults.md#ports). ## Error: `Errors were found during validation of the document using the published OpenAPI schema` At the start of an API Fuzzing job the OpenAPI Specification is validated against the [published schema](https://github.com/OAI/OpenAPI-Specification/tree/master/schemas). This error is shown when the provided OpenAPI Specification has validation errors: ```plaintext Error, the OpenAPI document is not valid. Errors were found during validation of the document using the published OpenAPI schema ``` Errors can be introduced when creating an OpenAPI Specification manually, and also when the schema is generated. For OpenAPI Specifications that are generated automatically, validation errors are often the result of missing code annotations. **Error message** - `Error, the OpenAPI document is not valid. Errors were found during validation of the document using the published OpenAPI schema` - `OpenAPI 2.0 schema validation error ...` - `OpenAPI 3.0.x schema validation error ...` **Solution** **For generated OpenAPI Specifications** 1. Identify the validation errors. 1. 
Use the [Swagger Editor](https://editor.swagger.io/) to identify validation problems in your specification. The visual nature of the Swagger Editor makes it easier to understand what needs to change. 1. Alternatively, you can check the log output and look for schema validation warnings. They are prefixed with messages such as `OpenAPI 2.0 schema validation error` or `OpenAPI 3.0.x schema validation error`. Each failed validation provides extra information about `location` and `description`. JSON Schema validation messages can be complex, and editors can help you validate schema documents. 1. Review the documentation for the OpenAPI document generation in your framework/tech stack. Identify the changes needed to produce a correct OpenAPI document. 1. After the validation issues are resolved, re-run your pipeline. **For manually created OpenAPI Specifications** 1. Identify the validation errors. 1. The simplest solution is to use a visual tool to edit and validate the OpenAPI document. For example, the [Swagger Editor](https://editor.swagger.io/) highlights schema errors and possible solutions. 1. Alternatively, you can check the log output and look for schema validation warnings. They are prefixed with messages such as `OpenAPI 2.0 schema validation error` or `OpenAPI 3.0.x schema validation error`. Each failed validation provides extra information about `location` and `description`. Correct each of the validation failures and then resubmit the OpenAPI doc. JSON Schema validation messages can be complex, and editors can help you validate schema documents. 1. After the validation issues are resolved, re-run your pipeline. ## `Failed to start scanner session (version header not found)` The API Fuzzing engine outputs an error message when it cannot establish a connection with the scanner application component. The error message is shown in the job output window of the `apifuzzer_fuzz` job. 
A common cause of this issue is changing the `FUZZAPI_API` variable from its default. **Error message** - `Failed to start scanner session (version header not found).` **Solution** - Remove the `FUZZAPI_API` variable from the `.gitlab-ci.yml` file. The value is inherited from the API Fuzzing CI/CD template. We recommend this method instead of manually setting a value. - If removing the variable is not possible, check to see if this value has changed in the latest version of the [API Fuzzing CI/CD template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Security/API-Fuzzing.gitlab-ci.yml). If so, update the value in the `.gitlab-ci.yml` file. ## `Application cannot determine the base URL for the target API` The API Fuzzing analyzer outputs an error message when it cannot determine the target API after inspecting the OpenAPI document. This error message is shown when the target API has not been set in the `.gitlab-ci.yml` file, it is not available in the `environment_url.txt` file, and it could not be computed using the OpenAPI document. There is an order of precedence in which the API Fuzzing analyzer tries to get the target API when checking the different sources. First, it tries to use the `FUZZAPI_TARGET_URL`. If the environment variable has not been set, then the API Fuzzing analyzer attempts to use the `environment_url.txt` file. If there is no file `environment_url.txt`, the API Fuzzing analyzer then uses the OpenAPI document contents and the URL provided in `FUZZAPI_OPENAPI` (if a URL is provided) to try to compute the target API. The best-suited solution depends on whether or not your target API changes for each deployment: - If the target API is the same for each deployment (a static environment), use the [static environment solution](#static-environment-solution). - If the target API changes for each deployment, use a [dynamic environment solution](#dynamic-environment-solutions). 
### Static environment solution This solution is for pipelines in which the target API URL doesn't change (is static). **Add environmental variable** For environments where the target API remains the same, we recommend you specify the target URL by using the `FUZZAPI_TARGET_URL` environment variable. In your `.gitlab-ci.yml` file, add a variable `FUZZAPI_TARGET_URL`. The variable must be set to the base URL of API testing target. For example: ```yaml stages: - fuzz include: - template: API-Fuzzing.gitlab-ci.yml variables: FUZZAPI_TARGET_URL: http://test-deployment/ FUZZAPI_OPENAPI: test-api-specification.json ``` ### Dynamic environment solutions In a dynamic environment your target API changes for each different deployment. In this case, there is more than one possible solution, we recommend to use the `environment_url.txt` file when dealing with dynamic environments. **Use environment_url.txt** To support dynamic environments in which the target API URL changes during each pipeline, API Fuzzing supports the use of an `environment_url.txt` file that contains the URL to use. This file is not checked into the repository, instead it's created during the pipeline by the job that deploys the test target and collected as an artifact that can be used by later jobs in the pipeline. The job that creates the `environment_url.txt` file must run before the API Fuzzing job. 1. Modify the test target deployment job adding the base URL in an `environment_url.txt` file at the root of your project. 1. Modify the test target deployment job collecting the `environment_url.txt` as an artifact. 
Example: ```yaml deploy-test-target: script: # Perform deployment steps # Create environment_url.txt (example) - echo http://${CI_PROJECT_ID}-${CI_ENVIRONMENT_SLUG}.example.org > environment_url.txt artifacts: paths: - environment_url.txt ``` ## Use OpenAPI with an invalid schema There are cases where the document is autogenerated with an invalid schema or cannot be edited manually in a timely manner. In those scenarios, the API Fuzzing is able to perform a relaxed validation by setting the variable `FUZZAPI_OPENAPI_RELAXED_VALIDATION`. We recommend providing a fully compliant OpenAPI document to prevent unexpected behaviors. ### Edit a non-compliant OpenAPI file To detect and correct elements that don't comply with the OpenAPI specifications, we recommend using an editor. An editor commonly provides document validation, and suggestions to create a schema-compliant OpenAPI document. Suggested editors include: | Editor | OpenAPI 2.0 | OpenAPI 3.0.x | OpenAPI 3.1.x | |----------------------------------------------------|-------------------------------|-------------------------------|---------------| | [Swagger Editor](https://editor.swagger.io/) | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="dotted-circle" >}} YAML, JSON | | [Stoplight Studio](https://stoplight.io/solutions) | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="check-circle" >}} YAML, JSON | If your OpenAPI document is generated manually, load your document in the editor and fix anything that is non-compliant. If your document is generated automatically, load it in your editor to identify the issues in the schema, then go to the application and perform the corrections based on the framework you are using. 
### Enable OpenAPI relaxed validation Relaxed validation is meant for cases when the OpenAPI document cannot meet OpenAPI specifications, but it still has enough content to be consumed by different tools. A validation is performed but less strictly in regards to document schema. API Fuzzing can still try to consume an OpenAPI document that does not fully comply with OpenAPI specifications. To instruct API Fuzzing analyzer to perform a relaxed validation, set the variable `FUZZAPI_OPENAPI_RELAXED_VALIDATION` to any value, for example: ```yaml stages: - fuzz include: - template: API-Fuzzing.gitlab-ci.yml variables: FUZZAPI_PROFILE: Quick-10 FUZZAPI_TARGET_URL: http://test-deployment/ FUZZAPI_OPENAPI: test-api-specification.json FUZZAPI_OPENAPI_RELAXED_VALIDATION: 'On' ``` ## `No operation in the OpenAPI document is consuming any supported media type` API Fuzzing uses the specified media types in the OpenAPI document to generate requests. If no request can be created due to the lack of supported media types, then an error is thrown. **Error message** - `Error, no operation in the OpenApi document is consuming any supported media type. Check 'OpenAPI Specification' to check the supported media types.` **Solution** 1. Review the supported media types in the [OpenAPI Specification](configuration/enabling_the_analyzer.md#openapi-specification) section. 1. Edit your OpenAPI document, allowing at least a given operation to accept any of the supported media types. Alternatively, a supported media type could be set in the OpenAPI document level and get applied to all operations. This step may require changes in your application to ensure the supported media type is accepted by the application. ## Error: `The SSL connection could not be established, see inner exception.` API fuzzing is compatible with a broad range of TLS configurations, including outdated protocols and ciphers. 
Despite broad support, you might encounter connection errors, like this: ```plaintext Error, error occurred trying to download `<URL>`: There was an error when retrieving content from Uri:' <URL>'. Error:The SSL connection could not be established, see inner exception. ``` This error occurs because API fuzzing could not establish a secure connection with the server at the given URL. To resolve the issue: If the host in the error message supports non-TLS connections, change `https://` to `http://` in your configuration. For example, if an error occurs with the following configuration: ```yaml stages: - fuzz include: - template: API-Fuzzing.gitlab-ci.yml variables: FUZZAPI_TARGET_URL: https://test-deployment/ FUZZAPI_OPENAPI: https://specs/openapi.json ``` Change the prefix of `FUZZAPI_OPENAPI` from `https://` to `http://`: ```yaml stages: - fuzz include: - template: API-Fuzzing.gitlab-ci.yml variables: FUZZAPI_TARGET_URL: https://test-deployment/ FUZZAPI_OPENAPI: http://specs/openapi.json ``` If you cannot use a non-TLS connection to access the URL, contact the Support team for help. You can expedite the investigation with the [testssl.sh tool](https://testssl.sh/). From a machine with a bash shell and connectivity to the affected server: 1. Download the latest release `zip` or `tar.gz` file and extract from <https://github.com/drwetter/testssl.sh/releases>. 1. Run `./testssl.sh --log https://specs`. 1. Attach the log file to your support ticket. ## `ERROR: Job failed: failed to pull image` This error message occurs when pulling an image from a container registry that requires authentication to access (it is not public). In the job console output the error looks like: ```plaintext Running with gitlab-runner 15.6.0~beta.186.ga889181a (a889181a) on blue-2.shared.runners-manager.gitlab.com/default XxUrkriX Resolving secrets 00:00 Preparing the "docker+machine" executor 00:06 Using Docker executor with image registry.gitlab.com/security-products/api-security:2 ... 
Starting service registry.example.com/my-target-app:latest ... Pulling docker image registry.example.com/my-target-app:latest ... WARNING: Failed to pull image with policy "always": Error response from daemon: Get https://registry.example.com/my-target-app/manifests/latest: unauthorized (manager.go:237:0s) ERROR: Job failed: failed to pull image "registry.example.com/my-target-app:latest" with specified policies [always]: Error response from daemon: Get https://registry.example.com/my-target-app/manifests/latest: unauthorized (manager.go:237:0s) ``` **Error message** - In GitLab 15.9 and earlier, `ERROR: Job failed: failed to pull image` followed by `Error response from daemon: Get IMAGE: unauthorized`. **Solution** Authentication credentials are provided using the methods outlined in the [Access an image from a private container registry](../../../ci/docker/using_docker_images.md#access-an-image-from-a-private-container-registry) documentation section. The method used is dictated by your container registry provider and its configuration. If your using a container registry provided by a third party, such as a cloud provider (Azure, Google Could (GCP), AWS and so on), check the providers documentation for information on how to authenticate to their container registries. The following example uses the [statically defined credentials](../../../ci/docker/using_docker_images.md#use-statically-defined-credentials) authentication method. In this example the container registry is `registry.example.com` and image is `my-target-app:latest`. 1. Read how to [Determine your `DOCKER_AUTH_CONFIG` data](../../../ci/docker/using_docker_images.md#determine-your-docker_auth_config-data) to understand how to compute the variable value for `DOCKER_AUTH_CONFIG`. The configuration variable `DOCKER_AUTH_CONFIG` contains the Docker JSON configuration to provide the appropriate authentication information. 
For example, to access private container registry: `registry.example.com` with the credentials `abcdefghijklmn`, the Docker JSON looks like: ```json { "auths": { "registry.example.com": { "auth": "abcdefghijklmn" } } } ``` 1. Add the `DOCKER_AUTH_CONFIG` as a CI/CD variable. Instead of adding the configuration variable directly in your `.gitlab-ci.yml` file you should create a project [CI/CD variable](../../../ci/variables/_index.md#for-a-project). 1. Rerun your job, and the statically-defined credentials are now used to sign in to the private container registry `registry.example.com`, and let you pull the image `my-target-app:latest`. If succeeded the job console shows an output like: ```log Running with gitlab-runner 15.6.0~beta.186.ga889181a (a889181a) on blue-4.shared.runners-manager.gitlab.com/default J2nyww-s Resolving secrets 00:00 Preparing the "docker+machine" executor 00:56 Using Docker executor with image registry.gitlab.com/security-products/api-security:2 ... Starting service registry.example.com/my-target-app:latest ... Authenticating with credentials from $DOCKER_AUTH_CONFIG Pulling docker image registry.example.com/my-target-app:latest ... Using docker image sha256:139c39668e5e4417f7d0eb0eeb74145ba862f4f3c24f7c6594ecb2f82dc4ad06 for registry.example.com/my-target-app:latest with digest registry.example.com/my-target- app@sha256:2b69fc7c3627dbd0ebaa17674c264fcd2f2ba21ed9552a472acf8b065d39039c ... Waiting for services to be up and running (timeout 30 seconds)... ``` ## `sudo: The "no new privileges" flag is set, which prevents sudo from running as root.` Starting with v5 of the analyzer, a non-root user is used by default. This requires the use of `sudo` when performing privileged operations. This error occurs with a specific container daemon setup that prevents running containers from obtaining new permissions. In most settings, this is not the default configuration, it's something specifically configured, often as part of a security hardening guide. 
**Error message**

This issue can be identified by the error message generated when a `before_script` or `FUZZAPI_PRE_SCRIPT` is executed:

```shell
$ sudo apk add nodejs
sudo: The "no new privileges" flag is set, which prevents sudo from running as root.
sudo: If sudo is running in a container, you may need to adjust the container configuration to disable the flag.
```

**Solution**

This issue can be worked around in the following ways:

- Run the container as the `root` user. It's recommended to test this configuration, as it may not work in all cases. This can be done by modifying the CI/CD configuration and checking the job output to make sure that `whoami` returns `root` and not `gitlab`. If `gitlab` is displayed, use another workaround. Once tested, the `before_script` can be removed.

```yaml
apifuzzer_fuzz:
  image:
    name: $SECURE_ANALYZERS_PREFIX/$FUZZAPI_IMAGE:$FUZZAPI_VERSION$FUZZAPI_IMAGE_SUFFIX
    docker:
      user: root
  before_script:
    - whoami
```

_Example job console output:_

```log
Executing "step_script" stage of the job script
Using docker image sha256:8b95f188b37d6b342dc740f68557771bb214fe520a5dc78a88c7a9cc6a0f9901 for registry.gitlab.com/security-products/api-security:5 with digest registry.gitlab.com/security-products/api-security@sha256:092909baa2b41db8a7e3584f91b982174772abdfe8ceafc97cf567c3de3179d1 ...
$ whoami
root
$ /peach/analyzer-api-fuzzing
17:17:14 [INF] API Security: Gitlab API Security
17:17:14 [INF] API Security: -------------------
17:17:14 [INF] API Security:
17:17:14 [INF] API Security: version: 5.7.0
```

- Wrap the container and add any dependencies at build time. This option has the benefit of running with lower privileges than root, which may be a requirement for some customers.

1. Create a new `Dockerfile` that wraps the existing image.

```dockerfile
ARG SECURE_ANALYZERS_PREFIX
ARG FUZZAPI_IMAGE
ARG FUZZAPI_VERSION
ARG FUZZAPI_IMAGE_SUFFIX
FROM $SECURE_ANALYZERS_PREFIX/$FUZZAPI_IMAGE:$FUZZAPI_VERSION$FUZZAPI_IMAGE_SUFFIX
USER root
RUN pip install ...
RUN apk add ...
USER gitlab
```

1. Build the new image and push it to your local container registry before the API Fuzzing job starts. The image should be removed after the `apifuzzer_fuzz` job has been completed.

```shell
TARGET_IMAGE="$CI_REGISTRY_IMAGE/apifuzz:$CI_COMMIT_SHA"
docker build -t $TARGET_IMAGE \
    --build-arg "SECURE_ANALYZERS_PREFIX=$SECURE_ANALYZERS_PREFIX" \
    --build-arg "FUZZAPI_IMAGE=$FUZZAPI_IMAGE" \
    --build-arg "FUZZAPI_VERSION=$FUZZAPI_VERSION" \
    --build-arg "FUZZAPI_IMAGE_SUFFIX=$FUZZAPI_IMAGE_SUFFIX" \
    .
docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
docker push $TARGET_IMAGE
```

1. Extend the `apifuzzer_fuzz` job and use the new image name.

```yaml
apifuzzer_fuzz:
  image: $CI_REGISTRY_IMAGE/apifuzz:$CI_COMMIT_SHA
```

1. Remove the temporary container image from the registry. See [Delete container registry images](../../packages/container_registry/delete_container_registry_images.md) for information on removing container images.

- Change the GitLab Runner configuration, disabling the no-new-privileges flag. This could have security implications and should be discussed with your operations and security teams.

## `Index was outside the bounds of the array. at Peach.Web.Runner.Services.RunnerOptions.GetHeaders()`

This error message indicates that the API Fuzzing analyzer is unable to parse the value of the `FUZZAPI_REQUEST_HEADERS` or `FUZZAPI_REQUEST_HEADERS_BASE64` configuration variable.

**Error message**

This issue can be identified by two error messages. The first error message is seen in the job console output and the second in the `gl-api-security-scanner.log` file.

_Error message from job console:_

```plaintext
05:48:38 [ERR] API Security: Testing failed: An unexpected exception occurred: Index was outside the bounds of the array.
```

_Error message from `gl-api-security-scanner.log`:_

```plaintext
08:45:43.616 [ERR] <Peach.Web.Core.Services.WebRunnerMachine> Unexpected exception in WebRunnerMachine::Run()
System.IndexOutOfRangeException: Index was outside the bounds of the array.
at Peach.Web.Runner.Services.RunnerOptions.GetHeaders() in /builds/gitlab-org/security-products/analyzers/api-fuzzing-src/web/PeachWeb/Runner/Services/RunnerOptions.cs:line 362
at Peach.Web.Runner.Services.RunnerService.Start(Job job, IRunnerOptions options) in /builds/gitlab-org/security-products/analyzers/api-fuzzing-src/web/PeachWeb/Runner/Services/RunnerService.cs:line 67
at Peach.Web.Core.Services.WebRunnerMachine.Run(IRunnerOptions runnerOptions, CancellationToken token) in /builds/gitlab-org/security-products/analyzers/api-fuzzing-src/web/PeachWeb/Core/Services/WebRunnerMachine.cs:line 321
08:45:43.634 [WRN] <Peach.Web.Core.Services.WebRunnerMachine> * Session failed: An unexpected exception occurred: Index was outside the bounds of the array.
08:45:43.677 [INF] <Peach.Web.Core.Services.WebRunnerMachine> Finished testing. Performed a total of 0 requests.
```

**Solution**

This issue occurs due to a malformed `FUZZAPI_REQUEST_HEADERS` or `FUZZAPI_REQUEST_HEADERS_BASE64` variable. The expected format is one or more headers in the form `Header: value`, separated by commas. The solution is to correct the syntax to match what is expected.

_Valid examples:_

- `Authorization: Bearer XYZ`
- `X-Custom: Value,Authorization: Bearer XYZ`

_Invalid examples:_

- `Header:,value`
- `HeaderA: value,HeaderB:,HeaderC: value`
- `Header`
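To put the corrected syntax in context, here is a minimal `.gitlab-ci.yml` sketch that sets the variable in the expected format. The target URL, specification file, and header values shown are placeholders:

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_OPENAPI: test-api-specification.json
  # One or more `Header: value` pairs, separated by commas.
  FUZZAPI_REQUEST_HEADERS: 'X-Custom: value,Authorization: Bearer XYZ'
```

For sensitive values such as tokens, consider storing the header value in a masked project CI/CD variable instead of committing it to the repository.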
---
stage: Application Security Testing
group: Dynamic Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Troubleshooting API Fuzzing jobs
breadcrumbs:
- doc
- user
- application_security
- api_fuzzing
---

## API Fuzzing job times out after N hours

For larger repositories, the API Fuzzing job could time out on the [small hosted runner on Linux](../../../ci/runners/hosted_runners/linux.md#machine-types-available-for-linux---x86-64), which is used by default. If this happens in your jobs, you should scale up to a [larger runner](performance.md#using-a-larger-runner).

See the following documentation sections for assistance:

- [Performance tuning and testing speed](performance.md)
- [Using a larger Runner](performance.md#using-a-larger-runner)
- [Excluding operations by path](configuration/customizing_analyzer_settings.md#exclude-paths)
- [Excluding slow operations](performance.md#excluding-slow-operations)

## API Fuzzing job takes too long to complete

See [Performance Tuning and Testing Speed](performance.md).

## Error: `Error waiting for API Fuzzing 'http://127.0.0.1:5000' to become available`

A bug exists in versions of the API Fuzzing analyzer prior to v1.6.196 that can cause a background process to fail under certain conditions. The solution is to update to a newer version of the API Fuzzing analyzer. The version information can be found in the job details for the `apifuzzer_fuzz` job.

If the issue is occurring with versions v1.6.196 or greater, contact Support and provide the following information:

1. Reference this troubleshooting section and ask for the issue to be escalated to the Dynamic Analysis Team.
1. The full console output of the job.
1. The `gl-api-security-scanner.log` file available as a job artifact. In the right-hand panel of the job details page, select the **Browse** button.
1.
The `apifuzzer_fuzz` job definition from your `.gitlab-ci.yml` file. **Error message** - In [GitLab 15.6 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/376078), `Error waiting for API Fuzzing 'http://127.0.0.1:5000' to become available` - In GitLab 15.5 and earlier, `Error waiting for API Security 'http://127.0.0.1:5000' to become available`. ### `Failed to start session with scanner. Please retry, and if the problem persists reach out to support.` The API Fuzzing engine outputs an error message when it cannot establish a connection with the scanner application component. The error message is shown in the job output window of the `apifuzzer_fuzz` job. A common cause for this issue is that the background component cannot use the selected port as it's already in use. This error can occur intermittently if timing plays a part (race condition). This issue occurs most often with Kubernetes environments when other services are mapped into the container causing port conflicts. Before proceeding with a solution, it is important to confirm that the error message was produced because the port was already taken. To confirm this was the cause: 1. Go to the job console. 1. Look for the artifact `gl-api-security-scanner.log`. You can either download all artifacts by selecting **Download** and then search for the file, or directly start searching by selecting **Browse**. 1. Open the file `gl-api-security-scanner.log` in a text editor. 1. If the error message was produced because the port was already taken, you should see in the file a message like the following: - In [GitLab 15.5 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/367734): ```log Failed to bind to address http://127.0.0.1:5500: address already in use. ``` - In GitLab 15.4 and earlier: ```log Failed to bind to address http://[::]:5000: address already in use. 
```

The text `http://[::]:5000` in the previous message could be different in your case, for instance it could be `http://[::]:5500` or `http://127.0.0.1:5500`. As long as the remaining parts of the error message are the same, it is safe to assume the port was already taken.

If you did not find evidence that the port was already taken, check other troubleshooting sections which also address the same error message shown in the job console output. If there are no more options, feel free to [get support or request an improvement](_index.md#get-support-or-request-an-improvement) through the proper channels.

After you have confirmed that the issue was caused by the port being taken, use the configuration variable `FUZZAPI_API_PORT`, introduced in [GitLab 15.5](https://gitlab.com/gitlab-org/gitlab/-/issues/367734). This configuration variable allows setting a fixed port number for the scanner background component.

**Solution**

1. Ensure your `.gitlab-ci.yml` file defines the configuration variable `FUZZAPI_API_PORT`.
1. Update the value of `FUZZAPI_API_PORT` to any available port number greater than 1024. We recommend checking that the new value is not in use by GitLab. See the full list of ports used by GitLab in [Package defaults](../../../administration/package_information/defaults.md#ports).

## Error: `Errors were found during validation of the document using the published OpenAPI schema`

At the start of an API Fuzzing job, the OpenAPI Specification is validated against the [published schema](https://github.com/OAI/OpenAPI-Specification/tree/master/schemas). This error is shown when the provided OpenAPI Specification has validation errors:

```plaintext
Error, the OpenAPI document is not valid. Errors were found during validation of the document using the published OpenAPI schema
```

Errors can be introduced when creating an OpenAPI Specification manually, and also when the schema is generated.
For OpenAPI Specifications that are generated automatically, validation errors are often the result of missing code annotations.

**Error message**

- `Error, the OpenAPI document is not valid. Errors were found during validation of the document using the published OpenAPI schema`
- `OpenAPI 2.0 schema validation error ...`
- `OpenAPI 3.0.x schema validation error ...`

**Solution**

**For generated OpenAPI Specifications**

1. Identify the validation errors.
1. Use the [Swagger Editor](https://editor.swagger.io/) to identify validation problems in your specification. The visual nature of the Swagger Editor makes it easier to understand what needs to change.
1. Alternatively, you can check the log output and look for schema validation warnings. They are prefixed with messages such as `OpenAPI 2.0 schema validation error` or `OpenAPI 3.0.x schema validation error`. Each failed validation provides extra information about `location` and `description`. JSON Schema validation messages can be complex, and editors can help you validate schema documents.
1. Review the documentation for the OpenAPI generation your framework/tech stack is using. Identify the changes needed to produce a correct OpenAPI document.
1. After the validation issues are resolved, re-run your pipeline.

**For manually created OpenAPI Specifications**

1. Identify the validation errors.
1. The simplest solution is to use a visual tool to edit and validate the OpenAPI document. For example, the [Swagger Editor](https://editor.swagger.io/) highlights schema errors and possible solutions.
1. Alternatively, you can check the log output and look for schema validation warnings. They are prefixed with messages such as `OpenAPI 2.0 schema validation error` or `OpenAPI 3.0.x schema validation error`. Each failed validation provides extra information about `location` and `description`. Correct each of the validation failures and then resubmit the OpenAPI doc.
JSON Schema validation messages can be complex, and editors can help you validate schema documents.

1. After the validation issues are resolved, re-run your pipeline.

## `Failed to start scanner session (version header not found)`

The API Fuzzing engine outputs an error message when it cannot establish a connection with the scanner application component. The error message is shown in the job output window of the `apifuzzer_fuzz` job. A common cause of this issue is changing the `FUZZAPI_API` variable from its default.

**Error message**

- `Failed to start scanner session (version header not found).`

**Solution**

- Remove the `FUZZAPI_API` variable from the `.gitlab-ci.yml` file. The value is inherited from the API Fuzzing CI/CD template. We recommend this method instead of manually setting a value.
- If removing the variable is not possible, check to see if this value has changed in the latest version of the [API Fuzzing CI/CD template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Security/API-Fuzzing.gitlab-ci.yml). If so, update the value in the `.gitlab-ci.yml` file.

## `Application cannot determine the base URL for the target API`

The API Fuzzing analyzer outputs an error message when it cannot determine the target API after inspecting the OpenAPI document. This error message is shown when the target API has not been set in the `.gitlab-ci.yml` file, it is not available in the `environment_url.txt` file, and it could not be computed using the OpenAPI document.

There is an order of precedence in which the API Fuzzing analyzer tries to get the target API when checking the different sources. First, it tries to use the `FUZZAPI_TARGET_URL`. If the environment variable has not been set, then the API Fuzzing analyzer attempts to use the `environment_url.txt` file.
If there is no `environment_url.txt` file, the API Fuzzing analyzer uses the OpenAPI document contents and the URL provided in `FUZZAPI_OPENAPI` (if a URL is provided) to try to compute the target API.

The best-suited solution depends on whether or not your target API changes for each deployment:

- If the target API is the same for each deployment (a static environment), use the [static environment solution](#static-environment-solution).
- If the target API changes for each deployment, use a [dynamic environment solution](#dynamic-environment-solutions).

### Static environment solution

This solution is for pipelines in which the target API URL doesn't change (is static).

**Add environmental variable**

For environments where the target API remains the same, we recommend you specify the target URL by using the `FUZZAPI_TARGET_URL` environment variable. In your `.gitlab-ci.yml` file, add a variable `FUZZAPI_TARGET_URL`. The variable must be set to the base URL of the API testing target. For example:

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_OPENAPI: test-api-specification.json
```

### Dynamic environment solutions

In a dynamic environment your target API changes for each different deployment. In this case, there is more than one possible solution; we recommend using the `environment_url.txt` file when dealing with dynamic environments.

**Use environment_url.txt**

To support dynamic environments in which the target API URL changes during each pipeline, API Fuzzing supports the use of an `environment_url.txt` file that contains the URL to use. This file is not checked into the repository; instead, it's created during the pipeline by the job that deploys the test target, and collected as an artifact that can be used by later jobs in the pipeline. The job that creates the `environment_url.txt` file must run before the API Fuzzing job.

1.
Modify the test target deployment job, adding the base URL in an `environment_url.txt` file at the root of your project.
1. Modify the test target deployment job, collecting the `environment_url.txt` file as an artifact.

Example:

```yaml
deploy-test-target:
  script:
    # Perform deployment steps
    # Create environment_url.txt (example)
    - echo http://${CI_PROJECT_ID}-${CI_ENVIRONMENT_SLUG}.example.org > environment_url.txt
  artifacts:
    paths:
      - environment_url.txt
```

## Use OpenAPI with an invalid schema

There are cases where the document is autogenerated with an invalid schema, or cannot be edited manually in a timely manner. In those scenarios, the API Fuzzing analyzer can perform a relaxed validation by setting the variable `FUZZAPI_OPENAPI_RELAXED_VALIDATION`. We recommend providing a fully compliant OpenAPI document to prevent unexpected behaviors.

### Edit a non-compliant OpenAPI file

To detect and correct elements that don't comply with the OpenAPI specifications, we recommend using an editor. An editor commonly provides document validation, and suggestions to create a schema-compliant OpenAPI document. Suggested editors include:

| Editor | OpenAPI 2.0 | OpenAPI 3.0.x | OpenAPI 3.1.x |
|----------------------------------------------------|-------------------------------|-------------------------------|---------------|
| [Swagger Editor](https://editor.swagger.io/) | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="dotted-circle" >}} YAML, JSON |
| [Stoplight Studio](https://stoplight.io/solutions) | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="check-circle" >}} YAML, JSON |

If your OpenAPI document is generated manually, load your document in the editor and fix anything that is non-compliant.
If your document is generated automatically, load it in your editor to identify the issues in the schema, then go to the application and perform the corrections based on the framework you are using.

### Enable OpenAPI relaxed validation

Relaxed validation is meant for cases when the OpenAPI document cannot meet OpenAPI specifications, but it still has enough content to be consumed by different tools. Validation is still performed, but less strictly with regard to the document schema.

API Fuzzing can still try to consume an OpenAPI document that does not fully comply with OpenAPI specifications. To instruct the API Fuzzing analyzer to perform a relaxed validation, set the variable `FUZZAPI_OPENAPI_RELAXED_VALIDATION` to any value, for example:

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick-10
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_OPENAPI_RELAXED_VALIDATION: 'On'
```

## `No operation in the OpenAPI document is consuming any supported media type`

API Fuzzing uses the specified media types in the OpenAPI document to generate requests. If no request can be created due to the lack of supported media types, then an error is thrown.

**Error message**

- `Error, no operation in the OpenApi document is consuming any supported media type. Check 'OpenAPI Specification' to check the supported media types.`

**Solution**

1. Review the supported media types in the [OpenAPI Specification](configuration/enabling_the_analyzer.md#openapi-specification) section.
1. Edit your OpenAPI document so that at least one operation accepts one of the supported media types. Alternatively, a supported media type could be set at the OpenAPI document level and applied to all operations. This step may require changes in your application to ensure the supported media type is accepted by the application.
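As an illustration of the second step, here is a minimal OpenAPI 3 fragment, in which an operation declares `application/json` (one of the supported media types listed in the section linked above) for its request body. The `/users` path and its schema are hypothetical:

```yaml
openapi: 3.0.3
info:
  title: Example API
  version: "1.0"
paths:
  /users:
    post:
      # Declaring a supported media type lets the analyzer
      # generate request bodies for this operation.
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                name:
                  type: string
      responses:
        '201':
          description: Created
```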
## Error: `The SSL connection could not be established, see inner exception.` API fuzzing is compatible with a broad range of TLS configurations, including outdated protocols and ciphers. Despite broad support, you might encounter connection errors, like this: ```plaintext Error, error occurred trying to download `<URL>`: There was an error when retrieving content from Uri:' <URL>'. Error:The SSL connection could not be established, see inner exception. ``` This error occurs because API fuzzing could not establish a secure connection with the server at the given URL. To resolve the issue: If the host in the error message supports non-TLS connections, change `https://` to `http://` in your configuration. For example, if an error occurs with the following configuration: ```yaml stages: - fuzz include: - template: API-Fuzzing.gitlab-ci.yml variables: FUZZAPI_TARGET_URL: https://test-deployment/ FUZZAPI_OPENAPI: https://specs/openapi.json ``` Change the prefix of `FUZZAPI_OPENAPI` from `https://` to `http://`: ```yaml stages: - fuzz include: - template: API-Fuzzing.gitlab-ci.yml variables: FUZZAPI_TARGET_URL: https://test-deployment/ FUZZAPI_OPENAPI: http://specs/openapi.json ``` If you cannot use a non-TLS connection to access the URL, contact the Support team for help. You can expedite the investigation with the [testssl.sh tool](https://testssl.sh/). From a machine with a bash shell and connectivity to the affected server: 1. Download the latest release `zip` or `tar.gz` file and extract from <https://github.com/drwetter/testssl.sh/releases>. 1. Run `./testssl.sh --log https://specs`. 1. Attach the log file to your support ticket. ## `ERROR: Job failed: failed to pull image` This error message occurs when pulling an image from a container registry that requires authentication to access (it is not public). 
In the job console output, the error looks like:

```plaintext
Running with gitlab-runner 15.6.0~beta.186.ga889181a (a889181a)
  on blue-2.shared.runners-manager.gitlab.com/default XxUrkriX
Resolving secrets 00:00
Preparing the "docker+machine" executor 00:06
Using Docker executor with image registry.gitlab.com/security-products/api-security:2 ...
Starting service registry.example.com/my-target-app:latest ...
Pulling docker image registry.example.com/my-target-app:latest ...
WARNING: Failed to pull image with policy "always": Error response from daemon: Get https://registry.example.com/my-target-app/manifests/latest: unauthorized (manager.go:237:0s)
ERROR: Job failed: failed to pull image "registry.example.com/my-target-app:latest" with specified policies [always]: Error response from daemon: Get https://registry.example.com/my-target-app/manifests/latest: unauthorized (manager.go:237:0s)
```

**Error message**

- In GitLab 15.9 and earlier, `ERROR: Job failed: failed to pull image` followed by `Error response from daemon: Get IMAGE: unauthorized`.

**Solution**

Authentication credentials are provided using the methods outlined in the [Access an image from a private container registry](../../../ci/docker/using_docker_images.md#access-an-image-from-a-private-container-registry) documentation section. The method used is dictated by your container registry provider and its configuration. If you're using a container registry provided by a third party, such as a cloud provider (Azure, Google Cloud (GCP), AWS, and so on), check the provider's documentation for information on how to authenticate to their container registries.

The following example uses the [statically defined credentials](../../../ci/docker/using_docker_images.md#use-statically-defined-credentials) authentication method. In this example, the container registry is `registry.example.com` and the image is `my-target-app:latest`.

1. Read how to [Determine your `DOCKER_AUTH_CONFIG` data](../../../ci/docker/using_docker_images.md#determine-your-docker_auth_config-data) to understand how to compute the variable value for `DOCKER_AUTH_CONFIG`. The configuration variable `DOCKER_AUTH_CONFIG` contains the Docker JSON configuration to provide the appropriate authentication information. For example, to access the private container registry `registry.example.com` with the credentials `abcdefghijklmn`, the Docker JSON looks like:

   ```json
   {
       "auths": {
           "registry.example.com": {
               "auth": "abcdefghijklmn"
           }
       }
   }
   ```

1. Add `DOCKER_AUTH_CONFIG` as a CI/CD variable. Instead of adding the configuration variable directly in your `.gitlab-ci.yml` file, you should create a project [CI/CD variable](../../../ci/variables/_index.md#for-a-project).

1. Rerun your job. The statically-defined credentials are now used to sign in to the private container registry `registry.example.com`, and let you pull the image `my-target-app:latest`. If successful, the job console shows output like:

   ```log
   Running with gitlab-runner 15.6.0~beta.186.ga889181a (a889181a)
     on blue-4.shared.runners-manager.gitlab.com/default J2nyww-s
   Resolving secrets 00:00
   Preparing the "docker+machine" executor 00:56
   Using Docker executor with image registry.gitlab.com/security-products/api-security:2 ...
   Starting service registry.example.com/my-target-app:latest ...
   Authenticating with credentials from $DOCKER_AUTH_CONFIG
   Pulling docker image registry.example.com/my-target-app:latest ...
   Using docker image sha256:139c39668e5e4417f7d0eb0eeb74145ba862f4f3c24f7c6594ecb2f82dc4ad06 for registry.example.com/my-target-app:latest with digest registry.example.com/my-target-app@sha256:2b69fc7c3627dbd0ebaa17674c264fcd2f2ba21ed9552a472acf8b065d39039c ...
   Waiting for services to be up and running (timeout 30 seconds)...
   ```

## `sudo: The "no new privileges" flag is set, which prevents sudo from running as root.`

Starting with v5 of the analyzer, a non-root user is used by default. This requires the use of `sudo` when performing privileged operations. This error occurs with a specific container daemon setup that prevents running containers from obtaining new permissions. In most settings this is not the default configuration; it's something specifically configured, often as part of a security hardening guide.

**Error message**

This issue can be identified by the error message generated when a `before_script` or `FUZZAPI_PRE_SCRIPT` is executed:

```shell
$ sudo apk add nodejs
sudo: The "no new privileges" flag is set, which prevents sudo from running as root.
sudo: If sudo is running in a container, you may need to adjust the container configuration to disable the flag.
```

**Solution**

This issue can be worked around in the following ways:

- Run the container as the `root` user. It's recommended to test this configuration, as it may not work in all cases. This can be done by modifying the CI/CD configuration and checking the job output to make sure that `whoami` returns `root` and not `gitlab`. If `gitlab` is displayed, use another workaround. Once tested, the `before_script` can be removed.

  ```yaml
  apifuzzer_fuzz:
    image:
      name: $SECURE_ANALYZERS_PREFIX/$FUZZAPI_IMAGE:$FUZZAPI_VERSION$FUZZAPI_IMAGE_SUFFIX
      docker:
        user: root
    before_script:
      - whoami
  ```

  _Example job console output:_

  ```log
  Executing "step_script" stage of the job script
  Using docker image sha256:8b95f188b37d6b342dc740f68557771bb214fe520a5dc78a88c7a9cc6a0f9901 for registry.gitlab.com/security-products/api-security:5 with digest registry.gitlab.com/security-products/api-security@sha256:092909baa2b41db8a7e3584f91b982174772abdfe8ceafc97cf567c3de3179d1 ...
  $ whoami
  root
  $ /peach/analyzer-api-fuzzing
  17:17:14 [INF] API Security: Gitlab API Security
  17:17:14 [INF] API Security: -------------------
  17:17:14 [INF] API Security:
  17:17:14 [INF] API Security: version: 5.7.0
  ```

- Wrap the container and add any dependencies at build time. This option has the benefit of running with lower privileges than root, which may be a requirement for some customers.

  1. Create a new `Dockerfile` that wraps the existing image.

     ```dockerfile
     ARG SECURE_ANALYZERS_PREFIX
     ARG FUZZAPI_IMAGE
     ARG FUZZAPI_VERSION
     ARG FUZZAPI_IMAGE_SUFFIX

     FROM $SECURE_ANALYZERS_PREFIX/$FUZZAPI_IMAGE:$FUZZAPI_VERSION$FUZZAPI_IMAGE_SUFFIX

     USER root
     RUN pip install ...
     RUN apk add ...
     USER gitlab
     ```

  1. Build the new image and push it to your local container registry before the API Fuzzing job starts. The image should be removed after the job has been completed.

     ```shell
     TARGET_IMAGE=apifuzz-$CI_COMMIT_SHA
     docker build -t $TARGET_IMAGE \
       --build-arg "SECURE_ANALYZERS_PREFIX=$SECURE_ANALYZERS_PREFIX" \
       --build-arg "FUZZAPI_IMAGE=$FUZZAPI_IMAGE" \
       --build-arg "FUZZAPI_VERSION=$FUZZAPI_VERSION" \
       --build-arg "FUZZAPI_IMAGE_SUFFIX=$FUZZAPI_IMAGE_SUFFIX" \
       .
     docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
     docker push $TARGET_IMAGE
     ```

  1. Extend the `apifuzzer_fuzz` job and use the new image name.

     ```yaml
     apifuzzer_fuzz:
       image: apifuzz-$CI_COMMIT_SHA
     ```

  1. Remove the temporary container from the registry. For information on removing container images, see [Delete container images](../../packages/container_registry/delete_container_registry_images.md).

- Change the GitLab Runner configuration, disabling the no-new-privileges flag. This could have security implications and should be discussed with your operations and security teams.

## `Index was outside the bounds of the array. at Peach.Web.Runner.Services.RunnerOptions.GetHeaders()`

This error message indicates that the API Fuzzing analyzer is unable to parse the value of the `FUZZAPI_REQUEST_HEADERS` or `FUZZAPI_REQUEST_HEADERS_BASE64` configuration variable.

**Error message**

This issue can be identified by two error messages. The first error message is seen in the job console output and the second in the `gl-api-security-scanner.log` file.

_Error message from the job console:_

```plaintext
05:48:38 [ERR] API Security: Testing failed: An unexpected exception occurred: Index was outside the bounds of the array.
```

_Error message from `gl-api-security-scanner.log`:_

```plaintext
08:45:43.616 [ERR] <Peach.Web.Core.Services.WebRunnerMachine> Unexpected exception in WebRunnerMachine::Run()
System.IndexOutOfRangeException: Index was outside the bounds of the array.
   at Peach.Web.Runner.Services.RunnerOptions.GetHeaders() in /builds/gitlab-org/security-products/analyzers/api-fuzzing-src/web/PeachWeb/Runner/Services/RunnerOptions.cs:line 362
   at Peach.Web.Runner.Services.RunnerService.Start(Job job, IRunnerOptions options) in /builds/gitlab-org/security-products/analyzers/api-fuzzing-src/web/PeachWeb/Runner/Services/RunnerService.cs:line 67
   at Peach.Web.Core.Services.WebRunnerMachine.Run(IRunnerOptions runnerOptions, CancellationToken token) in /builds/gitlab-org/security-products/analyzers/api-fuzzing-src/web/PeachWeb/Core/Services/WebRunnerMachine.cs:line 321
08:45:43.634 [WRN] <Peach.Web.Core.Services.WebRunnerMachine> * Session failed: An unexpected exception occurred: Index was outside the bounds of the array.
08:45:43.677 [INF] <Peach.Web.Core.Services.WebRunnerMachine> Finished testing. Performed a total of 0 requests.
```

**Solution**

This issue occurs due to a malformed `FUZZAPI_REQUEST_HEADERS` or `FUZZAPI_REQUEST_HEADERS_BASE64` variable. The expected format is one or more headers in `Header: value` form, separated by commas. The solution is to correct the syntax to match what is expected.

_Valid examples:_

- `Authorization: Bearer XYZ`
- `X-Custom: Value,Authorization: Bearer XYZ`

_Invalid examples:_

- `Header:,value`
- `HeaderA: value,HeaderB:,HeaderC: value`
- `Header`
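When commas or special characters in header values make `FUZZAPI_REQUEST_HEADERS` awkward to express, the same `Header: value` string can be supplied base64-encoded through `FUZZAPI_REQUEST_HEADERS_BASE64`. A minimal sketch of producing the encoded value, assuming a POSIX shell with the `base64` utility available:

```shell
# Encode a header string for FUZZAPI_REQUEST_HEADERS_BASE64.
# printf avoids a trailing newline; tr strips any line wrapping
# that some base64 implementations add.
printf 'Authorization: Bearer XYZ' | base64 | tr -d '\n'
# -> QXV0aG9yaXphdGlvbjogQmVhcmVyIFhZWg==
```

The resulting string is what you set as the variable value; the analyzer decodes it before parsing, so the decoded text must still follow the `Header: value` format described above.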
https://docs.gitlab.com/user/application_security/api_fuzzing
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/_index.md
2025-08-13
_index.md
Application Security Testing
Dynamic Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Web API Fuzz Testing
Testing, security, vulnerabilities, automation, and errors.
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Web API fuzz testing passes unexpected values to API operation parameters to cause unexpected behavior and errors in the backend. Use fuzz testing to discover bugs and potential vulnerabilities that other QA processes might miss. You should use fuzz testing in addition to the other security scanners in [GitLab Secure](../_index.md) and your own test processes. If you're using [GitLab CI/CD](../../../ci/_index.md), you can run fuzz tests as part of your CI/CD workflow.

<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an overview, see [WebAPI Fuzzing - Advanced Security Testing](https://www.youtube.com/watch?v=oUHsfvLGhDk).

## Getting started

Get started with API fuzzing by editing your CI/CD configuration.

Prerequisites:

- A web API using one of the supported API types:
  - REST API
  - SOAP
  - GraphQL
  - Form bodies, JSON, or XML
- An API specification in one of the following formats:
  - [OpenAPI v2 or v3 Specification](configuration/enabling_the_analyzer.md#openapi-specification)
  - [GraphQL Schema](configuration/enabling_the_analyzer.md#graphql-schema)
  - [HTTP Archive (HAR)](configuration/enabling_the_analyzer.md#http-archive-har)
  - [Postman Collection v2.0 or v2.1](configuration/enabling_the_analyzer.md#postman-collection)
- An available [GitLab Runner](../../../ci/runners/_index.md) with the [`docker` executor](https://docs.gitlab.com/runner/executors/docker.html) on Linux/amd64.
- A deployed target application. For more details, see the [deployment options](#application-deployment-options).
- The `fuzz` stage is added to your CI/CD pipeline definition, after the `deploy` stage:

  ```yaml
  stages:
    - build
    - test
    - deploy
    - fuzz
  ```

To enable API fuzzing:

- Use the [Web API fuzzing configuration form](configuration/enabling_the_analyzer.md#web-api-fuzzing-configuration-form). The form lets you choose values for the most common API fuzzing options, and builds a YAML snippet that you can paste in your GitLab CI/CD configuration.

## Understanding the results

To view the output of a security scan:

1. On the left sidebar, select **Search or go to** and find your project.
1. On the left sidebar, select **Build > Pipelines**.
1. Select the pipeline.
1. Select the **Security** tab.
1. Select a vulnerability to view its details, including:
   - Status: Indicates whether the vulnerability has been triaged or resolved.
   - Description: Explains the cause of the vulnerability, its potential impact, and recommended remediation steps.
   - Severity: Categorized into six levels based on impact. For more information, see [severity levels](../vulnerabilities/severities.md).
   - Scanner: Identifies which analyzer detected the vulnerability.
   - Method: Establishes the vulnerable server interaction type.
   - URL: Shows the location of the vulnerability.
   - Evidence: Describes the test case used to prove the presence of a given vulnerability.
   - Identifiers: A list of references used to classify the vulnerability, such as CWE identifiers.

You can also download the security scan results:

- In the pipeline's **Security** tab, select **Download results**.

For more details, see the [pipeline security report](../vulnerability_report/pipeline.md).

{{< alert type="note" >}}

Findings are generated on feature branches. When they are merged into the default branch, they become vulnerabilities. This distinction is important when evaluating your security posture.

{{< /alert >}}

## Optimization

To get the most out of API fuzzing, follow these recommendations:

- Configure runners to use the [always pull policy](https://docs.gitlab.com/runner/executors/docker.html#using-the-always-pull-policy) to run the latest versions of the analyzers.
- By default, API fuzzing downloads all artifacts defined by previous jobs in the pipeline. If your API fuzzing job does not rely on `environment_url.txt` to define the URL under test or any other files created in previous jobs, you should not download artifacts. To avoid downloading artifacts, extend the analyzer CI/CD job to specify no dependencies. For example, for the API fuzzing analyzer, add the following to your `.gitlab-ci.yml` file:

  ```yaml
  apifuzzer_fuzz:
    dependencies: []
  ```

### Application deployment options

API fuzzing requires a deployed application to be available to scan. Depending on the complexity of the target application, there are a few options as to how to deploy and configure the API fuzzing template.

#### Review apps

Review apps are the most involved method of deploying your API Fuzzing target application. To assist in the process, GitLab created a review app deployment using Google Kubernetes Engine (GKE). This example can be found in the [Review apps - GKE](https://gitlab.com/gitlab-org/security-products/demos/dast/review-app-gke) project, plus detailed instructions to configure review apps in DAST in the [README](https://gitlab.com/gitlab-org/security-products/demos/dast/review-app-gke/-/blob/master/README.md).

#### Docker Services

If your application uses Docker containers, you have another option for deploying and scanning with API fuzzing. After your Docker build job completes and your image is added to your container registry, you can use the image as a [service](../../../ci/services/_index.md).

By using service definitions in your `.gitlab-ci.yml`, you can scan services with the DAST analyzer. When adding a `services` section to the job, the `alias` is used to define the hostname that can be used to access the service. In the following example, the `alias: yourapp` portion of the `dast` job definition means that the URL to the deployed application uses `yourapp` as the hostname (`https://yourapp/`).

```yaml
stages:
  - build
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

# Deploys the container to the GitLab container registry
deploy:
  services:
    - name: docker:dind
      alias: dind
  image: docker:20.10.16
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest

apifuzzer_fuzz:
  services: # use services to link your app container to the dast job
    - name: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
      alias: yourapp
  variables:
    FUZZAPI_TARGET_URL: https://yourapp
```

Most applications depend on multiple services such as databases or caching services. By default, services defined in the services fields cannot communicate with each other. To allow communication between services, enable the `FF_NETWORK_PER_BUILD` [feature flag](https://docs.gitlab.com/runner/configuration/feature-flags.html#available-feature-flags).

```yaml
variables:
  FF_NETWORK_PER_BUILD: "true" # enable network per build so all services can communicate on the same network

services: # use services to link the container to the dast job
  - name: mongo:latest
    alias: mongo
  - name: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    alias: yourapp
```

## Roll out

Web API fuzzing runs in the `fuzz` stage of the CI/CD pipeline. To ensure API fuzzing scans the latest code, your CI/CD pipeline should deploy changes to a test environment in one of the stages preceding the `fuzz` stage.

If your pipeline is configured to deploy to the same web server on each run, running a pipeline while another is still running could cause a race condition in which one pipeline overwrites the code from another. The API to scan should be excluded from changes for the duration of a fuzzing scan. The only changes to the API should be from the fuzzing scanner. Any changes made to the API (for example, by users, scheduled tasks, database changes, code changes, other pipelines, or other scanners) during a scan could cause inaccurate results.

You can run a Web API fuzzing scan using the following methods:

- [OpenAPI Specification](configuration/enabling_the_analyzer.md#openapi-specification) - versions 2 and 3.
- [GraphQL Schema](configuration/enabling_the_analyzer.md#graphql-schema)
- [HTTP Archive](configuration/enabling_the_analyzer.md#http-archive-har) (HAR)
- [Postman Collection](configuration/enabling_the_analyzer.md#postman-collection) - version 2.0 or 2.1

### Example API fuzzing projects

- [Example OpenAPI v2 Specification project](https://gitlab.com/gitlab-org/security-products/demos/api-fuzzing-example/-/tree/openapi)
- [Example HTTP Archive (HAR) project](https://gitlab.com/gitlab-org/security-products/demos/api-fuzzing-example/-/tree/har)
- [Example Postman Collection project](https://gitlab.com/gitlab-org/security-products/demos/api-fuzzing/postman-api-fuzzing-example)
- [Example GraphQL project](https://gitlab.com/gitlab-org/security-products/demos/api-fuzzing/graphql-api-fuzzing-example)
- [Example SOAP project](https://gitlab.com/gitlab-org/security-products/demos/api-fuzzing/soap-api-fuzzing-example)
- [Authentication Token using Selenium](https://gitlab.com/gitlab-org/security-products/demos/api-fuzzing/auth-token-selenium)

## Get support or request an improvement

To get support for your particular problem, use the [getting help channels](https://about.gitlab.com/get-help/).

The [GitLab issue tracker on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues) is the right place for bugs and feature proposals about API Security and API Fuzzing. Use the `~"Category:API Security"` label when opening a new issue regarding API fuzzing to ensure it is quickly reviewed by the right people. [Search the issue tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) for similar entries before submitting your own; there's a good chance somebody else had the same issue or feature proposal. Show your support with an emoji reaction or join the discussion.

When experiencing a behavior not working as expected, consider providing contextual information:

- GitLab version, if using a GitLab Self-Managed instance.
- `.gitlab-ci.yml` job definition.
- Full job console output.
- Scanner log file, available as a job artifact named `gl-api-security-scanner.log`.

{{< alert type="warning" >}}

**Sanitize data attached to a support issue**. Remove sensitive information, including: credentials, passwords, tokens, keys, and secrets.

{{< /alert >}}

## Glossary

- Assert: Assertions are detection modules used by checks to trigger a fault. Many assertions have configurations. A check can use multiple Assertions. For example, Log Analysis, Response Analysis, and Status Code are common Assertions used together by checks. Checks with multiple Assertions allow them to be turned on and off.
- Check: Performs a specific type of test, or performed a check for a type of vulnerability. For example, the JSON Fuzzing Check performs fuzz testing of JSON payloads. The API fuzzer is composed of several checks. Checks can be turned on and off in a profile.
- Fault: During fuzzing, a failure identified by an Assert is called a fault. Faults are investigated to determine if they are a security vulnerability, a non-security issue, or a false positive. Faults don't have a known vulnerability type until they are investigated. Example vulnerability types are SQL Injection and Denial of Service.
- Profile: A configuration file has one or more testing profiles, or sub-configurations. You may have a profile for feature branches and another with extra testing for a main branch.
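For example, a profile is selected with the `FUZZAPI_PROFILE` CI/CD variable. A minimal sketch, reusing the `Quick-10` profile and the example values that appear elsewhere in this documentation:

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick-10                     # the testing profile to run
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_OPENAPI: test-api-specification.json
```

You might keep a fast profile such as this for feature branches and a more thorough one for the main branch.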
--- stage: Application Security Testing group: Dynamic Analysis info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Web API Fuzz Testing description: Testing, security, vulnerabilities, automation, and errors. breadcrumbs: - doc - user - application_security - api_fuzzing --- {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Web API fuzz testing passes unexpected values to API operation parameters to cause unexpected behavior and errors in the backend. Use fuzz testing to discover bugs and potential vulnerabilities that other QA processes might miss. You should use fuzz testing in addition to the other security scanners in [GitLab Secure](../_index.md) and your own test processes. If you're using [GitLab CI/CD](../../../ci/_index.md), you can run fuzz tests as part your CI/CD workflow. <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For an overview, see [WebAPI Fuzzing - Advanced Security Testing](https://www.youtube.com/watch?v=oUHsfvLGhDk). ## Getting started Get started with API fuzzing by editing your CI/CD configuration. Prerequisites: - A web API using one of the supported API types: - REST API - SOAP - GraphQL - Form bodies, JSON, or XML - An API specification in one of the following formats: - [OpenAPI v2 or v3 Specification](configuration/enabling_the_analyzer.md#openapi-specification) - [GraphQL Schema](configuration/enabling_the_analyzer.md#graphql-schema) - [HTTP Archive (HAR)](configuration/enabling_the_analyzer.md#http-archive-har) - [Postman Collection v2.0 or v2.1](configuration/enabling_the_analyzer.md#postman-collection) - An available [GitLab Runner](../../../ci/runners/_index.md) with the [`docker` executor](https://docs.gitlab.com/runner/executors/docker.html) on Linux/amd64. - A deployed target application. 
For more details, see the [deployment options](#application-deployment-options). - The `fuzz` stage is added to your CI/CD pipeline definition, after the `deploy` stage: ```yaml stages: - build - test - deploy - fuzz ``` To enable API fuzzing: - Use the [Web API fuzzing configuration form](configuration/enabling_the_analyzer.md#web-api-fuzzing-configuration-form). The form lets you choose values for the most common API fuzzing options, and builds a YAML snippet that you can paste in your GitLab CI/CD configuration. ## Understanding the results To view the output of a security scan: 1. On the left sidebar, select **Search or go to** and find your project. 1. On the left sidebar, select **Build > Pipelines**. 1. Select the pipeline. 1. Select the **Security** tab. 1. Select a vulnerability to view its details, including: - Status: Indicates whether the vulnerability has been triaged or resolved. - Description: Explains the cause of the vulnerability, its potential impact, and recommended remediation steps. - Severity: Categorized into six levels based on impact. For more information, see [severity levels](../vulnerabilities/severities.md). - Scanner: Identifies which analyzer detected the vulnerability. - Method: Establishes the vulnerable server interaction type. - URL: Shows the location of the vulnerability. - Evidence: Describes test case to prove the presence of a given vulnerability - Identifiers: A list of references used to classify the vulnerability, such as CWE identifiers. You can also download the security scan results: - In the pipeline's **Security** tab, select **Download results**. For more details, see the [pipeline security report](../vulnerability_report/pipeline.md). {{< alert type="note" >}} Findings are generated on feature branches. When they are merged into the default branch, they become vulnerabilities. This distinction is important when evaluating your security posture. 
{{< /alert >}} ## Optimization To get the most out of API fuzzing, follow these recommendations: - Configure runners to use the [always pull policy](https://docs.gitlab.com/runner/executors/docker.html#using-the-always-pull-policy) to run the latest versions of the analyzers. - By default, API fuzzing downloads all artifacts defined by previous jobs in the pipeline. If your API fuzzing job does not rely on `environment_url.txt` to define the URL under test or any other files created in previous jobs, you should not download artifacts. To avoid downloading artifacts, extend the analyzer CI/CD job to specify no dependencies. For example, for the API fuzzing analyzer, add the following to your `.gitlab-ci.yml` file: ```yaml apifuzzer_fuzz: dependencies: [] ``` ### Application deployment options API fuzzing requires a deployed application to be available to scan. Depending on the complexity of the target application, there are a few options as to how to deploy and configure the API fuzzing template. #### Review apps Review apps are the most involved method of deploying your API Fuzzing target application. To assist in the process, GitLab created a review app deployment using Google Kubernetes Engine (GKE). This example can be found in the [Review apps - GKE](https://gitlab.com/gitlab-org/security-products/demos/dast/review-app-gke) project, plus detailed instructions to configure review apps in DAST in the [README](https://gitlab.com/gitlab-org/security-products/demos/dast/review-app-gke/-/blob/master/README.md). #### Docker Services If your application uses Docker containers, you have another option for deploying and scanning with API fuzzing. After your Docker build job completes and your image is added to your container registry, you can use the image as a [service](../../../ci/services/_index.md). By using service definitions in your `.gitlab-ci.yml`, you can scan services with the DAST analyzer. 
When adding a `services` section to the job, the `alias` is used to define the hostname that can be used to access the service. In the following example, the `alias: yourapp` portion of the `dast` job definition means that the URL to the deployed application uses `yourapp` as the hostname (`https://yourapp/`). ```yaml stages: - build - fuzz include: - template: API-Fuzzing.gitlab-ci.yml # Deploys the container to the GitLab container registry deploy: services: - name: docker:dind alias: dind image: docker:20.10.16 stage: build script: - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY - docker pull $CI_REGISTRY_IMAGE:latest || true - docker build --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --tag $CI_REGISTRY_IMAGE:latest . - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA - docker push $CI_REGISTRY_IMAGE:latest apifuzzer_fuzz: services: # use services to link your app container to the dast job - name: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA alias: yourapp variables: FUZZAPI_TARGET_URL: https://yourapp ``` Most applications depend on multiple services such as databases or caching services. By default, services defined in the services fields cannot communicate with each another. To allow communication between services, enable the `FF_NETWORK_PER_BUILD` [feature flag](https://docs.gitlab.com/runner/configuration/feature-flags.html#available-feature-flags). ```yaml variables: FF_NETWORK_PER_BUILD: "true" # enable network per build so all services can communicate on the same network services: # use services to link the container to the dast job - name: mongo:latest alias: mongo - name: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA alias: yourapp ``` ## Roll out Web API fuzzing runs in the `fuzz` stage of the CI/CD pipeline. To ensure API fuzzing scans the latest code, your CI/CD pipeline should deploy changes to a test environment in one of the stages preceding the `fuzz` stage. 
If your pipeline is configured to deploy to the same web server on each run, running a pipeline while another is still running could cause a race condition in which one pipeline overwrites the code from another. The API to scan should be excluded from changes for the duration of a fuzzing scan. The only changes to the API should be from the fuzzing scanner. Any changes made to the API (for example, by users, scheduled tasks, database changes, code changes, other pipelines, or other scanners) during a scan could cause inaccurate results. You can run a Web API fuzzing scan using the following methods: - [OpenAPI Specification](configuration/enabling_the_analyzer.md#openapi-specification) - version 2, and 3. - [GraphQL Schema](configuration/enabling_the_analyzer.md#graphql-schema) - [HTTP Archive](configuration/enabling_the_analyzer.md#http-archive-har) (HAR) - [Postman Collection](configuration/enabling_the_analyzer.md#postman-collection) - version 2.0 or 2.1 ### Example API fuzzing projects - [Example OpenAPI v2 Specification project](https://gitlab.com/gitlab-org/security-products/demos/api-fuzzing-example/-/tree/openapi) - [Example HTTP Archive (HAR) project](https://gitlab.com/gitlab-org/security-products/demos/api-fuzzing-example/-/tree/har) - [Example Postman Collection project](https://gitlab.com/gitlab-org/security-products/demos/api-fuzzing/postman-api-fuzzing-example) - [Example GraphQL project](https://gitlab.com/gitlab-org/security-products/demos/api-fuzzing/graphql-api-fuzzing-example) - [Example SOAP project](https://gitlab.com/gitlab-org/security-products/demos/api-fuzzing/soap-api-fuzzing-example) - [Authentication Token using Selenium](https://gitlab.com/gitlab-org/security-products/demos/api-fuzzing/auth-token-selenium) ## Get support or request an improvement To get support for your particular problem use the [getting help channels](https://about.gitlab.com/get-help/). 
The [GitLab issue tracker on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues) is the right place for bugs and feature proposals about API Security and API Fuzzing. Use the `~"Category:API Security"` label when opening a new issue regarding API fuzzing to ensure it is quickly reviewed by the right people.

[Search the issue tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) for similar entries before submitting your own; there's a good chance somebody else had the same issue or feature proposal. Show your support with an emoji reaction or join the discussion.

When experiencing a behavior not working as expected, consider providing contextual information:

- GitLab version if using a GitLab Self-Managed instance.
- `.gitlab-ci.yml` job definition.
- Full job console output.
- Scanner log file available as a job artifact named `gl-api-security-scanner.log`.

{{< alert type="warning" >}}

**Sanitize data attached to a support issue**. Remove sensitive information, including: credentials, passwords, tokens, keys, and secrets.

{{< /alert >}}

## Glossary

- Assert: Assertions are detection modules used by checks to trigger a fault. Many assertions have configurations. A check can use multiple assertions. For example, Log Analysis, Response Analysis, and Status Code are common assertions used together by checks. In checks with multiple assertions, each assertion can be turned on and off.
- Check: Performs a specific type of test, or performs a check for a type of vulnerability. For example, the JSON Fuzzing Check performs fuzz testing of JSON payloads. The API fuzzer is composed of several checks. Checks can be turned on and off in a profile.
- Fault: During fuzzing, a failure identified by an assert is called a fault. Faults are investigated to determine if they are a security vulnerability, a non-security issue, or a false positive. Faults don't have a known vulnerability type until they are investigated. Example vulnerability types are SQL Injection and Denial of Service.
- Profile: A configuration file has one or more testing profiles, or sub-configurations. You may have a profile for feature branches and another with extra testing for a main branch.
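As a concrete illustration of profiles, the sketch below selects one of the built-in testing profiles through a CI/CD variable. This is a minimal, hedged example: it assumes the standard API Fuzzing template and the documented `FUZZAPI_*` variables, and the specification file name and target URL are placeholders, not values from this page.

```yaml
include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick-10                       # testing profile to use
  FUZZAPI_OPENAPI: test-api-specification.json    # placeholder OpenAPI document
  FUZZAPI_TARGET_URL: http://test-deployment/     # placeholder deployed API instance
```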
---
stage: Application Security Testing
group: Dynamic Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Performance tuning and testing speed
---
Security tools that perform API fuzz testing, such as API Fuzzing, perform testing by sending requests to an instance of your running application. The requests are mutated by our fuzzing engine to trigger unexpected behavior that might exist in your application.

The speed of an API fuzzing test depends on the following:

- How many requests per second can be sent to your application by our tooling
- How fast your application responds to requests
- How many requests must be sent to test the application
- How many operations your API consists of
- How many fields are in each operation (think JSON bodies, headers, query string, cookies, and so on)

If the API Fuzzing job still takes longer than expected after following the advice in this performance guide, reach out to support for further assistance.

## Diagnosing performance issues

The first step to resolving performance issues is to understand what is contributing to the slower-than-expected testing time. Some common issues we see are:

- API Fuzzing is running on a low-vCPU runner
- The application is deployed to a slow or single-CPU instance and is not able to keep up with the testing load
- The application contains a slow operation that impacts the overall test speed (> 1/2 second)
- The application contains an operation that returns a large amount of data (> 500K+)
- The application contains a large number of operations (> 40)

### The application contains a slow operation that impacts the overall test speed (> 1/2 second)

The API Fuzzing job output contains helpful information about how fast we are testing, how fast each operation being tested responds, and summary information. Let's take a look at some sample output to see how it can be used in tracking down performance issues:

```shell
API Fuzzing: Loaded 10 operations from: assets/har-large-response/large_responses.har
API Fuzzing:
API Fuzzing: Testing operation [1/10]: 'GET http://target:7777/api/large_response_json'.
API Fuzzing: - Parameters: (Headers: 4, Query: 0, Body: 0)
API Fuzzing: - Request body size: 0 Bytes (0 bytes)
API Fuzzing:
API Fuzzing: Finished testing operation 'GET http://target:7777/api/large_response_json'.
API Fuzzing: - Excluded Parameters: (Headers: 0, Query: 0, Body: 0)
API Fuzzing: - Performed 767 requests
API Fuzzing: - Average response body size: 130 MB
API Fuzzing: - Average call time: 2 seconds and 82.69 milliseconds (2.082693 seconds)
API Fuzzing: - Time to complete: 14 minutes, 8 seconds and 788.36 milliseconds (848.788358 seconds)
```

This job console output snippet starts by telling us how many operations were found (10), followed by notifications that testing has started on a specific operation and a summary of the completed operation.

The summary is the most interesting part of this log output. In the summary, we can see that it took API Fuzzing 767 requests to fully test this operation and its related fields. We can also see that the average response time was 2 seconds and the time to complete was 14 minutes for this one operation.

An average response time of 2 seconds is a good initial indicator that this specific operation takes a long time to test. Further, we can see that the response body size is quite large. The large body size is the culprit here; transferring that much data on each request is what takes the majority of that 2 seconds.

For this issue, the team might decide to:

- Use a runner with more vCPUs, because this allows API Fuzzing to parallelize the work being performed. This helps lower the test time, but getting the test down under 10 minutes might still be problematic without moving to a high-CPU machine due to how long the operation takes to test. While larger runners are more costly, you also pay for fewer minutes if the job executions are quicker.
- [Exclude this operation](#excluding-slow-operations) from the API Fuzzing test. While this is the simplest option, it has the downside of a gap in security test coverage.
- [Exclude the operation from feature branch API Fuzzing tests, but include it in the default branch test](#excluding-operations-in-feature-branches-but-not-default-branch).
- [Split up the API Fuzzing testing into multiple jobs](#splitting-a-test-into-multiple-jobs).

The likely solution is to use a combination of these approaches to reach an acceptable test time, assuming your team's requirements are in the 5-7 minute range.

## Addressing performance issues

The following sections document various options for addressing performance issues for API Fuzzing:

- [Using a larger runner](#using-a-larger-runner)
- [Excluding slow operations](#excluding-slow-operations)
- [Splitting a test into multiple jobs](#splitting-a-test-into-multiple-jobs)
- [Excluding operations in feature branches, but not default branch](#excluding-operations-in-feature-branches-but-not-default-branch)

### Using a larger runner

One of the easiest performance boosts can be achieved by using a [larger runner](../../../ci/runners/hosted_runners/linux.md#machine-types-available-for-linux---x86-64) with API Fuzzing.

This table shows statistics collected during benchmarking of a Java Spring Boot REST API. In this benchmark, the target and API Fuzzing share a single runner instance.

| Hosted runner on Linux tag         | Requests per second |
|------------------------------------|---------------------|
| `saas-linux-small-amd64` (default) | 255                 |
| `saas-linux-medium-amd64`          | 400                 |

As we can see from this table, increasing the size of the runner and vCPU count can have a large impact on testing speed/performance.

Here is an example job definition for API Fuzzing that adds a `tags` section to use the medium SaaS runner on Linux. The job extends the job definition included through the API Fuzzing template.

```yaml
apifuzzer_fuzz:
  tags:
    - saas-linux-medium-amd64
```

In the `gl-api-security-scanner.log` file you can search for the string `Starting work item processor` to inspect the reported max DOP (degree of parallelism). The max DOP should be greater than or equal to the number of vCPUs assigned to the runner. If you are unable to identify the problem, open a ticket with support for assistance.

Example log entry:

`17:00:01.084 [INF] <Peach.Web.Core.Services.WebRunnerMachine> Starting work item processor with 4 max DOP`

### Excluding slow operations

In the case of one or two slow operations, the team might decide to skip testing them. Excluding an operation is done using the `FUZZAPI_EXCLUDE_PATHS` configuration variable, [as explained in this section](configuration/customizing_analyzer_settings.md#exclude-paths).

In this example, we have an operation that returns a large amount of data. The operation is `GET http://target:7777/api/large_response_json`. To exclude it, we provide the `FUZZAPI_EXCLUDE_PATHS` configuration variable with the path portion of our operation URL, `/api/large_response_json`.

To verify the operation is excluded, run the API Fuzzing job and review the job console output. It includes a list of included and excluded operations at the end of the test.

```yaml
apifuzzer_fuzz:
  variables:
    FUZZAPI_EXCLUDE_PATHS: /api/large_response_json
```

{{< alert type="warning" >}}

Excluding operations from testing could allow some vulnerabilities to go undetected.

{{< /alert >}}

### Splitting a test into multiple jobs

Splitting a test into multiple jobs is supported by API Fuzzing through the use of [`FUZZAPI_EXCLUDE_PATHS`](configuration/customizing_analyzer_settings.md#exclude-paths) and [`FUZZAPI_EXCLUDE_URLS`](configuration/customizing_analyzer_settings.md#exclude-urls). When splitting up a test, a good pattern is to disable the `apifuzzer_fuzz` job and replace it with two jobs with identifying names.

In this example we have two jobs, and each job tests a version of the API, so our names reflect that. However, this technique can be applied to any situation, not just versions of an API.
The rules we are using in the `apifuzzer_v1` and `apifuzzer_v2` jobs are copied from the [API Fuzzing template](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Security/DAST-API.gitlab-ci.yml).

```yaml
# Disable the main apifuzzer_fuzz job
apifuzzer_fuzz:
  rules:
    - if: $CI_COMMIT_BRANCH
      when: never

apifuzzer_v1:
  extends: apifuzzer_fuzz
  variables:
    FUZZAPI_EXCLUDE_PATHS: /api/v1/**
  rules:
    - if: $API_FUZZING_DISABLED == 'true' || $API_FUZZING_DISABLED == '1'
      when: never
    - if: $API_FUZZING_DISABLED_FOR_DEFAULT_BRANCH == 'true' && $CI_DEFAULT_BRANCH == $CI_COMMIT_REF_NAME
      when: never
    - if: $API_FUZZING_DISABLED_FOR_DEFAULT_BRANCH == '1' && $CI_DEFAULT_BRANCH == $CI_COMMIT_REF_NAME
      when: never
    - if: $CI_COMMIT_BRANCH && $CI_GITLAB_FIPS_MODE == "true"
      variables:
        FUZZAPI_IMAGE_SUFFIX: "-fips"
    - if: $CI_COMMIT_BRANCH

apifuzzer_v2:
  extends: apifuzzer_fuzz
  variables:
    FUZZAPI_EXCLUDE_PATHS: /api/v2/**
  rules:
    - if: $API_FUZZING_DISABLED == 'true' || $API_FUZZING_DISABLED == '1'
      when: never
    - if: $API_FUZZING_DISABLED_FOR_DEFAULT_BRANCH && $CI_DEFAULT_BRANCH == $CI_COMMIT_REF_NAME
      when: never
    - if: $CI_COMMIT_BRANCH && $CI_GITLAB_FIPS_MODE == "true"
      variables:
        FUZZAPI_IMAGE_SUFFIX: "-fips"
    - if: $CI_COMMIT_BRANCH
```

### Excluding operations in feature branches, but not default branch

In the case of one or two slow operations, the team might decide to skip testing them, or exclude them from feature branch tests but include them for default branch tests. Excluding an operation is done using the `FUZZAPI_EXCLUDE_PATHS` configuration variable, [as explained in this section](configuration/customizing_analyzer_settings.md#exclude-paths).

In this example, we have an operation that returns a large amount of data. The operation is `GET http://target:7777/api/large_response_json`. To exclude it, we provide the `FUZZAPI_EXCLUDE_PATHS` configuration variable with the path portion of our operation URL, `/api/large_response_json`.
Our configuration disables the main `apifuzzer_fuzz` job and creates two new jobs, `apifuzzer_main` and `apifuzzer_branch`. The `apifuzzer_branch` job is set up to exclude the long operation and only run on non-default branches (for example, feature branches). The `apifuzzer_main` job is set up to only execute on the default branch (`main` in this example). The `apifuzzer_branch` jobs run faster, allowing for quick development cycles, while the `apifuzzer_main` job, which only runs on default branch builds, takes longer to run.

To verify the operation is excluded, run the API Fuzzing job and review the job console output. It includes a list of included and excluded operations at the end of the test.

```yaml
# Disable the main job so we can create two jobs with
# different names
apifuzzer_fuzz:
  rules:
    - if: $CI_COMMIT_BRANCH
      when: never

# API Fuzzing for feature branch work, excludes /api/large_response_json
apifuzzer_branch:
  extends: apifuzzer_fuzz
  variables:
    FUZZAPI_EXCLUDE_PATHS: /api/large_response_json
  rules:
    - if: $API_FUZZING_DISABLED == 'true' || $API_FUZZING_DISABLED == '1'
      when: never
    - if: $API_FUZZING_DISABLED_FOR_DEFAULT_BRANCH && $CI_DEFAULT_BRANCH == $CI_COMMIT_REF_NAME
      when: never
    - if: $CI_COMMIT_BRANCH && $CI_GITLAB_FIPS_MODE == "true"
      variables:
        FUZZAPI_IMAGE_SUFFIX: "-fips"
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: never
    - if: $CI_COMMIT_BRANCH

# API Fuzzing for default branch (main in our case)
# Includes the long running operations
apifuzzer_main:
  extends: apifuzzer_fuzz
  rules:
    - if: $API_FUZZING_DISABLED == 'true' || $API_FUZZING_DISABLED == '1'
      when: never
    - if: $API_FUZZING_DISABLED_FOR_DEFAULT_BRANCH && $CI_DEFAULT_BRANCH == $CI_COMMIT_REF_NAME
      when: never
    - if: $CI_COMMIT_BRANCH && $CI_GITLAB_FIPS_MODE == "true"
      variables:
        FUZZAPI_IMAGE_SUFFIX: "-fips"
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```
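The timing figures in the sample job console output earlier on this page can also be used for back-of-the-envelope planning. The sketch below is not part of the scanner; it is a rough estimate using the request count and average call time reported in the log:

```python
def estimated_serial_seconds(request_count: int, avg_response_seconds: float) -> float:
    """Rough serial lower bound for testing one operation, ignoring scanner overhead."""
    return request_count * avg_response_seconds

# Figures reported for the slow operation in the sample job output.
serial = estimated_serial_seconds(767, 2.082693)   # about 1597 seconds (~26.6 minutes)
observed = 848.788358                              # "Time to complete" from the log

# The observed time is shorter than the serial estimate because the scanner
# issues requests in parallel; the ratio approximates the effective parallelism.
effective_parallelism = serial / observed          # roughly 1.9
print(round(serial), round(effective_parallelism, 1))
```

This is why a runner with more vCPUs helps: raising the effective parallelism divides the serial estimate, but the per-request response time itself is set by the application.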
---
stage: Application Security Testing
group: Dynamic Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Create HAR Files
---
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

HTTP Archive (HAR) format files are an industry standard for exchanging information about HTTP requests and HTTP responses. A HAR file's content is JSON formatted, containing browser interactions with a web site. The file extension `.har` is commonly used.

HAR files can be used to perform [web API Fuzz Testing](configuration/enabling_the_analyzer.md#http-archive-har) as part of your [GitLab CI/CD](../../../ci/_index.md) pipelines.

{{< alert type="warning" >}}

A HAR file stores information exchanged between web client and web server. It could also store sensitive information such as authentication tokens, API keys, and session cookies. We recommend that you review the HAR file contents before adding them to a repository.

{{< /alert >}}

## HAR file creation

You can create HAR files manually or by using a specialized tool for recording web sessions. We recommend using a specialized tool. However, it is important to make sure files created by these tools do not expose sensitive information, and can be safely used.

The following tools can be used to generate a HAR file based on your network activity. They automatically record your network activity and generate the HAR file:

1. [GitLab HAR Recorder](#gitlab-har-recorder).
1. [Insomnia API Client](#insomnia-api-client).
1. [Fiddler debugging proxy](#fiddler-debugging-proxy).
1. [Safari web browser](#safari-web-browser).
1. [Chrome web browser](#chrome-web-browser).
1. [Firefox web browser](#firefox-web-browser).

{{< alert type="warning" >}}

HAR files may contain sensitive information such as authentication tokens, API keys, and session cookies. We recommend that you review the HAR file contents before adding them to a repository.

{{< /alert >}}

### GitLab HAR Recorder

[GitLab HAR Recorder](https://gitlab.com/gitlab-org/security-products/har-recorder) is a command line tool for recording HTTP messages and saving them to HTTP Archive (HAR) files. For more details about the GitLab HAR Recorder, see the [homepage](https://gitlab.com/gitlab-org/security-products/har-recorder).

#### Install GitLab HAR Recorder

Prerequisites:

- Install Python 3.6 or greater.
- For Microsoft Windows, you must also install `Microsoft Visual C++ 14.0`. It's included with *Build Tools for Visual Studio* from the [Visual Studio Downloads page](https://visualstudio.microsoft.com/downloads/).

Install GitLab HAR Recorder:

```shell
pip install gitlab-har-recorder --extra-index-url https://gitlab.com/api/v4/projects/22441624/packages/pypi/simple
```

#### Create a HAR file with GitLab HAR Recorder

1. Start the recorder with the proxy port and HAR filename.
1. Complete the browser actions, using the proxy. Make sure the proxy is used!
1. Stop the recorder.

To verify the HAR contains all requests, use an online HAR viewer, for example:

- [HAR Viewer](http://www.softwareishard.com/har/viewer/)
- [Google Admin Toolbox HAR Analyzer](https://toolbox.googleapps.com/apps/har_analyzer/)

### Insomnia API Client

[Insomnia API Client](https://insomnia.rest/) is an API design tool that, among many uses, helps you to design, describe, and test your API. You can also use it to generate HAR files that can be used in [Web API Fuzz Testing](configuration/enabling_the_analyzer.md#http-archive-har).

#### Create a HAR file with the Insomnia API Client

1. Define or import your API.
   - Postman v2.
   - Curl.
   - OpenAPI v2, v3.
1. Verify each API call works.
   - If you imported an OpenAPI specification, go through and add working data.
1. Select **API > Import/Export**.
1. Select **Export Data > Current Workspace**.
1. Select requests to include in the HAR file.
1. Select **Export**.
1. In the **Select Export Type** dropdown list select **HAR -- HTTP Archive Format**.
1. Select **Done**.
1. Enter a location and filename for the HAR file.

### Fiddler debugging proxy

[Fiddler](https://www.telerik.com/fiddler) is a web debugger tool. It captures HTTP and HTTP(S) network traffic and allows you to examine each request. It also lets you export the requests and responses in HAR format.

#### Create a HAR file with Fiddler

1. Go to the [Fiddler home page](https://www.telerik.com/fiddler) and sign in. If you don't already have an account, first create an account.
1. Browse pages that call an API. Fiddler automatically captures the requests.
1. Select one or more requests, then from the context menu, select **Export > Selected Sessions**.
1. In the **Choose Format** dropdown list select **HTTPArchive v1.2**.
1. Enter a filename and select **Save**.

Fiddler shows a popup message confirming the export has succeeded.

### Safari web browser

[Safari](https://www.apple.com/safari/) is a web browser maintained by Apple. As web development evolves, browsers support new capabilities. With Safari you can explore network traffic and export it as a HAR file.

#### Create a HAR file with Safari

Prerequisites:

- Enable the `Develop` menu item.

1. Open Safari's preferences. Press <kbd>Command</kbd>+<kbd>,</kbd> or from the menu, select **Safari > Preferences**.
1. Select the **Advanced** tab, then select `Show Develop menu item in menu bar`.
1. Close the **Preferences** window.
1. Open the **Web Inspector**. Press <kbd>Option</kbd>+<kbd>Command</kbd>+<kbd>i</kbd>, or from the menu, select **Develop > Show Web Inspector**.
1. Select the **Network** tab, and select **Preserve Log**.
1. Browse pages that call the API.
1. Open the **Web Inspector** and select the **Network** tab.
1. Right-click the request to export and select **Export HAR**.
1. Enter a filename and select **Save**.
### Chrome web browser

[Chrome](https://www.google.com/chrome/) is a web browser maintained by Google. As web development evolves, browsers support new capabilities. With Chrome you can explore network traffic and export it as a HAR file.

#### Create a HAR file with Chrome

1. From the Chrome context menu, select **Inspect**.
1. Select the **Network** tab.
1. Select **Preserve log**.
1. Browse pages that call the API.
1. Select one or more requests.
1. Right-click and select **Save all as HAR with content**.
1. Enter a filename and select **Save**.
1. To append additional requests, select and save them to the same file.

### Firefox web browser

[Firefox](https://www.mozilla.org/en-US/firefox/new/) is a web browser maintained by Mozilla. As web development evolves, browsers support new capabilities. With Firefox you can explore network traffic and export it as a HAR file.

#### Create a HAR file with Firefox

1. From the Firefox context menu, select **Inspect**.
1. Select the **Network** tab.
1. Browse pages that call the API.
1. Check the **Network** tab and confirm requests are being recorded. If there is a message `Perform a request or Reload the page to see detailed information about network activity`, select **Reload** to start recording requests.
1. Select one or more requests.
1. Right-click and select **Save All As HAR**.
1. Enter a filename and select **Save**.
1. To append additional requests, select and save them to the same file.

## HAR verification

Before using HAR files, it's important to make sure they don't expose any sensitive information. For each HAR file you should:

- View the HAR file's content
- Review the HAR file for sensitive information
- Edit or remove sensitive information

### View HAR file contents

We recommend viewing a HAR file's content in a tool that can present its content in a structured way. Several HAR file viewers are available online. If you would prefer not to upload the HAR file, you can use a tool installed on your computer.
HAR files used JSON format, so can also be viewed in a text editor. Tools recommended for viewing HAR files include: - [HAR Viewer](http://www.softwareishard.com/har/viewer/) - (online) - [Google Admin Toolbox HAR Analyzer](https://toolbox.googleapps.com/apps/har_analyzer/) - (online) - [Fiddler](https://www.telerik.com/fiddler) - local - [Insomnia API Client](https://insomnia.rest/) - local ## Review HAR file content Review the HAR file for any of the following: - Information that could help to grant access to your application, for example: authentication tokens, authentication tokens, cookies, API keys. - [Personally Identifiable Information (PII)](https://en.wikipedia.org/wiki/Personal_data). We strongly recommended that you [edit or remove it](#edit-or-remove-sensitive-information) any sensitive information. Use the following as a checklist to start with. It's not an exhaustive list. - Look for secrets. For example: if your application requires authentication, check common locations or authentication information: - Authentication related headers. For example: cookies, authorization. These headers could contain valid information. - A request related to authentication. The body of these requests might contain information such as user credentials or tokens. - Session tokens. Session tokens could grant access to your application. The location of these token could vary. They could be in headers, query parameters or body. - Look for Personally Identifiable Information - For example, if your application retrieves a list of users and their personal data: phones, names, emails. - Authentication information might also contain personal information. ## Edit or remove sensitive information Edit or remove sensitive information found during the [HAR file content review](#review-har-file-content). HAR files are JSON files and can be edited in any text editor. After editing the HAR file, open it in a HAR file viewer to verify its formatting and structure are intact. 
The following example demonstrates use of [Visual Studio Code](https://code.visualstudio.com/) text editor to edit an Authorization token found in a header. ![Authorization token edited in Visual Studio Code](img/vscode_har_edit_auth_header_v13_12.png)
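As an illustration of what to look for, a single entry in a HAR file's `log.entries` array might contain a header like the following (all values here are made up). The `Authorization` value is the kind of secret you would edit or remove before committing the file:

```json
{
  "request": {
    "method": "GET",
    "url": "https://target/api/users",
    "headers": [
      { "name": "Accept", "value": "application/json" },
      { "name": "Authorization", "value": "Bearer REDACTED-EXAMPLE-TOKEN" }
    ]
  }
}
```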
---
stage: Application Security Testing
group: Dynamic Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Create HAR Files
---

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

HTTP Archive (HAR) format files are an industry standard for exchanging information about HTTP requests and HTTP responses. A HAR file's content is JSON formatted, containing browser interactions with a web site. The file extension `.har` is commonly used.

The HAR files can be used to perform [web API Fuzz Testing](configuration/enabling_the_analyzer.md#http-archive-har) as part of your [GitLab CI/CD](../../../ci/_index.md) pipelines.

{{< alert type="warning" >}}

A HAR file stores information exchanged between web client and web server. It could also store sensitive information such as authentication tokens, API keys, and session cookies. We recommend that you review the HAR file contents before adding them to a repository.

{{< /alert >}}

## HAR file creation

You can create HAR files manually or by using a specialized tool for recording web sessions. We recommend using a specialized tool. However, it is important to make sure files created by these tools do not expose sensitive information, and can be safely used.

The following tools can be used to generate a HAR file based on your network activity. They automatically record your network activity and generate the HAR file:

1. [GitLab HAR Recorder](#gitlab-har-recorder).
1. [Insomnia API Client](#insomnia-api-client).
1. [Fiddler debugging proxy](#fiddler-debugging-proxy).
1. [Safari web browser](#safari-web-browser).
1. [Chrome web browser](#chrome-web-browser).
1. [Firefox web browser](#firefox-web-browser).

{{< alert type="warning" >}}

HAR files may contain sensitive information such as authentication tokens, API keys, and session cookies. We recommend that you review the HAR file contents before adding them to a repository.

{{< /alert >}}

### GitLab HAR Recorder

[GitLab HAR Recorder](https://gitlab.com/gitlab-org/security-products/har-recorder) is a command line tool for recording HTTP messages and saving them to HTTP Archive (HAR) files. For more details about the GitLab HAR Recorder, see the [homepage](https://gitlab.com/gitlab-org/security-products/har-recorder).

#### Install GitLab HAR Recorder

Prerequisites:

- Install Python 3.6 or greater.
- For Microsoft Windows, you must also install `Microsoft Visual C++ 14.0`. It's included with *Build Tools for Visual Studio* from the [Visual Studio Downloads page](https://visualstudio.microsoft.com/downloads/).

Install GitLab HAR Recorder:

```shell
pip install gitlab-har-recorder --extra-index-url https://gitlab.com/api/v4/projects/22441624/packages/pypi/simple
```

#### Create a HAR file with GitLab HAR Recorder

1. Start the recorder with the proxy port and HAR filename.
1. Complete the browser actions, using the proxy.
   1. Make sure the proxy is used!
1. Stop the recorder.

To verify the HAR contains all requests, use an online HAR viewer, for example:

- [HAR Viewer](http://www.softwareishard.com/har/viewer/)
- [Google Admin Toolbox HAR Analyzer](https://toolbox.googleapps.com/apps/har_analyzer/)

### Insomnia API Client

[Insomnia API Client](https://insomnia.rest/) is an API design tool that, among many uses, helps you to design, describe, and test your API. You can also use it to generate HAR files that can be used in [Web API Fuzz Testing](configuration/enabling_the_analyzer.md#http-archive-har).

#### Create a HAR file with the Insomnia API Client

1. Define or import your API.
   - Postman v2.
   - Curl.
   - OpenAPI v2, v3.
1. Verify each API call works.
   - If you imported an OpenAPI specification, go through and add working data.
1. Select **API > Import/Export**.
1. Select **Export Data > Current Workspace**.
1. Select requests to include in the HAR file.
1. Select **Export**.
1. In the **Select Export Type** dropdown list, select **HAR -- HTTP Archive Format**.
1. Select **Done**.
1. Enter a location and filename for the HAR file.
https://docs.gitlab.com/user/application_security/api_fuzzing/overriding_analyzer_jobs
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_fuzzing/overriding_analyzer_jobs.md
2025-08-13
overriding_analyzer_jobs.md
Overriding API Fuzzing jobs
To override a job definition (for example, to change properties like `variables`, `dependencies`, or [`rules`](../../../../ci/yaml/_index.md#rules)), declare a job with the same name as the API Fuzzing job to override. Place this new job after the template inclusion and specify any additional keys under it. For example, this sets the target API's base URL:

```yaml
include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

apifuzzing_fuzz:
  variables:
    FUZZAPI_TARGET_URL: https://target/api
```
https://docs.gitlab.com/user/application_security/api_fuzzing/variables
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_fuzzing/variables.md
variables.md
Available CI/CD variables
| CI/CD variable | Description |
|----------------|-------------|
| `SECURE_ANALYZERS_PREFIX` | Specify the Docker registry base address from which to download the analyzer. |
| `FUZZAPI_VERSION` | Specify API Fuzzing container version. Defaults to `5`. |
| `FUZZAPI_IMAGE_SUFFIX` | Specify a container image suffix. Defaults to none. |
| `FUZZAPI_API_PORT` | Specify the communication port number used by the API Fuzzing engine. Defaults to `5500`. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/367734) in GitLab 15.5. |
| `FUZZAPI_TARGET_URL` | Base URL of API testing target. |
| `FUZZAPI_TARGET_CHECK_SKIP` | Disable waiting for the target to become available. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/442699) in GitLab 17.1. |
| `FUZZAPI_TARGET_CHECK_STATUS_CODE` | Provide the expected status code for the target availability check. If not provided, any non-500 status code is acceptable. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/442699) in GitLab 17.1. |
| [`FUZZAPI_PROFILE`](customizing_analyzer_settings.md#api-fuzzing-profiles) | Configuration profile to use during testing. Defaults to `Quick-10`. |
| [`FUZZAPI_EXCLUDE_PATHS`](customizing_analyzer_settings.md#exclude-paths) | Exclude API URL paths from testing. |
| [`FUZZAPI_EXCLUDE_URLS`](customizing_analyzer_settings.md#exclude-urls) | Exclude API URLs from testing. |
| [`FUZZAPI_EXCLUDE_PARAMETER_ENV`](customizing_analyzer_settings.md#exclude-parameters) | JSON string containing excluded parameters. |
| [`FUZZAPI_EXCLUDE_PARAMETER_FILE`](customizing_analyzer_settings.md#exclude-parameters) | Path to a JSON file containing excluded parameters. |
| [`FUZZAPI_OPENAPI`](enabling_the_analyzer.md#openapi-specification) | OpenAPI Specification file or URL. |
| [`FUZZAPI_OPENAPI_RELAXED_VALIDATION`](enabling_the_analyzer.md#openapi-specification) | Relax document validation. Default is disabled. |
| [`FUZZAPI_OPENAPI_ALL_MEDIA_TYPES`](enabling_the_analyzer.md#openapi-specification) | Use all supported media types instead of one when generating requests. Causes test duration to be longer. Default is disabled. |
| [`FUZZAPI_OPENAPI_MEDIA_TYPES`](enabling_the_analyzer.md#openapi-specification) | Colon (`:`) separated media types accepted for testing. Default is disabled. |
| [`FUZZAPI_HAR`](enabling_the_analyzer.md#http-archive-har) | HTTP Archive (HAR) file. |
| [`FUZZAPI_GRAPHQL`](enabling_the_analyzer.md#graphql-schema) | Path to the GraphQL endpoint, for example `/api/graphql`. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/352780) in GitLab 15.4. |
| [`FUZZAPI_GRAPHQL_SCHEMA`](enabling_the_analyzer.md#graphql-schema) | A URL or filename for a GraphQL schema in JSON format. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/352780) in GitLab 15.4. |
| [`FUZZAPI_POSTMAN_COLLECTION`](enabling_the_analyzer.md#postman-collection) | Postman Collection file. |
| [`FUZZAPI_POSTMAN_COLLECTION_VARIABLES`](enabling_the_analyzer.md#postman-variables) | Path to a JSON file to extract Postman variable values. Support for comma-separated (`,`) files was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/356312) in GitLab 15.1. |
| [`FUZZAPI_OVERRIDES_FILE`](customizing_analyzer_settings.md#overrides) | Path to a JSON file containing overrides. |
| [`FUZZAPI_OVERRIDES_ENV`](customizing_analyzer_settings.md#overrides) | JSON string containing headers to override. |
| [`FUZZAPI_OVERRIDES_CMD`](customizing_analyzer_settings.md#overrides) | Overrides command. |
| [`FUZZAPI_OVERRIDES_CMD_VERBOSE`](customizing_analyzer_settings.md#overrides) | When set to any value, shows the overrides command output as part of the job output. |
| `FUZZAPI_PER_REQUEST_SCRIPT` | Full path and filename for a per-request script. [See the demo project for examples.](https://gitlab.com/gitlab-org/security-products/demos/api-dast/auth-with-request-example) [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/13691) in GitLab 17.2. |
| `FUZZAPI_PRE_SCRIPT` | Run a user command or script before the scan session starts. `sudo` must be used for privileged operations like installing packages. |
| `FUZZAPI_POST_SCRIPT` | Run a user command or script after the scan session has finished. `sudo` must be used for privileged operations like installing packages. |
| [`FUZZAPI_OVERRIDES_INTERVAL`](customizing_analyzer_settings.md#overrides) | How often to run the overrides command, in seconds. Defaults to `0` (once). |
| [`FUZZAPI_HTTP_USERNAME`](customizing_analyzer_settings.md#http-basic-authentication) | Username for HTTP authentication. |
| [`FUZZAPI_HTTP_PASSWORD`](customizing_analyzer_settings.md#http-basic-authentication) | Password for HTTP authentication. |
| [`FUZZAPI_HTTP_PASSWORD_BASE64`](customizing_analyzer_settings.md#http-basic-authentication) | Password for HTTP authentication, Base64-encoded. [Introduced](https://gitlab.com/gitlab-org/security-products/analyzers/api-fuzzing-src/-/merge_requests/702) in GitLab 15.4. |
| `FUZZAPI_SUCCESS_STATUS_CODES` | Specify a comma-separated (`,`) list of HTTP success status codes that determine whether an API Fuzzing scanning job has passed. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/442219) in GitLab 17.1. Example: `'200, 201, 204'`. |
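For example, a few of these variables can be set together in a `.gitlab-ci.yml` file. The specification filename, target URL, username, and the `TEST_API_PASSWORD` CI/CD variable name below are illustrative placeholders, not required values:

```yaml
include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick-10
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: https://target/api
  FUZZAPI_HTTP_USERNAME: testuser
  FUZZAPI_HTTP_PASSWORD: "$TEST_API_PASSWORD"  # store the real value in a masked CI/CD variable
```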
https://docs.gitlab.com/user/application_security/api_fuzzing/configuration
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_fuzzing/_index.md
_index.md
Configuration
- [Requirements](../_index.md)
- [Enabling the analyzer](enabling_the_analyzer.md)
- [Customize analyzer settings](customizing_analyzer_settings.md)
- [Overriding analyzer jobs](overriding_analyzer_jobs.md)
- [Available CI/CD variables](variables.md)
- [Offline configuration](offline_configuration.md)
https://docs.gitlab.com/user/application_security/api_fuzzing/enabling_the_analyzer
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_fuzzing/enabling_the_analyzer.md
enabling_the_analyzer.md
Enabling the analyzer
Prerequisites: - One of the following web API types: - REST API - SOAP - GraphQL - Form bodies, JSON, or XML - One of the following assets to provide APIs to test: - OpenAPI v2 or v3 API definition - HTTP Archive (HAR) of API requests to test - Postman Collection v2.0 or v2.1 {{< alert type="warning" >}} **Never** run fuzz testing against a production server. Not only can it perform any function that the API can, it may also trigger bugs in the API. This includes actions like modifying and deleting data. Only run fuzzing against a test server. {{< /alert >}} To enable Web API fuzzing use the Web API fuzzing configuration form. - For manual configuration instructions, see the respective section, depending on the API type: - [OpenAPI Specification](#openapi-specification) - [GraphQL Schema](#graphql-schema) - [HTTP Archive (HAR)](#http-archive-har) - [Postman Collection](#postman-collection) - Otherwise, see [Web API fuzzing configuration form](#web-api-fuzzing-configuration-form). API fuzzing configuration files must be in your repository's `.gitlab` directory. ## Web API fuzzing configuration form The API fuzzing configuration form helps you create or modify your project's API fuzzing configuration. The form lets you choose values for the most common API fuzzing options and builds a YAML snippet that you can paste in your GitLab CI/CD configuration. ### Configure Web API fuzzing in the UI To generate an API Fuzzing configuration snippet: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Secure > Security configuration**. 1. In the **API Fuzzing** row, select **Enable API Fuzzing**. 1. Complete the fields. For details see [Available CI/CD variables](variables.md). 1. Select **Generate code snippet**. A modal opens with the YAML snippet corresponding to the options you've selected in the form. 1. Do one of the following: 1. To copy the snippet to your clipboard, select **Copy code only**. 1. 
To add the snippet to your project's `.gitlab-ci.yml` file, select **Copy code and open `.gitlab-ci.yml` file**. The pipeline editor opens. 1. Paste the snippet into the `.gitlab-ci.yml` file. 1. Select the **Lint** tab to confirm the edited `.gitlab-ci.yml` file is valid. 1. Select the **Edit** tab, then select **Commit changes**. When the snippet is committed to the `.gitlab-ci.yml` file, pipelines include an API Fuzzing job. ## OpenAPI Specification The [OpenAPI Specification](https://www.openapis.org/) (formerly the Swagger Specification) is an API description format for REST APIs. This section shows you how to configure API fuzzing using an OpenAPI Specification to provide information about the target API to test. OpenAPI Specifications are provided as a file system resource or URL. Both JSON and YAML OpenAPI formats are supported. API fuzzing uses an OpenAPI document to generate the request body. When a request body is required, the body generation is limited to these body types: - `application/x-www-form-urlencoded` - `multipart/form-data` - `application/json` - `application/xml` ## OpenAPI and media types A media type (formerly known as MIME type) is an identifier for file formats and format contents transmitted. A OpenAPI document lets you specify that a given operation can accept different media types, hence a given request can send data using different file content. As for example, a `PUT /user` operation to update user data could accept data in either XML (media type `application/xml`) or JSON (media type `application/json`) format. OpenAPI 2.x lets you specify the accepted media types globally or per operation, and OpenAPI 3.x lets you specify the accepted media types per operation. API Fuzzing checks the listed media types and tries to produce sample data for each supported media type. - The default behavior is to select one of the supported media types to use. The first supported media type is chosen from the list. This behavior is configurable. 
Testing the same operation (for example, `POST /user`) using different media types (for example, `application/json` and `application/xml`) is not always desirable. For example, if the target application executes the same code regardless of the request content type, it takes longer to finish the test session, and it may report duplicate vulnerabilities related to the request body, depending on the target app.

The environment variable `FUZZAPI_OPENAPI_ALL_MEDIA_TYPES` lets you specify whether to use all supported media types instead of one when generating requests for a given operation. When the environment variable `FUZZAPI_OPENAPI_ALL_MEDIA_TYPES` is set to any value, API Fuzzing tries to generate requests for all supported media types instead of one in a given operation. This causes testing to take longer, as testing is repeated for each provided media type.

Alternatively, the variable `FUZZAPI_OPENAPI_MEDIA_TYPES` provides a list of media types, each of which is tested. Providing more than one media type causes testing to take longer, as testing is performed for each media type selected. When the environment variable `FUZZAPI_OPENAPI_MEDIA_TYPES` is set to a list of media types, only the listed media types are included when creating requests.

Multiple media types in `FUZZAPI_OPENAPI_MEDIA_TYPES` must be separated by a colon (`:`). For example, to limit request generation to the media types `application/x-www-form-urlencoded` and `multipart/form-data`, set the environment variable `FUZZAPI_OPENAPI_MEDIA_TYPES` to `application/x-www-form-urlencoded:multipart/form-data`. Only supported media types in this list are included when creating requests; unsupported media types are always skipped.

A media type text may contain different sections. For example, `application/vnd.api+json; charset=UTF-8` is a compound of `type "/" [tree "."] subtype ["+" suffix]* [";" parameter]`.
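As a sketch of the colon-separated filter described above, a job could set `FUZZAPI_OPENAPI_MEDIA_TYPES` alongside the other required variables (the profile, specification file name, and target URL here are illustrative):

```yaml
include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick-10
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: http://test-deployment/
  # Generate requests only for these two media types, separated by a colon (:).
  FUZZAPI_OPENAPI_MEDIA_TYPES: "application/x-www-form-urlencoded:multipart/form-data"
```

Remember that this variable cannot be combined with `FUZZAPI_OPENAPI_ALL_MEDIA_TYPES`.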
Parameters are not taken into account when filtering media types on request generation.

The environment variables `FUZZAPI_OPENAPI_ALL_MEDIA_TYPES` and `FUZZAPI_OPENAPI_MEDIA_TYPES` allow you to decide how to handle media types. These settings are mutually exclusive. If both are enabled, API Fuzzing reports an error.

### Configure Web API fuzzing with an OpenAPI Specification

To configure API fuzzing in GitLab with an OpenAPI Specification:

1. Add the `fuzz` stage to your `.gitlab-ci.yml` file.
1. [Include](../../../../ci/yaml/_index.md#includetemplate) the [`API-Fuzzing.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Security/API-Fuzzing.gitlab-ci.yml) in your `.gitlab-ci.yml` file.
1. Provide the profile by adding the `FUZZAPI_PROFILE` CI/CD variable to your `.gitlab-ci.yml` file. The profile specifies how many tests are run. Substitute `Quick-10` for the profile you choose. For more details, see [API fuzzing profiles](customizing_analyzer_settings.md#api-fuzzing-profiles).

   ```yaml
   variables:
     FUZZAPI_PROFILE: Quick-10
   ```

1. Provide the location of the OpenAPI Specification. You can provide the specification as a file or URL. Specify the location by adding the `FUZZAPI_OPENAPI` variable.
1. Provide the target API instance's base URL. Use either the `FUZZAPI_TARGET_URL` variable or an `environment_url.txt` file.

   Adding the URL in an `environment_url.txt` file at your project's root is great for testing in dynamic environments. To run API fuzzing against an application dynamically created during a GitLab CI/CD pipeline, have the application persist its URL in an `environment_url.txt` file. API fuzzing automatically parses that file to find its scan target. You can see an example of this in the [Auto DevOps CI YAML](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml).
Example `.gitlab-ci.yml` file using an OpenAPI Specification:

```yaml
stages:
  - fuzz

include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick-10
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: http://test-deployment/
```

This is a minimal configuration for API Fuzzing. From here you can:

- [Run your first scan](#running-your-first-scan).
- [Add authentication](customizing_analyzer_settings.md#authentication).
- Learn how to [handle false positives](#handling-false-positives).

For details of API fuzzing configuration options, see [Available CI/CD variables](variables.md).

## HTTP Archive (HAR)

The [HTTP Archive format (HAR)](http://www.softwareishard.com/blog/har-12-spec/) is an archive file format for logging HTTP transactions. When used with the GitLab API fuzzer, HAR must contain records of calling the web API to test. The API fuzzer extracts all the requests and uses them to perform testing.

For more details, including how to create a HAR file, see [HTTP Archive format](../create_har_files.md).

{{< alert type="warning" >}}

HAR files may contain sensitive information such as authentication tokens, API keys, and session cookies. We recommend that you review the HAR file contents before adding them to a repository.

{{< /alert >}}

### Configure Web API fuzzing with a HAR file

To configure API fuzzing to use a HAR file:

1. Add the `fuzz` stage to your `.gitlab-ci.yml` file.
1. [Include](../../../../ci/yaml/_index.md#includetemplate) the [`API-Fuzzing.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Security/API-Fuzzing.gitlab-ci.yml) in your `.gitlab-ci.yml` file.
1. Provide the profile by adding the `FUZZAPI_PROFILE` CI/CD variable to your `.gitlab-ci.yml` file. The profile specifies how many tests are run. Substitute `Quick-10` for the profile you choose. For more details, see [API fuzzing profiles](customizing_analyzer_settings.md#api-fuzzing-profiles).
   ```yaml
   variables:
     FUZZAPI_PROFILE: Quick-10
   ```

1. Provide the location of the HAR file. You can provide it as a file or URL. Specify the location by adding the `FUZZAPI_HAR` variable.
1. The target API instance's base URL is also required. Provide it by using the `FUZZAPI_TARGET_URL` variable or an `environment_url.txt` file.

   Adding the URL in an `environment_url.txt` file at your project's root is great for testing in dynamic environments. To run API fuzzing against an app dynamically created during a GitLab CI/CD pipeline, have the app persist its domain in an `environment_url.txt` file. API fuzzing automatically parses that file to find its scan target. You can see an [example of this in our Auto DevOps CI YAML](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml).

Example `.gitlab-ci.yml` file using a HAR file:

```yaml
stages:
  - fuzz

include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick-10
  FUZZAPI_HAR: test-api-recording.har
  FUZZAPI_TARGET_URL: http://test-deployment/
```

This example is a minimal configuration for API fuzzing. From here you can:

- [Run your first scan](#running-your-first-scan).
- [Add authentication](customizing_analyzer_settings.md#authentication).
- Learn how to [handle false positives](#handling-false-positives).

For details of API fuzzing configuration options, see [Available CI/CD variables](variables.md).

## GraphQL Schema

{{< history >}}

- Support for GraphQL Schema was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/352780) in GitLab 15.4.

{{< /history >}}

GraphQL is a query language for your API and an alternative to REST APIs. API Fuzzing supports testing GraphQL endpoints multiple ways:

- Test using the GraphQL Schema. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/352780) in GitLab 15.4.
- Test using a recording (HAR) of GraphQL queries.
- Test using a Postman Collection containing GraphQL queries.
This section documents how to test using a GraphQL schema. The GraphQL schema support in API Fuzzing can query the schema from endpoints that support introspection. Introspection is enabled by default to allow tools like GraphiQL to work.

### API Fuzzing scanning with a GraphQL endpoint URL

The GraphQL support in API Fuzzing can query a GraphQL endpoint for the schema.

{{< alert type="note" >}}

The GraphQL endpoint must support introspection queries for this method to work correctly.

{{< /alert >}}

To configure API Fuzzing to use a GraphQL endpoint URL that provides information about the target API to test:

1. [Include](../../../../ci/yaml/_index.md#includetemplate) the [`API-Fuzzing.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Security/API-Fuzzing.gitlab-ci.yml) in your `.gitlab-ci.yml` file.
1. Provide the GraphQL endpoint path, for example `/api/graphql`. Specify the path by adding the `FUZZAPI_GRAPHQL` variable.
1. The target API instance's base URL is also required. Provide it by using the `FUZZAPI_TARGET_URL` variable or an `environment_url.txt` file.

   Adding the URL in an `environment_url.txt` file at your project's root is great for testing in dynamic environments. See the [dynamic environment solutions](../troubleshooting.md#dynamic-environment-solutions) section of our documentation for more information.

Complete example configuration of using a GraphQL endpoint URL:

```yaml
stages:
  - fuzz

include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

apifuzzer_fuzz:
  variables:
    FUZZAPI_GRAPHQL: /api/graphql
    FUZZAPI_TARGET_URL: http://test-deployment/
```

This example is a minimal configuration for API Fuzzing. From here you can:

- [Run your first scan](#running-your-first-scan).
- [Add authentication](customizing_analyzer_settings.md#authentication).
- Learn how to [handle false positives](#handling-false-positives).
### API Fuzzing with a GraphQL Schema file

API Fuzzing can use a GraphQL schema file to understand and test a GraphQL endpoint that has introspection disabled. To use a GraphQL schema file, it must be in the introspection JSON format. A GraphQL schema can be converted to the introspection JSON format using a third-party online tool: [https://transform.tools/graphql-to-introspection-json](https://transform.tools/graphql-to-introspection-json).

To configure API Fuzzing to use a GraphQL schema file that provides information about the target API to test:

1. [Include](../../../../ci/yaml/_index.md#includetemplate) the [`API-Fuzzing.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Security/API-Fuzzing.gitlab-ci.yml) in your `.gitlab-ci.yml` file.
1. Provide the GraphQL endpoint path, for example `/api/graphql`. Specify the path by adding the `FUZZAPI_GRAPHQL` variable.
1. Provide the location of the GraphQL schema file. You can provide the location as a file path or URL. Specify the location by adding the `FUZZAPI_GRAPHQL_SCHEMA` variable.
1. The target API instance's base URL is also required. Provide it by using the `FUZZAPI_TARGET_URL` variable or an `environment_url.txt` file.

   Adding the URL in an `environment_url.txt` file at your project's root is great for testing in dynamic environments. See the [dynamic environment solutions](../troubleshooting.md#dynamic-environment-solutions) section of our documentation for more information.
Complete example configuration of using a GraphQL schema file:

```yaml
stages:
  - fuzz

include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

apifuzzer_fuzz:
  variables:
    FUZZAPI_GRAPHQL: /api/graphql
    FUZZAPI_GRAPHQL_SCHEMA: test-api-graphql.schema
    FUZZAPI_TARGET_URL: http://test-deployment/
```

Complete example configuration of using a GraphQL schema file URL:

```yaml
stages:
  - fuzz

include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

apifuzzer_fuzz:
  variables:
    FUZZAPI_GRAPHQL: /api/graphql
    FUZZAPI_GRAPHQL_SCHEMA: http://file-store/files/test-api-graphql.schema
    FUZZAPI_TARGET_URL: http://test-deployment/
```

This example is a minimal configuration for API Fuzzing. From here you can:

- [Run your first scan](#running-your-first-scan).
- [Add authentication](customizing_analyzer_settings.md#authentication).
- Learn how to [handle false positives](#handling-false-positives).

## Postman Collection

The [Postman API Client](https://www.postman.com/product/api-client/) is a popular tool that developers and testers use to call various types of APIs. The API definitions [can be exported as a Postman Collection file](https://learning.postman.com/docs/getting-started/importing-and-exporting/exporting-data/#export-collections) for use with API Fuzzing. When exporting, make sure to select a supported version of Postman Collection: v2.0 or v2.1.

When used with the GitLab API fuzzer, Postman Collections must contain definitions of the web API to test with valid data. The API fuzzer extracts all the API definitions and uses them to perform testing.

{{< alert type="warning" >}}

Postman Collection files may contain sensitive information such as authentication tokens, API keys, and session cookies. We recommend that you review the Postman Collection file contents before adding them to a repository.

{{< /alert >}}

### Configure Web API fuzzing with a Postman Collection file

To configure API fuzzing to use a Postman Collection file:

1. Add the `fuzz` stage to your `.gitlab-ci.yml` file.
1. [Include](../../../../ci/yaml/_index.md#includetemplate) the [`API-Fuzzing.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Security/API-Fuzzing.gitlab-ci.yml) in your `.gitlab-ci.yml` file.
1. Provide the profile by adding the `FUZZAPI_PROFILE` CI/CD variable to your `.gitlab-ci.yml` file. The profile specifies how many tests are run. Substitute `Quick-10` for the profile you choose. For more details, see [API fuzzing profiles](customizing_analyzer_settings.md#api-fuzzing-profiles).

   ```yaml
   variables:
     FUZZAPI_PROFILE: Quick-10
   ```

1. Provide the location of the Postman Collection file. You can provide it as a file or URL. Specify the location by adding the `FUZZAPI_POSTMAN_COLLECTION` variable.
1. Provide the target API instance's base URL. Use either the `FUZZAPI_TARGET_URL` variable or an `environment_url.txt` file.

   Adding the URL in an `environment_url.txt` file at your project's root is great for testing in dynamic environments. To run API fuzzing against an app dynamically created during a GitLab CI/CD pipeline, have the app persist its domain in an `environment_url.txt` file. API fuzzing automatically parses that file to find its scan target. You can see an [example of this in our Auto DevOps CI YAML](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml).

Example `.gitlab-ci.yml` file using a Postman Collection file:

```yaml
stages:
  - fuzz

include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick-10
  FUZZAPI_POSTMAN_COLLECTION: postman-collection_serviceA.json
  FUZZAPI_TARGET_URL: http://test-deployment/
```

This is a minimal configuration for API Fuzzing. From here you can:

- [Run your first scan](#running-your-first-scan).
- [Add authentication](customizing_analyzer_settings.md#authentication).
- Learn how to [handle false positives](#handling-false-positives).

For details of API fuzzing configuration options, see [Available CI/CD variables](variables.md).

### Postman variables

{{< history >}}

- Support for Postman Environment file format was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/356312) in GitLab 15.1.
- Support for multiple variable files was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/356312) in GitLab 15.1.
- Support for Postman variable scopes: Global and Environment was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/356312) in GitLab 15.1.

{{< /history >}}

#### Variables in Postman Client

Postman allows the developer to define placeholders that can be used in different parts of the requests. These placeholders are called variables, as explained in [using variables](https://learning.postman.com/docs/sending-requests/variables/variables/). You can use variables to store and reuse values in your requests and scripts. For example, you can edit the collection to add variables to the document:

![Edit collection variable tab View](img/api_fuzzing_postman_collection_edit_variable_v13_9.png)

Alternatively, you can add variables in an environment:

![Edit environment variables View](img/api_fuzzing_postman_environment_edit_variable_v13_9.png)

You can then use the variables in sections such as URL, headers, and others:

![Edit request using variables View](img/api_fuzzing_postman_request_edit_v13_9.png)

Postman has grown from a basic client tool with a polished user experience to a more complex ecosystem that allows testing APIs with scripts, creating complex collections that trigger secondary requests, and setting variables along the way. Not every feature in the Postman ecosystem is supported. For example, scripts are not supported.
The main focus of the Postman support is to ingest Postman Collection definitions that are used by the Postman Client and their related variables defined in the workspace, environments, and the collections themselves.

Postman allows creating variables in different scopes. Each scope has a different level of visibility in the Postman tools. For example, you can create a variable in a _global environment_ scope that is seen by every operation definition and workspace. You can also create a variable in a specific _environment_ scope that is only visible and used when that specific environment is selected for use. Some scopes are not always available. For example, in the Postman ecosystem you can create requests in the Postman Client; these requests do not have a _local_ scope, but test scripts do.

Variable scopes in Postman can be a daunting topic, and not everyone is familiar with them. We strongly recommend that you read [Variable Scopes](https://learning.postman.com/docs/sending-requests/variables/variables/#variable-scopes) from the Postman documentation before moving forward.

As mentioned previously, there are different variable scopes, and each of them has a purpose and can be used to provide more flexibility to your Postman document. There is an important note on how values for variables are computed, as per the Postman documentation:

{{< alert type="note" >}}

If a variable with the same name is declared in two different scopes, the value stored in the variable with narrowest scope is used. For example, if there is a global variable named `username` and a local variable named `username`, the local value is used when the request runs.

{{< /alert >}}

The following is a summary of the variable scopes supported by the Postman Client and API Fuzzing:

- **Global Environment (Global) scope** is a special pre-defined environment that is available throughout a workspace. We can also refer to the _global environment_ scope as the _global_ scope.
  The Postman Client allows exporting the global environment into a JSON file, which can be used with API Fuzzing.

- **Environment scope** is a named group of variables created by a user in the Postman Client. The Postman Client supports a single active environment along with the global environment. The variables defined in an active user-created environment take precedence over variables defined in the global environment. The Postman Client allows exporting your environment into a JSON file, which can be used with API Fuzzing.
- **Collection scope** is a group of variables declared in a given collection. The collection variables are available to the collection where they have been declared and the nested requests or collections. Variables defined in the collection scope take precedence over the _global environment_ scope and also the _environment_ scope.

  The Postman Client can export one or more collections into a JSON file. This JSON file contains selected collections, requests, and collection variables.

- **API Fuzzing Scope** is a new scope added by API Fuzzing to allow users to provide extra variables, or override variables defined in other supported scopes. This scope is not supported by Postman. The _API Fuzzing Scope_ variables are provided using a [custom JSON file format](#api-fuzzing-scope-custom-json-file-format). Use this scope to:

  - Override values defined in the environment or collection.
  - Define variables from scripts.
  - Define a single row of data from the unsupported _data_ scope.

- **Data scope** is a group of variables in which their name and values come from JSON or CSV files. A Postman collection runner like [Newman](https://learning.postman.com/docs/collections/using-newman-cli/command-line-integration-with-newman/) or [Postman Collection Runner](https://learning.postman.com/docs/collections/running-collections/intro-to-collection-runs/) executes the requests in a collection as many times as there are entries in the JSON or CSV file.
  A good use case for these variables is to automate tests using scripts in Postman. API Fuzzing does **not** support reading data from a CSV or JSON file.

- **Local scope** is a group of variables defined in Postman scripts. API Fuzzing does **not** support Postman scripts and, by extension, variables defined in scripts. You can still provide values for the script-defined variables by defining them in one of the supported scopes, or our custom JSON format.

Not all scopes are supported by API Fuzzing, and variables defined in scripts are not supported. The following table is sorted by broadest scope to narrowest scope.

| Scope              | Postman | API Fuzzing | Comment |
| ------------------ | :-----: | :---------: | :------ |
| Global Environment | Yes     | Yes         | Special pre-defined environment |
| Environment        | Yes     | Yes         | Named environments |
| Collection         | Yes     | Yes         | Defined in your Postman collection |
| API Fuzzing Scope  | No      | Yes         | Custom scope added by API Fuzzing |
| Data               | Yes     | No          | External files in CSV or JSON format |
| Local              | Yes     | No          | Variables defined in scripts |

For more details on how to define variables and export variables in different scopes, see:

- [Defining collection variables](https://learning.postman.com/docs/sending-requests/variables/variables/#defining-collection-variables)
- [Defining environment variables](https://learning.postman.com/docs/sending-requests/variables/variables/#defining-environment-variables)
- [Defining global variables](https://learning.postman.com/docs/sending-requests/variables/variables/#defining-global-variables)

#### Exporting from Postman Client

The Postman Client lets you export different file formats; for instance, you can export a Postman collection or a Postman environment. The exported environment can be the global environment (which is always available) or can be any custom environment you previously have created.
When you export a Postman Collection, it may contain only declarations for _collection_ and _local_ scoped variables; _environment_ scoped variables are not included.

To get the declaration for _environment_ scoped variables, you have to export a given environment at a time. Each exported file only includes variables from the selected environment.

For more details on exporting variables in different supported scopes, see:

- [Exporting collections](https://learning.postman.com/docs/getting-started/importing-and-exporting/exporting-data/#export-collections)
- [Exporting environments](https://learning.postman.com/docs/getting-started/importing-and-exporting/exporting-data/#export-environments)
- [Downloading global environments](https://learning.postman.com/docs/sending-requests/variables/variables/#downloading-global-environments)

#### API Fuzzing Scope, custom JSON file format

Our custom JSON file format is a JSON object where each object property represents a variable name and the property value represents the variable value. This file can be created using your favorite text editor, or it can be produced by an earlier job in your pipeline.

This example defines two variables, `base_url` and `token`, in the API Fuzzing scope:

```json
{
  "base_url": "http://127.0.0.1/",
  "token": "Token 84816165151"
}
```

#### Using scopes with API Fuzzing

The scopes _global_, _environment_, _collection_, and _GitLab API Fuzzing_ are supported in [GitLab 15.1 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/356312). GitLab 15.0 and earlier supports only the _collection_ and _GitLab API Fuzzing_ scopes.
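As a sketch of how these scopes come together in a job, a configuration could pass the collection through `FUZZAPI_POSTMAN_COLLECTION` and the exported global environment, a named environment, and a custom JSON file through `FUZZAPI_POSTMAN_COLLECTION_VARIABLES` (all file names here are illustrative):

```yaml
stages:
  - fuzz

include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick-10
  FUZZAPI_POSTMAN_COLLECTION: postman-collection.json
  # Comma-separated list of variable files; multiple files require GitLab 15.1 or later.
  FUZZAPI_POSTMAN_COLLECTION_VARIABLES: "global-environment.json,staging-environment.json,api-fuzzing-overrides.json"
  FUZZAPI_TARGET_URL: http://test-deployment/
```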
The following table provides a quick reference for mapping scope files/URLs to API Fuzzing configuration variables:

| Scope              | How to provide |
| ------------------ | -------------- |
| Global Environment | `FUZZAPI_POSTMAN_COLLECTION_VARIABLES` |
| Environment        | `FUZZAPI_POSTMAN_COLLECTION_VARIABLES` |
| Collection         | `FUZZAPI_POSTMAN_COLLECTION` |
| API Fuzzing Scope  | `FUZZAPI_POSTMAN_COLLECTION_VARIABLES` |
| Data               | Not supported |
| Local              | Not supported |

The Postman Collection document automatically includes any _collection_ scoped variables. The Postman Collection is provided with the configuration variable `FUZZAPI_POSTMAN_COLLECTION`. This variable can be set to a single [exported Postman collection](https://learning.postman.com/docs/getting-started/importing-and-exporting/exporting-data/#export-collections).

Variables from other scopes are provided through the `FUZZAPI_POSTMAN_COLLECTION_VARIABLES` configuration variable. The configuration variable supports a comma (`,`) delimited file list in [GitLab 15.1 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/356312). GitLab 15.0 and earlier supports only a single file. The order of the files provided is not important, as the files provide the needed scope information.

The configuration variable `FUZZAPI_POSTMAN_COLLECTION_VARIABLES` can be set to:

- [Exported global environment](https://learning.postman.com/docs/sending-requests/variables/variables/#downloading-global-environments)
- [Exported environments](https://learning.postman.com/docs/getting-started/importing-and-exporting/exporting-data/#export-environments)
- [API Fuzzing custom JSON format](#api-fuzzing-scope-custom-json-file-format)

#### Undefined Postman variables

There is a chance that the API Fuzzing engine does not find all the variable references that your Postman collection file uses. Some possible causes are:

- You are using _data_ or _local_ scoped variables. As stated previously, these scopes are not supported by API Fuzzing.
  Thus, assuming the values for these variables have not been provided through [the API Fuzzing scope](#api-fuzzing-scope-custom-json-file-format), the values of the _data_ and _local_ scoped variables are undefined.

- A variable name was typed incorrectly, and the name does not match the defined variable.
- The Postman Client supports a new dynamic variable that is not supported by API Fuzzing.

When possible, API Fuzzing follows the same behavior as the Postman Client does when dealing with undefined variables. The text of the variable reference remains the same, and there is no text substitution. The same behavior also applies to any unsupported dynamic variables. For example, if a request definition in the Postman Collection references the variable `{{full_url}}` and the variable is not found, it is left unchanged with the value `{{full_url}}`.

#### Dynamic Postman variables

In addition to variables that a user can define at various scope levels, Postman has a set of pre-defined variables called _dynamic_ variables. The [_dynamic_ variables](https://learning.postman.com/docs/tests-and-scripts/write-scripts/variables-list/) are already defined, and their name is prefixed with a dollar sign (`$`), for instance, `$guid`. _Dynamic_ variables can be used like any other variable, and in the Postman Client, they produce random values during the request/collection run.

An important difference between API Fuzzing and Postman is that API Fuzzing returns the same value for each usage of the same dynamic variable. This differs from the Postman Client behavior, which returns a random value on each use of the same dynamic variable. In other words, API Fuzzing uses static values for dynamic variables while Postman uses random values.
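For instance, a request in a Postman Collection might reference dynamic variables like this (a hypothetical collection fragment). During a scan, every occurrence of `{{$guid}}` resolves to the same static value rather than a fresh random one:

```json
{
  "name": "Create user",
  "request": {
    "method": "POST",
    "url": "{{base_url}}/users",
    "body": {
      "mode": "raw",
      "raw": "{ \"id\": \"{{$guid}}\", \"created\": \"{{$isoTimestamp}}\" }"
    }
  }
}
```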
The supported dynamic variables during the scanning process are:

| Variable | Value |
| ----------- | ----------- |
| `$guid` | `611c2e81-2ccb-42d8-9ddc-2d0bfa65c1b4` |
| `$isoTimestamp` | `2020-06-09T21:10:36.177Z` |
| `$randomAbbreviation` | `PCI` |
| `$randomAbstractImage` | `http://no-a-valid-host/640/480/abstract` |
| `$randomAdjective` | `auxiliary` |
| `$randomAlphaNumeric` | `a` |
| `$randomAnimalsImage` | `http://no-a-valid-host/640/480/animals` |
| `$randomAvatarImage` | `https://no-a-valid-host/path/to/some/image.jpg` |
| `$randomBankAccount` | `09454073` |
| `$randomBankAccountBic` | `EZIAUGJ1` |
| `$randomBankAccountIban` | `MU20ZPUN3039684000618086155TKZ` |
| `$randomBankAccountName` | `Home Loan Account` |
| `$randomBitcoin` | `3VB8JGT7Y4Z63U68KGGKDXMLLH5` |
| `$randomBoolean` | `true` |
| `$randomBs` | `killer leverage schemas` |
| `$randomBsAdjective` | `viral` |
| `$randomBsBuzz` | `repurpose` |
| `$randomBsNoun` | `markets` |
| `$randomBusinessImage` | `http://no-a-valid-host/640/480/business` |
| `$randomCatchPhrase` | `Future-proofed heuristic open architecture` |
| `$randomCatchPhraseAdjective` | `Business-focused` |
| `$randomCatchPhraseDescriptor` | `bandwidth-monitored` |
| `$randomCatchPhraseNoun` | `superstructure` |
| `$randomCatsImage` | `http://no-a-valid-host/640/480/cats` |
| `$randomCity` | `Spinkahaven` |
| `$randomCityImage` | `http://no-a-valid-host/640/480/city` |
| `$randomColor` | `fuchsia` |
| `$randomCommonFileExt` | `wav` |
| `$randomCommonFileName` | `well_modulated.mpg4` |
| `$randomCommonFileType` | `audio` |
| `$randomCompanyName` | `Grady LLC` |
| `$randomCompanySuffix` | `Inc` |
| `$randomCountry` | `Kazakhstan` |
| `$randomCountryCode` | `MD` |
| `$randomCreditCardMask` | `3622` |
| `$randomCurrencyCode` | `ZMK` |
| `$randomCurrencyName` | `Pound Sterling` |
| `$randomCurrencySymbol` | `£` |
| `$randomDatabaseCollation` | `utf8_general_ci` |
| `$randomDatabaseColumn` | `updatedAt` |
| `$randomDatabaseEngine` | `Memory` |
| `$randomDatabaseType` | `text` |
| `$randomDateFuture` | `Tue Mar 17 2020 13:11:50 GMT+0530 (India Standard Time)` |
| `$randomDatePast` | `Sat Mar 02 2019 09:09:26 GMT+0530 (India Standard Time)` |
| `$randomDateRecent` | `Tue Jul 09 2019 23:12:37 GMT+0530 (India Standard Time)` |
| `$randomDepartment` | `Electronics` |
| `$randomDirectoryPath` | `/usr/local/bin` |
| `$randomDomainName` | `trevor.info` |
| `$randomDomainSuffix` | `org` |
| `$randomDomainWord` | `jaden` |
| `$randomEmail` | `Iva.Kovacek61@no-a-valid-host.com` |
| `$randomExampleEmail` | `non-a-valid-user@example.net` |
| `$randomFashionImage` | `http://no-a-valid-host/640/480/fashion` |
| `$randomFileExt` | `war` |
| `$randomFileName` | `neural_sri_lanka_rupee_gloves.gdoc` |
| `$randomFilePath` | `/home/programming_chicken.cpio` |
| `$randomFileType` | `application` |
| `$randomFirstName` | `Chandler` |
| `$randomFoodImage` | `http://no-a-valid-host/640/480/food` |
| `$randomFullName` | `Connie Runolfsdottir` |
| `$randomHexColor` | `#47594a` |
| `$randomImageDataUri` | `data:image/svg+xml;charset=UTF-8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20version%3D%221.1%22%20baseProfile%3D%22full%22%20width%3D%22undefined%22%20height%3D%22undefined%22%3E%20%3Crect%20width%3D%22100%25%22%20height%3D%22100%25%22%20fill%3D%22grey%22%2F%3E%20%20%3Ctext%20x%3D%220%22%20y%3D%2220%22%20font-size%3D%2220%22%20text-anchor%3D%22start%22%20fill%3D%22white%22%3Eundefinedxundefined%3C%2Ftext%3E%20%3C%2Fsvg%3E` |
| `$randomImageUrl` | `http://no-a-valid-host/640/480` |
| `$randomIngverb` | `navigating` |
| `$randomInt` | `494` |
| `$randomIP` | `241.102.234.100` |
| `$randomIPV6` | `dbe2:7ae6:119b:c161:1560:6dda:3a9b:90a9` |
| `$randomJobArea` | `Mobility` |
| `$randomJobDescriptor` | `Senior` |
| `$randomJobTitle` | `International Creative Liaison` |
| `$randomJobType` | `Supervisor` |
| `$randomLastName` | `Schneider` |
| `$randomLatitude` | `55.2099` |
| `$randomLocale` | `ny` |
| `$randomLongitude` | `40.6609` |
| `$randomLoremLines` | `Ducimus in ut mollitia.\nA itaque non.\nHarum temporibus nihil voluptas.\nIste in sed et nesciunt in quaerat sed.` |
| `$randomLoremParagraph` | `Ab aliquid odio iste quo voluptas voluptatem dignissimos velit. Recusandae facilis qui commodi ea magnam enim nostrum quia quis. Nihil est suscipit assumenda ut voluptatem sed. Esse ab voluptas odit qui molestiae. Rem est nesciunt est quis ipsam expedita consequuntur.` |
| `$randomLoremParagraphs` | `Voluptatem rem magnam aliquam ab id aut quaerat. Placeat provident possimus voluptatibus dicta velit non aut quasi. Mollitia et aliquam expedita sunt dolores nam consequuntur. Nam dolorum delectus ipsam repudiandae et ipsam ut voluptatum totam. Nobis labore labore recusandae ipsam quo.` |
| `$randomLoremSentence` | `Molestias consequuntur nisi non quod.` |
| `$randomLoremSentences` | `Et sint voluptas similique iure amet perspiciatis vero sequi atque. Ut porro sit et hic. Neque aspernatur vitae fugiat ut dolore et veritatis. Ab iusto ex delectus animi. Voluptates nisi iusto. Impedit quod quae voluptate qui.` |
| `$randomLoremSlug` | `eos-aperiam-accusamus, beatae-id-molestiae, qui-est-repellat` |
| `$randomLoremText` | `Quisquam asperiores exercitationem ut ipsum. Aut eius nesciunt. Et reiciendis aut alias eaque. Nihil amet laboriosam pariatur eligendi. Sunt ullam ut sint natus ducimus. Voluptas harum aspernatur soluta rem nam.` |
| `$randomLoremWord` | `est` |
| `$randomLoremWords` | `vel repellat nobis` |
| `$randomMACAddress` | `33:d4:68:5f:b4:c7` |
| `$randomMimeType` | `audio/vnd.vmx.cvsd` |
| `$randomMonth` | `February` |
| `$randomNamePrefix` | `Dr.` |
| `$randomNameSuffix` | `MD` |
| `$randomNatureImage` | `http://no-a-valid-host/640/480/nature` |
| `$randomNightlifeImage` | `http://no-a-valid-host/640/480/nightlife` |
| `$randomNoun` | `bus` |
| `$randomPassword` | `t9iXe7COoDKv8k3` |
| `$randomPeopleImage` | `http://no-a-valid-host/640/480/people` |
| `$randomPhoneNumber` | `700-008-5275` |
| `$randomPhoneNumberExt` | `27-199-983-3864` |
| `$randomPhrase` | `You can't program the monitor without navigating the mobile XML program!` |
| `$randomPrice` | `531.55` |
| `$randomProduct` | `Pizza` |
| `$randomProductAdjective` | `Unbranded` |
| `$randomProductMaterial` | `Steel` |
| `$randomProductName` | `Handmade Concrete Tuna` |
| `$randomProtocol` | `https` |
| `$randomSemver` | `7.0.5` |
| `$randomSportsImage` | `http://no-a-valid-host/640/480/sports` |
| `$randomStreetAddress` | `5742 Harvey Streets` |
| `$randomStreetName` | `Kuhic Island` |
| `$randomTransactionType` | `payment` |
| `$randomTransportImage` | `http://no-a-valid-host/640/480/transport` |
| `$randomUrl` | `https://no-a-valid-host.net` |
| `$randomUserAgent` | `Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.9.8; rv:15.6) Gecko/20100101 Firefox/15.6.6` |
| `$randomUserName` | `Jarrell.Gutkowski` |
| `$randomUUID` | `6929bb52-3ab2-448a-9796-d6480ecad36b` |
| `$randomVerb` | `navigate` |
| `$randomWeekday` | `Thursday` |
| `$randomWord` | `withdrawal` |
| `$randomWords` | `Samoa Synergistic sticky copying Grocery` |
| `$timestamp` | `1562757107` |

#### Example: Global Scope

In this example, [the _global_ scope is exported](https://learning.postman.com/docs/sending-requests/variables/variables/#downloading-global-environments) from the Postman Client as `global-scope.json` and
provided to API Fuzzing through the `FUZZAPI_POSTMAN_COLLECTION_VARIABLES` configuration variable. Here is an example of using `FUZZAPI_POSTMAN_COLLECTION_VARIABLES`:

```yaml
stages:
  - fuzz

include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick-10
  FUZZAPI_POSTMAN_COLLECTION: postman-collection.json
  FUZZAPI_POSTMAN_COLLECTION_VARIABLES: global-scope.json
  FUZZAPI_TARGET_URL: http://test-deployment/
```

#### Example: Environment Scope

In this example, [the _environment_ scope is exported](https://learning.postman.com/docs/getting-started/importing-and-exporting/exporting-data/#export-environments) from the Postman Client as `environment-scope.json` and provided to API Fuzzing through the `FUZZAPI_POSTMAN_COLLECTION_VARIABLES` configuration variable. Here is an example of using `FUZZAPI_POSTMAN_COLLECTION_VARIABLES`:

```yaml
stages:
  - fuzz

include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_POSTMAN_COLLECTION: postman-collection.json
  FUZZAPI_POSTMAN_COLLECTION_VARIABLES: environment-scope.json
  FUZZAPI_TARGET_URL: http://test-deployment/
```

#### Example: Collection Scope

The _collection_ scope variables are included in the exported Postman Collection file and provided through the `FUZZAPI_POSTMAN_COLLECTION` configuration variable. Here is an example of using `FUZZAPI_POSTMAN_COLLECTION`:

```yaml
stages:
  - fuzz

include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_POSTMAN_COLLECTION: postman-collection.json
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_POSTMAN_COLLECTION_VARIABLES: variable-collection-dictionary.json
```

#### Example: API Fuzzing Scope

The API Fuzzing Scope is used for two main purposes: defining _data_ and _local_ scope variables that are not supported by API Fuzzing, and changing the value of an existing variable defined in another scope.
The API Fuzzing Scope is provided through the `FUZZAPI_POSTMAN_COLLECTION_VARIABLES` configuration variable. Here is an example of using `FUZZAPI_POSTMAN_COLLECTION_VARIABLES`:

```yaml
stages:
  - fuzz

include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_POSTMAN_COLLECTION: postman-collection.json
  FUZZAPI_POSTMAN_COLLECTION_VARIABLES: api-fuzzing-scope.json
  FUZZAPI_TARGET_URL: http://test-deployment/
```

The file `api-fuzzing-scope.json` uses our [custom JSON file format](#api-fuzzing-scope-custom-json-file-format). This JSON is an object with key-value pairs for properties. The keys are the variables' names, and the values are the variables' values. For example:

```json
{
  "base_url": "http://127.0.0.1/",
  "token": "Token 84816165151"
}
```

#### Example: Multiple Scopes

In this example, a _global_ scope, _environment_ scope, and _collection_ scope are configured. The first step is to export our various scopes.

- [Export the _global_ scope](https://learning.postman.com/docs/sending-requests/variables/variables/#downloading-global-environments) as `global-scope.json`
- [Export the _environment_ scope](https://learning.postman.com/docs/getting-started/importing-and-exporting/exporting-data/#export-environments) as `environment-scope.json`
- Export the Postman Collection which includes the _collection_ scope as `postman-collection.json`

The Postman Collection is provided using the `FUZZAPI_POSTMAN_COLLECTION` variable, while the other scopes are provided using the `FUZZAPI_POSTMAN_COLLECTION_VARIABLES`. API Fuzzing can identify which scope the provided files match using data provided in each file.
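The scope detection mentioned above relies on metadata inside each exported file. As an illustration only — the field names below follow Postman's export format, but the `name` and variable values are hypothetical — an exported _environment_ scope file carries a `_postman_variable_scope` marker similar to:

```json
{
  "name": "test-environment",
  "values": [
    { "key": "api_version", "value": "v2", "enabled": true }
  ],
  "_postman_variable_scope": "environment"
}
```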
```yaml
stages:
  - fuzz

include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_POSTMAN_COLLECTION: postman-collection.json
  FUZZAPI_POSTMAN_COLLECTION_VARIABLES: global-scope.json,environment-scope.json
  FUZZAPI_TARGET_URL: http://test-deployment/
```

#### Example: Changing a Variable's Value

When using exported scopes, it's often the case that the value of a variable must be changed for use with API Fuzzing. For example, a _collection_ scoped variable might contain a variable named `api_version` with a value of `v2`, while your test needs a value of `v1`. Instead of modifying the exported collection to change the value, the API Fuzzing scope can be used to change its value. This works because the _API Fuzzing_ scope takes precedence over all other scopes.

The _collection_ scope variables are included in the exported Postman Collection file and provided through the `FUZZAPI_POSTMAN_COLLECTION` configuration variable.

The API Fuzzing Scope is provided through the `FUZZAPI_POSTMAN_COLLECTION_VARIABLES` configuration variable, but first, we must create the file. The file `api-fuzzing-scope.json` uses our [custom JSON file format](#api-fuzzing-scope-custom-json-file-format). This JSON is an object with key-value pairs for properties. The keys are the variables' names, and the values are the variables' values. For example:

```json
{
  "api_version": "v1"
}
```

Our CI definition:

```yaml
stages:
  - fuzz

include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_POSTMAN_COLLECTION: postman-collection.json
  FUZZAPI_POSTMAN_COLLECTION_VARIABLES: api-fuzzing-scope.json
  FUZZAPI_TARGET_URL: http://test-deployment/
```

#### Example: Changing a Variable's Value with Multiple Scopes

When using exported scopes, it's often the case that the value of a variable must be changed for use with API Fuzzing.
For example, an _environment_ scope might contain a variable named `api_version` with a value of `v2`, while your test needs a value of `v1`. Instead of modifying the exported file to change the value, the API Fuzzing scope can be used. This works because the _API Fuzzing_ scope takes precedence over all other scopes.

In this example, a _global_ scope, _environment_ scope, _collection_ scope, and _API Fuzzing_ scope are configured. The first step is to export and create our various scopes.

- [Export the _global_ scope](https://learning.postman.com/docs/sending-requests/variables/variables/#downloading-global-environments) as `global-scope.json`
- [Export the _environment_ scope](https://learning.postman.com/docs/getting-started/importing-and-exporting/exporting-data/#export-environments) as `environment-scope.json`
- Export the Postman Collection which includes the _collection_ scope as `postman-collection.json`

The API Fuzzing scope is used by creating a file `api-fuzzing-scope.json` using our [custom JSON file format](#api-fuzzing-scope-custom-json-file-format). This JSON is an object with key-value pairs for properties. The keys are the variables' names, and the values are the variables' values. For example:

```json
{
  "api_version": "v1"
}
```

The Postman Collection is provided using the `FUZZAPI_POSTMAN_COLLECTION` variable, while the other scopes are provided using the `FUZZAPI_POSTMAN_COLLECTION_VARIABLES`. API Fuzzing can identify which scope the provided files match using data provided in each file.

```yaml
stages:
  - fuzz

include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_POSTMAN_COLLECTION: postman-collection.json
  FUZZAPI_POSTMAN_COLLECTION_VARIABLES: global-scope.json,environment-scope.json,api-fuzzing-scope.json
  FUZZAPI_TARGET_URL: http://test-deployment/
```

## Running your first scan

When configured correctly, a CI/CD pipeline contains a `fuzz` stage and an `apifuzzer_fuzz` or `apifuzzer_fuzz_dnd` job.
The job only fails when an invalid configuration is provided. During typical operation, the job always succeeds even if faults are identified during fuzz testing.

Faults are displayed on the **Security** pipeline tab with the suite name. When testing against the repository's default branch, the fuzzing faults are also shown on the **Secure > Vulnerability report** page.

To prevent an excessive number of reported faults, the API fuzzing scanner limits the number of faults it reports.

## Viewing fuzzing faults

The API Fuzzing analyzer produces a JSON report that is collected and used [to populate the faults into GitLab vulnerability screens](#view-details-of-an-api-fuzzing-vulnerability). Fuzzing faults show up as vulnerabilities with a severity of Unknown.

The faults that API fuzzing finds require manual investigation and aren't associated with a specific vulnerability type. They require investigation to determine if they are a security issue, and if they should be fixed. See [handling false positives](#handling-false-positives) for information about configuration changes you can make to limit the number of false positives reported.

### View details of an API Fuzzing vulnerability

Faults detected by API Fuzzing occur in the live web application, and require manual investigation to determine if they are vulnerabilities. Fuzzing faults are included as vulnerabilities with a severity of Unknown. To facilitate investigation of the fuzzing faults, detailed information is provided about the HTTP messages sent and received along with a description of the modifications made.

Follow these steps to view details of a fuzzing fault:

1. You can view faults in a project, or a merge request:

   - In a project, go to the project's **Secure > Vulnerability report** page. This page shows all vulnerabilities from the default branch only.
   - In a merge request, go to the merge request's **Security** section and select the **Expand** button.
     API Fuzzing faults are available in a section labeled **API Fuzzing detected N potential vulnerabilities**. Select the title to display the fault details.

1. Select the fault's title to display the fault's details. The table below describes these details.

   | Field               | Description                                                                              |
   |:--------------------|:-----------------------------------------------------------------------------------------|
   | Description         | Description of the fault including what was modified.                                    |
   | Project             | Namespace and project in which the vulnerability was detected.                           |
   | Method              | HTTP method used to detect the vulnerability.                                            |
   | URL                 | URL at which the vulnerability was detected.                                             |
   | Request             | The HTTP request that caused the fault.                                                  |
   | Unmodified Response | Response from an unmodified request. This is what a typical working response looks like. |
   | Actual Response     | Response received from fuzzed request.                                                   |
   | Evidence            | How we determined a fault occurred.                                                      |
   | Identifiers         | The fuzzing check used to find this fault.                                               |
   | Severity            | Severity of the finding is always Unknown.                                               |
   | Scanner Type        | Scanner used to perform testing.                                                         |

### Security Dashboard

Fuzzing faults show up as vulnerabilities with a severity of Unknown. The Security Dashboard is a good place to get an overview of all the security vulnerabilities in your groups, projects, and pipelines. For more information, see the [Security Dashboard documentation](../../security_dashboard/_index.md).

### Interacting with the vulnerabilities

Fuzzing faults show up as vulnerabilities with a severity of Unknown. After a fault is found, you can interact with it. Read more on how to [address the vulnerabilities](../../vulnerabilities/_index.md).

## Handling False Positives

False positives can be handled in two ways:

- Turn off the Check producing the false positive. This prevents the check from generating any faults. Example checks are the JSON Fuzzing Check and Form Body Fuzzing Check.
- Fuzzing checks have several methods of detecting when a fault is identified, called _Asserts_. Asserts can also be turned off and configured. For example, the API fuzzer by default uses HTTP status codes to help identify when something is a real issue. If an API returns a 500 error during testing, this creates a fault. This isn't always desired, as some frameworks return 500 errors often.

### Turn off a Check

Checks perform testing of a specific type and can be turned on and off for specific configuration profiles. The default configuration file defines several profiles that you can use. The profile definition in the configuration file lists all the checks that are active during a scan. To turn off a specific check, remove it from the profile definition in the configuration file. The profiles are defined in the `Profiles` section of the configuration file.

Example profile definition:

```yaml
Profiles:
  - Name: Quick-10
    DefaultProfile: Quick
    Routes:
      - Route: *Route0
        Checks:
          - Name: FormBodyFuzzingCheck
            Configuration:
              FuzzingCount: 10
              UnicodeFuzzing: true
          - Name: GeneralFuzzingCheck
            Configuration:
              FuzzingCount: 10
              UnicodeFuzzing: true
          - Name: JsonFuzzingCheck
            Configuration:
              FuzzingCount: 10
              UnicodeFuzzing: true
          - Name: XmlFuzzingCheck
            Configuration:
              FuzzingCount: 10
              UnicodeFuzzing: true
```

To turn off the General Fuzzing Check you can remove these lines:

```yaml
- Name: GeneralFuzzingCheck
  Configuration:
    FuzzingCount: 10
    UnicodeFuzzing: true
```

This results in the following YAML:

```yaml
- Name: Quick-10
  DefaultProfile: Quick
  Routes:
    - Route: *Route0
      Checks:
        - Name: FormBodyFuzzingCheck
          Configuration:
            FuzzingCount: 10
            UnicodeFuzzing: true
        - Name: JsonFuzzingCheck
          Configuration:
            FuzzingCount: 10
            UnicodeFuzzing: true
        - Name: XmlFuzzingCheck
          Configuration:
            FuzzingCount: 10
            UnicodeFuzzing: true
```

### Turn off an Assertion for a Check

Assertions detect faults in tests produced by checks.
Many checks support multiple Assertions such as Log Analysis, Response Analysis, and Status Code. When a fault is found, the Assertion used is provided. To identify which Assertions are on by default, see the Checks default configuration in the configuration file. The section is called `Checks`.

This example shows the FormBody Fuzzing Check:

```yaml
Checks:
  - Name: FormBodyFuzzingCheck
    Configuration:
      FuzzingCount: 30
      UnicodeFuzzing: true
    Assertions:
      - Name: LogAnalysisAssertion
      - Name: ResponseAnalysisAssertion
      - Name: StatusCodeAssertion
```

Here you can see three Assertions are on by default. A common source of false positives is `StatusCodeAssertion`. To turn it off, modify its configuration in the `Profiles` section. This example provides only the other two Assertions (`LogAnalysisAssertion`, `ResponseAnalysisAssertion`). This prevents `FormBodyFuzzingCheck` from using `StatusCodeAssertion`:

```yaml
Profiles:
  - Name: Quick-10
    DefaultProfile: Quick
    Routes:
      - Route: *Route0
        Checks:
          - Name: FormBodyFuzzingCheck
            Configuration:
              FuzzingCount: 10
              UnicodeFuzzing: true
            Assertions:
              - Name: LogAnalysisAssertion
              - Name: ResponseAnalysisAssertion
          - Name: GeneralFuzzingCheck
            Configuration:
              FuzzingCount: 10
              UnicodeFuzzing: true
          - Name: JsonFuzzingCheck
            Configuration:
              FuzzingCount: 10
              UnicodeFuzzing: true
          - Name: XmlFuzzingCheck
            Configuration:
              FuzzingCount: 10
              UnicodeFuzzing: true
```
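The two techniques can be combined in a single profile definition. The following is a sketch only — a hypothetical `Quick-10` variant, built from the profile examples in this section, that both omits `GeneralFuzzingCheck` and drops `StatusCodeAssertion` from `FormBodyFuzzingCheck`:

```yaml
Profiles:
  - Name: Quick-10
    DefaultProfile: Quick
    Routes:
      - Route: *Route0
        Checks:
          # GeneralFuzzingCheck removed entirely to turn the check off.
          - Name: FormBodyFuzzingCheck
            Configuration:
              FuzzingCount: 10
              UnicodeFuzzing: true
            Assertions:
              # StatusCodeAssertion omitted to suppress status-code false positives.
              - Name: LogAnalysisAssertion
              - Name: ResponseAnalysisAssertion
          - Name: JsonFuzzingCheck
            Configuration:
              FuzzingCount: 10
              UnicodeFuzzing: true
          - Name: XmlFuzzingCheck
            Configuration:
              FuzzingCount: 10
              UnicodeFuzzing: true
```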
---
stage: Application Security Testing
group: Dynamic Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Enabling the analyzer
breadcrumbs:
  - doc
  - user
  - application_security
  - api_fuzzing
  - configuration
---

Prerequisites:

- One of the following web API types:
  - REST API
  - SOAP
  - GraphQL
  - Form bodies, JSON, or XML
- One of the following assets to provide APIs to test:
  - OpenAPI v2 or v3 API definition
  - HTTP Archive (HAR) of API requests to test
  - Postman Collection v2.0 or v2.1

{{< alert type="warning" >}}

**Never** run fuzz testing against a production server. Not only can it perform any function that the API can, it may also trigger bugs in the API. This includes actions like modifying and deleting data. Only run fuzzing against a test server.

{{< /alert >}}

To enable Web API fuzzing, use the Web API fuzzing configuration form.

- For manual configuration instructions, see the respective section, depending on the API type:
  - [OpenAPI Specification](#openapi-specification)
  - [GraphQL Schema](#graphql-schema)
  - [HTTP Archive (HAR)](#http-archive-har)
  - [Postman Collection](#postman-collection)
- Otherwise, see [Web API fuzzing configuration form](#web-api-fuzzing-configuration-form).

API fuzzing configuration files must be in your repository's `.gitlab` directory.

## Web API fuzzing configuration form

The API fuzzing configuration form helps you create or modify your project's API fuzzing configuration. The form lets you choose values for the most common API fuzzing options and builds a YAML snippet that you can paste in your GitLab CI/CD configuration.

### Configure Web API fuzzing in the UI

To generate an API Fuzzing configuration snippet:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Secure > Security configuration**.
1. In the **API Fuzzing** row, select **Enable API Fuzzing**.
1. Complete the fields. For details, see [Available CI/CD variables](variables.md).
1. Select **Generate code snippet**. A modal opens with the YAML snippet corresponding to the options you've selected in the form.
1. Do one of the following:
   1. To copy the snippet to your clipboard, select **Copy code only**.
   1. To add the snippet to your project's `.gitlab-ci.yml` file, select **Copy code and open `.gitlab-ci.yml` file**. The pipeline editor opens.
      1. Paste the snippet into the `.gitlab-ci.yml` file.
      1. Select the **Lint** tab to confirm the edited `.gitlab-ci.yml` file is valid.
      1. Select the **Edit** tab, then select **Commit changes**.

When the snippet is committed to the `.gitlab-ci.yml` file, pipelines include an API Fuzzing job.

## OpenAPI Specification

The [OpenAPI Specification](https://www.openapis.org/) (formerly the Swagger Specification) is an API description format for REST APIs. This section shows you how to configure API fuzzing using an OpenAPI Specification to provide information about the target API to test. OpenAPI Specifications are provided as a file system resource or URL. Both JSON and YAML OpenAPI formats are supported.

API fuzzing uses an OpenAPI document to generate the request body. When a request body is required, the body generation is limited to these body types:

- `application/x-www-form-urlencoded`
- `multipart/form-data`
- `application/json`
- `application/xml`

## OpenAPI and media types

A media type (formerly known as MIME type) is an identifier for file formats and format contents transmitted. An OpenAPI document lets you specify that a given operation can accept different media types, so a given request can send data in different formats. For example, a `PUT /user` operation to update user data could accept data in either XML (media type `application/xml`) or JSON (media type `application/json`) format.
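To make this concrete, here is a sketch of how such an operation might be declared in an OpenAPI 3.x document — the `/user` path and the `User` schema reference are hypothetical, but the `requestBody.content` structure is standard OpenAPI 3.x:

```yaml
paths:
  /user:
    put:
      summary: Update user data
      requestBody:
        required: true
        content:
          # Two media types declared for the same operation:
          application/json:
            schema:
              $ref: '#/components/schemas/User'
          application/xml:
            schema:
              $ref: '#/components/schemas/User'
      responses:
        '200':
          description: Updated
```

API Fuzzing inspects the media types listed under `content` when deciding how to generate request bodies for the operation.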
OpenAPI 2.x lets you specify the accepted media types globally or per operation, and OpenAPI 3.x lets you specify the accepted media types per operation. API Fuzzing checks the listed media types and tries to produce sample data for each supported media type.

The default behavior is to select one of the supported media types to use. The first supported media type is chosen from the list. This behavior is configurable.

Testing the same operation (for example, `POST /user`) using different media types (for example, `application/json` and `application/xml`) is not always desirable. For example, if the target application executes the same code regardless of the request content type, it takes longer to finish the test session, and it may report duplicate vulnerabilities related to the request body, depending on the target app.

The environment variable `FUZZAPI_OPENAPI_ALL_MEDIA_TYPES` lets you specify whether to use all supported media types instead of one when generating requests for a given operation. When the environment variable `FUZZAPI_OPENAPI_ALL_MEDIA_TYPES` is set to any value, API Fuzzing tries to generate requests for all supported media types instead of one in a given operation. This causes testing to take longer, as testing is repeated for each provided media type.

Alternatively, the variable `FUZZAPI_OPENAPI_MEDIA_TYPES` provides a list of media types, each of which is tested. Providing more than one media type causes testing to take longer, as testing is performed for each media type selected. When the environment variable `FUZZAPI_OPENAPI_MEDIA_TYPES` is set to a list of media types, only the listed media types are included when creating requests. Multiple media types in `FUZZAPI_OPENAPI_MEDIA_TYPES` must be separated by a colon (`:`).
For example, to limit request generation to the media types `application/x-www-form-urlencoded` and `multipart/form-data`, set the environment variable `FUZZAPI_OPENAPI_MEDIA_TYPES` to `application/x-www-form-urlencoded:multipart/form-data`. Only supported media types in this list are included when creating requests; unsupported media types are always skipped.

A media type string may contain several parts. For example, `application/vnd.api+json; charset=UTF-8` is a compound of `type "/" [tree "."] subtype ["+" suffix]* [";" parameter]`. Parameters are not taken into account when filtering media types on request generation.

The environment variables `FUZZAPI_OPENAPI_ALL_MEDIA_TYPES` and `FUZZAPI_OPENAPI_MEDIA_TYPES` allow you to decide how to handle media types. These settings are mutually exclusive. If both are enabled, API Fuzzing reports an error.

### Configure Web API fuzzing with an OpenAPI Specification

To configure API fuzzing in GitLab with an OpenAPI Specification:

1. Add the `fuzz` stage to your `.gitlab-ci.yml` file.
1. [Include](../../../../ci/yaml/_index.md#includetemplate) the [`API-Fuzzing.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Security/API-Fuzzing.gitlab-ci.yml) in your `.gitlab-ci.yml` file.
1. Provide the profile by adding the `FUZZAPI_PROFILE` CI/CD variable to your `.gitlab-ci.yml` file. The profile specifies how many tests are run. Substitute `Quick-10` for the profile you choose. For more details, see [API fuzzing profiles](customizing_analyzer_settings.md#api-fuzzing-profiles).

   ```yaml
   variables:
     FUZZAPI_PROFILE: Quick-10
   ```

1. Provide the location of the OpenAPI Specification. You can provide the specification as a file or URL. Specify the location by adding the `FUZZAPI_OPENAPI` variable.
1. Provide the target API instance's base URL. Use either the `FUZZAPI_TARGET_URL` variable or an `environment_url.txt` file.
   Adding the URL in an `environment_url.txt` file at your project's root is great for testing in dynamic environments. To run API fuzzing against an application dynamically created during a GitLab CI/CD pipeline, have the application persist its URL in an `environment_url.txt` file. API fuzzing automatically parses that file to find its scan target. You can see an example of this in the [Auto DevOps CI YAML](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml).

Example `.gitlab-ci.yml` file using an OpenAPI Specification:

```yaml
stages:
  - fuzz

include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick-10
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: http://test-deployment/
```

This is a minimal configuration for API Fuzzing. From here you can:

- [Run your first scan](#running-your-first-scan).
- [Add authentication](customizing_analyzer_settings.md#authentication).
- Learn how to [handle false positives](#handling-false-positives).

For details of API fuzzing configuration options, see [Available CI/CD variables](variables.md).

## HTTP Archive (HAR)

The [HTTP Archive format (HAR)](http://www.softwareishard.com/blog/har-12-spec/) is an archive file format for logging HTTP transactions. When used with the GitLab API fuzzer, HAR must contain records of calling the web API to test. The API fuzzer extracts all the requests and uses them to perform testing.

For more details, including how to create a HAR file, see [HTTP Archive format](../create_har_files.md).

{{< alert type="warning" >}}

HAR files may contain sensitive information such as authentication tokens, API keys, and session cookies. We recommend that you review the HAR file contents before adding them to a repository.

{{< /alert >}}

### Configure Web API fuzzing with a HAR file

To configure API fuzzing to use a HAR file:

1. Add the `fuzz` stage to your `.gitlab-ci.yml` file.
1. [Include](../../../../ci/yaml/_index.md#includetemplate) the [`API-Fuzzing.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Security/API-Fuzzing.gitlab-ci.yml) in your `.gitlab-ci.yml` file.
1. Provide the profile by adding the `FUZZAPI_PROFILE` CI/CD variable to your `.gitlab-ci.yml` file. The profile specifies how many tests are run. Substitute `Quick-10` for the profile you choose. For more details, see [API fuzzing profiles](customizing_analyzer_settings.md#api-fuzzing-profiles).

   ```yaml
   variables:
     FUZZAPI_PROFILE: Quick-10
   ```

1. Provide the location of the HAR file. You can provide the location as a file path or URL. Specify the location by adding the `FUZZAPI_HAR` variable.
1. The target API instance's base URL is also required. Provide it by using the `FUZZAPI_TARGET_URL` variable or an `environment_url.txt` file.

   Adding the URL in an `environment_url.txt` file at your project's root is great for testing in dynamic environments. To run API fuzzing against an app dynamically created during a GitLab CI/CD pipeline, have the app persist its domain in an `environment_url.txt` file. API fuzzing automatically parses that file to find its scan target. You can see an [example of this in our Auto DevOps CI YAML](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml).

Example `.gitlab-ci.yml` file using a HAR file:

```yaml
stages:
  - fuzz

include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick-10
  FUZZAPI_HAR: test-api-recording.har
  FUZZAPI_TARGET_URL: http://test-deployment/
```

This example is a minimal configuration for API fuzzing. From here you can:

- [Run your first scan](#running-your-first-scan).
- [Add authentication](customizing_analyzer_settings.md#authentication).
- Learn how to [handle false positives](#handling-false-positives).
For details of API fuzzing configuration options, see [Available CI/CD variables](variables.md).

## GraphQL Schema

{{< history >}}

- Support for GraphQL Schema was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/352780) in GitLab 15.4.

{{< /history >}}

GraphQL is a query language for your API and an alternative to REST APIs. API Fuzzing supports testing GraphQL endpoints in multiple ways:

- Test using the GraphQL Schema. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/352780) in GitLab 15.4.
- Test using a recording (HAR) of GraphQL queries.
- Test using a Postman Collection containing GraphQL queries.

This section documents how to test using a GraphQL schema. The GraphQL schema support in API Fuzzing is able to query the schema from endpoints that support introspection. Introspection is enabled by default to allow tools like GraphiQL to work.

### API Fuzzing scanning with a GraphQL endpoint URL

The GraphQL support in API Fuzzing is able to query a GraphQL endpoint for the schema.

{{< alert type="note" >}}

The GraphQL endpoint must support introspection queries for this method to work correctly.

{{< /alert >}}

To configure API Fuzzing to use a GraphQL endpoint URL that provides information about the target API to test:

1. [Include](../../../../ci/yaml/_index.md#includetemplate) the [`API-Fuzzing.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Security/API-Fuzzing.gitlab-ci.yml) in your `.gitlab-ci.yml` file.
1. Provide the GraphQL endpoint path, for example `/api/graphql`. Specify the path by adding the `FUZZAPI_GRAPHQL` variable.
1. The target API instance's base URL is also required. Provide it by using the `FUZZAPI_TARGET_URL` variable or an `environment_url.txt` file.

   Adding the URL in an `environment_url.txt` file at your project's root is great for testing in dynamic environments.
   See the [dynamic environment solutions](../troubleshooting.md#dynamic-environment-solutions) section of our documentation for more information.

Complete example configuration of using a GraphQL endpoint URL:

```yaml
stages:
  - fuzz

include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

apifuzzer_fuzz:
  variables:
    FUZZAPI_GRAPHQL: /api/graphql
    FUZZAPI_TARGET_URL: http://test-deployment/
```

This example is a minimal configuration for API Fuzzing. From here you can:

- [Run your first scan](#running-your-first-scan).
- [Add authentication](customizing_analyzer_settings.md#authentication).
- Learn how to [handle false positives](#handling-false-positives).

### API Fuzzing with a GraphQL Schema file

API Fuzzing can use a GraphQL schema file to understand and test a GraphQL endpoint that has introspection disabled. To use a GraphQL schema file, it must be in the introspection JSON format. A GraphQL schema can be converted to the introspection JSON format using an online third-party tool: [https://transform.tools/graphql-to-introspection-json](https://transform.tools/graphql-to-introspection-json).

To configure API Fuzzing to use a GraphQL schema file that provides information about the target API to test:

1. [Include](../../../../ci/yaml/_index.md#includetemplate) the [`API-Fuzzing.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Security/API-Fuzzing.gitlab-ci.yml) in your `.gitlab-ci.yml` file.
1. Provide the GraphQL endpoint path, for example `/api/graphql`. Specify the path by adding the `FUZZAPI_GRAPHQL` variable.
1. Provide the location of the GraphQL schema file. You can provide the location as a file path or URL. Specify the location by adding the `FUZZAPI_GRAPHQL_SCHEMA` variable.
1. The target API instance's base URL is also required. Provide it by using the `FUZZAPI_TARGET_URL` variable or an `environment_url.txt` file.
   Adding the URL in an `environment_url.txt` file at your project's root is great for testing in dynamic environments. See the [dynamic environment solutions](../troubleshooting.md#dynamic-environment-solutions) section of our documentation for more information.

Complete example configuration of using a GraphQL schema file:

```yaml
stages:
  - fuzz

include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

apifuzzer_fuzz:
  variables:
    FUZZAPI_GRAPHQL: /api/graphql
    FUZZAPI_GRAPHQL_SCHEMA: test-api-graphql.schema
    FUZZAPI_TARGET_URL: http://test-deployment/
```

Complete example configuration of using a GraphQL schema file URL:

```yaml
stages:
  - fuzz

include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

apifuzzer_fuzz:
  variables:
    FUZZAPI_GRAPHQL: /api/graphql
    FUZZAPI_GRAPHQL_SCHEMA: http://file-store/files/test-api-graphql.schema
    FUZZAPI_TARGET_URL: http://test-deployment/
```

This example is a minimal configuration for API Fuzzing. From here you can:

- [Run your first scan](#running-your-first-scan).
- [Add authentication](customizing_analyzer_settings.md#authentication).
- Learn how to [handle false positives](#handling-false-positives).

## Postman Collection

The [Postman API Client](https://www.postman.com/product/api-client/) is a popular tool that developers and testers use to call various types of APIs. The API definitions [can be exported as a Postman Collection file](https://learning.postman.com/docs/getting-started/importing-and-exporting/exporting-data/#export-collections) for use with API Fuzzing. When exporting, make sure to select a supported version of Postman Collection: v2.0 or v2.1.

When used with the GitLab API fuzzer, Postman Collections must contain definitions of the web API to test with valid data. The API fuzzer extracts all the API definitions and uses them to perform testing.

{{< alert type="warning" >}}

Postman Collection files may contain sensitive information such as authentication tokens, API keys, and session cookies.
We recommend that you review the Postman Collection file contents before adding them to a repository. {{< /alert >}} ### Configure Web API fuzzing with a Postman Collection file To configure API fuzzing to use a Postman Collection file: 1. Add the `fuzz` stage to your `.gitlab-ci.yml` file. 1. [Include](../../../../ci/yaml/_index.md#includetemplate) the [`API-Fuzzing.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Security/API-Fuzzing.gitlab-ci.yml) in your `.gitlab-ci.yml` file. 1. Provide the profile by adding the `FUZZAPI_PROFILE` CI/CD variable to your `.gitlab-ci.yml` file. The profile specifies how many tests are run. Substitute `Quick-10` for the profile you choose. For more details, see [API fuzzing profiles](customizing_analyzer_settings.md#api-fuzzing-profiles). ```yaml variables: FUZZAPI_PROFILE: Quick-10 ``` 1. Provide the location of the Postman Collection specification. You can provide the specification as a file or URL. Specify the location by adding the `FUZZAPI_POSTMAN_COLLECTION` variable. 1. Provide the target API instance's base URL. Use either the `FUZZAPI_TARGET_URL` variable or an `environment_url.txt` file. Adding the URL in an `environment_url.txt` file at your project's root is great for testing in dynamic environments. To run API fuzzing against an app dynamically created during a GitLab CI/CD pipeline, have the app persist its domain in an `environment_url.txt` file. API fuzzing automatically parses that file to find its scan target. You can see an [example of this in our Auto DevOps CI YAML](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml). 
Example `.gitlab-ci.yml` file using a Postman Collection file: ```yaml stages: - fuzz include: - template: Security/API-Fuzzing.gitlab-ci.yml variables: FUZZAPI_PROFILE: Quick-10 FUZZAPI_POSTMAN_COLLECTION: postman-collection_serviceA.json FUZZAPI_TARGET_URL: http://test-deployment/ ``` This is a minimal configuration for API Fuzzing. From here you can: - [Run your first scan](#running-your-first-scan). - [Add authentication](customizing_analyzer_settings.md#authentication). - Learn how to [handle false positives](#handling-false-positives). For details of API fuzzing configuration options, see [Available CI/CD variables](variables.md). ### Postman variables {{< history >}} - Support for Postman Environment file format was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/356312) in GitLab 15.1. - Support for multiple variable files was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/356312) in GitLab 15.1. - Support for Postman variable scopes: Global and Environment was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/356312) in GitLab 15.1. {{< /history >}} #### Variables in Postman Client Postman allows the developer to define placeholders that can be used in different parts of the requests. These placeholders are called variables, as explained in [using variables](https://learning.postman.com/docs/sending-requests/variables/variables/). You can use variables to store and reuse values in your requests and scripts. 
For example, you can edit the collection to add variables to the document:

![Edit collection variable tab View](img/api_fuzzing_postman_collection_edit_variable_v13_9.png)

Alternatively, you can add variables in an environment:

![Edit environment variables View](img/api_fuzzing_postman_environment_edit_variable_v13_9.png)

You can then use the variables in sections such as URL, headers, and others:

![Edit request using variables View](img/api_fuzzing_postman_request_edit_v13_9.png)

Postman has grown from a basic client tool with a polished user experience into a more complex ecosystem that allows testing APIs with scripts, creating complex collections that trigger secondary requests, and setting variables along the way. Not every feature in the Postman ecosystem is supported. For example, scripts are not supported. The main focus of the Postman support is to ingest Postman Collection definitions that are used by the Postman Client and their related variables defined in the workspace, environments, and the collections themselves.

Postman allows creating variables in different scopes. Each scope has a different level of visibility in the Postman tools. For example, you can create a variable in a _global environment_ scope that is seen by every operation definition and workspace. You can also create a variable in a specific _environment_ scope that is only visible and used when that specific environment is selected for use. Some scopes are not always available. For example, requests created in the Postman Client do not have a _local_ scope, but test scripts do.

Variable scopes in Postman can be a daunting topic, and not everyone is familiar with them. We strongly recommend that you read [Variable Scopes](https://learning.postman.com/docs/sending-requests/variables/variables/#variable-scopes) in the Postman documentation before moving forward.
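To make the reference syntax concrete: a `{{name}}` placeholder in a URL, header, or body is replaced with the value supplied by whichever scope defines it. The following sketch is our own illustration, not the analyzer's implementation; references with no known value are left unchanged.

```python
import re

def substitute(text: str, variables: dict) -> str:
    """Replace {{name}} placeholders in `text` with values from `variables`.

    References that have no known value are left as-is.
    """
    def repl(match):
        name = match.group(1)
        return variables.get(name, match.group(0))  # keep literal {{name}} if undefined
    return re.sub(r"\{\{([^{}]+)\}\}", repl, text)

# `base_url` is defined in a scope; `resource` is undefined and kept verbatim.
url = substitute("{{base_url}}/api/{{resource}}", {"base_url": "http://127.0.0.1"})
print(url)  # http://127.0.0.1/api/{{resource}}
```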
As mentioned previously, there are different variable scopes, each with a purpose that provides more flexibility to your Postman document. There is an important note on how values for variables are computed, as per the Postman documentation:

{{< alert type="note" >}}

If a variable with the same name is declared in two different scopes, the value stored in the variable with the narrowest scope is used. For example, if there is a global variable named `username` and a local variable named `username`, the local value is used when the request runs.

{{< /alert >}}

The following is a summary of the variable scopes supported by the Postman Client and API Fuzzing:

- **Global Environment (Global) scope** is a special pre-defined environment that is available throughout a workspace. We can also refer to the _global environment_ scope as the _global_ scope. The Postman Client allows exporting the global environment into a JSON file, which can be used with API Fuzzing.
- **Environment scope** is a named group of variables created by a user in the Postman Client. The Postman Client supports a single active environment along with the global environment. The variables defined in an active user-created environment take precedence over variables defined in the global environment. The Postman Client allows exporting your environment into a JSON file, which can be used with API Fuzzing.
- **Collection scope** is a group of variables declared in a given collection. The collection variables are available to the collection where they have been declared and the nested requests or collections. Variables defined in the collection scope take precedence over the _global environment_ scope and also the _environment_ scope. The Postman Client can export one or more collections into a JSON file; this JSON file contains selected collections, requests, and collection variables.
- **API Fuzzing Scope** is a new scope added by API Fuzzing to allow users to provide extra variables, or override variables defined in other supported scopes. This scope is not supported by Postman. The _API Fuzzing Scope_ variables are provided using a [custom JSON file format](#api-fuzzing-scope-custom-json-file-format). Use this scope to:
  - Override values defined in the environment or collection.
  - Define variables from scripts.
  - Define a single row of data from the unsupported _data_ scope.
- **Data scope** is a group of variables whose names and values come from JSON or CSV files. A Postman collection runner like [Newman](https://learning.postman.com/docs/collections/using-newman-cli/command-line-integration-with-newman/) or [Postman Collection Runner](https://learning.postman.com/docs/collections/running-collections/intro-to-collection-runs/) executes the requests in a collection as many times as there are entries in the JSON or CSV file. A good use case for these variables is to automate tests using scripts in Postman. API Fuzzing does **not** support reading data from a CSV or JSON file.
- **Local scope** variables are defined in Postman scripts. API Fuzzing does **not** support Postman scripts and, by extension, variables defined in scripts. You can still provide values for script-defined variables by defining them in one of the supported scopes, or in our custom JSON format.

Not all scopes are supported by API Fuzzing, and variables defined in scripts are not supported. The following table is sorted from broadest scope to narrowest scope.
| Scope              | Postman | API Fuzzing | Comment |
| ------------------ |:-------:|:-----------:| :------ |
| Global Environment | Yes     | Yes         | Special pre-defined environment |
| Environment        | Yes     | Yes         | Named environments |
| Collection         | Yes     | Yes         | Defined in your Postman Collection |
| API Fuzzing Scope  | No      | Yes         | Custom scope added by API Fuzzing |
| Data               | Yes     | No          | External files in CSV or JSON format |
| Local              | Yes     | No          | Variables defined in scripts |

For more details on how to define variables and export variables in different scopes, see:

- [Defining collection variables](https://learning.postman.com/docs/sending-requests/variables/variables/#defining-collection-variables)
- [Defining environment variables](https://learning.postman.com/docs/sending-requests/variables/variables/#defining-environment-variables)
- [Defining global variables](https://learning.postman.com/docs/sending-requests/variables/variables/#defining-global-variables)

#### Exporting from Postman Client

The Postman Client lets you export different file formats. For instance, you can export a Postman collection or a Postman environment. The exported environment can be the global environment (which is always available) or any custom environment you previously created. When you export a Postman Collection, it may contain only declarations for _collection_ and _local_ scoped variables; _environment_ scoped variables are not included.

To get the declaration for _environment_ scoped variables, you have to export one environment at a time. Each exported file only includes variables from the selected environment.
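The precedence rule across these scopes (narrower scopes win when a name is defined more than once, with the API Fuzzing scope taking precedence over all others) amounts to an ordered dictionary merge. A minimal sketch of that resolution, assuming variables have already been loaded into one dictionary per scope (the function is ours, for illustration only):

```python
def resolve_variables(global_env=None, environment=None, collection=None, api_fuzzing=None):
    """Merge variable scopes from broadest to narrowest.

    Later (narrower) scopes override earlier (broader) ones, so a name
    defined in the API Fuzzing scope beats the same name in any other scope.
    """
    resolved = {}
    for scope in (global_env, environment, collection, api_fuzzing):
        resolved.update(scope or {})
    return resolved

# `api_version` is defined twice; the API Fuzzing scope value wins.
print(resolve_variables(
    collection={"api_version": "v2", "base_url": "http://test-deployment/"},
    api_fuzzing={"api_version": "v1"},
))  # {'api_version': 'v1', 'base_url': 'http://test-deployment/'}
```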
For more details on exporting variables in different supported scopes, see:

- [Exporting collections](https://learning.postman.com/docs/getting-started/importing-and-exporting/exporting-data/#export-collections)
- [Exporting environments](https://learning.postman.com/docs/getting-started/importing-and-exporting/exporting-data/#export-environments)
- [Downloading global environments](https://learning.postman.com/docs/sending-requests/variables/variables/#downloading-global-environments)

#### API Fuzzing Scope, custom JSON file format

Our custom JSON file format is a JSON object where each object property represents a variable name and the property value represents the variable value. This file can be created using your favorite text editor, or it can be produced by an earlier job in your pipeline.

This example defines two variables, `base_url` and `token`, in the API Fuzzing scope:

```json
{
  "base_url": "http://127.0.0.1/",
  "token": "Token 84816165151"
}
```

#### Using scopes with API Fuzzing

The _global_, _environment_, _collection_, and _GitLab API Fuzzing_ scopes are supported in [GitLab 15.1 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/356312). GitLab 15.0 and earlier supports only the _collection_ and _GitLab API Fuzzing_ scopes.

The following table provides a quick reference for mapping scope files/URLs to API Fuzzing configuration variables:

| Scope              | How to Provide |
| ------------------ | -------------- |
| Global Environment | `FUZZAPI_POSTMAN_COLLECTION_VARIABLES` |
| Environment        | `FUZZAPI_POSTMAN_COLLECTION_VARIABLES` |
| Collection         | `FUZZAPI_POSTMAN_COLLECTION` |
| API Fuzzing Scope  | `FUZZAPI_POSTMAN_COLLECTION_VARIABLES` |
| Data               | Not supported |
| Local              | Not supported |

The Postman Collection document automatically includes any _collection_ scoped variables. The Postman Collection is provided with the configuration variable `FUZZAPI_POSTMAN_COLLECTION`.
This variable can be set to a single [exported Postman collection](https://learning.postman.com/docs/getting-started/importing-and-exporting/exporting-data/#export-collections).

Variables from other scopes are provided through the `FUZZAPI_POSTMAN_COLLECTION_VARIABLES` configuration variable. The configuration variable supports a comma (`,`) delimited file list in [GitLab 15.1 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/356312). GitLab 15.0 and earlier supports only a single file. The order of the files provided is not important, because each file provides the needed scope information.

The configuration variable `FUZZAPI_POSTMAN_COLLECTION_VARIABLES` can be set to:

- [Exported Global environment](https://learning.postman.com/docs/sending-requests/variables/variables/#downloading-global-environments)
- [Exported environments](https://learning.postman.com/docs/getting-started/importing-and-exporting/exporting-data/#export-environments)
- [API Fuzzing Custom JSON format](#api-fuzzing-scope-custom-json-file-format)

#### Undefined Postman variables

The API Fuzzing engine might not resolve every variable reference that your Postman collection file uses. Possible causes include:

- You are using _data_ or _local_ scoped variables, which, as stated previously, are not supported by API Fuzzing. Unless values for these variables are provided through [the API Fuzzing scope](#api-fuzzing-scope-custom-json-file-format), the values of _data_ and _local_ scoped variables are undefined.
- A variable name was typed incorrectly, and the name does not match the defined variable.
- The Postman Client supports a dynamic variable that API Fuzzing does not.

When possible, API Fuzzing follows the same behavior as the Postman Client when dealing with undefined variables. The text of the variable reference remains the same, and there is no text substitution.
The same behavior also applies to any unsupported dynamic variables. For example, if a request definition in the Postman Collection references the variable `{{full_url}}` and the variable is not found it is left unchanged with the value `{{full_url}}`. #### Dynamic Postman variables In addition to variables that a user can define at various scope levels, Postman has a set of pre-defined variables called _dynamic_ variables. The [_dynamic_ variables](https://learning.postman.com/docs/tests-and-scripts/write-scripts/variables-list/) are already defined and their name is prefixed with a dollar sign (`$`), for instance, `$guid`. _Dynamic_ variables can be used like any other variable, and in the Postman Client, they produce random values during the request/collection run. An important difference between API Fuzzing and Postman is that API Fuzzing returns the same value for each usage of the same dynamic variables. This differs from the Postman Client behavior which returns a random value on each use of the same dynamic variable. In other words, API Fuzzing uses static values for dynamic variables while Postman uses random values. 
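The contrast described above can be sketched as two tiny resolvers (our own illustration, not the analyzer's implementation; the static `$guid` value matches the table in this section):

```python
import uuid

# Fixed values used for dynamic variables during scanning (API Fuzzing behavior).
STATIC_DYNAMIC = {
    "$guid": "611c2e81-2ccb-42d8-9ddc-2d0bfa65c1b4",
    "$randomInt": "494",
}

def api_fuzzing_resolve(name: str) -> str:
    """Every use of the same dynamic variable yields the same static value."""
    return STATIC_DYNAMIC[name]

def postman_style_resolve(name: str) -> str:
    """The Postman Client generates a fresh random value on each use."""
    if name == "$guid":
        return str(uuid.uuid4())
    raise KeyError(name)

print(api_fuzzing_resolve("$guid") == api_fuzzing_resolve("$guid"))      # True
print(postman_style_resolve("$guid") == postman_style_resolve("$guid"))  # False
```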
The supported dynamic variables during the scanning process are: | Variable | Value | | ----------- | ----------- | | `$guid` | `611c2e81-2ccb-42d8-9ddc-2d0bfa65c1b4` | | `$isoTimestamp` | `2020-06-09T21:10:36.177Z` | | `$randomAbbreviation` | `PCI` | | `$randomAbstractImage` | `http://no-a-valid-host/640/480/abstract` | | `$randomAdjective` | `auxiliary` | | `$randomAlphaNumeric` | `a` | | `$randomAnimalsImage` | `http://no-a-valid-host/640/480/animals` | | `$randomAvatarImage` | `https://no-a-valid-host/path/to/some/image.jpg` | | `$randomBankAccount` | `09454073` | | `$randomBankAccountBic` | `EZIAUGJ1` | | `$randomBankAccountIban` | `MU20ZPUN3039684000618086155TKZ` | | `$randomBankAccountName` | `Home Loan Account` | | `$randomBitcoin` | `3VB8JGT7Y4Z63U68KGGKDXMLLH5` | | `$randomBoolean` | `true` | | `$randomBs` | `killer leverage schemas` | | `$randomBsAdjective` | `viral` | | `$randomBsBuzz` | `repurpose` | | `$randomBsNoun` | `markets` | | `$randomBusinessImage` | `http://no-a-valid-host/640/480/business` | | `$randomCatchPhrase` | `Future-proofed heuristic open architecture` | | `$randomCatchPhraseAdjective` | `Business-focused` | | `$randomCatchPhraseDescriptor` | `bandwidth-monitored` | | `$randomCatchPhraseNoun` | `superstructure` | | `$randomCatsImage` | `http://no-a-valid-host/640/480/cats` | | `$randomCity` | `Spinkahaven` | | `$randomCityImage` | `http://no-a-valid-host/640/480/city` | | `$randomColor` | `fuchsia` | | `$randomCommonFileExt` | `wav` | | `$randomCommonFileName` | `well_modulated.mpg4` | | `$randomCommonFileType` | `audio` | | `$randomCompanyName` | `Grady LLC` | | `$randomCompanySuffix` | `Inc` | | `$randomCountry` | `Kazakhstan` | | `$randomCountryCode` | `MD` | | `$randomCreditCardMask` | `3622` | | `$randomCurrencyCode` | `ZMK` | | `$randomCurrencyName` | `Pound Sterling` | | `$randomCurrencySymbol` | `£` | | `$randomDatabaseCollation` | `utf8_general_ci` | | `$randomDatabaseColumn` | `updatedAt` | | `$randomDatabaseEngine` | 
`Memory` | | `$randomDatabaseType` | `text` | | `$randomDateFuture` | `Tue Mar 17 2020 13:11:50 GMT+0530 (India Standard Time)` | | `$randomDatePast` | `Sat Mar 02 2019 09:09:26 GMT+0530 (India Standard Time)` | | `$randomDateRecent` | `Tue Jul 09 2019 23:12:37 GMT+0530 (India Standard Time)` | | `$randomDepartment` | `Electronics` | | `$randomDirectoryPath` | `/usr/local/bin` | | `$randomDomainName` | `trevor.info` | | `$randomDomainSuffix` | `org` | | `$randomDomainWord` | `jaden` | | `$randomEmail` | `Iva.Kovacek61@no-a-valid-host.com` | | `$randomExampleEmail` | `non-a-valid-user@example.net` | | `$randomFashionImage` | `http://no-a-valid-host/640/480/fashion` | | `$randomFileExt` | `war` | | `$randomFileName` | `neural_sri_lanka_rupee_gloves.gdoc` | | `$randomFilePath` | `/home/programming_chicken.cpio` | | `$randomFileType` | `application` | | `$randomFirstName` | `Chandler` | | `$randomFoodImage` | `http://no-a-valid-host/640/480/food` | | `$randomFullName` | `Connie Runolfsdottir` | | `$randomHexColor` | `#47594a` | | `$randomImageDataUri` | `data:image/svg+xml;charset=UTF-8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20version%3D%221.1%22%20baseProfile%3D%22full%22%20width%3D%22undefined%22%20height%3D%22undefined%22%3E%20%3Crect%20width%3D%22100%25%22%20height%3D%22100%25%22%20fill%3D%22grey%22%2F%3E%20%20%3Ctext%20x%3D%220%22%20y%3D%2220%22%20font-size%3D%2220%22%20text-anchor%3D%22start%22%20fill%3D%22white%22%3Eundefinedxundefined%3C%2Ftext%3E%20%3C%2Fsvg%3E` | | `$randomImageUrl` | `http://no-a-valid-host/640/480` | | `$randomIngverb` | `navigating` | | `$randomInt` | `494` | | `$randomIP` | `241.102.234.100` | | `$randomIPV6` | `dbe2:7ae6:119b:c161:1560:6dda:3a9b:90a9` | | `$randomJobArea` | `Mobility` | | `$randomJobDescriptor` | `Senior` | | `$randomJobTitle` | `International Creative Liaison` | | `$randomJobType` | `Supervisor` | | `$randomLastName` | `Schneider` | | `$randomLatitude` | `55.2099` | | `$randomLocale` | `ny` | | 
`$randomLongitude` | `40.6609` | | `$randomLoremLines` | `Ducimus in ut mollitia.\nA itaque non.\nHarum temporibus nihil voluptas.\nIste in sed et nesciunt in quaerat sed.` | | `$randomLoremParagraph` | `Ab aliquid odio iste quo voluptas voluptatem dignissimos velit. Recusandae facilis qui commodi ea magnam enim nostrum quia quis. Nihil est suscipit assumenda ut voluptatem sed. Esse ab voluptas odit qui molestiae. Rem est nesciunt est quis ipsam expedita consequuntur.` | | `$randomLoremParagraphs` | `Voluptatem rem magnam aliquam ab id aut quaerat. Placeat provident possimus voluptatibus dicta velit non aut quasi. Mollitia et aliquam expedita sunt dolores nam consequuntur. Nam dolorum delectus ipsam repudiandae et ipsam ut voluptatum totam. Nobis labore labore recusandae ipsam quo.` | | `$randomLoremSentence` | `Molestias consequuntur nisi non quod.` | | `$randomLoremSentences` | `Et sint voluptas similique iure amet perspiciatis vero sequi atque. Ut porro sit et hic. Neque aspernatur vitae fugiat ut dolore et veritatis. Ab iusto ex delectus animi. Voluptates nisi iusto. Impedit quod quae voluptate qui.` | | `$randomLoremSlug` | `eos-aperiam-accusamus, beatae-id-molestiae, qui-est-repellat` | | `$randomLoremText` | `Quisquam asperiores exercitationem ut ipsum. Aut eius nesciunt. Et reiciendis aut alias eaque. Nihil amet laboriosam pariatur eligendi. Sunt ullam ut sint natus ducimus. 
Voluptas harum aspernatur soluta rem nam.` | | `$randomLoremWord` | `est` | | `$randomLoremWords` | `vel repellat nobis` | | `$randomMACAddress` | `33:d4:68:5f:b4:c7` | | `$randomMimeType` | `audio/vnd.vmx.cvsd` | | `$randomMonth` | `February` | | `$randomNamePrefix` | `Dr.` | | `$randomNameSuffix` | `MD` | | `$randomNatureImage` | `http://no-a-valid-host/640/480/nature` | | `$randomNightlifeImage` | `http://no-a-valid-host/640/480/nightlife` | | `$randomNoun` | `bus` | | `$randomPassword` | `t9iXe7COoDKv8k3` | | `$randomPeopleImage` | `http://no-a-valid-host/640/480/people` | | `$randomPhoneNumber` | `700-008-5275` | | `$randomPhoneNumberExt` | `27-199-983-3864` | | `$randomPhrase` | `You can't program the monitor without navigating the mobile XML program!` | | `$randomPrice` | `531.55` | | `$randomProduct` | `Pizza` | | `$randomProductAdjective` | `Unbranded` | | `$randomProductMaterial` | `Steel` | | `$randomProductName` | `Handmade Concrete Tuna` | | `$randomProtocol` | `https` | | `$randomSemver` | `7.0.5` | | `$randomSportsImage` | `http://no-a-valid-host/640/480/sports` | | `$randomStreetAddress` | `5742 Harvey Streets` | | `$randomStreetName` | `Kuhic Island` | | `$randomTransactionType` | `payment` | | `$randomTransportImage` | `http://no-a-valid-host/640/480/transport` | | `$randomUrl` | `https://no-a-valid-host.net` | | `$randomUserAgent` | `Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.9.8; rv:15.6) Gecko/20100101 Firefox/15.6.6` | | `$randomUserName` | `Jarrell.Gutkowski` | | `$randomUUID` | `6929bb52-3ab2-448a-9796-d6480ecad36b` | | `$randomVerb` | `navigate` | | `$randomWeekday` | `Thursday` | | `$randomWord` | `withdrawal` | | `$randomWords` | `Samoa Synergistic sticky copying Grocery` | | `$timestamp` | `1562757107` | #### Example: Global Scope In this example, [the _global_ scope is exported](https://learning.postman.com/docs/sending-requests/variables/variables/#downloading-global-environments) from the Postman Client as `global-scope.json` and 
provided to API Fuzzing through the `FUZZAPI_POSTMAN_COLLECTION_VARIABLES` configuration variable. Here is an example of using `FUZZAPI_POSTMAN_COLLECTION_VARIABLES`: ```yaml stages: - fuzz include: - template: Security/API-Fuzzing.gitlab-ci.yml variables: FUZZAPI_PROFILE: Quick-10 FUZZAPI_POSTMAN_COLLECTION: postman-collection.json FUZZAPI_POSTMAN_COLLECTION_VARIABLES: global-scope.json FUZZAPI_TARGET_URL: http://test-deployment/ ``` #### Example: Environment Scope In this example, [the _environment_ scope is exported](https://learning.postman.com/docs/getting-started/importing-and-exporting/exporting-data/#export-environments) from the Postman Client as `environment-scope.json` and provided to API Fuzzing through the `FUZZAPI_POSTMAN_COLLECTION_VARIABLES` configuration variable. Here is an example of using `FUZZAPI_POSTMAN_COLLECTION_VARIABLES`: ```yaml stages: - fuzz include: - template: Security/API-Fuzzing.gitlab-ci.yml variables: FUZZAPI_PROFILE: Quick FUZZAPI_POSTMAN_COLLECTION: postman-collection.json FUZZAPI_POSTMAN_COLLECTION_VARIABLES: environment-scope.json FUZZAPI_TARGET_URL: http://test-deployment/ ``` #### Example: Collection Scope The _collection_ scope variables are included in the exported Postman Collection file and provided through the `FUZZAPI_POSTMAN_COLLECTION` configuration variable. Here is an example of using `FUZZAPI_POSTMAN_COLLECTION`: ```yaml stages: - fuzz include: - template: Security/API-Fuzzing.gitlab-ci.yml variables: FUZZAPI_PROFILE: Quick FUZZAPI_POSTMAN_COLLECTION: postman-collection.json FUZZAPI_TARGET_URL: http://test-deployment/ FUZZAPI_POSTMAN_COLLECTION_VARIABLES: variable-collection-dictionary.json ``` #### Example: API Fuzzing Scope The API Fuzzing Scope is used for two main purposes, defining _data_ and _local_ scope variables that are not supported by API Fuzzing, and changing the value of an existing variable defined in another scope. 
The API Fuzzing Scope is provided through the `FUZZAPI_POSTMAN_COLLECTION_VARIABLES` configuration variable. Here is an example of using `FUZZAPI_POSTMAN_COLLECTION_VARIABLES`: ```yaml stages: - fuzz include: - template: Security/API-Fuzzing.gitlab-ci.yml variables: FUZZAPI_PROFILE: Quick FUZZAPI_POSTMAN_COLLECTION: postman-collection.json FUZZAPI_POSTMAN_COLLECTION_VARIABLES: api-fuzzing-scope.json FUZZAPI_TARGET_URL: http://test-deployment/ ``` The file `api-fuzzing-scope.json` uses our [custom JSON file format](#api-fuzzing-scope-custom-json-file-format). This JSON is an object with key-value pairs for properties. The keys are the variables' names, and the values are the variables' values. For example: ```json { "base_url": "http://127.0.0.1/", "token": "Token 84816165151" } ``` #### Example: Multiple Scopes In this example, a _global_ scope, _environment_ scope, and _collection_ scope are configured. The first step is to export our various scopes. - [Export the _global_ scope](https://learning.postman.com/docs/sending-requests/variables/variables/#downloading-global-environments) as `global-scope.json` - [Export the _environment_ scope](https://learning.postman.com/docs/getting-started/importing-and-exporting/exporting-data/#export-environments) as `environment-scope.json` - Export the Postman Collection which includes the _collection_ scope as `postman-collection.json` The Postman Collection is provided using the `FUZZAPI_POSTMAN_COLLECTION` variable, while the other scopes are provided using the `FUZZAPI_POSTMAN_COLLECTION_VARIABLES`. API Fuzzing can identify which scope the provided files match using data provided in each file. 
```yaml
stages:
  - fuzz

include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_POSTMAN_COLLECTION: postman-collection.json
  FUZZAPI_POSTMAN_COLLECTION_VARIABLES: global-scope.json,environment-scope.json
  FUZZAPI_TARGET_URL: http://test-deployment/
```

#### Example: Changing a Variable's Value

When using exported scopes, it's often the case that the value of a variable must be changed for use with API Fuzzing. For example, a _collection_ scope might contain a variable named `api_version` with a value of `v2`, while your test needs a value of `v1`. Instead of modifying the exported collection, the API Fuzzing scope can be used to change its value. This works because the _API Fuzzing_ scope takes precedence over all other scopes.

The _collection_ scope variables are included in the exported Postman Collection file and provided through the `FUZZAPI_POSTMAN_COLLECTION` configuration variable.

The API Fuzzing Scope is provided through the `FUZZAPI_POSTMAN_COLLECTION_VARIABLES` configuration variable, but first, we must create the file. The file `api-fuzzing-scope.json` uses our [custom JSON file format](#api-fuzzing-scope-custom-json-file-format). This JSON is an object with key-value pairs for properties. The keys are the variables' names, and the values are the variables' values. For example:

```json
{
  "api_version": "v1"
}
```

Our CI definition:

```yaml
stages:
  - fuzz

include:
  - template: Security/API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_POSTMAN_COLLECTION: postman-collection.json
  FUZZAPI_POSTMAN_COLLECTION_VARIABLES: api-fuzzing-scope.json
  FUZZAPI_TARGET_URL: http://test-deployment/
```

#### Example: Changing a Variable's Value with Multiple Scopes

When using exported scopes, it's often the case that the value of a variable must be changed for use with API Fuzzing.
For example, an _environment_ scope might contain a variable named `api_version` with a value of `v2`, while your test needs a value of `v1`. Instead of modifying the exported file to change the value, the API Fuzzing scope can be used. This works because the _API Fuzzing_ scope takes precedence over all other scopes. In this example, a _global_ scope, _environment_ scope, _collection_ scope, and _API Fuzzing_ scope are configured. The first step is to export and create our various scopes. - [Export the _global_ scope](https://learning.postman.com/docs/sending-requests/variables/variables/#downloading-global-environments) as `global-scope.json` - [Export the _environment_ scope](https://learning.postman.com/docs/getting-started/importing-and-exporting/exporting-data/#export-environments) as `environment-scope.json` - Export the Postman Collection which includes the _collection_ scope as `postman-collection.json` The API Fuzzing scope is used by creating a file `api-fuzzing-scope.json` using our [custom JSON file format](#api-fuzzing-scope-custom-json-file-format). This JSON is an object with key-value pairs for properties. The keys are the variables' names, and the values are the variables' values. For example: ```json { "api_version": "v1" } ``` The Postman Collection is provided using the `FUZZAPI_POSTMAN_COLLECTION` variable, while the other scopes are provided using the `FUZZAPI_POSTMAN_COLLECTION_VARIABLES`. API Fuzzing can identify which scope the provided files match using data provided in each file. ```yaml stages: - fuzz include: - template: Security/API-Fuzzing.gitlab-ci.yml variables: FUZZAPI_PROFILE: Quick FUZZAPI_POSTMAN_COLLECTION: postman-collection.json FUZZAPI_POSTMAN_COLLECTION_VARIABLES: global-scope.json,environment-scope.json,api-fuzzing-scope.json FUZZAPI_TARGET_URL: http://test-deployment/ ``` ## Running your first scan When configured correctly, a CI/CD pipeline contains a `fuzz` stage and an `apifuzzer_fuzz` or `apifuzzer_fuzz_dnd` job. 
The job only fails when an invalid configuration is provided. During typical operation, the job always succeeds even if faults are identified during fuzz testing.

Faults are displayed on the **Security** pipeline tab with the suite name. When testing against the repository's default branch, the fuzzing faults are also shown on the Security and compliance vulnerability report. To prevent an excessive number of reported faults, the API fuzzing scanner limits the number of faults it reports.

## Viewing fuzzing faults

The API Fuzzing analyzer produces a JSON report that is collected and used [to populate the faults into GitLab vulnerability screens](#view-details-of-an-api-fuzzing-vulnerability). Fuzzing faults show up as vulnerabilities with a severity of Unknown.

The faults that API fuzzing finds require manual investigation and aren't associated with a specific vulnerability type. They require investigation to determine if they are a security issue, and if they should be fixed. See [handling false positives](#handling-false-positives) for information about configuration changes you can make to limit the number of false positives reported.

### View details of an API Fuzzing vulnerability

Faults detected by API Fuzzing occur in the live web application, and require manual investigation to determine if they are vulnerabilities. Fuzzing faults are included as vulnerabilities with a severity of Unknown. To facilitate investigation of the fuzzing faults, detailed information is provided about the HTTP messages sent and received, along with a description of the modifications made.

Follow these steps to view details of a fuzzing fault:

1. You can view faults in a project, or a merge request:

   - In a project, go to the project's **Secure > Vulnerability report** page. This page shows all vulnerabilities from the default branch only.
   - In a merge request, go to the merge request's **Security** section and select the **Expand** button.
     API Fuzzing faults are available in a section labeled **API Fuzzing detected N potential vulnerabilities**. Select the title to display the fault details.
1. Select the fault's title to display the fault's details. The table below describes these details.

| Field               | Description |
|:--------------------|:----------------------------------------------------------------------------------------|
| Description         | Description of the fault including what was modified. |
| Project             | Namespace and project in which the vulnerability was detected. |
| Method              | HTTP method used to detect the vulnerability. |
| URL                 | URL at which the vulnerability was detected. |
| Request             | The HTTP request that caused the fault. |
| Unmodified Response | Response from an unmodified request. This is what a typical working response looks like. |
| Actual Response     | Response received from fuzzed request. |
| Evidence            | How we determined a fault occurred. |
| Identifiers         | The fuzzing check used to find this fault. |
| Severity            | Severity of the finding is always Unknown. |
| Scanner Type        | Scanner used to perform testing. |

### Security Dashboard

Fuzzing faults show up as vulnerabilities with a severity of Unknown. The Security Dashboard is a good place to get an overview of all the security vulnerabilities in your groups, projects, and pipelines. For more information, see the [Security Dashboard documentation](../../security_dashboard/_index.md).

### Interacting with the vulnerabilities

Fuzzing faults show up as vulnerabilities with a severity of Unknown. After a fault is found, you can interact with it. Read more on how to [address the vulnerabilities](../../vulnerabilities/_index.md).

## Handling False Positives

False positives can be handled in two ways:

- Turn off the Check producing the false positive. This prevents the check from generating any faults. Example checks are the JSON Fuzzing Check and Form Body Fuzzing Check.
- Fuzzing checks have several methods of detecting when a fault is identified, called _Asserts_. Asserts can also be turned off and configured. For example, the API fuzzer by default uses HTTP status codes to help identify when something is a real issue. If an API returns a 500 error during testing, this creates a fault. This isn't always desired, as some frameworks return 500 errors often.

### Turn off a Check

Checks perform testing of a specific type and can be turned on and off for specific configuration profiles. The default configuration file defines several profiles that you can use. The profile definition in the configuration file lists all the checks that are active during a scan. To turn off a specific check, remove it from the profile definition in the configuration file. The profiles are defined in the `Profiles` section of the configuration file.

Example profile definition:

```yaml
Profiles:
  - Name: Quick-10
    DefaultProfile: Quick
    Routes:
      - Route: *Route0
        Checks:
          - Name: FormBodyFuzzingCheck
            Configuration:
              FuzzingCount: 10
              UnicodeFuzzing: true
          - Name: GeneralFuzzingCheck
            Configuration:
              FuzzingCount: 10
              UnicodeFuzzing: true
          - Name: JsonFuzzingCheck
            Configuration:
              FuzzingCount: 10
              UnicodeFuzzing: true
          - Name: XmlFuzzingCheck
            Configuration:
              FuzzingCount: 10
              UnicodeFuzzing: true
```

To turn off the General Fuzzing Check you can remove these lines:

```yaml
- Name: GeneralFuzzingCheck
  Configuration:
    FuzzingCount: 10
    UnicodeFuzzing: true
```

This results in the following YAML:

```yaml
- Name: Quick-10
  DefaultProfile: Quick
  Routes:
    - Route: *Route0
      Checks:
        - Name: FormBodyFuzzingCheck
          Configuration:
            FuzzingCount: 10
            UnicodeFuzzing: true
        - Name: JsonFuzzingCheck
          Configuration:
            FuzzingCount: 10
            UnicodeFuzzing: true
        - Name: XmlFuzzingCheck
          Configuration:
            FuzzingCount: 10
            UnicodeFuzzing: true
```

### Turn off an Assertion for a Check

Assertions detect faults in tests produced by checks.
Many checks support multiple Assertions such as Log Analysis, Response Analysis, and Status Code. When a fault is found, the Assertion used is provided. To identify which Assertions are on by default, see the Checks default configuration in the configuration file. The section is called `Checks`.

This example shows the FormBody Fuzzing Check:

```yaml
Checks:
  - Name: FormBodyFuzzingCheck
    Configuration:
      FuzzingCount: 30
      UnicodeFuzzing: true
    Assertions:
      - Name: LogAnalysisAssertion
      - Name: ResponseAnalysisAssertion
      - Name: StatusCodeAssertion
```

Here you can see three Assertions are on by default. A common source of false positives is `StatusCodeAssertion`. To turn it off, modify its configuration in the `Profiles` section. This example provides only the other two Assertions (`LogAnalysisAssertion`, `ResponseAnalysisAssertion`). This prevents `FormBodyFuzzingCheck` from using `StatusCodeAssertion`:

```yaml
Profiles:
  - Name: Quick-10
    DefaultProfile: Quick
    Routes:
      - Route: *Route0
        Checks:
          - Name: FormBodyFuzzingCheck
            Configuration:
              FuzzingCount: 10
              UnicodeFuzzing: true
            Assertions:
              - Name: LogAnalysisAssertion
              - Name: ResponseAnalysisAssertion
          - Name: GeneralFuzzingCheck
            Configuration:
              FuzzingCount: 10
              UnicodeFuzzing: true
          - Name: JsonFuzzingCheck
            Configuration:
              FuzzingCount: 10
              UnicodeFuzzing: true
          - Name: XmlInjectionCheck
            Configuration:
              FuzzingCount: 10
              UnicodeFuzzing: true
```
# Customizing analyzer settings
The API fuzzing behavior can be changed through CI/CD variables. The API fuzzing configuration files must be in your repository's `.gitlab` directory.

{{< alert type="warning" >}}

All customization of GitLab security scanning tools should be tested in a merge request before merging these changes to the default branch. Failure to do so can give unexpected results, including a large number of false positives.

{{< /alert >}}

## Authentication

Authentication is handled by providing the authentication token as a header or cookie. You can provide a script that performs an authentication flow or calculates the token.

### HTTP Basic Authentication

[HTTP basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) is an authentication method built into the HTTP protocol and used in conjunction with [transport layer security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security).

We recommend that you [create a CI/CD variable](../../../../ci/variables/_index.md#for-a-project) for the password (for example, `TEST_API_PASSWORD`), and set it to be masked. You can create CI/CD variables from the GitLab project's page at **Settings > CI/CD**, in the **Variables** section. Because of the [limitations on masked variables](../../../../ci/variables/_index.md#mask-a-cicd-variable), you should Base64-encode the password before adding it as a variable.

Finally, add two CI/CD variables to your `.gitlab-ci.yml` file:

- `FUZZAPI_HTTP_USERNAME`: The username for authentication.
- `FUZZAPI_HTTP_PASSWORD_BASE64`: The Base64-encoded password for authentication.
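You can Base64-encode the password locally before pasting it into the variable's value. A minimal sketch, using a placeholder password rather than a real credential:

```shell
# Encode a placeholder password for use as the masked CI/CD variable value.
# printf is used instead of echo so no trailing newline is encoded.
printf '%s' 's3cret-passw0rd' | base64
```

To double-check the value before saving it, pipe the output through `base64 -d` and confirm it round-trips to the original password.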
```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick-10
  FUZZAPI_HAR: test-api-recording.har
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_HTTP_USERNAME: testuser
  FUZZAPI_HTTP_PASSWORD_BASE64: $TEST_API_PASSWORD
```

### Raw password

If you do not want to Base64-encode the password (or if you are using GitLab 15.3 or earlier), you can provide the raw password in `FUZZAPI_HTTP_PASSWORD` instead of using `FUZZAPI_HTTP_PASSWORD_BASE64`.

### Bearer Tokens

Bearer tokens are used by several different authentication mechanisms, including OAuth2 and JSON Web Tokens (JWT). Bearer tokens are transmitted using the `Authorization` HTTP header. To use bearer tokens with API fuzzing, you need one of the following:

- A token that doesn't expire
- A way to generate a token that lasts the length of testing
- A Python script that API fuzzing can call to generate the token

#### Token doesn't expire

If the bearer token doesn't expire, use the `FUZZAPI_OVERRIDES_ENV` variable to provide it. This variable's content is a JSON snippet that provides headers and cookies to add to API fuzzing's outgoing HTTP requests.

Follow these steps to provide the bearer token with `FUZZAPI_OVERRIDES_ENV`:

1. [Create a CI/CD variable](../../../../ci/variables/_index.md#for-a-project), for example `TEST_API_BEARERAUTH`, with the value `{"headers":{"Authorization":"Bearer dXNlcm5hbWU6cGFzc3dvcmQ="}}` (substitute your token). You can create CI/CD variables from the GitLab projects page at **Settings > CI/CD**, in the **Variables** section.
1. In your `.gitlab-ci.yml` file, set `FUZZAPI_OVERRIDES_ENV` to the variable you just created:

   ```yaml
   stages:
     - fuzz

   include:
     - template: API-Fuzzing.gitlab-ci.yml

   variables:
     FUZZAPI_PROFILE: Quick-10
     FUZZAPI_OPENAPI: test-api-specification.json
     FUZZAPI_TARGET_URL: http://test-deployment/
     FUZZAPI_OVERRIDES_ENV: $TEST_API_BEARERAUTH
   ```

1. To validate that authentication is working, run an API fuzzing test and review the fuzzing logs and the test API's application logs.

See the [overrides section](#overrides) for more information about override commands.

#### Token generated at test runtime

If the bearer token must be generated and doesn't expire during testing, you can provide API fuzzing with a file containing the token. A prior stage and job, or part of the API fuzzing job, can generate this file. API fuzzing expects to receive a JSON file with the following structure:

```json
{
  "headers" : {
    "Authorization" : "Bearer dXNlcm5hbWU6cGFzc3dvcmQ="
  }
}
```

This file can be generated by a prior stage and provided to API fuzzing through the `FUZZAPI_OVERRIDES_FILE` CI/CD variable.

Set `FUZZAPI_OVERRIDES_FILE` in your `.gitlab-ci.yml` file:

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_OVERRIDES_FILE: api-fuzzing-overrides.json
```

To validate that authentication is working, run an API fuzzing test and review the fuzzing logs and the test API's application logs.

#### Token has short expiration

If the bearer token must be generated and expires prior to the scan's completion, you can provide a program or script for the API fuzzer to execute on a provided interval. The provided script runs in an Alpine Linux container that has Python 3 and Bash installed. If the Python script requires additional packages, it must detect this and install the packages at runtime.

The script must create a JSON file containing the bearer token in a specific format:

```json
{
  "headers" : {
    "Authorization" : "Bearer dXNlcm5hbWU6cGFzc3dvcmQ="
  }
}
```

You must provide three CI/CD variables, each set for correct operation:

- `FUZZAPI_OVERRIDES_FILE`: JSON file the provided command generates.
- `FUZZAPI_OVERRIDES_CMD`: Command that generates the JSON file.
- `FUZZAPI_OVERRIDES_INTERVAL`: Interval (in seconds) to run command.

For example:

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick-10
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_OVERRIDES_FILE: api-fuzzing-overrides.json
  FUZZAPI_OVERRIDES_CMD: renew_token.py
  FUZZAPI_OVERRIDES_INTERVAL: 300
```

To validate that authentication is working, run an API fuzzing test and review the fuzzing logs and the test API's application logs.

## API fuzzing profiles

GitLab provides the configuration file [`gitlab-api-fuzzing-config.yml`](https://gitlab.com/gitlab-org/security-products/analyzers/api-fuzzing/-/blob/master/gitlab-api-fuzzing-config.yml). It contains several testing profiles that perform a specific number of tests. The runtime of each profile increases as the number of tests increases.

| Profile   | Fuzz Tests (per parameter) |
|:----------|:---------------------------|
| Quick-10  | 10                         |
| Medium-20 | 20                         |
| Medium-50 | 50                         |
| Long-100  | 100                        |

## Overrides

API Fuzzing provides a method to add or override specific items in your request, for example:

- Headers
- Cookies
- Query string
- Form data
- JSON nodes
- XML nodes

You can use this to inject semantic version headers, authentication, and so on. The [authentication section](#authentication) includes examples of using overrides for that purpose.
Overrides use a JSON document, where each type of override is represented by a JSON object:

```json
{
  "headers": {
    "header1": "value",
    "header2": "value"
  },
  "cookies": {
    "cookie1": "value",
    "cookie2": "value"
  },
  "query": {
    "query-string1": "value",
    "query-string2": "value"
  },
  "body-form": {
    "form-param1": "value",
    "form-param2": "value"
  },
  "body-json": {
    "json-path1": "value",
    "json-path2": "value"
  },
  "body-xml": {
    "xpath1": "value",
    "xpath2": "value"
  }
}
```

Example of setting a single header:

```json
{
  "headers": {
    "Authorization": "Bearer dXNlcm5hbWU6cGFzc3dvcmQ="
  }
}
```

Example of setting both a header and cookie:

```json
{
  "headers": {
    "Authorization": "Bearer dXNlcm5hbWU6cGFzc3dvcmQ="
  },
  "cookies": {
    "flags": "677"
  }
}
```

Example usage for setting a `body-form` override:

```json
{
  "body-form": {
    "username": "john.doe"
  }
}
```

The override engine uses `body-form` when the request body has only form-data content.

Example usage for setting a `body-json` override:

```json
{
  "body-json": {
    "$.credentials.access-token": "iddqd!42.$"
  }
}
```

Each JSON property name in the object `body-json` is set to a [JSON Path](https://goessner.net/articles/JsonPath/) expression. The JSON Path expression `$.credentials.access-token` identifies the node to be overridden with the value `iddqd!42.$`. The override engine uses `body-json` when the request body has only [JSON](https://www.json.org/json-en.html) content.

For example, if the body is set to the following JSON:

```json
{
  "credentials": {
    "username": "john.doe",
    "access-token": "non-valid-password"
  }
}
```

It is changed to:

```json
{
  "credentials": {
    "username": "john.doe",
    "access-token": "iddqd!42.$"
  }
}
```

Here's an example for setting a `body-xml` override.
The first entry overrides an XML attribute and the second entry overrides an XML element:

```json
{
  "body-xml": {
    "/credentials/@isEnabled": "true",
    "/credentials/access-token/text()": "iddqd!42.$"
  }
}
```

Each JSON property name in the object `body-xml` is set to an [XPath v2](https://www.w3.org/TR/xpath20/) expression. The XPath expression `/credentials/@isEnabled` identifies the attribute node to override with the value `true`. The XPath expression `/credentials/access-token/text()` identifies the element node to override with the value `iddqd!42.$`. The override engine uses `body-xml` when the request body has only [XML](https://www.w3.org/XML/) content.

For example, if the body is set to the following XML:

```xml
<credentials isEnabled="false">
  <username>john.doe</username>
  <access-token>non-valid-password</access-token>
</credentials>
```

It is changed to:

```xml
<credentials isEnabled="true">
  <username>john.doe</username>
  <access-token>iddqd!42.$</access-token>
</credentials>
```

You can provide this JSON document as a file or environment variable. You may also provide a command to generate the JSON document. The command can run at intervals to support values that expire.

### Using a file

To provide the overrides JSON as a file, the `FUZZAPI_OVERRIDES_FILE` CI/CD variable is set. The path is relative to the job's current working directory. Here's an example `.gitlab-ci.yml`:

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_OVERRIDES_FILE: api-fuzzing-overrides.json
```

### Using a CI/CD variable

To provide the overrides JSON as a CI/CD variable, use the `FUZZAPI_OVERRIDES_ENV` variable. This allows you to place the JSON as variables that can be masked and protected.
In this example `.gitlab-ci.yml`, the `FUZZAPI_OVERRIDES_ENV` variable is set directly to the JSON:

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_OVERRIDES_ENV: '{"headers":{"X-API-Version":"2"}}'
```

In this example `.gitlab-ci.yml`, the `SECRET_OVERRIDES` variable provides the JSON. This is a [group or instance level CI/CD variable defined in the UI](../../../../ci/variables/_index.md#define-a-cicd-variable-in-the-ui):

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_OVERRIDES_ENV: $SECRET_OVERRIDES
```

### Using a command

If the value must be generated or regenerated on expiration, you can provide a program or script for the API fuzzer to execute on a specified interval. The provided script runs in an Alpine Linux container that has Python 3 and Bash installed.

You have to set the environment variable `FUZZAPI_OVERRIDES_CMD` to the program or script you would like to execute. The provided command creates the overrides JSON file as defined previously.

You might want to install other scripting runtimes like NodeJS or Ruby, or maybe you need to install a dependency for your overrides command. In this case, you should set `FUZZAPI_PRE_SCRIPT` to the file path of a script that provides those prerequisites. The script provided by `FUZZAPI_PRE_SCRIPT` is executed once, before the analyzer starts.

{{< alert type="note" >}}

When performing actions that require elevated permissions, make use of the `sudo` command. For example, `sudo apk add nodejs`.

{{< /alert >}}

See the [Alpine Linux package management](https://wiki.alpinelinux.org/wiki/Alpine_Linux_package_management) page for information about installing Alpine Linux packages.
You must provide three CI/CD variables, each set for correct operation:

- `FUZZAPI_OVERRIDES_FILE`: File generated by the provided command.
- `FUZZAPI_OVERRIDES_CMD`: Overrides command in charge of generating the overrides JSON file periodically.
- `FUZZAPI_OVERRIDES_INTERVAL`: Interval in seconds to run command.

Optionally:

- `FUZZAPI_PRE_SCRIPT`: Script to install runtimes or dependencies before the analyzer starts.

{{< alert type="warning" >}}

To execute scripts in Alpine Linux you must first use the command [`chmod`](https://www.gnu.org/software/coreutils/manual/html_node/chmod-invocation.html) to set the [execution permission](https://www.gnu.org/software/coreutils/manual/html_node/Setting-Permissions.html). For example, to set the execution permission of `script.py` for everyone, use the command: `sudo chmod a+x script.py`. If needed, you can version your `script.py` with the execution permission already set.

{{< /alert >}}

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_OVERRIDES_FILE: api-fuzzing-overrides.json
  FUZZAPI_OVERRIDES_CMD: renew_token.py
  FUZZAPI_OVERRIDES_INTERVAL: 300
```

### Debugging overrides

By default, the output of the overrides command is hidden. If the overrides command returns a non-zero exit code, the command is displayed as part of your job output. Optionally, you can set the variable `FUZZAPI_OVERRIDES_CMD_VERBOSE` to any value to display overrides command output as it is generated. This is useful when testing your overrides script, but should be disabled afterwards as it slows down testing.

It is also possible to write messages from your script to a log file that is collected when the job completes or fails. The log file must be created in a specific location and follow a naming convention.
Adding some basic logging to your overrides script is useful in case the script fails unexpectedly during typical running of the job. The log file is automatically included as an artifact of the job, allowing you to download it after the job has finished.

Following our example, we provided `renew_token.py` in the environment variable `FUZZAPI_OVERRIDES_CMD`. Notice two things in the script:

- Log file is saved in the location indicated by the environment variable `CI_PROJECT_DIR`.
- Log filename should match `gl-*.log`.

```python
#!/usr/bin/env python

# Example of an overrides command

# Override commands can update the overrides json file
# with new values to be used. This is a great way to
# update an authentication token that will expire
# during testing.

import logging
import json
import os
import requests
import backoff

# [1] Store log file in directory indicated by env var CI_PROJECT_DIR
working_directory = os.environ.get('CI_PROJECT_DIR')
overrides_file_name = os.environ.get('FUZZAPI_OVERRIDES_FILE', 'api-fuzzing-overrides.json')
overrides_file_path = os.path.join(working_directory, overrides_file_name)

# [2] File name should match the pattern: gl-*.log
log_file_path = os.path.join(working_directory, 'gl-user-overrides.log')

# Set up logger
logging.basicConfig(filename=log_file_path, level=logging.DEBUG)

# Use `backoff` decorator to retry in case of transient errors.
@backoff.on_exception(backoff.expo,
                      (requests.exceptions.Timeout,
                       requests.exceptions.ConnectionError),
                      max_time=30)
def get_auth_response():
    authorization_url = 'https://authorization.service/api/get_api_token'
    return requests.get(
        f'{authorization_url}',
        auth=(os.environ.get('AUTH_USER'), os.environ.get('AUTH_PWD'))
    )

# In our example, access token is retrieved from a given endpoint
try:
    # Performs a http request, response sample:
    # { "Token" : "abcdefghijklmn" }
    response = get_auth_response()

    # Check that the request is successful. May raise `requests.exceptions.HTTPError`
    response.raise_for_status()

    # Gets JSON data
    response_body = response.json()

# If needed specific exceptions can be caught
# requests.ConnectionError : A network connection error problem occurred
# requests.HTTPError : HTTP request returned an unsuccessful status code. [Response.raise_for_status()]
# requests.ConnectTimeout : The request timed out while trying to connect to the remote server
# requests.ReadTimeout : The server did not send any data in the allotted amount of time.
# requests.TooManyRedirects : The request exceeds the configured number of maximum redirections
# requests.exceptions.RequestException : All exceptions that related to Requests
except json.JSONDecodeError as json_decode_error:
    # logs errors related to decoding JSON response
    logging.error(f'Error, failed while decoding JSON response. Error message: {json_decode_error}')
    raise
except requests.exceptions.RequestException as requests_error:
    # logs exceptions related to `Requests`
    logging.error(f'Error, failed while performing HTTP request. Error message: {requests_error}')
    raise
except Exception as e:
    # logs any other error
    logging.error(f'Error, unknown error while retrieving access token. Error message: {e}')
    raise

# computes object that holds overrides file content.
# It uses data fetched from request
overrides_data = {
    "headers": {
        "Authorization": f"Token {response_body['Token']}"
    }
}

# log entry informing about the file override computation
logging.info("Creating overrides file: %s" % overrides_file_path)

# attempts to overwrite the file
try:
    if os.path.exists(overrides_file_path):
        os.unlink(overrides_file_path)

    # overwrites the file with our updated dictionary
    with open(overrides_file_path, "wb+") as fd:
        fd.write(json.dumps(overrides_data).encode('utf-8'))
except Exception as e:
    # logs any other error
    logging.error(f'Error, unknown error when overwriting file {overrides_file_path}. Error message: {e}')
    raise

# logs informing override has finished successfully
logging.info("Override file has been updated")

# end
```

In the overrides command example, the Python script depends on the `backoff` library. To make sure the library is installed before executing the Python script, the `FUZZAPI_PRE_SCRIPT` is set to a script that installs the dependencies of your overrides command. For example, the following script `user-pre-scan-set-up.sh`:

```shell
#!/bin/bash

# user-pre-scan-set-up.sh
# Ensures python dependencies are installed

echo "**** install python dependencies ****"

sudo pip3 install --no-cache --upgrade --break-system-packages \
  requests \
  backoff

echo "**** python dependencies installed ****"

# end
```

You have to update your configuration to set `FUZZAPI_PRE_SCRIPT` to our new `user-pre-scan-set-up.sh` script. For example:

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_PRE_SCRIPT: user-pre-scan-set-up.sh
  FUZZAPI_OVERRIDES_FILE: api-fuzzing-overrides.json
  FUZZAPI_OVERRIDES_CMD: renew_token.py
  FUZZAPI_OVERRIDES_INTERVAL: 300
```

In the previous sample, you could use the script `user-pre-scan-set-up.sh` to also install new runtimes or applications that later on you could use in your overrides command.

## Exclude Paths

When testing an API it can be useful to exclude certain paths. For example, you might exclude testing of an authentication service or an older version of the API. To exclude paths, use the `FUZZAPI_EXCLUDE_PATHS` CI/CD variable. This variable is specified in your `.gitlab-ci.yml` file. To exclude multiple paths, separate entries using the `;` character. In the provided paths you can use `?` as a single-character wildcard and `*` as a multiple-character wildcard.
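The `?` and `*` wildcards behave like shell-style globs. The sketch below uses Python's `fnmatch` to illustrate which paths a pattern matches; this is an illustration only — the analyzer's own matcher is independent of Python and may differ in edge cases, and the pattern and paths shown are hypothetical:

```python
from fnmatch import fnmatch

# Hypothetical exclusion pattern: `?` matches exactly one character,
# `*` matches any run of characters (fnmatch's `*` also crosses `/`).
pattern = "/api/v?/auth*"

paths = ["/api/v1/auth", "/api/v2/auth/login", "/api/v10/auth"]
excluded = [p for p in paths if fnmatch(p, pattern)]
print(excluded)  # ['/api/v1/auth', '/api/v2/auth/login']
```

`/api/v10/auth` is not matched because `?` stands for exactly one character, so `v10` does not fit `v?`.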
To verify the paths are excluded, review the `Tested Operations` and `Excluded Operations` portion of the job output. You should not see any excluded paths listed under `Tested Operations`.

```plaintext
2021-05-27 21:51:08 [INF] API Fuzzing: --[ Tested Operations ]-------------------------
2021-05-27 21:51:08 [INF] API Fuzzing: 201 POST http://target:7777/api/users CREATED
2021-05-27 21:51:08 [INF] API Fuzzing: ------------------------------------------------
2021-05-27 21:51:08 [INF] API Fuzzing: --[ Excluded Operations ]-----------------------
2021-05-27 21:51:08 [INF] API Fuzzing: GET http://target:7777/api/messages
2021-05-27 21:51:08 [INF] API Fuzzing: POST http://target:7777/api/messages
2021-05-27 21:51:08 [INF] API Fuzzing: ------------------------------------------------
```

### Examples of excluding paths

This example excludes the `/auth` resource. This does not exclude child resources (`/auth/child`).

```yaml
variables:
  FUZZAPI_EXCLUDE_PATHS: /auth
```

To exclude `/auth`, and child resources (`/auth/child`), we use a wildcard.

```yaml
variables:
  FUZZAPI_EXCLUDE_PATHS: /auth*
```

To exclude multiple paths we can use the `;` character. In this example we exclude `/auth*` and `/v1/*`.

```yaml
variables:
  FUZZAPI_EXCLUDE_PATHS: /auth*;/v1/*
```

## Exclude parameters

While testing an API you might want to exclude a parameter (query string, header, or body element) from testing. This may be needed because a parameter always causes a failure, slows down testing, or for other reasons. To exclude parameters you can use one of the following variables: `FUZZAPI_EXCLUDE_PARAMETER_ENV` or `FUZZAPI_EXCLUDE_PARAMETER_FILE`.

The `FUZZAPI_EXCLUDE_PARAMETER_ENV` variable allows providing a JSON string containing excluded parameters. This is a good option if the JSON is short and does not change often. Another option is the variable `FUZZAPI_EXCLUDE_PARAMETER_FILE`.
This variable is set to a file path that can be checked into the repository, created by another job as an artifact, or generated at runtime from a pre-script using `FUZZAPI_PRE_SCRIPT`.

### Exclude parameters using a JSON document

The JSON document contains a JSON object which uses specific properties to identify which parameter should be excluded. You can provide the following properties to exclude specific parameters during the scanning process:

- `headers`: Use this property to exclude specific headers. The property's value is an array of header names to be excluded. Names are case-insensitive.
- `cookies`: Use this property to exclude specific cookies. The property's value is an array of cookie names to be excluded. Names are case-sensitive.
- `query`: Use this property to exclude specific fields from the query string. The property's value is an array of field names from the query string to be excluded. Names are case-sensitive.
- `body-form`: Use this property to exclude specific fields from a request that uses the media type `application/x-www-form-urlencoded`. The property's value is an array of the field names from the body to be excluded. Names are case-sensitive.
- `body-json`: Use this property to exclude specific JSON nodes from a request that uses the media type `application/json`. The property's value is an array; each entry of the array is a [JSON Path](https://goessner.net/articles/JsonPath/) expression.
- `body-xml`: Use this property to exclude specific XML nodes from a request that uses the media type `application/xml`. The property's value is an array; each entry of the array is an [XPath v2](https://www.w3.org/TR/xpath20/) expression.

The following JSON document is an example of the expected structure to exclude parameters.
```json
{
  "headers": [
    "header1",
    "header2"
  ],
  "cookies": [
    "cookie1",
    "cookie2"
  ],
  "query": [
    "query-string1",
    "query-string2"
  ],
  "body-form": [
    "form-param1",
    "form-param2"
  ],
  "body-json": [
    "json-path-expression-1",
    "json-path-expression-2"
  ],
  "body-xml": [
    "xpath-expression-1",
    "xpath-expression-2"
  ]
}
```

### Examples

#### Excluding a single header

To exclude the header `Upgrade-Insecure-Requests`, set the `headers` property's value to an array with the header name: `[ "Upgrade-Insecure-Requests" ]`. For instance, the JSON document looks like this:

```json
{
  "headers": [
    "Upgrade-Insecure-Requests"
  ]
}
```

Header names are case-insensitive, thus the header name `UPGRADE-INSECURE-REQUESTS` is equivalent to `Upgrade-Insecure-Requests`.

#### Excluding both a header and two cookies

To exclude the header `Authorization` and the cookies `PHPSESSID` and `csrftoken`, set the `headers` property's value to an array with the header name `[ "Authorization" ]` and the `cookies` property's value to an array with the cookies' names `[ "PHPSESSID", "csrftoken" ]`. For instance, the JSON document looks like this:

```json
{
  "headers": [
    "Authorization"
  ],
  "cookies": [
    "PHPSESSID",
    "csrftoken"
  ]
}
```

#### Excluding a `body-form` parameter

To exclude the `password` field in a request that uses `application/x-www-form-urlencoded`, set the `body-form` property's value to an array with the field name: `[ "password" ]`. For instance, the JSON document looks like this:

```json
{
  "body-form": [
    "password"
  ]
}
```

The exclude parameters feature uses `body-form` when the request uses the content type `application/x-www-form-urlencoded`.

#### Excluding a specific JSON node using JSON Path

To exclude the `schema` property in the root object, set the `body-json` property's value to an array with the JSON Path expression `[ "$.schema" ]`.
The JSON Path expression uses special syntax to identify JSON nodes: `$` refers to the root of the JSON document, `.` refers to the current object (in our case the root object), and the text `schema` refers to a property name. Thus, the JSON Path expression `$.schema` refers to the property `schema` in the root object.

For instance, the JSON document looks like this:

```json
{
  "body-json": [ "$.schema" ]
}
```

The `body-json` exclusion applies when the request uses the content type `application/json`. Each entry in `body-json` is expected to be a [JSON Path expression](https://goessner.net/articles/JsonPath/). In JSON Path, characters like `$`, `*`, and `.` among others have special meaning.

#### Excluding multiple JSON nodes using JSON Path

To exclude the property `password` on each entry of an array of `users` at the root level, set the `body-json` property's value to an array with the JSON Path expression `[ "$.users[*].password" ]`.

The JSON Path expression starts with `$` to refer to the root node and uses `.` to refer to the current node. Then, it uses `users` to refer to a property and the characters `[` and `]` to enclose the index in the array you want to use. Instead of providing a number as an index, you use `*` to specify any index. After the index reference, we find `.`, which now refers to any given selected entry in the array, followed by the property name `password`.

For instance, the JSON document looks like this:

```json
{
  "body-json": [ "$.users[*].password" ]
}
```

The `body-json` exclusion applies when the request uses the content type `application/json`. Each entry in `body-json` is expected to be a [JSON Path expression](https://goessner.net/articles/JsonPath/). In JSON Path, characters like `$`, `*`, and `.` among others have special meaning.
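To see what the expression `$.users[*].password` selects, the following Python snippet walks a sample request body by hand. This is only an illustration of the expression's meaning — the analyzer uses its own JSON Path engine, and the body and values shown here are made up:

```python
# Manual illustration of the JSON Path expression "$.users[*].password".
# Sample request body (made-up data):
body = {
    "users": [
        {"name": "alice", "password": "s3cret"},
        {"name": "bob", "password": "hunter2"},
    ]
}

# "$"         -> the root object
# ".users"    -> the "users" property of the root
# "[*]"       -> every index of the array
# ".password" -> the "password" property of each selected entry
selected = [user["password"] for user in body["users"]]
print(selected)  # ['s3cret', 'hunter2']
```

Both selected values would be skipped during scanning, while the `name` fields would still be fuzzed.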
#### Excluding an XML attribute

To exclude an attribute named `isEnabled` located in the root element `credentials`, set the `body-xml` property's value to an array with the XPath expression `[ "/credentials/@isEnabled" ]`.

The XPath expression `/credentials/@isEnabled` starts with `/` to indicate the root of the XML document, followed by the word `credentials`, which indicates the name of the element to match. It uses a `/` to refer to a node of the previous XML element, and the character `@` to indicate that the name `isEnabled` is an attribute.

For instance, the JSON document looks like this:

```json
{
  "body-xml": [
    "/credentials/@isEnabled"
  ]
}
```

The `body-xml` exclusion applies when the request uses the content type `application/xml`. Each entry in `body-xml` is expected to be an [XPath v2 expression](https://www.w3.org/TR/xpath20/). In XPath expressions, characters like `@`, `/`, `:`, `[`, and `]` among others have special meanings.

#### Excluding an XML element's text

To exclude the text of the `username` element contained in the root node `credentials`, set the `body-xml` property's value to an array with the XPath expression `[ "/credentials/username/text()" ]`.

In the XPath expression `/credentials/username/text()`, the first character `/` refers to the root XML node, and then it indicates an XML element's name, `credentials`. Similarly, the character `/` refers to the current element, followed by a new XML element's name, `username`. The last part has a `/` that refers to the current element and uses an XPath function called `text()`, which identifies the text of the current element.

For instance, the JSON document looks like this:

```json
{
  "body-xml": [
    "/credentials/username/text()"
  ]
}
```

The `body-xml` exclusion applies when the request uses the content type `application/xml`. Each entry in `body-xml` is expected to be an [XPath v2 expression](https://www.w3.org/TR/xpath20/).
In XPath expressions, characters like `@`, `/`, `:`, `[`, and `]` among others have special meanings.

#### Excluding an XML element

To exclude the element `username` contained in the root node `credentials`, set the `body-xml` property's value to an array with the XPath expression `[ "/credentials/username" ]`.

In the XPath expression `/credentials/username`, the first character `/` refers to the root XML node, and then it indicates an XML element's name, `credentials`. Similarly, the character `/` refers to the current element, followed by a new XML element's name, `username`.

For instance, the JSON document looks like this:

```json
{
  "body-xml": [
    "/credentials/username"
  ]
}
```

The `body-xml` exclusion applies when the request uses the content type `application/xml`. Each entry in `body-xml` is expected to be an [XPath v2 expression](https://www.w3.org/TR/xpath20/). In XPath expressions, characters like `@`, `/`, `:`, `[`, and `]` among others have special meanings.

#### Excluding an XML node with namespaces

To exclude an XML element `login`, which is defined in namespace `s` and contained in the `credentials` root node, set the `body-xml` property's value to an array with the XPath expression `[ "/credentials/s:login" ]`.

In the XPath expression `/credentials/s:login`, the first character `/` refers to the root XML node, and then it indicates an XML element's name, `credentials`. Similarly, the character `/` refers to the current element, followed by a new XML element's name, `s:login`. Notice that the name contains the character `:`, which separates the namespace from the node name. The namespace name should have been defined in the XML document that is part of the body request. You may check the namespace in the specification document: HAR, OpenAPI, or Postman Collection file.

```json
{
  "body-xml": [
    "/credentials/s:login"
  ]
}
```

The `body-xml` exclusion applies when the request uses the content type `application/xml`.
Each entry in `body-xml` is expected to be an [XPath v2 expression](https://www.w3.org/TR/xpath20/). In XPath expressions, characters like `@`, `/`, `:`, `[`, and `]` among others have special meanings.

### Using a JSON string

To provide the exclusion JSON document, set the variable `FUZZAPI_EXCLUDE_PARAMETER_ENV` with the JSON string. In the following example `.gitlab-ci.yml` file, the `FUZZAPI_EXCLUDE_PARAMETER_ENV` variable is set to a JSON string:

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_EXCLUDE_PARAMETER_ENV: '{ "headers": [ "Upgrade-Insecure-Requests" ] }'
```

### Using a file

To provide the exclusion JSON document, set the variable `FUZZAPI_EXCLUDE_PARAMETER_FILE` with the JSON file path. The file path is relative to the job's current working directory. In the following example `.gitlab-ci.yml` file, the `FUZZAPI_EXCLUDE_PARAMETER_FILE` variable is set to a JSON file path:

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_EXCLUDE_PARAMETER_FILE: api-fuzzing-exclude-parameters.json
```

The `api-fuzzing-exclude-parameters.json` is a JSON document that follows the structure of the [exclude parameters document](#exclude-parameters-using-a-json-document).

## Exclude URLs

As an alternative to excluding by paths, you can filter by any other component in the URL by using the `FUZZAPI_EXCLUDE_URLS` CI/CD variable. This variable can be set in your `.gitlab-ci.yml` file. The variable can store multiple values, separated by commas (`,`). Each value is a regular expression. Because each entry is a regular expression, an entry such as `.*` excludes all URLs, because it is a regular expression that matches everything.
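To illustrate how comma-separated regular expressions behave, the following Python sketch applies two sample patterns to a few URLs. This is only an approximation for explanatory purposes — the analyzer's exact matching rules may differ, and the URLs and patterns are made up:

```python
import re

# Hypothetical FUZZAPI_EXCLUDE_URLS value: two comma-separated regular expressions.
exclude_urls = "http://target/api/auth,http://target/api/v.*/user/create$"
patterns = [re.compile(entry) for entry in exclude_urls.split(",")]

def is_excluded(url):
    # An operation is excluded when any pattern matches somewhere in its URL.
    return any(pattern.search(url) for pattern in patterns)

print(is_excluded("http://target/api/auth/token"))      # True: child resource of /api/auth
print(is_excluded("http://target/api/v2/user/create"))  # True: matches the versioned pattern
print(is_excluded("http://target/api/users"))           # False: no pattern matches
```

Note how the first pattern has no trailing `$`, so it also matches child resources, while the second pattern only matches URLs that end in `/user/create`.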
In your job output you can check if any URLs matched any provided regular expression from `FUZZAPI_EXCLUDE_URLS`. Matching operations are listed in the **Excluded Operations** section. Operations listed in the **Excluded Operations** section should not be listed in the **Tested Operations** section. For example, the following is a portion of a job output:

```plaintext
2021-05-27 21:51:08 [INF] API Fuzzing: --[ Tested Operations ]-------------------------
2021-05-27 21:51:08 [INF] API Fuzzing: 201 POST http://target:7777/api/users CREATED
2021-05-27 21:51:08 [INF] API Fuzzing: ------------------------------------------------
2021-05-27 21:51:08 [INF] API Fuzzing: --[ Excluded Operations ]-----------------------
2021-05-27 21:51:08 [INF] API Fuzzing: GET http://target:7777/api/messages
2021-05-27 21:51:08 [INF] API Fuzzing: POST http://target:7777/api/messages
2021-05-27 21:51:08 [INF] API Fuzzing: ------------------------------------------------
```

{{< alert type="note" >}}

Each value in `FUZZAPI_EXCLUDE_URLS` is a regular expression. Characters such as `.`, `*`, and `$` among many others have special meanings in [regular expressions](https://en.wikipedia.org/wiki/Regular_expression#Standards).

{{< /alert >}}

### Examples

#### Excluding a URL and child resources

The following example excludes the URL `http://target/api/auth` and its child resources.

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_TARGET_URL: http://target/
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_EXCLUDE_URLS: http://target/api/auth
```

#### Excluding two URLs and allowing their child resources

To exclude the URLs `http://target/api/buy` and `http://target/api/sell` while still allowing their child resources to be scanned (for instance, `http://target/api/buy/toy` or `http://target/api/sell/chair`), you could use the value `http://target/api/buy/$,http://target/api/sell/$`. This value uses two regular expressions, separated by a `,` character.
Hence, it contains `http://target/api/buy/$` and `http://target/api/sell/$`. In each regular expression, the trailing `$` character points out where the matching URL should end.

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_TARGET_URL: http://target/
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_EXCLUDE_URLS: http://target/api/buy/$,http://target/api/sell/$
```

#### Excluding two URLs and their child resources

To exclude the URLs `http://target/api/buy` and `http://target/api/sell` together with their child resources, provide both URLs separated by the `,` character as follows:

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_TARGET_URL: http://target/
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_EXCLUDE_URLS: http://target/api/buy,http://target/api/sell
```

#### Excluding URLs using regular expressions

To exclude exactly `https://target/api/v1/user/create` and `https://target/api/v2/user/create` or any other version (`v3`, `v4`, and so on), you could use `https://target/api/v.*/user/create$`. In the previous regular expression:

- `.` indicates any character.
- `*` indicates zero or more times.
- `$` indicates that the URL should end there.

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_TARGET_URL: http://target/
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_EXCLUDE_URLS: https://target/api/v.*/user/create$
```

## Header Fuzzing

Header fuzzing is disabled by default due to the high number of false positives that occur with many technology stacks. When header fuzzing is enabled, you must specify a list of headers to include in fuzzing.

Each profile in the default configuration file has an entry for `GeneralFuzzingCheck`. This check performs header fuzzing. Under the `Configuration` section, you must change the `HeaderFuzzing` and `Headers` settings to enable header fuzzing.
This snippet shows the `Quick-10` profile's default configuration with header fuzzing disabled:

```yaml
- Name: Quick-10
  DefaultProfile: Empty
  Routes:
    - Route: *Route0
      Checks:
        - Name: FormBodyFuzzingCheck
          Configuration:
            FuzzingCount: 10
            UnicodeFuzzing: true
        - Name: GeneralFuzzingCheck
          Configuration:
            FuzzingCount: 10
            UnicodeFuzzing: true
            HeaderFuzzing: false
            Headers:
        - Name: JsonFuzzingCheck
          Configuration:
            FuzzingCount: 10
            UnicodeFuzzing: true
        - Name: XmlFuzzingCheck
          Configuration:
            FuzzingCount: 10
            UnicodeFuzzing: true
```

`HeaderFuzzing` is a boolean that turns header fuzzing on and off. The default setting is `false` for off. To turn header fuzzing on, change this setting to `true`:

```yaml
        - Name: GeneralFuzzingCheck
          Configuration:
            FuzzingCount: 10
            UnicodeFuzzing: true
            HeaderFuzzing: true
            Headers:
```

`Headers` is a list of headers to fuzz. Only headers listed are fuzzed. To fuzz a header used by your APIs, add an entry for it using the syntax `- Name: HeaderName`. For example, to fuzz a custom header `X-Custom`, add `- Name: X-Custom`:

```yaml
        - Name: GeneralFuzzingCheck
          Configuration:
            FuzzingCount: 10
            UnicodeFuzzing: true
            HeaderFuzzing: true
            Headers:
              - Name: X-Custom
```

You now have a configuration to fuzz the header `X-Custom`. Use the same notation to list additional headers:

```yaml
        - Name: GeneralFuzzingCheck
          Configuration:
            FuzzingCount: 10
            UnicodeFuzzing: true
            HeaderFuzzing: true
            Headers:
              - Name: X-Custom
              - Name: X-AnotherHeader
```

Repeat this configuration for each profile as needed.
---
stage: Application Security Testing
group: Dynamic Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Customizing analyzer settings
breadcrumbs:
- doc
- user
- application_security
- api_fuzzing
- configuration
---

The API fuzzing behavior can be changed through CI/CD variables.

The API fuzzing configuration files must be in your repository's `.gitlab` directory.

{{< alert type="warning" >}}

All customization of GitLab security scanning tools should be tested in a merge request before merging these changes to the default branch. Failure to do so can give unexpected results, including a large number of false positives.

{{< /alert >}}

## Authentication

Authentication is handled by providing the authentication token as a header or cookie. You can provide a script that performs an authentication flow or calculates the token.

### HTTP Basic Authentication

[HTTP basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) is an authentication method built into the HTTP protocol and used in conjunction with [transport layer security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security).

We recommend that you [create a CI/CD variable](../../../../ci/variables/_index.md#for-a-project) for the password (for example, `TEST_API_PASSWORD`), and set it to be masked. You can create CI/CD variables from the GitLab project's page at **Settings > CI/CD**, in the **Variables** section. Because of the [limitations on masked variables](../../../../ci/variables/_index.md#mask-a-cicd-variable), you should Base64-encode the password before adding it as a variable.

Finally, add two CI/CD variables to your `.gitlab-ci.yml` file:

- `FUZZAPI_HTTP_USERNAME`: The username for authentication.
- `FUZZAPI_HTTP_PASSWORD_BASE64`: The Base64-encoded password for authentication.
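As a quick illustration of the encoding step, the Base64 value can be produced with a couple of lines of Python, or with the shell one-liner `printf '%s' 'my-secret' | base64`. The password value here is a made-up placeholder:

```python
import base64

# "my-secret" is a placeholder; substitute your real password.
# Encoding the exact bytes (no trailing newline) keeps the value
# usable as a masked CI/CD variable.
raw_password = "my-secret"
encoded = base64.b64encode(raw_password.encode()).decode()
print(encoded)  # bXktc2VjcmV0
```

Store the resulting string as the masked variable's value; the analyzer decodes it at runtime.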
```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick-10
  FUZZAPI_HAR: test-api-recording.har
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_HTTP_USERNAME: testuser
  FUZZAPI_HTTP_PASSWORD_BASE64: $TEST_API_PASSWORD
```

### Raw password

If you do not want to Base64-encode the password (or if you are using GitLab 15.3 or earlier) you can provide the raw password `FUZZAPI_HTTP_PASSWORD`, instead of using `FUZZAPI_HTTP_PASSWORD_BASE64`.

### Bearer Tokens

Bearer tokens are used by several different authentication mechanisms, including OAuth2 and JSON Web Tokens (JWT). Bearer tokens are transmitted using the `Authorization` HTTP header. To use bearer tokens with API fuzzing, you need one of the following:

- A token that doesn't expire
- A way to generate a token that lasts the length of testing
- A Python script that API fuzzing can call to generate the token

#### Token doesn't expire

If the bearer token doesn't expire, use the `FUZZAPI_OVERRIDES_ENV` variable to provide it. This variable's content is a JSON snippet that provides headers and cookies to add to API fuzzing's outgoing HTTP requests.

Follow these steps to provide the bearer token with `FUZZAPI_OVERRIDES_ENV`:

1. [Create a CI/CD variable](../../../../ci/variables/_index.md#for-a-project), for example `TEST_API_BEARERAUTH`, with the value `{"headers":{"Authorization":"Bearer dXNlcm5hbWU6cGFzc3dvcmQ="}}` (substitute your token). You can create CI/CD variables from the GitLab projects page at **Settings > CI/CD**, in the **Variables** section.
1. In your `.gitlab-ci.yml` file, set `FUZZAPI_OVERRIDES_ENV` to the variable you just created:

   ```yaml
   stages:
     - fuzz

   include:
     - template: API-Fuzzing.gitlab-ci.yml

   variables:
     FUZZAPI_PROFILE: Quick-10
     FUZZAPI_OPENAPI: test-api-specification.json
     FUZZAPI_TARGET_URL: http://test-deployment/
     FUZZAPI_OVERRIDES_ENV: $TEST_API_BEARERAUTH
   ```

1. To validate that authentication is working, run an API fuzzing test and review the fuzzing logs and the test API's application logs.

See the [overrides section](#overrides) for more information about override commands.

#### Token generated at test runtime

If the bearer token must be generated and doesn't expire during testing, you can provide API fuzzing with a file containing the token. A prior stage and job, or part of the API fuzzing job, can generate this file.

API fuzzing expects to receive a JSON file with the following structure:

```json
{
  "headers" : {
    "Authorization" : "Bearer dXNlcm5hbWU6cGFzc3dvcmQ="
  }
}
```

This file can be generated by a prior stage and provided to API fuzzing through the `FUZZAPI_OVERRIDES_FILE` CI/CD variable.

Set `FUZZAPI_OVERRIDES_FILE` in your `.gitlab-ci.yml` file:

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_OVERRIDES_FILE: api-fuzzing-overrides.json
```

To validate that authentication is working, run an API fuzzing test and review the fuzzing logs and the test API's application logs.

#### Token has short expiration

If the bearer token must be generated and expires prior to the scan's completion, you can provide a program or script for the API fuzzer to execute on a provided interval. The provided script runs in an Alpine Linux container that has Python 3 and Bash installed. If the Python script requires additional packages, it must detect this and install the packages at runtime.

The script must create a JSON file containing the bearer token in a specific format:

```json
{
  "headers" : {
    "Authorization" : "Bearer dXNlcm5hbWU6cGFzc3dvcmQ="
  }
}
```

You must provide three CI/CD variables, each set for correct operation:

- `FUZZAPI_OVERRIDES_FILE`: JSON file the provided command generates.
- `FUZZAPI_OVERRIDES_CMD`: Command that generates the JSON file.
- `FUZZAPI_OVERRIDES_INTERVAL`: Interval (in seconds) to run command.

For example:

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick-10
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_OVERRIDES_FILE: api-fuzzing-overrides.json
  FUZZAPI_OVERRIDES_CMD: renew_token.py
  FUZZAPI_OVERRIDES_INTERVAL: 300
```

To validate that authentication is working, run an API fuzzing test and review the fuzzing logs and the test API's application logs.

## API fuzzing profiles

GitLab provides the configuration file [`gitlab-api-fuzzing-config.yml`](https://gitlab.com/gitlab-org/security-products/analyzers/api-fuzzing/-/blob/master/gitlab-api-fuzzing-config.yml). It contains several testing profiles that perform a specific number of tests. The runtime of each profile increases as the number of tests increases.

| Profile   | Fuzz Tests (per parameter) |
|:----------|:---------------------------|
| Quick-10  | 10                         |
| Medium-20 | 20                         |
| Medium-50 | 50                         |
| Long-100  | 100                        |

## Overrides

API Fuzzing provides a method to add or override specific items in your request, for example:

- Headers
- Cookies
- Query string
- Form data
- JSON nodes
- XML nodes

You can use this to inject semantic version headers, authentication, and so on. The [authentication section](#authentication) includes examples of using overrides for that purpose.
Overrides use a JSON document, where each type of override is represented by a JSON object:

```json
{
  "headers": {
    "header1": "value",
    "header2": "value"
  },
  "cookies": {
    "cookie1": "value",
    "cookie2": "value"
  },
  "query": {
    "query-string1": "value",
    "query-string2": "value"
  },
  "body-form": {
    "form-param1": "value",
    "form-param2": "value"
  },
  "body-json": {
    "json-path1": "value",
    "json-path2": "value"
  },
  "body-xml" : {
    "xpath1": "value",
    "xpath2": "value"
  }
}
```

Example of setting a single header:

```json
{
  "headers": {
    "Authorization": "Bearer dXNlcm5hbWU6cGFzc3dvcmQ="
  }
}
```

Example of setting both a header and cookie:

```json
{
  "headers": {
    "Authorization": "Bearer dXNlcm5hbWU6cGFzc3dvcmQ="
  },
  "cookies": {
    "flags": "677"
  }
}
```

Example usage for setting a `body-form` override:

```json
{
  "body-form": {
    "username": "john.doe"
  }
}
```

The override engine uses `body-form` when the request body has only form-data content.

Example usage for setting a `body-json` override:

```json
{
  "body-json": {
    "$.credentials.access-token": "iddqd!42.$"
  }
}
```

Each JSON property name in the object `body-json` is set to a [JSON Path](https://goessner.net/articles/JsonPath/) expression. The JSON Path expression `$.credentials.access-token` identifies the node to be overridden with the value `iddqd!42.$`. The override engine uses `body-json` when the request body has only [JSON](https://www.json.org/json-en.html) content.

For example, if the body is set to the following JSON:

```json
{
  "credentials": {
    "username": "john.doe",
    "access-token": "non-valid-password"
  }
}
```

It is changed to:

```json
{
  "credentials": {
    "username": "john.doe",
    "access-token": "iddqd!42.$"
  }
}
```

Here's an example for setting a `body-xml` override.
The first entry overrides an XML attribute and the second entry overrides an XML element:

```json
{
  "body-xml" : {
    "/credentials/@isEnabled": "true",
    "/credentials/access-token/text()" : "iddqd!42.$"
  }
}
```

Each JSON property name in the object `body-xml` is set to an [XPath v2](https://www.w3.org/TR/xpath20/) expression. The XPath expression `/credentials/@isEnabled` identifies the attribute node to override with the value `true`. The XPath expression `/credentials/access-token/text()` identifies the element node to override with the value `iddqd!42.$`. The override engine uses `body-xml` when the request body has only [XML](https://www.w3.org/XML/) content.

For example, if the body is set to the following XML:

```xml
<credentials isEnabled="false">
  <username>john.doe</username>
  <access-token>non-valid-password</access-token>
</credentials>
```

It is changed to:

```xml
<credentials isEnabled="true">
  <username>john.doe</username>
  <access-token>iddqd!42.$</access-token>
</credentials>
```

You can provide this JSON document as a file or environment variable. You may also provide a command to generate the JSON document. The command can run at intervals to support values that expire.

### Using a file

To provide the overrides JSON as a file, the `FUZZAPI_OVERRIDES_FILE` CI/CD variable is set. The path is relative to the job's current working directory.

Here's an example `.gitlab-ci.yml`:

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_OVERRIDES_FILE: api-fuzzing-overrides.json
```

### Using a CI/CD variable

To provide the overrides JSON as a CI/CD variable, use the `FUZZAPI_OVERRIDES_ENV` variable. This allows you to place the JSON as variables that can be masked and protected.
In this example `.gitlab-ci.yml`, the `FUZZAPI_OVERRIDES_ENV` variable is set directly to the JSON:

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_OVERRIDES_ENV: '{"headers":{"X-API-Version":"2"}}'
```

In this example `.gitlab-ci.yml`, the `SECRET_OVERRIDES` variable provides the JSON. This is a [group or instance level CI/CD variable defined in the UI](../../../../ci/variables/_index.md#define-a-cicd-variable-in-the-ui):

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_OVERRIDES_ENV: $SECRET_OVERRIDES
```

### Using a command

If the value must be generated or regenerated on expiration, you can provide a program or script for the API fuzzer to execute on a specified interval. The provided script runs in an Alpine Linux container that has Python 3 and Bash installed. You have to set the environment variable `FUZZAPI_OVERRIDES_CMD` to the program or script you would like to execute. The provided command creates the overrides JSON file as defined previously.

You might want to install other scripting runtimes like NodeJS or Ruby, or maybe you need to install a dependency for your overrides command. In this case, you should set the `FUZZAPI_PRE_SCRIPT` to the file path of a script that provides those prerequisites. The script provided by `FUZZAPI_PRE_SCRIPT` is executed once, before the analyzer starts.

{{< alert type="note" >}}

When performing actions that require elevated permissions, make use of the `sudo` command. For example, `sudo apk add nodejs`.

{{< /alert >}}

See the [Alpine Linux package management](https://wiki.alpinelinux.org/wiki/Alpine_Linux_package_management) page for information about installing Alpine Linux packages.
You must provide three CI/CD variables, each set for correct operation:

- `FUZZAPI_OVERRIDES_FILE`: File generated by the provided command.
- `FUZZAPI_OVERRIDES_CMD`: Overrides command in charge of generating the overrides JSON file periodically.
- `FUZZAPI_OVERRIDES_INTERVAL`: Interval (in seconds) to run the command.

Optionally:

- `FUZZAPI_PRE_SCRIPT`: Script to install runtimes or dependencies before the analyzer starts.

{{< alert type="warning" >}}

To execute scripts in Alpine Linux you must first use the command [`chmod`](https://www.gnu.org/software/coreutils/manual/html_node/chmod-invocation.html) to set the [execution permission](https://www.gnu.org/software/coreutils/manual/html_node/Setting-Permissions.html). For example, to set the execution permission of `script.py` for everyone, use the command: `sudo chmod a+x script.py`. If needed, you can version your `script.py` with the execution permission already set.

{{< /alert >}}

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_OVERRIDES_FILE: api-fuzzing-overrides.json
  FUZZAPI_OVERRIDES_CMD: renew_token.py
  FUZZAPI_OVERRIDES_INTERVAL: 300
```

### Debugging overrides

By default the output of the overrides command is hidden. If the overrides command returns a non-zero exit code, the command is displayed as part of your job output. Optionally, you can set the variable `FUZZAPI_OVERRIDES_CMD_VERBOSE` to any value to display overrides command output as it is generated. This is useful when testing your overrides script, but should be disabled afterwards as it slows down testing.

It is also possible to write messages from your script to a log file that is collected when the job completes or fails. The log file must be created in a specific location and follow a naming convention.
Adding some basic logging to your overrides script is useful in case the script fails unexpectedly during typical running of the job. The log file is automatically included as an artifact of the job, allowing you to download it after the job has finished.

Following our example, we provided `renew_token.py` in the environment variable `FUZZAPI_OVERRIDES_CMD`. Notice two things in the script:

- The log file is saved in the location indicated by the environment variable `CI_PROJECT_DIR`.
- The log filename should match `gl-*.log`.

```python
#!/usr/bin/env python

# Example of an overrides command

# Override commands can update the overrides json file
# with new values to be used. This is a great way to
# update an authentication token that will expire
# during testing.

import logging
import json
import os
import requests
import backoff

# [1] Store log file in directory indicated by env var CI_PROJECT_DIR
working_directory = os.environ.get('CI_PROJECT_DIR')
overrides_file_name = os.environ.get('FUZZAPI_OVERRIDES_FILE', 'api-fuzzing-overrides.json')
overrides_file_path = os.path.join(working_directory, overrides_file_name)

# [2] File name should match the pattern: gl-*.log
log_file_path = os.path.join(working_directory, 'gl-user-overrides.log')

# Set up logger
logging.basicConfig(filename=log_file_path, level=logging.DEBUG)

# Use `backoff` decorator to retry in case of transient errors.
@backoff.on_exception(backoff.expo,
                      (requests.exceptions.Timeout,
                       requests.exceptions.ConnectionError),
                      max_time=30)
def get_auth_response():
    authorization_url = 'https://authorization.service/api/get_api_token'
    return requests.get(
        f'{authorization_url}',
        auth=(os.environ.get('AUTH_USER'), os.environ.get('AUTH_PWD'))
    )


# In our example, access token is retrieved from a given endpoint
try:

    # Performs a http request, response sample:
    # { "Token" : "abcdefghijklmn" }
    response = get_auth_response()

    # Check that the request is successful. May raise `requests.exceptions.HTTPError`
    response.raise_for_status()

    # Gets JSON data
    response_body = response.json()

# If needed, specific exceptions can be caught:
# requests.ConnectionError             : A network connection error problem occurred
# requests.HTTPError                   : HTTP request returned an unsuccessful status code. [Response.raise_for_status()]
# requests.ConnectTimeout              : The request timed out while trying to connect to the remote server
# requests.ReadTimeout                 : The server did not send any data in the allotted amount of time.
# requests.TooManyRedirects            : The request exceeds the configured number of maximum redirections
# requests.exceptions.RequestException : All exceptions that are related to Requests
except json.JSONDecodeError as json_decode_error:
    # logs errors related to decoding the JSON response
    logging.error(f'Error, failed while decoding JSON response. Error message: {json_decode_error}')
    raise
except requests.exceptions.RequestException as requests_error:
    # logs exceptions related to `Requests`
    logging.error(f'Error, failed while performing HTTP request. Error message: {requests_error}')
    raise
except Exception as e:
    # logs any other error
    logging.error(f'Error, unknown error while retrieving access token. Error message: {e}')
    raise

# computes object that holds overrides file content.
# It uses data fetched from the request
overrides_data = {
    "headers": {
        "Authorization": f"Token {response_body['Token']}"
    }
}

# log entry informing about the file override computation
logging.info("Creating overrides file: %s" % overrides_file_path)

# attempts to overwrite the file
try:
    if os.path.exists(overrides_file_path):
        os.unlink(overrides_file_path)

    # overwrites the file with our updated dictionary
    with open(overrides_file_path, "wb+") as fd:
        fd.write(json.dumps(overrides_data).encode('utf-8'))
except Exception as e:
    # logs any other error
    logging.error(f'Error, unknown error when overwriting file {overrides_file_path}. Error message: {e}')
    raise

# logs informing override has finished successfully
logging.info("Override file has been updated")

# end
```

In the overrides command example, the Python script depends on the `backoff` library. To make sure the library is installed before executing the Python script, the `FUZZAPI_PRE_SCRIPT` is set to a script that installs the dependencies of your overrides command. For example, the following script `user-pre-scan-set-up.sh`:

```shell
#!/bin/bash

# user-pre-scan-set-up.sh
# Ensures python dependencies are installed

echo "**** install python dependencies ****"

sudo pip3 install --no-cache --upgrade --break-system-packages \
  requests \
  backoff

echo "**** python dependencies installed ****"

# end
```

You have to update your configuration to set the `FUZZAPI_PRE_SCRIPT` to our new `user-pre-scan-set-up.sh` script. For example:

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_PRE_SCRIPT: user-pre-scan-set-up.sh
  FUZZAPI_OVERRIDES_FILE: api-fuzzing-overrides.json
  FUZZAPI_OVERRIDES_CMD: renew_token.py
  FUZZAPI_OVERRIDES_INTERVAL: 300
```

In the previous sample, you could also use the script `user-pre-scan-set-up.sh` to install new runtimes or applications that you could later use in your overrides command.

## Exclude Paths

When testing an API it can be useful to exclude certain paths. For example, you might exclude testing of an authentication service or an older version of the API. To exclude paths, use the `FUZZAPI_EXCLUDE_PATHS` CI/CD variable. This variable is specified in your `.gitlab-ci.yml` file. To exclude multiple paths, separate entries using the `;` character. In the provided paths you can use `?` as a single character wildcard and `*` as a multiple character wildcard.
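The wildcard semantics described above resemble shell-style globbing, which Python's standard `fnmatch` module implements. The following sketch is only an approximation for explanatory purposes — the analyzer's matcher may differ in details, and the paths shown are made up:

```python
from fnmatch import fnmatch

# Hypothetical FUZZAPI_EXCLUDE_PATHS value: two patterns separated by ";".
exclude_paths = "/auth*;/v1/*"
patterns = exclude_paths.split(";")

def is_excluded(path):
    # A path is excluded when any pattern matches it.
    return any(fnmatch(path, pattern) for pattern in patterns)

print(is_excluded("/auth"))         # True: "*" also matches the empty string
print(is_excluded("/auth/child"))   # True: child resource matches "/auth*"
print(is_excluded("/v1/messages"))  # True: matches "/v1/*"
print(is_excluded("/v2/messages"))  # False: no pattern matches
```

Note that `/auth*` matches `/auth` itself as well as its child resources, which mirrors the wildcard example later in this section.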
To verify the paths are excluded, review the `Tested Operations` and `Excluded Operations` portion of the job output. You should not see any excluded paths listed under `Tested Operations`.

```plaintext
2021-05-27 21:51:08 [INF] API Fuzzing: --[ Tested Operations ]-------------------------
2021-05-27 21:51:08 [INF] API Fuzzing: 201 POST http://target:7777/api/users CREATED
2021-05-27 21:51:08 [INF] API Fuzzing: ------------------------------------------------
2021-05-27 21:51:08 [INF] API Fuzzing: --[ Excluded Operations ]-----------------------
2021-05-27 21:51:08 [INF] API Fuzzing: GET http://target:7777/api/messages
2021-05-27 21:51:08 [INF] API Fuzzing: POST http://target:7777/api/messages
2021-05-27 21:51:08 [INF] API Fuzzing: ------------------------------------------------
```

### Examples of excluding paths

This example excludes the `/auth` resource. This does not exclude child resources (`/auth/child`).

```yaml
variables:
  FUZZAPI_EXCLUDE_PATHS: /auth
```

To exclude `/auth`, and child resources (`/auth/child`), we use a wildcard.

```yaml
variables:
  FUZZAPI_EXCLUDE_PATHS: /auth*
```

To exclude multiple paths we can use the `;` character. In this example we exclude `/auth*` and `/v1/*`.

```yaml
variables:
  FUZZAPI_EXCLUDE_PATHS: /auth*;/v1/*
```

## Exclude parameters

While testing an API you might want to exclude a parameter (query string, header, or body element) from testing. This may be needed because a parameter always causes a failure, slows down testing, or for other reasons. To exclude parameters you can use one of the following variables: `FUZZAPI_EXCLUDE_PARAMETER_ENV` or `FUZZAPI_EXCLUDE_PARAMETER_FILE`.

The `FUZZAPI_EXCLUDE_PARAMETER_ENV` allows providing a JSON string containing excluded parameters. This is a good option if the JSON is short and not often changed. Another option is the variable `FUZZAPI_EXCLUDE_PARAMETER_FILE`.
This variable is set to a file path that can be checked into the repository, created by another job as an artifact, or generated at runtime from a pre-script using `FUZZAPI_PRE_SCRIPT`.

### Exclude parameters using a JSON document

The JSON document contains a JSON object which uses specific properties to identify which parameter should be excluded. You can provide the following properties to exclude specific parameters during the scanning process:

- `headers`: Use this property to exclude specific headers. The property's value is an array of header names to be excluded. Names are case-insensitive.
- `cookies`: Use this property to exclude specific cookies. The property's value is an array of cookie names to be excluded. Names are case-sensitive.
- `query`: Use this property to exclude specific fields from the query string. The property's value is an array of field names from the query string to be excluded. Names are case-sensitive.
- `body-form`: Use this property to exclude specific fields from a request that uses the media type `application/x-www-form-urlencoded`. The property's value is an array of the field names from the body to be excluded. Names are case-sensitive.
- `body-json`: Use this property to exclude specific JSON nodes from a request that uses the media type `application/json`. The property's value is an array; each entry of the array is a [JSON Path](https://goessner.net/articles/JsonPath/) expression.
- `body-xml`: Use this property to exclude specific XML nodes from a request that uses the media type `application/xml`. The property's value is an array; each entry of the array is an [XPath v2](https://www.w3.org/TR/xpath20/) expression.

The following JSON document is an example of the expected structure to exclude parameters.
```json
{
  "headers": [
    "header1",
    "header2"
  ],
  "cookies": [
    "cookie1",
    "cookie2"
  ],
  "query": [
    "query-string1",
    "query-string2"
  ],
  "body-form": [
    "form-param1",
    "form-param2"
  ],
  "body-json": [
    "json-path-expression-1",
    "json-path-expression-2"
  ],
  "body-xml": [
    "xpath-expression-1",
    "xpath-expression-2"
  ]
}
```

### Examples

#### Excluding a single header

To exclude the header `Upgrade-Insecure-Requests`, set the `headers` property's value to an array with the header name: `[ "Upgrade-Insecure-Requests" ]`. For instance, the JSON document looks like this:

```json
{
  "headers": [ "Upgrade-Insecure-Requests" ]
}
```

Header names are case-insensitive, thus the header name `UPGRADE-INSECURE-REQUESTS` is equivalent to `Upgrade-Insecure-Requests`.

#### Excluding both a header and two cookies

To exclude the header `Authorization` and the cookies `PHPSESSID` and `csrftoken`, set the `headers` property's value to an array with the header name `[ "Authorization" ]` and the `cookies` property's value to an array with the cookies' names `[ "PHPSESSID", "csrftoken" ]`. For instance, the JSON document looks like this:

```json
{
  "headers": [ "Authorization" ],
  "cookies": [ "PHPSESSID", "csrftoken" ]
}
```

#### Excluding a `body-form` parameter

To exclude the `password` field in a request that uses `application/x-www-form-urlencoded`, set the `body-form` property's value to an array with the field name `[ "password" ]`. For instance, the JSON document looks like this:

```json
{
  "body-form": [ "password" ]
}
```

The exclude parameters uses `body-form` when the request uses a content type `application/x-www-form-urlencoded`.

#### Excluding a specific JSON node using JSON Path

To exclude the `schema` property in the root object, set the `body-json` property's value to an array with the JSON Path expression `[ "$.schema" ]`.
The JSON Path expression uses special syntax to identify JSON nodes: `$` refers to the root of the JSON document, `.` refers to the current object (in our case the root object), and the text `schema` refers to a property name. Thus, the JSON Path expression `$.schema` refers to a property `schema` in the root object. For instance, the JSON document looks like this:

```json
{
  "body-json": [ "$.schema" ]
}
```

The exclude parameters uses `body-json` when the request uses a content type `application/json`. Each entry in `body-json` is expected to be a [JSON Path expression](https://goessner.net/articles/JsonPath/). In JSON Path, characters like `$`, `*`, `.` among others have special meaning.

#### Excluding multiple JSON nodes using JSON Path

To exclude the property `password` on each entry of an array of `users` at the root level, set the `body-json` property's value to an array with the JSON Path expression `[ "$.users[*].password" ]`.

The JSON Path expression starts with `$` to refer to the root node and uses `.` to refer to the current node. Then, it uses `users` to refer to a property and the characters `[` and `]` to enclose the index in the array you want to use. Instead of providing a number as an index, you use `*` to specify any index. After the index reference, we find `.` which now refers to any given selected index in the array, followed by a property name `password`. For instance, the JSON document looks like this:

```json
{
  "body-json": [ "$.users[*].password" ]
}
```

The exclude parameters uses `body-json` when the request uses a content type `application/json`. Each entry in `body-json` is expected to be a [JSON Path expression](https://goessner.net/articles/JsonPath/). In JSON Path, characters like `$`, `*`, `.` among others have special meaning.
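To make the JSON Path selection concrete, this small Python sketch shows which values an expression such as `$.users[*].password` identifies. The request body below is hypothetical, and the selection is spelled out by hand rather than with a JSON Path library:

```python
import json

# Hypothetical request body with a "users" array at the root object.
body = json.loads("""
{
  "users": [
    { "username": "admin", "password": "hunter2" },
    { "username": "bob",   "password": "s3cret"  }
  ]
}
""")

# $.users[*].password: from the root ($), take the "users" property,
# any index of that array ([*]), then the "password" property of each entry.
selected = [user["password"] for user in body["users"]]
print(selected)  # ['hunter2', 's3cret']
```

Every value in `selected` corresponds to a node the exclude rule would remove from fuzzing.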
#### Excluding an XML attribute

To exclude an attribute named `isEnabled` located in the root element `credentials`, set the `body-xml` property's value to an array with the XPath expression `[ "/credentials/@isEnabled" ]`. The XPath expression `/credentials/@isEnabled` starts with `/` to indicate the root of the XML document, followed by the word `credentials` which indicates the name of the element to match. It uses a `/` to refer to a node of the previous XML element, and the character `@` to indicate that the name `isEnabled` is an attribute. For instance, the JSON document looks like this:

```json
{
  "body-xml": [
    "/credentials/@isEnabled"
  ]
}
```

The exclude parameters uses `body-xml` when the request uses a content type `application/xml`. Each entry in `body-xml` is expected to be an [XPath v2 expression](https://www.w3.org/TR/xpath20/). In XPath expressions, characters like `@`, `/`, `:`, `[`, `]` among others have special meanings.

#### Excluding an XML element's text

To exclude the text of the `username` element contained in root node `credentials`, set the `body-xml` property's value to an array with the XPath expression `[ "/credentials/username/text()" ]`. In the XPath expression `/credentials/username/text()`, the first character `/` refers to the root XML node, followed by an XML element's name `credentials`. Similarly, the character `/` refers to the current element, followed by a new XML element's name `username`. The last part has a `/` that refers to the current element, and uses an XPath function called `text()` which identifies the text of the current element. For instance, the JSON document looks like this:

```json
{
  "body-xml": [
    "/credentials/username/text()"
  ]
}
```

The exclude parameters uses `body-xml` when the request uses a content type `application/xml`. Each entry in `body-xml` is expected to be an [XPath v2 expression](https://www.w3.org/TR/xpath20/).
In XPath expressions, characters like `@`, `/`, `:`, `[`, `]` among others have special meanings.

#### Excluding an XML element

To exclude the element `username` contained in root node `credentials`, set the `body-xml` property's value to an array with the XPath expression `[ "/credentials/username" ]`. In the XPath expression `/credentials/username`, the first character `/` refers to the root XML node, followed by an XML element's name `credentials`. Similarly, the character `/` refers to the current element, followed by a new XML element's name `username`. For instance, the JSON document looks like this:

```json
{
  "body-xml": [
    "/credentials/username"
  ]
}
```

The exclude parameters uses `body-xml` when the request uses a content type `application/xml`. Each entry in `body-xml` is expected to be an [XPath v2 expression](https://www.w3.org/TR/xpath20/). In XPath expressions, characters like `@`, `/`, `:`, `[`, `]` among others have special meanings.

#### Excluding an XML node with namespaces

To exclude an XML element `login`, which is defined in namespace `s` and contained in the `credentials` root node, set the `body-xml` property's value to an array with the XPath expression `[ "/credentials/s:login" ]`. In the XPath expression `/credentials/s:login`, the first character `/` refers to the root XML node, followed by an XML element's name `credentials`. Similarly, the character `/` refers to the current element, followed by a new XML element's name `s:login`. Notice that the name contains the character `:`, which separates the namespace from the node name. The namespace name should have been defined in the XML document which is part of the body request. You may check the namespace in the specification document: HAR, OpenAPI, or Postman Collection file.

```json
{
  "body-xml": [
    "/credentials/s:login"
  ]
}
```

The exclude parameters uses `body-xml` when the request uses a content type `application/xml`.
Each entry in `body-xml` is expected to be an [XPath v2 expression](https://www.w3.org/TR/xpath20/). In XPath expressions, characters like `@`, `/`, `:`, `[`, `]` among others have special meanings.

### Using a JSON string

To provide the exclusion JSON document, set the variable `FUZZAPI_EXCLUDE_PARAMETER_ENV` with the JSON string. In the following example `.gitlab-ci.yml`, the `FUZZAPI_EXCLUDE_PARAMETER_ENV` variable is set to a JSON string:

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_EXCLUDE_PARAMETER_ENV: '{ "headers": [ "Upgrade-Insecure-Requests" ] }'
```

### Using a file

To provide the exclusion JSON document, set the variable `FUZZAPI_EXCLUDE_PARAMETER_FILE` with the JSON file path. The file path is relative to the job current working directory. In the following example `.gitlab-ci.yml` file, the `FUZZAPI_EXCLUDE_PARAMETER_FILE` variable is set to a JSON file path:

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_PROFILE: Quick
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_TARGET_URL: http://test-deployment/
  FUZZAPI_EXCLUDE_PARAMETER_FILE: api-fuzzing-exclude-parameters.json
```

The `api-fuzzing-exclude-parameters.json` is a JSON document that follows the structure of the [exclude parameters document](#exclude-parameters-using-a-json-document).

## Exclude URLs

As an alternative to excluding by paths, you can filter by any other component in the URL by using the `FUZZAPI_EXCLUDE_URLS` CI/CD variable. This variable can be set in your `.gitlab-ci.yml` file. The variable can store multiple values, separated by commas (`,`). Each value is a regular expression. Because each entry is a regular expression, an entry such as `.*` excludes all URLs because it is a regular expression that matches everything.
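As an illustration of how a comma-separated list of regular expressions selects URLs, the following Python sketch assumes `re.search`-style matching; the analyzer's actual matching semantics may differ:

```python
import re

# Example FUZZAPI_EXCLUDE_URLS value: two regular expressions separated by ','.
# The trailing '$' anchors each expression at the end of the URL.
EXCLUDE_URLS = "http://target/api/buy/$,http://target/api/sell/$"
PATTERNS = [re.compile(entry) for entry in EXCLUDE_URLS.split(",")]

def is_excluded(url):
    # A URL is excluded when any pattern matches somewhere in it.
    return any(pattern.search(url) for pattern in PATTERNS)

print(is_excluded("http://target/api/buy/"))     # True: URL ends at the '$' anchor
print(is_excluded("http://target/api/buy/toy"))  # False: child resource, '$' does not match
```
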
In your job output you can check if any URLs matched any provided regular expression from `FUZZAPI_EXCLUDE_URLS`. Matching operations are listed in the **Excluded Operations** section. Operations listed in the **Excluded Operations** section should not be listed in the **Tested Operations** section. For example, the following is a portion of a job output:

```plaintext
2021-05-27 21:51:08 [INF] API Fuzzing: --[ Tested Operations ]-------------------------
2021-05-27 21:51:08 [INF] API Fuzzing: 201 POST http://target:7777/api/users CREATED
2021-05-27 21:51:08 [INF] API Fuzzing: ------------------------------------------------
2021-05-27 21:51:08 [INF] API Fuzzing: --[ Excluded Operations ]-----------------------
2021-05-27 21:51:08 [INF] API Fuzzing: GET http://target:7777/api/messages
2021-05-27 21:51:08 [INF] API Fuzzing: POST http://target:7777/api/messages
2021-05-27 21:51:08 [INF] API Fuzzing: ------------------------------------------------
```

{{< alert type="note" >}}

Each value in `FUZZAPI_EXCLUDE_URLS` is a regular expression. Characters such as `.`, `*` and `$` among many others have special meanings in [regular expressions](https://en.wikipedia.org/wiki/Regular_expression#Standards).

{{< /alert >}}

### Examples

#### Excluding a URL and child resources

The following example excludes the URL `http://target/api/auth` and its child resources.

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_TARGET_URL: http://target/
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_EXCLUDE_URLS: http://target/api/auth
```

#### Excluding two URLs and allowing their child resources

To exclude the URLs `http://target/api/buy` and `http://target/api/sell` while still scanning their child resources, for instance `http://target/api/buy/toy` or `http://target/api/sell/chair`, you could use the value `http://target/api/buy/$,http://target/api/sell/$`. This value uses two regular expressions, separated by a `,` character.
Hence, it contains `http://target/api/buy/$` and `http://target/api/sell/$`. In each regular expression, the trailing `$` character points out where the matching URL should end.

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_TARGET_URL: http://target/
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_EXCLUDE_URLS: http://target/api/buy/$,http://target/api/sell/$
```

#### Excluding two URLs and their child resources

To exclude the URLs `http://target/api/buy` and `http://target/api/sell`, and their child resources, provide multiple URLs separated by the `,` character as follows:

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_TARGET_URL: http://target/
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_EXCLUDE_URLS: http://target/api/buy,http://target/api/sell
```

#### Excluding URLs using regular expressions

To exclude exactly `https://target/api/v1/user/create` and `https://target/api/v2/user/create` or any other version (`v3`, `v4`, and more), you could use `https://target/api/v.*/user/create$`. In the previous regular expression:

- `.` indicates any character.
- `*` indicates zero or more times.
- `$` indicates that the URL should end there.

```yaml
stages:
  - fuzz

include:
  - template: API-Fuzzing.gitlab-ci.yml

variables:
  FUZZAPI_TARGET_URL: http://target/
  FUZZAPI_OPENAPI: test-api-specification.json
  FUZZAPI_EXCLUDE_URLS: https://target/api/v.*/user/create$
```

## Header Fuzzing

Header fuzzing is disabled by default due to the high number of false positives that occur with many technology stacks. When header fuzzing is enabled, you must specify a list of headers to include in fuzzing.

Each profile in the default configuration file has an entry for `GeneralFuzzingCheck`. This check performs header fuzzing. Under the `Configuration` section, you must change the `HeaderFuzzing` and `Headers` settings to enable header fuzzing.
This snippet shows the `Quick-10` profile's default configuration with header fuzzing disabled:

```yaml
- Name: Quick-10
  DefaultProfile: Empty
  Routes:
    - Route: *Route0
      Checks:
        - Name: FormBodyFuzzingCheck
          Configuration:
            FuzzingCount: 10
            UnicodeFuzzing: true
        - Name: GeneralFuzzingCheck
          Configuration:
            FuzzingCount: 10
            UnicodeFuzzing: true
            HeaderFuzzing: false
            Headers:
        - Name: JsonFuzzingCheck
          Configuration:
            FuzzingCount: 10
            UnicodeFuzzing: true
        - Name: XmlFuzzingCheck
          Configuration:
            FuzzingCount: 10
            UnicodeFuzzing: true
```

`HeaderFuzzing` is a boolean that turns header fuzzing on and off. The default setting is `false` for off. To turn header fuzzing on, change this setting to `true`:

```yaml
- Name: GeneralFuzzingCheck
  Configuration:
    FuzzingCount: 10
    UnicodeFuzzing: true
    HeaderFuzzing: true
    Headers:
```

`Headers` is a list of headers to fuzz. Only headers listed are fuzzed. To fuzz a header used by your APIs, add an entry for it using the syntax `- Name: HeaderName`. For example, to fuzz a custom header `X-Custom`, add `- Name: X-Custom`:

```yaml
- Name: GeneralFuzzingCheck
  Configuration:
    FuzzingCount: 10
    UnicodeFuzzing: true
    HeaderFuzzing: true
    Headers:
      - Name: X-Custom
```

You now have a configuration to fuzz the header `X-Custom`. Use the same notation to list additional headers:

```yaml
- Name: GeneralFuzzingCheck
  Configuration:
    FuzzingCount: 10
    UnicodeFuzzing: true
    HeaderFuzzing: true
    Headers:
      - Name: X-Custom
      - Name: X-AnotherHeader
```

Repeat this configuration for each profile as needed.
https://docs.gitlab.com/user/application_security/api_fuzzing/requirements
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_fuzzing/requirements.md
2025-08-13
doc/user/application_security/api_fuzzing/configuration
[ "doc", "user", "application_security", "api_fuzzing", "configuration" ]
requirements.md
null
null
null
null
null
<!-- markdownlint-disable -->

This document was moved to [another location](../_index.md#getting-started).

<!-- This redirect file can be deleted after <2025-09-17>. -->
<!-- Redirects that point to other docs in the same project expire in three months. -->
<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
<!-- Before deletion, see: https://docs.gitlab.com/development/documentation/redirects -->
---
redirect_to: ../_index.md
remove_date: '2025-09-17'
breadcrumbs:
  - doc
  - user
  - application_security
  - api_fuzzing
  - configuration
---

<!-- markdownlint-disable -->

This document was moved to [another location](../_index.md#getting-started).

<!-- This redirect file can be deleted after <2025-09-17>. -->
<!-- Redirects that point to other docs in the same project expire in three months. -->
<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
<!-- Before deletion, see: https://docs.gitlab.com/development/documentation/redirects -->
https://docs.gitlab.com/user/application_security/api_fuzzing/offline_configuration
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_fuzzing/offline_configuration.md
2025-08-13
doc/user/application_security/api_fuzzing/configuration
[ "doc", "user", "application_security", "api_fuzzing", "configuration" ]
offline_configuration.md
Application Security Testing
Dynamic Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Offline configuration
null
{{< details >}}

- Tier: Ultimate
- Offering: GitLab Self-Managed

{{< /details >}}

For instances in an environment with limited, restricted, or intermittent access to external resources through the internet, some adjustments are required for the Web API Fuzz testing job to successfully run.

Steps:

1. Host the Docker image in a local container registry.
1. Set the `SECURE_ANALYZERS_PREFIX` to the local container registry.

The Docker image for API Fuzzing must be pulled (downloaded) from the public registry and then pushed (imported) into a local registry. The GitLab container registry can be used to locally host the Docker image. This process can be performed using a special template. See [loading Docker images onto your offline host](../../offline_deployments/_index.md#loading-docker-images-onto-your-offline-host) for instructions.

Once the Docker image is hosted locally, the `SECURE_ANALYZERS_PREFIX` variable is set with the location of the local registry. The variable must be set such that concatenating `/api-security:2` results in a valid image location. For example, the following line sets a registry for the image `registry.gitlab.com/security-products/api-security:2`:

`SECURE_ANALYZERS_PREFIX: "registry.gitlab.com/security-products"`

{{< alert type="note" >}}

Setting `SECURE_ANALYZERS_PREFIX` changes the Docker image registry location for all GitLab Secure templates.

{{< /alert >}}

For more information, see [Offline environments](../../offline_deployments/_index.md).
---
stage: Application Security Testing
group: Dynamic Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Offline configuration
breadcrumbs:
  - doc
  - user
  - application_security
  - api_fuzzing
  - configuration
---

{{< details >}}

- Tier: Ultimate
- Offering: GitLab Self-Managed

{{< /details >}}

For instances in an environment with limited, restricted, or intermittent access to external resources through the internet, some adjustments are required for the Web API Fuzz testing job to successfully run.

Steps:

1. Host the Docker image in a local container registry.
1. Set the `SECURE_ANALYZERS_PREFIX` to the local container registry.

The Docker image for API Fuzzing must be pulled (downloaded) from the public registry and then pushed (imported) into a local registry. The GitLab container registry can be used to locally host the Docker image. This process can be performed using a special template. See [loading Docker images onto your offline host](../../offline_deployments/_index.md#loading-docker-images-onto-your-offline-host) for instructions.

Once the Docker image is hosted locally, the `SECURE_ANALYZERS_PREFIX` variable is set with the location of the local registry. The variable must be set such that concatenating `/api-security:2` results in a valid image location. For example, the following line sets a registry for the image `registry.gitlab.com/security-products/api-security:2`:

`SECURE_ANALYZERS_PREFIX: "registry.gitlab.com/security-products"`

{{< alert type="note" >}}

Setting `SECURE_ANALYZERS_PREFIX` changes the Docker image registry location for all GitLab Secure templates.

{{< /alert >}}

For more information, see [Offline environments](../../offline_deployments/_index.md).
https://docs.gitlab.com/user/application_security/vulnerability_archival
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/_index.md
2025-08-13
doc/user/application_security/vulnerability_archival
[ "doc", "user", "application_security", "vulnerability_archival" ]
_index.md
Security Risk Management
Security Insights
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Vulnerability archival
null
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- Introduced in GitLab 18.0 [with a flag](../../../administration/feature_flags/_index.md) named `vulnerability_archival`. Disabled by default.

{{< /history >}}

{{< alert type="flag" >}}

The availability of this feature is controlled by a feature flag. For more information, see the history.

{{< /alert >}}

To ensure the GitLab database remains performant, vulnerabilities are archived monthly from the database. Vulnerabilities that were last updated more than one year ago are archived. A vulnerability is updated every time a change is made, for example, when its status is changed.

Archived vulnerabilities remain available for download for an additional 3 years, after which they are deleted. Vulnerability metrics, such as those in the security dashboard and value streams dashboards, include statistics on archived vulnerabilities.

## Archival process

Every month, the vulnerability archival process runs and does the following:

- Archives vulnerabilities last updated more than 12 months ago. Archived vulnerabilities are deleted from the vulnerability report. To retrieve their details, download the relevant vulnerability archive.
- Deletes archives created more than 3 years ago.

## Vulnerability archive

A vulnerability archive is a CSV file containing details of all vulnerabilities that were archived in a specific month or year, or within a specific date range.

### Download a vulnerability archive

Download a vulnerability archive to search or analyze the details it contains.

Prerequisites:

- You must have the Owner role for the project or the `read_vulnerability_archive` permission.

To download a vulnerability archive:

1. On the left sidebar, select **Search or go to** and find your project or group.
1. Select **Secure > Security configuration**, then select **Vulnerability Management**.
1. To download details of all vulnerabilities archived:
   - For a specific year, in the row for that year, select **Download all**.
   - For a specific year and month, expand the year, then in the row for that month select **Download** ({{< icon name="download" >}}).
   - For a specific date range, in the **From** and **To** fields enter the dates and then select **Download**.

The selected vulnerability archive is downloaded as a CSV file.
---
stage: Security Risk Management
group: Security Insights
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Vulnerability archival
breadcrumbs:
  - doc
  - user
  - application_security
  - vulnerability_archival
---

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- Introduced in GitLab 18.0 [with a flag](../../../administration/feature_flags/_index.md) named `vulnerability_archival`. Disabled by default.

{{< /history >}}

{{< alert type="flag" >}}

The availability of this feature is controlled by a feature flag. For more information, see the history.

{{< /alert >}}

To ensure the GitLab database remains performant, vulnerabilities are archived monthly from the database. Vulnerabilities that were last updated more than one year ago are archived. A vulnerability is updated every time a change is made, for example, when its status is changed.

Archived vulnerabilities remain available for download for an additional 3 years, after which they are deleted. Vulnerability metrics, such as those in the security dashboard and value streams dashboards, include statistics on archived vulnerabilities.

## Archival process

Every month, the vulnerability archival process runs and does the following:

- Archives vulnerabilities last updated more than 12 months ago. Archived vulnerabilities are deleted from the vulnerability report. To retrieve their details, download the relevant vulnerability archive.
- Deletes archives created more than 3 years ago.

## Vulnerability archive

A vulnerability archive is a CSV file containing details of all vulnerabilities that were archived in a specific month or year, or within a specific date range.

### Download a vulnerability archive

Download a vulnerability archive to search or analyze the details it contains.

Prerequisites:

- You must have the Owner role for the project or the `read_vulnerability_archive` permission.

To download a vulnerability archive:

1. On the left sidebar, select **Search or go to** and find your project or group.
1. Select **Secure > Security configuration**, then select **Vulnerability Management**.
1. To download details of all vulnerabilities archived:
   - For a specific year, in the row for that year, select **Download all**.
   - For a specific year and month, expand the year, then in the row for that month select **Download** ({{< icon name="download" >}}).
   - For a specific date range, in the **From** and **To** fields enter the dates and then select **Download**.

The selected vulnerability archive is downloaded as a CSV file.
https://docs.gitlab.com/user/application_security/gitlab_advisory_database
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/_index.md
2025-08-13
doc/user/application_security/gitlab_advisory_database
[ "doc", "user", "application_security", "gitlab_advisory_database" ]
_index.md
Application Security Testing
Vulnerability Research
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
GitLab Advisory Database
Security advisories, vulnerabilities, dependencies, database, and updates.
The [GitLab Advisory Database](https://gitlab.com/gitlab-org/security-products/gemnasium-db) serves as a repository for security advisories related to software dependencies. It is updated on an hourly basis with the latest security advisories. The database is an essential component of both [Dependency Scanning](../dependency_scanning/_index.md) and [Container Scanning](../container_scanning/_index.md).

A free and open-source version of the GitLab Advisory Database is also available as [GitLab Advisory Database (Open Source Edition)](https://gitlab.com/gitlab-org/advisories-community). However, there is a 30-day delay in updates.

## Standardization

In our advisories, we adopt standardized practices to effectively communicate vulnerabilities and their impact.

- [CVE](../terminology/_index.md#cve)
- [CVSS](../terminology/_index.md#cvss)
- [CWE](../terminology/_index.md#cwe)

## Explore the database

To view the database content, go to the [GitLab Advisory Database](https://advisories.gitlab.com) home page. On the home page you can:

- Search the database, by identifier, package name, and description.
- View advisories that were added recently.
- View statistical information, including coverage and update frequency.

### Search

Each advisory has a page with the following details:

- **Identifiers**: Public identifiers. For example, CVE ID, GHSA ID, or the GitLab internal ID (`GMS-<year>-<nr>`).
- **Package Slug**: Package type and package name separated by a slash.
- **Vulnerability**: A short description of the security flaw.
- **Description**: A detailed description of the security flaw and potential risks.
- **Affected Versions**: The affected versions.
- **Solution**: How to remediate the vulnerability.
- **Last Modified**: The date when the advisory was last modified.

## Open Source Edition

GitLab provides a free and open-source version of the database, the [GitLab Advisory Database (Open Source Edition)](https://gitlab.com/gitlab-org/advisories-community).
The open-source version is a time-delayed clone of the GitLab Advisory Database, MIT-licensed and contains all advisories from the GitLab Advisory Database that are older than 30 days or with the `community-sync` flag. ## Integrations - [Dependency Scanning](../dependency_scanning/_index.md) - [Container Scanning](../container_scanning/_index.md) - Third-party tools {{< alert type="note" >}} GitLab Advisory Database Terms prohibit the use of data contained in the GitLab Advisory Database by third-party tools. Third-party integrators can use the MIT-licensed, time-delayed [repository clone](https://gitlab.com/gitlab-org/advisories-community) instead. {{< /alert >}} ### How the database can be used As an example, we highlight the use of the database as a source for an Advisory Ingestion process as part of Continuous Vulnerability Scans. ```mermaid %%{init: { "fontFamily": "GitLab Sans" }}%% flowchart TB accTitle: Advisory ingestion process accDescr: Sequence of actions that make up the advisory ingestion process. subgraph Dependency Scanning A[GitLab Advisory Database] end subgraph Container Scanning C[GitLab Advisory Database Open Source Edition integrated into Trivy] end A --> B{Ingest} C --> B B --> |store| D{{"Cloud Storage (NDJSON format)"}} F[\GitLab Instance/] --> |pulls data| D F --> |stores| G[(Relational Database)] ``` ## Maintenance The Vulnerability Research team is responsible for the maintenance and regular updates of the GitLab Advisory Database and the GitLab Advisory Database (Open Source Edition). Community contributions are accessible in [advisories-community](https://gitlab.com/gitlab-org/advisories-community) via the `community-sync` flag. ## Contributing to the vulnerability database If you know about a vulnerability that is not listed, you can contribute to the GitLab Advisory Database by either opening an issue or submit the vulnerability. 
For more information, see [Contribution Guidelines](https://gitlab.com/gitlab-org/security-products/gemnasium-db/-/blob/master/CONTRIBUTING.md). ## License The GitLab Advisory Database is freely accessible in accordance with the [GitLab Advisory Database Terms](https://gitlab.com/gitlab-org/security-products/gemnasium-db/-/blob/master/LICENSE.md#gitlab-advisory-database-term).
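The ingestion flow above stores advisories as NDJSON, that is, one JSON object per line. The following Ruby sketch shows how such an export could be consumed and how a package slug splits into type and name. The field names (`identifier`, `package_slug`, `affected_range`) and the sample records are illustrative assumptions, not the exact export schema.

```ruby
require "json"

# Two made-up advisory records in NDJSON form (one JSON object per line).
# Field names are illustrative, not the exact GitLab export schema.
ndjson = <<~DATA
  {"identifier":"CVE-2024-0001","package_slug":"gem/rails","affected_range":"<7.1.3"}
  {"identifier":"GMS-2024-42","package_slug":"npm/lodash","affected_range":"<4.17.21"}
DATA

# NDJSON parsing: each line is an independent JSON document.
advisories = ndjson.each_line.map { |line| JSON.parse(line) }

advisories.each do |advisory|
  # A package slug is the package type and package name separated by a slash.
  type, name = advisory["package_slug"].split("/", 2)
  puts "#{advisory['identifier']}: #{type} package '#{name}' affected in #{advisory['affected_range']}"
end
```

Because each record is a standalone line, an ingester can stream a large export without loading the whole file into memory.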
---
stage: Application Security Testing
group: Secret Detection
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Secret detection exclusions
source: https://docs.gitlab.com/user/application_security/exclusions
repo_source: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/exclusions.md
date_extracted: 2025-08-13
---
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
- Status: Experiment

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/14878) as an [experiment](../../../policy/development_stages_support.md) in GitLab 17.5 [with a flag](../../feature_flags.md) named `secret_detection_project_level_exclusions`. Enabled by default.
- Feature flag `secret_detection_project_level_exclusions` [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/499059) in GitLab 17.7.

{{< /history >}}

Secret detection may detect something that's not actually a secret. For example, if you use a fake value as a placeholder in your code, it might be detected and possibly blocked. To avoid false positives, you can exclude from secret detection:

- A path.
- A raw value.
- A rule from the [default ruleset](https://gitlab.com/gitlab-org/security-products/secret-detection/secret-detection-rules).

You can define multiple exclusions for a project.

## Restrictions

The following restrictions apply:

- Exclusions can be defined only at the project level.
- Exclusions apply only to [secret push protection](secret_push_protection/_index.md).
- The maximum number of path-based exclusions per project is 10.
- The maximum depth for path-based exclusions is 20.

<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an overview, see [Secret Detection Exclusions - Demo](https://www.youtube.com/watch?v=vh_Uh4_4aoc).
<!-- Video published on 2024-10-12 -->

## Add an exclusion

Define an exclusion to avoid false positives from secret detection.

Path exclusions support glob patterns, which are interpreted with the Ruby method [`File.fnmatch`](https://docs.ruby-lang.org/en/master/File.html#method-c-fnmatch) using the [flags](https://docs.ruby-lang.org/en/master/File/Constants.html#module-File::Constants-label-Filename+Globbing+Constants+-28File-3A-3AFNM_-2A-29) `File::FNM_PATHNAME | File::FNM_DOTMATCH | File::FNM_EXTGLOB`.

Prerequisites:

- You must have the **Maintainer** role for the project.

To define an exclusion:

1. In the left sidebar, select **Search or go to** and go to your project or group.
1. Select **Secure > Security configuration**.
1. Scroll down to **Secret push protection**.
1. Turn on the **Secret push protection** toggle.
1. Select **Configure Secret Detection** ({{< icon name="settings" >}}).
1. Select **Add exclusion** to open the exclusion form.
1. Enter the details of the exclusion, then select **Add exclusion**.
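To sanity-check how a path exclusion glob behaves before saving it, you can evaluate it locally with the same Ruby call and flags named above. The patterns and paths below are made-up examples, not GitLab defaults.

```ruby
# Evaluate glob patterns with the same File.fnmatch flags used for
# path exclusions. Example patterns and paths are illustrative only.
FLAGS = File::FNM_PATHNAME | File::FNM_DOTMATCH | File::FNM_EXTGLOB

# '**/' spans directory levels, so this matches nested spec files.
puts File.fnmatch("spec/**/*.rb", "spec/models/user_spec.rb", FLAGS)  # true

# With FNM_PATHNAME, a bare '*' does not cross '/' separators.
puts File.fnmatch("*.pem", "certs/dev.pem", FLAGS)                    # false

# FNM_EXTGLOB enables brace alternatives like {yml,yaml}.
puts File.fnmatch("config/*.{yml,yaml}", "config/ci.yaml", FLAGS)     # true

# FNM_DOTMATCH lets '*' match dotfiles.
puts File.fnmatch("*", ".env", FLAGS)                                 # true
```

The second example shows why excluding certificates across the whole tree needs `**/*.pem` rather than `*.pem`.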
---
stage: Application Security Testing
group: Secret Detection
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: 'Tutorial: Remove a secret from your commits'
source: https://docs.gitlab.com/user/application_security/remove_secrets_tutorial
repo_source: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/remove_secrets_tutorial.md
date_extracted: 2025-08-13
---
If your application uses external resources, you usually need to authenticate your application with a **secret**, like a token or key. If a secret is pushed to a remote repository, anyone with access to the repository can impersonate you or your application.

If you accidentally commit a secret, you can still remove it before you push. In this tutorial, you'll commit a fake secret, then remove the secret from your commit history before you push it to a project. You'll also learn what to do when a secret is pushed to a repository.

<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
This tutorial is adapted from the GitLab Unfiltered video [Remove a secret from your commits](https://www.youtube.com/watch?v=2jBC3uBUlyU).
<!-- Video published on 2024-06-12 -->

## Before you begin

Make sure you have the following before you complete this tutorial:

- A test project. You can use any project you like, but consider creating a test project specifically for this tutorial.
- Some familiarity with command-line Git.

## Commit a secret

GitLab identifies secrets by matching specific patterns of letters, digits, and symbols. These patterns are also used to identify the type of secret. For example, the fake secret `glpat-12345678901234567890` <!-- gitleaks:allow --> is a personal access token because it begins with the string `glpat-`.

Although many secrets can be identified by format, you might accidentally commit a secret while you're working in a repository. Let's simulate accidentally committing a secret:

1. In your test repository, check out a new branch:

   ```shell
   git checkout -b secret-tutorial
   ```

1. Create a new text file with the following content, removing the spaces before and after the `-` to match the exact format of a personal access token:

   ```txt
   fake-secret: glpat - 12345678901234567890
   message: hello, world!
   ```

1. Commit the file to your branch:

   ```shell
   git add .
   git commit -m "Add fake secret"
   ```

We've created a problematic situation: if we push our changes, the personal access token we committed to our text file will be leaked! We need to remove the secret from the commit history before we can proceed.

## Remove the secret from the history

If the only commit that contains a secret is the most recent commit in the Git history, you can amend the history to remove it:

1. Open the text file and remove the fake secret:

   ```txt
   fake-secret:
   message: hello, world!
   ```

1. Overwrite the old commit with the changes:

   ```shell
   git add .
   git commit --amend
   ```

The secret is removed from the file and the commit history, and you can safely push your changes.

### Amending multiple commits

Sometimes, you only notice that a secret was added after you make several additional commits. When this happens, it's not enough to delete the secret from the most recent commit. You need to make changes to every commit after the secret was added:

1. Add the fake secret to your file and commit it to the branch.
1. Make at least one additional commit. When you inspect the history, you should see something like this:

   ```shell
   $ git log

   commit 456def
   Do other things

   commit 123abc
   Add fake secret
   ...
   ```

   Even if we remove the secret from commit `456def`, it still exists in the history and will be exposed if we push our changes now.

1. To fix the history, start an interactive rebase from the commit that introduced the secret:

   ```shell
   git rebase -i 123abc~1
   ```

1. In the edit window, which lists commits from oldest to newest, change `pick` to `edit` for every commit that includes the secret:

   ```txt
   edit 123abc Add fake secret
   edit 456def Do other things
   ```

1. Open your text file and remove the fake secret.
1. Commit your changes:

   ```shell
   git add .
   git commit --amend
   ```

1. Optional. When you delete the secret, you might remove the only diff in the commit. If this happens, Git displays this message:

   ```shell
   No changes
   You asked to amend the most recent commit, but doing so would make it empty.
   ```

   Remove the empty commit:

   ```shell
   git reset HEAD^
   ```

1. Continue the rebase:

   ```shell
   git rebase --continue
   ```

1. Remove the secret from the next commit and continue the rebase. Repeat this process until the rebase is complete:

   ```shell
   Successfully rebased and updated refs/heads/secret-tutorial
   ```

The secret is removed and you can safely push your changes to the remote.

## What to do when you push a secret

Sometimes, people push changes before they notice the changes include a secret. If secret push protection is enabled in the project, the push is blocked automatically and the offending commits are displayed. However, if a secret is successfully pushed to a remote repository, it is no longer secure and you should revoke it immediately. Even if you don't think many people have access to the secret, you should replace it. Exposed secrets are a substantial security risk.

## Next steps

To improve your application security, consider enabling at least one of the [secret detection](_index.md) methods in your project.
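The prefix-based identification described earlier in this tutorial (a personal access token begins with `glpat-`) can be sketched with a simple regular expression. This is a simplified illustration using the tutorial's fake token format, not GitLab's production detection ruleset.

```ruby
# Simplified sketch of prefix-based token matching: the fake token in
# this tutorial is 'glpat-' followed by 20 token characters.
# This is NOT GitLab's actual detection ruleset.
GLPAT_PATTERN = /glpat-[0-9A-Za-z_-]{20}/

def contains_fake_secret?(text)
  text.match?(GLPAT_PATTERN)
end

# The exact token format matches; the space-padded placeholder does not.
puts contains_fake_secret?("fake-secret: glpat-12345678901234567890")   # true
puts contains_fake_secret?("fake-secret: glpat - 12345678901234567890") # false
```

This is also why the tutorial's placeholder file uses spaces around the `-`: the padded value does not match the token pattern, so it cannot be mistaken for a real secret until you deliberately remove the spaces.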
---
stage: Application Security Testing
group: Secret Detection
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Detected secrets
source: https://docs.gitlab.com/user/application_security/detected_secrets
repo_source: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/detected_secrets.md
date_extracted: 2025-08-13
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

This table lists the secrets detected by:

- Pipeline secret detection
- Client-side secret detection
- Secret push protection

Secret detection rules are updated in the [default ruleset](https://gitlab.com/gitlab-org/security-products/secret-detection/secret-detection-rules/-/tree/main). Detected secrets with patterns that have been removed or updated remain open so you can triage them.

<!-- markdownlint-disable MD034 -->
<!-- markdownlint-disable MD044 -->
<!-- vale gitlab_base.SentenceSpacing = NO -->

| Description | ID | Pipeline secret detection | Client-side secret detection | Secret push protection |
|:------------|:---|:--------------------------|:-----------------------------|:-----------------------|
| Adafruit IO Key | AdafruitIOKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| Adobe Client ID (OAuth Web) | Adobe Client ID (Oauth Web) | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Adobe client secret | Adobe Client Secret | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| Age secret key | Age secret key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Aiven Service Password | AivenServicePassword | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| Alibaba AccessKey ID | Alibaba AccessKey ID | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Alibaba Secret Key | Alibaba Secret Key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Anthropic API key | anthropic_key | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| Artifactory API Key | ArtifactoryApiKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| Artifactory Identity Token | ArtifactoryIdentityToken | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| Asana client ID | Asana Client ID | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Asana client secret | Asana Client Secret | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Asana Personal Access Token V1 | AsanaPersonalAccessTokenV1 | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| Asana Personal Access Token V2 | AsanaPersonalAccessTokenV2 | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| Atlassian API Key | AtlassianApiKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| Atlassian API token | Atlassian API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Atlassian User API Token | AtlassianUserApiToken | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| AWS access token | AWS | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| Azure Entra Client Secret | AzureEntraClientSecret | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| Beamer API token | Beamer API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Bitbucket client ID | Bitbucket client ID | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Bitbucket client secret | Bitbucket client secret | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Brevo API token | Sendinblue API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| Brevo SMTP token | Sendinblue SMTP token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| CircleCI access token | CircleCI access tokens | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| CircleCI Personal Access Token | CircleCIPersonalAccessToken | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| Clojars deploy token | Clojars API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Contentful delivery API token | Contentful delivery API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Contentful personal access token | ContentfulPersonalAccessToken | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| Contentful preview API token | Contentful preview API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Databricks API token | Databricks API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| DigitalOcean OAuth access token | digitalocean-access-token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| DigitalOcean personal access token | digitalocean-pat | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| DigitalOcean refresh token | digitalocean-refresh-token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Discord API key | Discord API key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Discord client ID | Discord client ID | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Discord client secret | Discord client secret | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Docker Personal Access Token | DockerPersonalAccessToken | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| Doppler API token | Doppler API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| Doppler Service token | Doppler Service token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| Dropbox API secret/key | Dropbox API secret/key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Dropbox long lived API token | Dropbox long lived API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Dropbox short lived API token | Dropbox short lived API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| Duffel API token | Duffel API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Dynatrace Platform Token | DynatracePlatformToken | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| EasyPost production API key | EasyPost API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| EasyPost test API key | EasyPost test API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Facebook token | Facebook token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Fastly API user or automation token | Fastly API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Figma Personal Access Token | FigmaPersonalAccessToken | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| Finicity API token | Finicity API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Finicity client secret | Finicity client secret | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Flutterwave test encrypted key | Flutterwave encrypted key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| Flutterwave test public key | Flutterwave public key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
Flutterwave test secret key | Flutterwave secret key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Frame.io API token | Frame.io API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | GCP API key | GCP API key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | GCP OAuth client secret | GCP OAuth client secret | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | GitHub app token | Github App Token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | GitHub App Installation Token | GithubAppInstallationToken | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | GitHub Fine Grained Personal Access Token | GithubFineGrainedPersonalAccessToken | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | GitHub OAuth Access Token | Github OAuth Access Token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | GitHub personal access token (classic) | Github Personal Access Token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | GitHub refresh token | Github Refresh Token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | GitLab CI/CD job token | gitlab_ci_build_token | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | GitLab deploy token | gitlab_deploy_token | {{< icon name="check-circle" >}} Yes | {{< icon 
name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | GitLab Feature Flags Client Token | None | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | GitLab feed token | gitlab_feed_token | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | GitLab feed token v2 | gitlab_feed_token_v2 | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | GitLab incoming email token | gitlab_incoming_email_token | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | GitLab Kubernetes agent token | gitlab_kubernetes_agent_token | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | GitLab OAuth application secret | gitlab_oauth_app_secret | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | GitLab personal access token | gitlab_personal_access_token | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | GitLab Personal Access Token (routable) | gitlab_personal_access_token_routable | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | GitLab pipeline trigger token | gitlab_pipeline_trigger_token | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | GitLab runner authentication token | gitlab_runner_auth_token | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | GitLab runner registration token | gitlab_runner_registration_token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< 
icon name="check-circle" >}} Yes | | GitLab SCIM OAuth token | gitlab_scim_oauth_token | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | GoCardless API token | GoCardless API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Google (GCP) service account | Google (GCP) Service-account | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Grafana API token | Grafana API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | HashiCorp Terraform API token | Hashicorp Terraform user/org API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | HashiCorp Vault batch token | Hashicorp Vault batch token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Heroku API key or application authorization token | Heroku API Key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Highnote Live Secret Key | HighnoteLiveSecretKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Highnote Test Secret Key | HighnoteTestSecretKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | HubSpot private app API token | Hubspot API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Hugging Face User Access Token | HuggingFaceUserAccessToken | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Instagram access token | Instagram 
access token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Intercom API token | Intercom API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Intercom client secret or client ID | Intercom client secret/ID | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Ionic personal access token | Ionic API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Linear API token | Linear API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Linear client secret or ID (OAuth 2.0) | Linear client secret/ID | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | LinkedIn client ID | Linkedin Client ID | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | LinkedIn client secret | Linkedin Client secret | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Lob API key | Lob API Key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Lob publishable API key | Lob Publishable API Key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Mailchimp API key | Mailchimp API key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Mailgun private API token | Mailgun private API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Mailgun public verification 
key | Mailgun public validation key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Mailgun webhook signing key | Mailgun webhook signing key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Mapbox API token | Mapbox API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | MaxMind License Key | MaxMind License Key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | MessageBird access key | messagebird-api-token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | MessageBird API client ID | MessageBird API client ID | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Meta access token | Meta access token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | New Relic ingest browser API token | New Relic ingest browser API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | New Relic ingest browser API token v2 | New Relic ingest browser API token v2 | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | New Relic REST API Key | New Relic REST API Key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | New Relic user API ID | New Relic user API ID | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | New Relic user API key | New Relic user API Key | {{< icon name="check-circle" >}} Yes | {{< icon 
name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | npm access token | npm access token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Oculus access token | Oculus access token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Onfido Live API Token | Onfido Live API Token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | OpenAI API key | open ai token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Password in URL | Password in URL | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | PGP private key | PGP private key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | PKCS8 private key | PKCS8 private key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | PlanetScale API token | Planetscale API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | PlanetScale App Secret | PlanetscaleAppSecret | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | PlanetScale OAuth Secret | PlanetscaleOAuthSecret | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | PlanetScale password | Planetscale password | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | PostHog Personal API key | PostHogPersonalAPIkey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon 
name="check-circle" >}} Yes | | PostHog Project API key | PostHogProjectAPIkey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Postman API token | Postman API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Pulumi API token | Pulumi API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | PyPi upload token | PyPI upload token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | RSA private key | RSA private key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | RubyGems API token | Rubygem API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Segment public API token | Segment Public API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | SendGrid API token | Sendgrid API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Shippo API token | Shippo API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Shippo Test API token | Shippo Test API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Shopify custom app access token | Shopify custom app access token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Shopify personal access token | Shopify access token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" 
>}} Yes | | Shopify private app access token | Shopify private app access token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Shopify shared secret | Shopify shared secret | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Slack app level token | SlackAppLevelToken | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Slack bot user OAuth token | Slack token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Slack webhook | Slack Webhook | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | SonarQube Global Analysis Token | SonarQubeGlobalAnalysisToken | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | SonarQube Project Analysis Token | SonarQubeProjectAnalysisToken | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | SonarQube User Token | SonarQubeUserToken | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | SSH (DSA) private key | SSH (DSA) private key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | SSH (EC) private key | SSH (EC) private key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | SSH private key | SSH private key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Stripe live restricted key | StripeLiveRestrictedKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | 
{{< icon name="check-circle" >}} Yes | | Stripe live secret key | StripeLiveSecretKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Stripe Live Short Secret Key | StripeLiveShortSecretKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Stripe publishable live key | StripeLivePublishableKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Stripe publishable test key | StripeTestPublishableKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Stripe restricted test key | StripeTestRestrictedKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Stripe secret test key | StripeTestSecretKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Stripe Test Short Secret Key | StripeTestShortSecretKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Tailscale key | Tailscale key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Tencent Cloud Secret ID | TencentCloudSecretID | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Twilio Account SID | Twilio Account SID | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Twilio API key | Twilio API Key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Twitch OAuth client secret | Twitch API token | {{< icon name="check-circle" >}} Yes | {{< icon 
name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Typeform personal access token | Typeform API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Volcengine Access Key ID | VolcengineAccessKeyID | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | WakaTime API Key | WakaTimeAPIKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | X token | Twitter token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Yandex.Cloud AWS API compatible access secret | Yandex.Cloud AWS API compatible Access Secret | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Yandex.Cloud API Key | Yandex.Cloud API Key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Yandex.Cloud IAM cookie v1-1 | Yandex.Cloud IAM Cookie v1 - 1 | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Yandex.Cloud IAM cookie v1-3 | Yandex.Cloud IAM Cookie v1 - 3 | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | <!-- vale gitlab_base.SentenceSpacing = YES --> <!-- markdownlint-enable MD034 --> <!-- markdownlint-enable MD044 -->
{{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Hugging Face User Access Token | HuggingFaceUserAccessToken | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Instagram access token | Instagram access token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Intercom API token | Intercom API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Intercom client secret or client ID | Intercom client secret/ID | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Ionic personal access token | Ionic API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Linear API token | Linear API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Linear client secret or ID (OAuth 2.0) | Linear client secret/ID | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | LinkedIn client ID | Linkedin Client ID | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | LinkedIn client secret | Linkedin Client secret | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Lob API key | Lob API Key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Lob publishable API key | Lob Publishable API Key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Mailchimp API key | 
Mailchimp API key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Mailgun private API token | Mailgun private API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Mailgun public verification key | Mailgun public validation key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Mailgun webhook signing key | Mailgun webhook signing key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Mapbox API token | Mapbox API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | MaxMind License Key | MaxMind License Key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | MessageBird access key | messagebird-api-token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | MessageBird API client ID | MessageBird API client ID | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Meta access token | Meta access token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | New Relic ingest browser API token | New Relic ingest browser API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | New Relic ingest browser API token v2 | New Relic ingest browser API token v2 | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | New Relic REST API Key | New Relic REST API Key | {{< icon name="check-circle" >}} Yes | {{< icon 
name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | New Relic user API ID | New Relic user API ID | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | New Relic user API key | New Relic user API Key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | npm access token | npm access token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Oculus access token | Oculus access token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Onfido Live API Token | Onfido Live API Token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | OpenAI API key | open ai token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Password in URL | Password in URL | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | PGP private key | PGP private key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | PKCS8 private key | PKCS8 private key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | PlanetScale API token | Planetscale API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | PlanetScale App Secret | PlanetscaleAppSecret | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | PlanetScale OAuth Secret | PlanetscaleOAuthSecret | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon 
name="check-circle" >}} Yes | | PlanetScale password | Planetscale password | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | PostHog Personal API key | PostHogPersonalAPIkey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | PostHog Project API key | PostHogProjectAPIkey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Postman API token | Postman API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Pulumi API token | Pulumi API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | PyPi upload token | PyPI upload token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | RSA private key | RSA private key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | RubyGems API token | Rubygem API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Segment public API token | Segment Public API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | SendGrid API token | Sendgrid API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Shippo API token | Shippo API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Shippo Test API token | Shippo Test API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Shopify custom 
app access token | Shopify custom app access token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Shopify personal access token | Shopify access token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Shopify private app access token | Shopify private app access token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Shopify shared secret | Shopify shared secret | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Slack app level token | SlackAppLevelToken | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Slack bot user OAuth token | Slack token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Slack webhook | Slack Webhook | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | SonarQube Global Analysis Token | SonarQubeGlobalAnalysisToken | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | SonarQube Project Analysis Token | SonarQubeProjectAnalysisToken | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | SonarQube User Token | SonarQubeUserToken | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | SSH (DSA) private key | SSH (DSA) private key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | SSH (EC) private key | SSH (EC) private key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} 
No | {{< icon name="dotted-circle" >}} No | | SSH private key | SSH private key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Stripe live restricted key | StripeLiveRestrictedKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Stripe live secret key | StripeLiveSecretKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Stripe Live Short Secret Key | StripeLiveShortSecretKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Stripe publishable live key | StripeLivePublishableKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Stripe publishable test key | StripeTestPublishableKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Stripe restricted test key | StripeTestRestrictedKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Stripe secret test key | StripeTestSecretKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Stripe Test Short Secret Key | StripeTestShortSecretKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Tailscale key | Tailscale key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Tencent Cloud Secret ID | TencentCloudSecretID | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Twilio Account SID | Twilio Account SID | {{< icon name="check-circle" >}} Yes | {{< icon 
name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Twilio API key | Twilio API Key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | Twitch OAuth client secret | Twitch API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Typeform personal access token | Typeform API token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Volcengine Access Key ID | VolcengineAccessKeyID | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | WakaTime API Key | WakaTimeAPIKey | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | X token | Twitter token | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Yandex.Cloud AWS API compatible access secret | Yandex.Cloud AWS API compatible Access Secret | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Yandex.Cloud API Key | Yandex.Cloud API Key | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Yandex.Cloud IAM cookie v1-1 | Yandex.Cloud IAM Cookie v1 - 1 | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | Yandex.Cloud IAM cookie v1-3 | Yandex.Cloud IAM Cookie v1 - 3 | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | <!-- vale gitlab_base.SentenceSpacing = YES --> <!-- markdownlint-enable MD034 --> <!-- markdownlint-enable MD044 -->
---
stage: Application Security Testing
group: Secret Detection
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Secret detection
description: Detection, prevention, monitoring, storage, revocation, and reporting.
source: https://docs.gitlab.com/user/application_security/secret_detection
repository: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/_index.md
extracted: 2025-08-13
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Your application might use external resources, including a CI/CD service, a database, or external storage. Access to these resources requires authentication, usually using static methods like private keys and tokens. These methods are called "secrets" because they're not meant to be shared with anyone else.

To minimize the risk of exposing your secrets, always [store secrets outside of the repository](../../../ci/secrets/_index.md). However, secrets are sometimes accidentally committed to Git repositories. After a sensitive value is pushed to a remote repository, anyone with access to the repository can use the secret to impersonate the authorized user.

Secret detection monitors your activity to both:

- Help prevent your secrets from being leaked.
- Help you respond if a secret is leaked.

You should take a multi-layered security approach and enable all available secret detection methods:

- [Secret push protection](secret_push_protection/_index.md) scans commits for secrets when you push changes to GitLab. The push is blocked if secrets are detected, unless you skip secret push protection. This method reduces the risk of secrets being leaked.
- [Pipeline secret detection](pipeline/_index.md) runs as part of a project's CI/CD pipeline. Commits to the repository's default branch are scanned for secrets. If pipeline secret detection is enabled in merge request pipelines, commits to the development branch are scanned for secrets, enabling you to respond before they're committed to the default branch.
- [Client-side secret detection](client/_index.md) scans descriptions and comments in both issues and merge requests for secrets before they're saved to GitLab. When a secret is detected you can choose to edit the input and remove the secret or, if it's a false positive, save the description or comment.

If a secret is committed to a repository, GitLab records the exposure in the vulnerability report. For some secret types, GitLab can even automatically revoke the exposed secret. You should always revoke and replace exposed secrets as soon as possible. For secret-specific remediation guidance, review the details provided in the vulnerability report.

## Related topics

- [Secret detection exclusions](exclusions.md)
- [Vulnerability report](../vulnerability_report/_index.md)
- [Automatic response to leaked secrets](automatic_response.md)
- [Push rules](../../project/repository/push_rules.md)
---
stage: Application Security Testing
group: Secret Detection
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Automatic response to leaked secrets
source: https://docs.gitlab.com/user/application_security/automatic_response
repository: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/automatic_response.md
extracted: 2025-08-13
---
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

GitLab Secret Detection automatically responds when it finds certain types of leaked secrets. Automatic responses can:

- Automatically revoke the secret.
- Notify the partner that issued the secret. The partner can then revoke the secret, notify its owner, or otherwise protect against abuse.

## Supported secret types and actions

GitLab supports automatic response for the following types of secrets:

| Secret type | Action taken | Supported on GitLab.com | Supported in GitLab Self-Managed |
| ----- | --- | --- | --- |
| GitLab [personal access tokens](../../profile/personal_access_tokens.md) | Immediately revoke token, send email to owner | ✅ | ✅ [15.9 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/371658) |
| Amazon Web Services (AWS) [IAM access keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) | Notify AWS | ✅ | ⚙ |
| Google Cloud [service account keys](https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys), [API keys](https://cloud.google.com/docs/authentication/api-keys), and [OAuth client secrets](https://support.google.com/cloud/answer/6158849#rotate-client-secret) | Notify Google Cloud | ✅ | ⚙ |
| Postman [API keys](https://learning.postman.com/docs/developer/postman-api/authentication/) | Notify Postman; Postman [notifies the key owner](https://learning.postman.com/docs/administration/managing-your-team/secret-scanner/#protect-postman-api-keys-in-gitlab) | ✅ | ⚙ |

**Component legend**

- ✅ - Available by default
- ⚙ - Requires manual integration using a Token Revocation API

## Feature availability

{{< history >}}

- [Enabled for non-default branches](https://gitlab.com/gitlab-org/gitlab/-/issues/299212) in GitLab 15.11.

{{< /history >}}

Credentials are only post-processed when Secret Detection finds them:

- In public projects, because publicly exposed credentials pose an increased threat. Expansion to private projects is considered in [issue 391379](https://gitlab.com/gitlab-org/gitlab/-/issues/391379).
- In projects with GitLab Ultimate, for technical reasons. Expansion to all tiers is tracked in [issue 391763](https://gitlab.com/gitlab-org/gitlab/-/issues/391763).

## High-level architecture

This diagram describes how a post-processing hook revokes a secret in the GitLab application:

```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
sequenceDiagram
    accTitle: Architecture diagram
    accDescr: How a post-processing hook revokes a secret in the GitLab application.

    autonumber
    GitLab Rails-->+GitLab Rails: gl-secret-detection-report.json
    GitLab Rails->>+GitLab Sidekiq: StoreScansService
    GitLab Sidekiq-->+GitLab Sidekiq: ScanSecurityReportSecretsWorker
    GitLab Sidekiq-->+GitLab Token Revocation API: GET revocable keys types
    GitLab Token Revocation API-->>-GitLab Sidekiq: OK
    GitLab Sidekiq->>+GitLab Token Revocation API: POST revoke revocable keys
    GitLab Token Revocation API-->>-GitLab Sidekiq: ACCEPTED
    GitLab Token Revocation API-->>+Partner API: revoke revocable keys
    Partner API-->>+GitLab Token Revocation API: ACCEPTED
```

1. A pipeline with a Secret Detection job completes, producing a scan report (**1**).
1. The report is processed (**2**) by a service class, which schedules an asynchronous worker if token revocation is possible.
1. The asynchronous worker (**3**) communicates with an externally deployed HTTP service (**4** and **5**) to determine which kinds of secrets can be automatically revoked.
1. The worker sends (**6** and **7**) the list of detected secrets which the GitLab Token Revocation API is able to revoke.
1. The GitLab Token Revocation API sends (**8** and **9**) each revocable token to their respective vendor's [Partner API](#implement-a-partner-api).
## Partner program for leaked-credential notifications

GitLab notifies partners when credentials they issue are leaked in public repositories on GitLab.com. If you operate a cloud or SaaS product and you're interested in receiving these notifications, learn more in [epic 4944](https://gitlab.com/groups/gitlab-org/-/epics/4944). Partners must [implement a Partner API](#implement-a-partner-api), which is called by the GitLab Token Revocation API.

### Implement a Partner API

A Partner API integrates with the GitLab Token Revocation API to receive and respond to leaked token revocation requests. The service should be a publicly accessible HTTP API that is idempotent and rate-limited. Requests to your service can include one or more leaked tokens, and a header with the signature of the request body. We strongly recommend that you verify incoming requests using this signature, to prove it's a genuine request from GitLab.

The diagram below details the necessary steps to receive, verify, and revoke leaked tokens:

```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
sequenceDiagram
    accTitle: Partner API data flow
    accDescr: How a Partner API should receive and respond to leaked token revocation requests.

    autonumber
    GitLab Token Revocation API-->>+Partner API: Send new leaked credentials
    Partner API-->>+GitLab Public Keys endpoint: Get active public keys
    GitLab Public Keys endpoint-->>+Partner API: One or more public keys
    Partner API-->>+Partner API: Verify request is signed by GitLab
    Partner API-->>+Partner API: Respond to leaks
    Partner API-->>+GitLab Token Revocation API: HTTP status
```

1. The GitLab Token Revocation API sends (**1**) a [revocation request](#revocation-request) to the Partner API. The request includes headers containing a public key identifier and signature of the request body.
1. The Partner API requests (**2**) a list of [public keys](#public-keys-endpoint) from GitLab. The response (**3**) may include multiple public keys in the event of key rotation and should be filtered with the identifier in the request header.
1. The Partner API [verifies the signature](#verifying-the-request) against the actual request body, using the public key (**4**).
1. The Partner API processes the leaked tokens, which may involve automatic revocation (**5**).
1. The Partner API responds to the GitLab Token Revocation API (**6**) with the appropriate HTTP status code:
   - A successful response code (HTTP 200 through 299) acknowledges that the partner has received and processed the request.
   - An error code (HTTP 400 or higher) causes the GitLab Token Revocation API to retry the request.

#### Revocation request

This JSON schema document describes the body of the revocation request:

```json
{
  "type": "array",
  "items": {
    "description": "A leaked token",
    "type": "object",
    "properties": {
      "type": {
        "description": "The type of token. This is vendor-specific and can be customised to suit your revocation service",
        "type": "string",
        "examples": [
          "my_api_token"
        ]
      },
      "token": {
        "description": "The substring that was matched by the Secret Detection analyser. In most cases, this is the entire token itself",
        "type": "string",
        "examples": [
          "XXXXXXXXXXXXXXXX"
        ]
      },
      "url": {
        "description": "The URL to the raw source file hosted on GitLab where the leaked token was detected",
        "type": "string",
        "examples": [
          "https://gitlab.example.com/some-repo/-/raw/abcdefghijklmnop/compromisedfile1.java"
        ]
      }
    }
  }
}
```

Example:

```json
[{"type": "my_api_token", "token": "XXXXXXXXXXXXXXXX", "url": "https://example.com/some-repo/-/raw/abcdefghijklmnop/compromisedfile1.java"}]
```

In this example, Secret Detection has determined that an instance of `my_api_token` has been leaked. The value of the token is provided to you, in addition to a publicly accessible URL to the raw content of the file containing the leaked token.
The request includes two special headers: | Header | Type | Description | |--------|------|-------------| | `Gitlab-Public-Key-Identifier` | string | A unique identifier for the key pair used to sign this request. Primarily used to aid in key rotation. | | `Gitlab-Public-Key-Signature` | string | A base64-encoded signature of the request body. | You can use these headers along with the GitLab Public Keys endpoint to verify that the revocation request was genuine. #### Public Keys endpoint GitLab maintains a publicly-accessible endpoint for retrieving public keys used to verify revocation requests. The endpoint can be provided on request. This JSON schema document describes the response body of the public keys endpoint: ```json { "type": "object", "properties": { "public_keys": { "description": "An array of public keys managed by GitLab used to sign token revocation requests.", "type": "array", "items": { "type": "object", "properties": { "key_identifier": { "description": "A unique identifier for the keypair. Match this against the value of the Gitlab-Public-Key-Identifier header", "type": "string" }, "key": { "description": "The value of the public key", "type": "string" }, "is_current": { "description": "Whether the key is currently active and signing new requests", "type": "boolean" } } } } } } ``` Example: ```json { "public_keys": [ { "key_identifier": "6917d7584f0fa65c8c33df5ab20f54dfb9a6e6ae", "key": "-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEN05/VjsBwWTUGYMpijqC5pDtoLEf\nuWz2CVZAZd5zfa/NAlSFgWRDdNRpazTARndB2+dHDtcHIVfzyVPNr2aznw==\n-----END PUBLIC KEY-----\n", "is_current": true } ] } ``` #### Verifying the request You can check whether a revocation request is genuine by verifying the `Gitlab-Public-Key-Signature` header against the request body, using the corresponding public key taken from the API response above. 
We use [ECDSA](https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm) with SHA256 hashing to produce the signature, which is then base64-encoded into the header value. The Python script below demonstrates how the signature can be verified. It uses the popular [pyca/cryptography](https://cryptography.io/en/latest/) module for cryptographic operations: ```python import hashlib import base64 from cryptography.hazmat.primitives import hashes from cryptography.hazmat.primitives.serialization import load_pem_public_key from cryptography.hazmat.primitives.asymmetric import ec public_key = str.encode("") # obtained from the public keys endpoint signature_header = "" # obtained from the `Gitlab-Public-Key-Signature` header request_body = str.encode(r'') # obtained from the revocation request body pk = load_pem_public_key(public_key) decoded_signature = base64.b64decode(signature_header) pk.verify(decoded_signature, request_body, ec.ECDSA(hashes.SHA256())) # throws if unsuccessful print("Signature verified!") ``` The main steps are: 1. Loading the public key into a format appropriate for the crypto library you're using. 1. Base64-decoding the `Gitlab-Public-Key-Signature` header value. 1. Verifying the body against the decoded signature, specifying ECDSA with SHA256 hashing.
---
stage: Application Security Testing
group: Secret Detection
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Automatic response to leaked secrets
---

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

GitLab Secret Detection automatically responds when it finds certain types of leaked secrets. Automatic responses can:

- Automatically revoke the secret.
- Notify the partner that issued the secret. The partner can then revoke the secret, notify its owner, or otherwise protect against abuse.

## Supported secret types and actions

GitLab supports automatic response for the following types of secrets:

| Secret type | Action taken | Supported on GitLab.com | Supported in GitLab Self-Managed |
| ----- | --- | --- | --- |
| GitLab [personal access tokens](../../profile/personal_access_tokens.md) | Immediately revoke token, send email to owner | ✅ | ✅ [15.9 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/371658) |
| Amazon Web Services (AWS) [IAM access keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) | Notify AWS | ✅ | ⚙ |
| Google Cloud [service account keys](https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys), [API keys](https://cloud.google.com/docs/authentication/api-keys), and [OAuth client secrets](https://support.google.com/cloud/answer/6158849#rotate-client-secret) | Notify Google Cloud | ✅ | ⚙ |
| Postman [API keys](https://learning.postman.com/docs/developer/postman-api/authentication/) | Notify Postman; Postman [notifies the key owner](https://learning.postman.com/docs/administration/managing-your-team/secret-scanner/#protect-postman-api-keys-in-gitlab) | ✅ | ⚙ |

**Component legend**

- ✅ - Available by default
- ⚙ - Requires manual integration using a Token Revocation API

## Feature availability

{{< history >}}

- [Enabled for non-default branches](https://gitlab.com/gitlab-org/gitlab/-/issues/299212) in GitLab 15.11.

{{< /history >}}

Credentials are only post-processed when Secret Detection finds them:

- In public projects, because publicly exposed credentials pose an increased threat. Expansion to private projects is considered in [issue 391379](https://gitlab.com/gitlab-org/gitlab/-/issues/391379).
- In projects with GitLab Ultimate, for technical reasons. Expansion to all tiers is tracked in [issue 391763](https://gitlab.com/gitlab-org/gitlab/-/issues/391763).

## High-level architecture

This diagram describes how a post-processing hook revokes a secret in the GitLab application:

```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
sequenceDiagram
    accTitle: Architecture diagram
    accDescr: How a post-processing hook revokes a secret in the GitLab application.

    autonumber
    GitLab Rails-->+GitLab Rails: gl-secret-detection-report.json
    GitLab Rails->>+GitLab Sidekiq: StoreScansService
    GitLab Sidekiq-->+GitLab Sidekiq: ScanSecurityReportSecretsWorker
    GitLab Sidekiq-->+GitLab Token Revocation API: GET revocable keys types
    GitLab Token Revocation API-->>-GitLab Sidekiq: OK
    GitLab Sidekiq->>+GitLab Token Revocation API: POST revoke revocable keys
    GitLab Token Revocation API-->>-GitLab Sidekiq: ACCEPTED
    GitLab Token Revocation API-->>+Partner API: revoke revocable keys
    Partner API-->>+GitLab Token Revocation API: ACCEPTED
```

1. A pipeline with a Secret Detection job completes, producing a scan report (**1**).
1. The report is processed (**2**) by a service class, which schedules an asynchronous worker if token revocation is possible.
1. The asynchronous worker (**3**) communicates with an externally deployed HTTP service (**4** and **5**) to determine which kinds of secrets can be automatically revoked.
1. The worker sends (**6** and **7**) the list of detected secrets which the GitLab Token Revocation API is able to revoke.
1. The GitLab Token Revocation API sends (**8** and **9**) each revocable token to their respective vendor's [Partner API](#implement-a-partner-api).

## Partner program for leaked-credential notifications

GitLab notifies partners when credentials they issue are leaked in public repositories on GitLab.com. If you operate a cloud or SaaS product and you're interested in receiving these notifications, learn more in [epic 4944](https://gitlab.com/groups/gitlab-org/-/epics/4944). Partners must [implement a Partner API](#implement-a-partner-api), which is called by the GitLab Token Revocation API.

### Implement a Partner API

A Partner API integrates with the GitLab Token Revocation API to receive and respond to leaked token revocation requests. The service should be a publicly accessible HTTP API that is idempotent and rate-limited.

Requests to your service can include one or more leaked tokens, and a header with the signature of the request body. We strongly recommend that you verify incoming requests using this signature, to prove it's a genuine request from GitLab.

The diagram below details the necessary steps to receive, verify, and revoke leaked tokens:

```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
sequenceDiagram
    accTitle: Partner API data flow
    accDescr: How a Partner API should receive and respond to leaked token revocation requests.

    autonumber
    GitLab Token Revocation API-->>+Partner API: Send new leaked credentials
    Partner API-->>+GitLab Public Keys endpoint: Get active public keys
    GitLab Public Keys endpoint-->>+Partner API: One or more public keys
    Partner API-->>+Partner API: Verify request is signed by GitLab
    Partner API-->>+Partner API: Respond to leaks
    Partner API-->>+GitLab Token Revocation API: HTTP status
```

1. The GitLab Token Revocation API sends (**1**) a [revocation request](#revocation-request) to the Partner API.
   The request includes headers containing a public key identifier and signature of the request body.
1. The Partner API requests (**2**) a list of [public keys](#public-keys-endpoint) from GitLab. The response (**3**) may include multiple public keys in the event of key rotation and should be filtered with the identifier in the request header.
1. The Partner API [verifies the signature](#verifying-the-request) against the actual request body, using the public key (**4**).
1. The Partner API processes the leaked tokens, which may involve automatic revocation (**5**).
1. The Partner API responds to the GitLab Token Revocation API (**6**) with the appropriate HTTP status code:
   - A successful response code (HTTP 200 through 299) acknowledges that the partner has received and processed the request.
   - An error code (HTTP 400 or higher) causes the GitLab Token Revocation API to retry the request.

#### Revocation request

This JSON schema document describes the body of the revocation request:

```json
{
  "type": "array",
  "items": {
    "description": "A leaked token",
    "type": "object",
    "properties": {
      "type": {
        "description": "The type of token. This is vendor-specific and can be customised to suit your revocation service",
        "type": "string",
        "examples": [
          "my_api_token"
        ]
      },
      "token": {
        "description": "The substring that was matched by the Secret Detection analyser. In most cases, this is the entire token itself",
        "type": "string",
        "examples": [
          "XXXXXXXXXXXXXXXX"
        ]
      },
      "url": {
        "description": "The URL to the raw source file hosted on GitLab where the leaked token was detected",
        "type": "string",
        "examples": [
          "https://gitlab.example.com/some-repo/-/raw/abcdefghijklmnop/compromisedfile1.java"
        ]
      }
    }
  }
}
```

Example:

```json
[{"type": "my_api_token", "token": "XXXXXXXXXXXXXXXX", "url": "https://example.com/some-repo/-/raw/abcdefghijklmnop/compromisedfile1.java"}]
```

In this example, Secret Detection has determined that an instance of `my_api_token` has been leaked.
The value of the token is provided to you, in addition to a publicly accessible URL to the raw content of the file containing the leaked token.

The request includes two special headers:

| Header | Type | Description |
|--------|------|-------------|
| `Gitlab-Public-Key-Identifier` | string | A unique identifier for the key pair used to sign this request. Primarily used to aid in key rotation. |
| `Gitlab-Public-Key-Signature` | string | A base64-encoded signature of the request body. |

You can use these headers along with the GitLab Public Keys endpoint to verify that the revocation request was genuine.

#### Public Keys endpoint

GitLab maintains a publicly accessible endpoint for retrieving public keys used to verify revocation requests. The endpoint can be provided on request.

This JSON schema document describes the response body of the public keys endpoint:

```json
{
  "type": "object",
  "properties": {
    "public_keys": {
      "description": "An array of public keys managed by GitLab used to sign token revocation requests.",
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "key_identifier": {
            "description": "A unique identifier for the keypair. Match this against the value of the Gitlab-Public-Key-Identifier header",
            "type": "string"
          },
          "key": {
            "description": "The value of the public key",
            "type": "string"
          },
          "is_current": {
            "description": "Whether the key is currently active and signing new requests",
            "type": "boolean"
          }
        }
      }
    }
  }
}
```

Example:

```json
{
  "public_keys": [
    {
      "key_identifier": "6917d7584f0fa65c8c33df5ab20f54dfb9a6e6ae",
      "key": "-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEN05/VjsBwWTUGYMpijqC5pDtoLEf\nuWz2CVZAZd5zfa/NAlSFgWRDdNRpazTARndB2+dHDtcHIVfzyVPNr2aznw==\n-----END PUBLIC KEY-----\n",
      "is_current": true
    }
  ]
}
```

#### Verifying the request

You can check whether a revocation request is genuine by verifying the `Gitlab-Public-Key-Signature` header against the request body, using the corresponding public key taken from the API response above.

We use [ECDSA](https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm) with SHA256 hashing to produce the signature, which is then base64-encoded into the header value.

The Python script below demonstrates how the signature can be verified. It uses the popular [pyca/cryptography](https://cryptography.io/en/latest/) module for cryptographic operations:

```python
import base64

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.serialization import load_pem_public_key
from cryptography.hazmat.primitives.asymmetric import ec

public_key = str.encode("")  # obtained from the public keys endpoint
signature_header = ""  # obtained from the `Gitlab-Public-Key-Signature` header
request_body = str.encode(r'')  # obtained from the revocation request body

pk = load_pem_public_key(public_key)
decoded_signature = base64.b64decode(signature_header)

pk.verify(decoded_signature, request_body, ec.ECDSA(hashes.SHA256()))  # throws if unsuccessful
print("Signature verified!")
```

The main steps are:

1. Loading the public key into a format appropriate for the crypto library you're using.
1. Base64-decoding the `Gitlab-Public-Key-Signature` header value.
1. Verifying the body against the decoded signature, specifying ECDSA with SHA256 hashing.
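Before any of this cryptography runs, a Partner API has to unpack the request body and pick the signing key whose `key_identifier` matches the `Gitlab-Public-Key-Identifier` header. Those two steps need nothing beyond the standard library. The sketch below is illustrative only: the function names and sample data are assumptions for this example, not part of the GitLab contract.

```python
import json


def parse_revocation_request(body: bytes) -> list[dict]:
    """Parse a revocation request body into leaked-token records.

    Each record carries the vendor-specific token `type`, the matched
    `token` substring, and the `url` of the raw file where it leaked.
    """
    leaks = json.loads(body)
    if not isinstance(leaks, list):
        raise ValueError("revocation request body must be a JSON array")
    for leak in leaks:
        missing = {"type", "token", "url"} - leak.keys()
        if missing:
            raise ValueError(f"leak record missing fields: {sorted(missing)}")
    return leaks


def select_public_key(keys_response: dict, key_identifier: str) -> str:
    """Return the PEM key whose identifier matches the request header.

    The endpoint may return several keys during rotation, so filter by
    `key_identifier` rather than assuming a single entry.
    """
    for entry in keys_response["public_keys"]:
        if entry["key_identifier"] == key_identifier:
            return entry["key"]
    raise LookupError(f"no public key with identifier {key_identifier!r}")


# Illustrative inputs, shaped like the schemas above.
body = b'[{"type": "my_api_token", "token": "XXXXXXXXXXXXXXXX", "url": "https://example.com/f.java"}]'
leaks = parse_revocation_request(body)

keys_response = {
    "public_keys": [
        {"key_identifier": "retired-key", "key": "PEM-RETIRED", "is_current": False},
        {"key_identifier": "6917d7584f0fa65c8c33df5ab20f54dfb9a6e6ae", "key": "PEM-CURRENT", "is_current": True},
    ]
}
pem = select_public_key(keys_response, "6917d7584f0fa65c8c33df5ab20f54dfb9a6e6ae")
print(leaks[0]["type"], pem)  # my_api_token PEM-CURRENT
```

In a real service, the selected PEM string would then be passed to `load_pem_public_key` for the signature verification shown in the script above.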
---
stage: Application Security Testing
group: Secret Detection
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: 'Tutorial: Protect your project with secret push protection'
---

If your application uses external resources, you usually need to authenticate your application with a **secret**, like a token or key. If a secret is pushed to a remote repository, anyone with access to the repository can impersonate you or your application.

With secret push protection, if GitLab detects a secret in the commit history, it can block a push to prevent a leak. Enabling secret push protection is a good way to reduce the amount of time you spend reviewing your commits for sensitive data and remediating leaks if they occur.

In this tutorial, you'll configure secret push protection and see what happens when you try to commit a fake secret. You'll also learn how to skip secret push protection, in case you need to bypass a false positive.

<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
This tutorial is adapted from the following GitLab Unfiltered videos:

- [Introduction to Secret Push Protection](https://www.youtube.com/watch?v=SFVuKx3hwNI) <!-- Video published on 2024-06-21 -->
- [Configuration - Enabling Secret Push Protection for your project](https://www.youtube.com/watch?v=t1DJN6Vsmp0) <!-- Video published on 2024-06-23 -->
- [Skip Secret Push Protection](https://www.youtube.com/watch?v=wBAhe_d2DkQ) <!-- Video published on 2024-06-04 -->

## Before you begin

Before you begin this tutorial, make sure you have the following:

- A GitLab Ultimate subscription.
- A test project. You can use any project you like, but consider creating a test project specifically for this tutorial.
- Some familiarity with command-line Git.
Additionally, on GitLab Self-Managed only, ensure secret push protection is [enabled on the instance](secret_push_protection/_index.md#allow-the-use-of-secret-push-protection-in-your-gitlab-instance).

## Enable secret push protection

To use secret push protection, you need to enable it for each project you want to protect. Let's start by enabling it in a test project.

1. On the left sidebar, select **Search or go to** and find your project.
1. On the left sidebar, select **Secure > Security configuration**.
1. Turn on the **Secret push protection** toggle.

Next, you'll test secret push protection.

## Try pushing a secret to your project

GitLab identifies secrets by matching specific patterns of letters, digits, and symbols. These patterns are also used to identify the type of secret. Let's test this feature by adding the fake secret `glpat-12345678901234567890` to our project: <!-- gitleaks:allow -->

1. In the project, check out a new branch:

   ```shell
   git checkout -b push-protection-tutorial
   ```

1. Create a new file with the following content. Be sure to remove the spaces before and after the `-` to match the exact format of a personal access token:

   ```plaintext
   hello, world!

   # To make the example work, remove
   # the spaces before and after the dash:
   glpat - 12345678901234567890
   ```

1. Commit the file to your branch:

   ```shell
   git add .
   git commit -m "Add fake secret"
   ```

   The secret is now entered into the commit history. Secret push protection doesn't stop you from committing a secret; it only alerts you when you push.

1. Push the changes to GitLab. You should see something like this:

   ```shell
   $ git push

   remote: GitLab:
   remote: PUSH BLOCKED: Secrets detected in code changes
   remote:
   remote: Secret push protection found the following secrets in commit: 123abc
   remote: -- myFile.txt:2 | GitLab Personal Access Token
   remote:
   remote: To push your changes you must remove the identified secrets.
   To gitlab.com:
    ! [remote rejected] push-protection-tutorial -> main (pre-receive hook declined)
   ```

   GitLab detects the secret and blocks the push. From the error report, we can see:

   - The commit that contains the secret (`123abc`)
   - The file and line number that contains the secret (`myFile.txt:2`)
   - The type of secret (`GitLab Personal Access Token`)

If we had successfully pushed our changes, we would need to spend considerable time and effort to revoke and replace the secret. Instead, we can [remove the secret from the commit history](remove_secrets_tutorial.md) and rest easy knowing we stopped the secret from being leaked.

## Skip secret push protection

Sometimes you need to push a commit, even if secret push protection has identified a secret. This can happen when GitLab detects a false positive. To demonstrate, we'll push our last commit to GitLab.

### With a push option

You can use a push option to skip secret detection:

- Push your commit with the `secret_detection.skip_all` option:

  ```shell
  git push -o secret_detection.skip_all
  ```

  Secret detection is skipped, and the changes are pushed to the remote.

### With a commit message

If you don't have access to the command line, or you don't want to use a push option:

- Add the string `[skip secret push protection]` to the commit message. For example:

  ```shell
  git commit --amend -m "Add fake secret [skip secret push protection]"
  ```

You only need to add `[skip secret push protection]` to one of the commit messages in order to push your changes, even if there are multiple commits.

## Next steps

Consider enabling [pipeline secret detection](pipeline/_index.md) to further improve the security of your projects.
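Taken together, the two mechanisms mean a push skips detection if either the push option is present or any one commit message in the push carries the skip string. As a small illustrative model of that decision (the function and its inputs are assumptions for this example, not GitLab's actual implementation):

```python
SKIP_STRING = "[skip secret push protection]"


def should_skip_detection(push_options: list[str], commit_messages: list[str]) -> bool:
    """Model the two documented skip mechanisms: the push option,
    or the skip string in at least one commit message of the push."""
    if "secret_detection.skip_all" in push_options:
        return True
    return any(SKIP_STRING in message for message in commit_messages)


# The push option skips detection for the whole push.
print(should_skip_detection(["secret_detection.skip_all"], ["Add fake secret"]))  # True

# One marked commit message is enough, even with multiple commits.
print(should_skip_detection([], ["First commit", "Add fake secret [skip secret push protection]"]))  # True
print(should_skip_detection([], ["Add fake secret"]))  # False
```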
---
stage: Application Security Testing
group: Secret Detection
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Client-side secret detection
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/368434) in GitLab 15.11.
- Detection of personal access tokens with a custom prefix was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/411146) in GitLab 16.1. GitLab Self-Managed only.

{{< /history >}}

When you create an issue, add a description to a merge request, or write a comment, you might accidentally post a secret. For example, you might paste in the details of an API request or an environment variable that contains an authentication token. If a secret is leaked, an adversary can use it to impersonate a legitimate user.

Client-side secret detection helps minimize the risk of accidental secret exposure. When you edit a description, or comment in an issue or merge request, GitLab automatically scans the content for secrets.

## Secret detection workflow

Client-side secret detection operates entirely within your browser using pattern matching. This approach ensures that:

- Secrets are detected before they are submitted to GitLab.
- No sensitive information is transmitted during the detection process.
- The feature works seamlessly without requiring additional configuration.

## Getting started

Client-side secret detection is enabled by default for all GitLab tiers. No setup or configuration is required.

To test this feature:

1. Navigate to any issue or merge request.
1. Add a comment containing a test secret pattern, such as `glpat-xxxxxxxxxxxxxxxxxxxx`.
1. Observe the warning message that appears before you submit.

Always use placeholder values when you test to avoid exposing real secrets.

## Coverage

Client-side secret detection analyzes the following content:

- Issue descriptions and comments
- Merge request descriptions and comments

For detailed information about the specific types of secrets detected, see the [Detected secrets](../detected_secrets.md) documentation.

## Understanding the results

When client-side secret detection identifies a potential secret, GitLab displays a warning that highlights the detected secret. You can either:

- **Edit** the content of the comment or description to remove the secret.
- **Add** content without making any changes. Exercise caution before you add content that contains a potential secret.

The detection occurs entirely in your browser. No information is transmitted unless you select **Add**.

## Optimization

To maximize the effectiveness of client-side secret detection:

- Review warnings carefully. Always investigate flagged content before proceeding.
- Use placeholders. Replace actual secrets with placeholder text like `[REDACTED]` or `<API_KEY>`.
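Pattern matching of this kind ultimately comes down to regular expressions over known token formats. As a rough illustration only (the pattern below is an assumption modeled on the `glpat-` personal access token format used in the example above, not GitLab's actual client-side rule set, which covers many more secret types):

```python
import re

# Illustrative pattern: the `glpat-` prefix followed by 20 token
# characters. Real detection rules cover many more formats, including
# self-managed instances' custom token prefixes.
GLPAT_PATTERN = re.compile(r"glpat-[0-9A-Za-z_\-]{20}")


def find_token_like_strings(text: str) -> list[str]:
    """Return substrings of `text` that look like personal access tokens."""
    return GLPAT_PATTERN.findall(text)


comment = "Here is my env: GITLAB_TOKEN=glpat-abcdefghij0123456789 please help"
print(find_token_like_strings(comment))  # ['glpat-abcdefghij0123456789']
```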
---
stage: Application Security Testing
group: Secret Detection
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Pipeline secret detection
breadcrumbs:
- doc
- user
- application_security
- secret_detection
- pipeline
---
<!-- markdownlint-disable MD025 -->

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Pipeline secret detection scans files after they are committed to a Git repository and pushed to GitLab. After you [enable pipeline secret detection](#getting-started), scans run in a CI/CD job named `secret_detection`.

You can run scans and view [pipeline secret detection JSON report artifacts](../../../../ci/yaml/artifacts_reports.md#artifactsreportssecret_detection) in any GitLab tier. With GitLab Ultimate, pipeline secret detection results are also processed so you can:

- See them in the [merge request widget](../../detect/security_scanning_results.md), [pipeline security report](../../detect/security_scanning_results.md), and [vulnerability report](../../vulnerability_report/_index.md).
- Use them in approval workflows.
- Review them in the security dashboard.
- [Automatically respond](../automatic_response.md) to leaks in public repositories.
- Enforce consistent secret detection rules across projects by using [security policies](../../policies/_index.md).

<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an interactive reading and how-to demo of this pipeline secret detection documentation, see:

- [How to enable secret detection in GitLab Application Security Part 1/2](https://youtu.be/dbMxeO6nJCE?feature=shared)
- [How to enable secret detection in GitLab Application Security Part 2/2](https://youtu.be/VL-_hdiTazo?feature=shared)

<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For other interactive reading and how-to demos, see the [Get Started With GitLab Application Security Playlist](https://www.youtube.com/playlist?list=PL05JrBw4t0KrUrjDoefSkgZLx5aJYFaF9).

## Availability

Different features are available in different [GitLab tiers](https://about.gitlab.com/pricing/).

| Capability | In Free & Premium | In Ultimate |
|:------------------------------------------------------------------------|:-------------------------------------|:------------|
| [Customize analyzer behavior](configure.md#customize-analyzer-behavior) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| Download [output](#secret-detection-results) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| See new findings in the merge request widget | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| View identified secrets in the pipelines' **Security** tab | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Manage vulnerabilities](../../vulnerability_report/_index.md) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Access the Security Dashboard](../../security_dashboard/_index.md) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Customize analyzer rulesets](configure.md#customize-analyzer-rulesets) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Enable security policies](../../policies/_index.md) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |

## Getting started

To get started with pipeline secret detection, select a pilot project and enable the analyzer.

Prerequisites:

- You have a Linux-based runner with the [`docker`](https://docs.gitlab.com/runner/executors/docker.html) or [`kubernetes`](https://docs.gitlab.com/runner/install/kubernetes.html) executor. If you use hosted runners for GitLab.com, this is enabled by default.
  - Windows Runners are not supported.
  - CPU architectures other than amd64 are not supported.
- You have a `.gitlab-ci.yml` file that includes the `test` stage.

Enable the secret detection analyzer by using one of the following:

- Edit the `.gitlab-ci.yml` file manually. Use this method if your CI/CD configuration is complex.
- Use an automatically configured merge request. Use this method if you don't have a CI/CD configuration, or your configuration is minimal.
- Enable pipeline secret detection in a [scan execution policy](../../policies/scan_execution_policies.md).

If this is your first time running a secret detection scan on your project, you should run a historic scan immediately after you enable the analyzer.

After you enable pipeline secret detection, you can [customize the analyzer settings](configure.md).

### Edit the `.gitlab-ci.yml` file manually

This method requires you to manually edit an existing `.gitlab-ci.yml` file.

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Build > Pipeline editor**.
1. Copy and paste the following to the bottom of the `.gitlab-ci.yml` file:

   ```yaml
   include:
     - template: Jobs/Secret-Detection.gitlab-ci.yml
   ```

1. Select the **Validate** tab, then select **Validate pipeline**. The message **Simulation completed successfully** indicates the file is valid.
1. Select the **Edit** tab.
1. Optional. In the **Commit message** text box, customize the commit message.
1. In the **Branch** text box, enter the name of the default branch.
1. Select **Commit changes**.

Pipelines now include a pipeline secret detection job. Consider [running a historic scan](#run-a-historic-scan) after you enable the analyzer.

### Use an automatically configured merge request

This method automatically prepares a merge request to add a `.gitlab-ci.yml` file that includes the pipeline secret detection template. Merge the merge request to enable pipeline secret detection.

To enable pipeline secret detection:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Secure > Security configuration**.
1. In the **Pipeline secret detection** row, select **Configure with a merge request**.
1. Optional. Complete the fields.
1. Select **Create merge request**.
1. Review and merge the merge request.

Pipelines now include a pipeline secret detection job.

## Coverage

Pipeline secret detection is optimized to balance coverage and run time. Only the current state of the repository and future commits are scanned for secrets. To identify secrets already present in the repository's history, run a historic scan once after enabling pipeline secret detection. Scan results are available only after the pipeline is completed.

Exactly what is scanned for secrets depends on the type of pipeline, and whether any additional configuration is set.

By default, when you run a pipeline:

- On a branch:
  - On the **default branch**, the Git working tree is scanned. This means the entire repository is scanned as though it were a typical directory.
  - On a **new, non-default branch**, the content of all commits from the most recent commit on the parent branch to the latest commit is scanned.
  - On an **existing, non-default branch**, the content of all commits from the last pushed commit to the latest commit is scanned.
- On a **merge request**, the content of all commits on the branch is scanned. If the analyzer can't access every commit, the content of all commits from the parent to the latest commit is scanned. To scan all commits, you must enable [merge request pipelines](../../detect/security_configuration.md#use-security-scanning-tools-with-merge-request-pipelines).

To override the default behavior, use the [available CI/CD variables](configure.md#available-cicd-variables).

### Run a historic scan

By default, pipeline secret detection scans only the current state of the Git repository. Any secrets contained in the repository's history are not detected. Run a historic scan to check for secrets from all commits and branches in the Git repository.

You should run a historic scan only once, after enabling pipeline secret detection. Historic scans can take a long time, especially for larger repositories with lengthy Git histories.
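A historic scan is triggered by the `SECRET_DETECTION_HISTORIC_SCAN` CI/CD variable. As a sketch (assuming your project already includes the secret detection template), a one-off override in `.gitlab-ci.yml` could look like the following. Because a committed variable applies to every pipeline, prefer setting it on a manually run pipeline, as described in this section:

```yaml
include:
  - template: Jobs/Secret-Detection.gitlab-ci.yml

secret_detection:
  variables:
    # Scans all branches and the full Git history; remove after the first run.
    SECRET_DETECTION_HISTORIC_SCAN: "true"
```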
After completing an initial historic scan, use only standard pipeline secret detection as part of your pipeline.

If you enable pipeline secret detection with a [scan execution policy](../../policies/scan_execution_policies.md#scanner-behavior), by default the first scheduled scan is a historic scan.

To run a historic scan:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Build > Pipelines**.
1. Select **New pipeline**.
1. Add a CI/CD variable:
   1. From the dropdown list, select **Variable**.
   1. In the **Input variable key** box, enter `SECRET_DETECTION_HISTORIC_SCAN`.
   1. In the **Input variable value** box, enter `true`.
1. Select **New pipeline**.

### Advanced vulnerability tracking

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/434096) in GitLab 17.0.

{{< /history >}}

When developers make changes to a file with identified secrets, it's likely that the positions of these secrets will also change. Pipeline secret detection may have already flagged these secrets as vulnerabilities, tracked in the [vulnerability report](../../vulnerability_report/_index.md). These vulnerabilities are associated with specific secrets for easy identification and action. However, if the detected secrets aren't accurately tracked as they shift, managing vulnerabilities becomes challenging, potentially resulting in duplicate vulnerability reports.

Pipeline secret detection uses an advanced vulnerability tracking algorithm to more accurately identify when the same secret has moved within a file due to refactoring or unrelated changes.

For more information, see the confidential project `https://gitlab.com/gitlab-org/security-products/post-analyzers/tracking-calculator`. The content of this project is available only to GitLab team members.

#### Unsupported workflows

- The algorithm does not support the workflow where the existing finding lacks a tracking signature and does not share the same location as the newly detected finding.
- For some rule types, such as cryptographic keys, pipeline secret detection identifies leaks by matching the prefix of the secret rather than the entire secret value. In this scenario, the algorithm consolidates different secrets of the same rule type in a file into a single finding, rather than treating each distinct secret as a separate finding. For example, the [SSH Private Key rule type](https://gitlab.com/gitlab-org/security-products/analyzers/secrets/-/blob/d2919f65f1d8001755015b5d790af620676b97ea/gitleaks.toml#L138) matches only the `-----BEGIN OPENSSH PRIVATE KEY-----` prefix of a value to confirm the presence of an SSH private key. If there are two distinct SSH private keys within the same file, the algorithm considers both values as identical and reports only one finding instead of two.
- The algorithm's scope is limited to a per-file basis, meaning that the same secret appearing in two different files is treated as two distinct findings.

### Detected secrets

Pipeline secret detection scans the repository's content for specific patterns. Each pattern matches a specific type of secret and is specified in a rule by using TOML syntax. GitLab maintains the default set of rules.

With GitLab Ultimate you can extend these rules to suit your needs. For example, while personal access tokens that use a custom prefix are not detected by default, you can customize the rules to identify these tokens. For details, see [Customize analyzer rulesets](configure.md#customize-analyzer-rulesets).

To confirm which secrets are detected by pipeline secret detection, see [Detected secrets](../detected_secrets.md). To provide reliable, high-confidence results, pipeline secret detection only looks for passwords or other unstructured secrets in specific contexts like URLs.
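To illustrate the rule format, a hypothetical custom rule in gitleaks-style TOML might look like the following. The identifier, description, and pattern here are invented for this example; the real default rules live in the analyzer's `gitleaks.toml`:

```toml
# Hypothetical custom rule for a token format specific to your organization.
[[rules]]
id = "example-custom-token"
description = "Hypothetical internal token with a custom prefix"
regex = '''example-tok-[0-9a-f]{32}'''
keywords = ["example-tok-"]
```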
When a secret is detected, a vulnerability is created for it. The vulnerability remains as "Still detected" even if the secret is removed from the scanned file and pipeline secret detection has been run again. This is because the leaked secret continues to be a security risk until it has been revoked. Removed secrets also persist in the Git history. To remove a secret from the Git repository's history, see [Redact text from repository](../../../project/merge_requests/revert_changes.md#redact-text-from-repository).

## Secret detection results

Pipeline secret detection outputs the file `gl-secret-detection-report.json` as a job artifact. The file contains detected secrets. You can [download](../../../../ci/jobs/job_artifacts.md#download-job-artifacts) the file for processing outside GitLab.

For more information, see the [report file schema](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/secret-detection-report-format.json) and the [example report file](https://gitlab.com/gitlab-org/security-products/analyzers/secrets/-/blob/master/qa/expect/secrets/gl-secret-detection-report.json).

### Additional output

{{< details >}}

- Tier: Ultimate

{{< /details >}}

Job results are also reported on the:

- [Merge request widget](../../detect/security_scanning_results.md#merge-request-security-widget): shows new findings introduced in the merge request.
- [Pipeline security report](../../vulnerability_report/pipeline.md): displays all findings from the latest pipeline run.
- [Vulnerability report](../../vulnerability_report/_index.md): provides centralized management of all security findings.
- Security dashboard: offers organization-wide visibility into all vulnerabilities across projects and groups.

## Understanding the results

Pipeline secret detection provides detailed information about potential secrets found in your repository. Each secret includes the type of secret leaked and remediation guidelines.

When reviewing results:

1. Look at the surrounding code to determine if the detected pattern is actually a secret.
1. Test whether the detected value is a working credential.
1. Consider the repository's visibility and the secret's scope.
1. Address active, high-privilege secrets first.

### Common detection categories

Detections by pipeline secret detection often fall into one of three categories:

- **True positives**: Legitimate secrets that should be rotated and removed. For example:
  - Active API keys, database passwords, authentication tokens
  - Private keys and certificates
  - Service account credentials
- **False positives**: Detected patterns that aren't actual secrets. For example:
  - Example values in documentation
  - Test data or mock credentials
  - Configuration templates with placeholder values
- **Historical findings**: Secrets that were previously committed but might not be active. These detections:
  - Require investigation to determine current status
  - Should still be rotated as a precaution

## Remediate a leaked secret

When a secret is detected, you should rotate it immediately. GitLab attempts to [automatically revoke](../automatic_response.md) some types of leaked secrets. For those that are not automatically revoked, you must do so manually.

[Purging a secret from the repository's history](../../../project/repository/repository_size.md#purge-files-from-repository-history) does not fully address the leak. The original secret remains in any existing forks or clones of the repository.

For instructions on how to respond to a leaked secret, select the vulnerability in the vulnerability report.

## Optimization

Before deploying pipeline secret detection across your organization, optimize the configuration to reduce false positives and improve accuracy for your specific environment. False positives can create alert fatigue and reduce trust in the tool.

Consider using custom ruleset configuration (Ultimate only):

- Exclude known safe patterns specific to your codebase.
- Adjust sensitivity for rules that frequently trigger on non-secrets.
- Add custom rules for organization-specific secret formats.

To optimize performance in large repositories or organizations with many projects, review your:

- Scan scope management:
  - Turn off historical scanning after you run a historical scan in a project.
  - Schedule historic scans during low-usage periods.
- Resource allocation:
  - Allocate sufficient runner resources for larger repositories.
  - Consider dedicated runners for security scanning workloads.
  - Monitor scan duration and optimize based on repository size.

### Testing optimization changes

Before applying optimizations organization-wide:

1. Validate that optimizations don't miss legitimate secrets.
1. Track false positive reduction and scan performance improvements.
1. Maintain records of effective optimization patterns.

## Roll out

You should implement pipeline secret detection incrementally. Start with a small-scale pilot to understand the tool's behavior before rolling out the feature across your organization.

Follow these guidelines when you roll out pipeline secret detection:

1. Choose a pilot project. Suitable projects have:
   - Active development with regular commits.
   - A manageable codebase size.
   - A team familiar with GitLab CI/CD.
   - Willingness to iterate on configuration.
1. Start simple. Enable pipeline secret detection with default settings on your pilot project.
1. Monitor results. Run the analyzer for one or two weeks to understand typical findings.
1. Address detected secrets. Remediate any legitimate secrets found.
1. Tune your configuration. Adjust settings based on initial results.
1. Document the implementation. Record common false positives and remediation patterns.

## FIPS-enabled images

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/6479) in GitLab 14.10.

{{< /history >}}

The default scanner images are built off a base Alpine image for size and maintainability.
GitLab offers [Red Hat UBI](https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image) versions of the images that are FIPS-enabled. To use the FIPS-enabled images, either:

- Set the `SECRET_DETECTION_IMAGE_SUFFIX` CI/CD variable to `-fips`.
- Add the `-fips` extension to the default image name.

For example:

```yaml
variables:
  SECRET_DETECTION_IMAGE_SUFFIX: '-fips'

include:
  - template: Jobs/Secret-Detection.gitlab-ci.yml
```

## Troubleshooting

### Debug-level logging

Debug-level logging can help when troubleshooting. For details, see [debug-level logging](../../troubleshooting_application_security.md#debug-level-logging).

#### Warning: `gl-secret-detection-report.json: no matching files`

For information on this, see the [general Application Security troubleshooting section](../../../../ci/jobs/job_artifacts_troubleshooting.md#error-message-no-files-to-upload).

#### Error: `Couldn't run the gitleaks command: exit status 2`

The pipeline secret detection analyzer relies on generating patches between commits to scan content for secrets. If the number of commits in a merge request is greater than the value of the [`GIT_DEPTH` CI/CD variable](../../../../ci/runners/configure_runners.md#shallow-cloning), secret detection fails to detect secrets.

For example, you could have a pipeline triggered from a merge request containing 60 commits and the `GIT_DEPTH` variable set to less than 60. In that case the pipeline secret detection job fails because the clone is not deep enough to contain all of the relevant commits. To verify the current value, see [pipeline configuration](../../../../ci/pipelines/settings.md#limit-the-number-of-changes-fetched-during-clone).

To confirm this as the cause of the error, enable [debug-level logging](../../troubleshooting_application_security.md#debug-level-logging), then rerun the pipeline. The logs should look similar to the following example. The text "object not found" is a symptom of this error.

```plaintext
ERRO[2020-11-18T18:05:52Z] object not found
[ERRO] [secrets] [2020-11-18T18:05:52Z] ▶ Couldn't run the gitleaks command: exit status 2
[ERRO] [secrets] [2020-11-18T18:05:52Z] ▶ Gitleaks analysis failed: exit status 2
```

To resolve the issue, set the [`GIT_DEPTH` CI/CD variable](../../../../ci/runners/configure_runners.md#shallow-cloning) to a higher value. To apply this only to the pipeline secret detection job, the following can be added to your `.gitlab-ci.yml` file:

```yaml
secret_detection:
  variables:
    GIT_DEPTH: 100
```

#### Error: `ERR fatal: ambiguous argument`

Pipeline secret detection can fail with the message `ERR fatal: ambiguous argument` if your repository's default branch is unrelated to the branch the job was triggered for. See [issue 352014](https://gitlab.com/gitlab-org/gitlab/-/issues/352014) for more details.

To resolve the issue, make sure to correctly [set your default branch](../../../project/repository/branches/default.md#change-the-default-branch-name-for-a-project) on your repository. You should set it to a branch that has related history with the branch you run the `secret_detection` job on.

#### `exec /bin/sh: exec format error` message in job log

The GitLab pipeline secret detection analyzer [only supports](#getting-started) running on the `amd64` CPU architecture. This message indicates that the job is being run on a different architecture, such as `arm`.

#### Error: `fatal: detected dubious ownership in repository at '/builds/<project dir>'`

Secret detection might fail with an exit status of 128. This can be caused by a change to the user on the Docker image. For example:

```shell
$ /analyzer run
[INFO] [secrets] [2024-06-06T07:28:13Z] ▶ GitLab secrets analyzer v6.0.1
[INFO] [secrets] [2024-06-06T07:28:13Z] ▶ Detecting project
[INFO] [secrets] [2024-06-06T07:28:13Z] ▶ Analyzer will attempt to analyze all projects in the repository
[INFO] [secrets] [2024-06-06T07:28:13Z] ▶ Loading ruleset for /builds....
[WARN] [secrets] [2024-06-06T07:28:13Z] ▶ /builds/....secret-detection-ruleset.toml not found, ruleset support will be disabled.
[INFO] [secrets] [2024-06-06T07:28:13Z] ▶ Running analyzer
[FATA] [secrets] [2024-06-06T07:28:13Z] ▶ get commit count: exit status 128
```

To work around this issue, add a `before_script` with the following:

```yaml
before_script:
  - git config --global --add safe.directory "$CI_PROJECT_DIR"
```

For more information about this issue, see [issue 465974](https://gitlab.com/gitlab-org/gitlab/-/issues/465974).

<!-- markdownlint-enable MD025 -->
--- stage: Application Security Testing group: Secret Detection info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Pipeline secret detection breadcrumbs: - doc - user - application_security - secret_detection - pipeline --- <!-- markdownlint-disable MD025 --> {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Pipeline secret detection scans files after they are committed to a Git repository and pushed to GitLab. After you [enable pipeline secret detection](#getting-started), scans run in a CI/CD job named `secret_detection`. You can run scans and view [pipeline secret detection JSON report artifacts](../../../../ci/yaml/artifacts_reports.md#artifactsreportssecret_detection) in any GitLab tier. With GitLab Ultimate, pipeline secret detection results are also processed so you can: - See them in the [merge request widget](../../detect/security_scanning_results.md), [pipeline security report](../../detect/security_scanning_results.md), and [vulnerability report](../../vulnerability_report/_index.md). - Use them in approval workflows. - Review them in the security dashboard. - [Automatically respond](../automatic_response.md) to leaks in public repositories. - Enforce consistent secret detection rules across projects by using [security policies](../../policies/_index.md). 
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For an interactive reading and how-to demo of this pipeline secret detection documentation see: - [How to enable secret detection in GitLab Application Security Part 1/2](https://youtu.be/dbMxeO6nJCE?feature=shared) - [How to enable secret detection in GitLab Application Security Part 2/2](https://youtu.be/VL-_hdiTazo?feature=shared) <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For other interactive reading and how-to demos, see the [Get Started With GitLab Application Security Playlist](https://www.youtube.com/playlist?list=PL05JrBw4t0KrUrjDoefSkgZLx5aJYFaF9). ## Availability Different features are available in different [GitLab tiers](https://about.gitlab.com/pricing/). | Capability | In Free & Premium | In Ultimate | |:------------------------------------------------------------------------|:-------------------------------------|:------------| | [Customize analyzer behavior](configure.md#customize-analyzer-behavior) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | Download [output](#secret-detection-results) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | See new findings in the merge request widget | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | View identified secrets in the pipelines' **Security** tab | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [Manage vulnerabilities](../../vulnerability_report/_index.md) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [Access the Security Dashboard](../../security_dashboard/_index.md) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [Customize analyzer rulesets](configure.md#customize-analyzer-rulesets) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [Enable security policies](../../policies/_index.md) | {{< 
icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | ## Getting started To get started with pipeline secret detection, select a pilot project and enable the analyzer. Prerequisites: - You have a Linux-based runner with the [`docker`](https://docs.gitlab.com/runner/executors/docker.html) or [`kubernetes`](https://docs.gitlab.com/runner/install/kubernetes.html) executor. If you use hosted runners for GitLab.com, this is enabled by default. - Windows Runners are not supported. - CPU architectures other than amd64 are not supported. - You have a `.gitlab-ci.yml` file that includes the `test` stage. Enable the secret detection analyzer by using one of the following: - Edit the `.gitlab-ci.yml` file manually. Use this method if your CI/CD configuration is complex. - Use an automatically configured merge request. Use this method if you don't have a CI/CD configuration, or your configuration is minimal. - Enable pipeline secret detection in a [scan execution policy](../../policies/scan_execution_policies.md). If this is your first time running a secret detection scan on your project, you should run a historic scan immediately after you enable the analyzer. After you enable pipeline secret detection, you can [customize the analyzer settings](configure.md). ### Edit the `.gitlab-ci.yml` file manually This method requires you to manually edit an existing `.gitlab-ci.yml` file. 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Build > Pipeline editor**. 1. Copy and paste the following to the bottom of the `.gitlab-ci.yml` file: ```yaml include: - template: Jobs/Secret-Detection.gitlab-ci.yml ``` 1. Select the **Validate** tab, then select **Validate pipeline**. The message **Simulation completed successfully** indicates the file is valid. 1. Select the **Edit** tab. 1. Optional. In the **Commit message** text box, customize the commit message. 1. In the **Branch** text box, enter the name of the default branch. 1. 
Select **Commit changes**. Pipelines now include a pipeline secret detection job. Consider [running a historic scan](#run-a-historic-scan) after you enable the analyzer. ### Use an automatically configured merge request This method automatically prepares a merge request to add a `.gitlab-ci.yml` file that includes the pipeline secret detection template. Merge the merge request to enable pipeline secret detection. To enable pipeline secret detection: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Secure > Security configuration**. 1. In the **Pipeline secret detection** row, select **Configure with a merge request**. 1. Optional. Complete the fields. 1. Select **Create merge request**. 1. Review and merge the merge request. Pipelines now include a pipeline secret detection job. ## Coverage Pipeline secret detection is optimized to balance coverage and run time. Only the current state of the repository and future commits are scanned for secrets. To identify secrets already present in the repository's history, run a historic scan once after enabling pipeline secret detection. Scan results are available only after the pipeline is completed. Exactly what is scanned for secrets depends on the type of pipeline, and whether any additional configuration is set. By default, when you run a pipeline: - On a branch: - On the **default branch**, the Git working tree is scanned. This means the entire repository is scanned as though it were a typical directory. - On a **new, non-default branch**, the content of all commits from the most recent commit on the parent branch to the latest commit is scanned. - On an **existing, non-default branch**, the content of all commits from the last pushed commit to the latest commit is scanned. - On a **merge request**, the content of all commits on the branch is scanned. If the analyzer can't access every commit, the content of all commits from the parent to the latest commit is scanned. 
To scan all commits, you must enable [merge request pipelines](../../detect/security_configuration.md#use-security-scanning-tools-with-merge-request-pipelines). To override the default behavior, use the [available CI/CD variables](configure.md#available-cicd-variables). ### Run a historic scan By default, pipeline secret detection scans only the current state of the Git repository. Any secrets contained in the repository's history are not detected. Run a historic scan to check for secrets from all commits and branches in the Git repository. You should run a historic scan only once, after enabling pipeline secret detection. Historic scans can take a long time, especially for larger repositories with lengthy Git histories. After completing an initial historic scan, use only standard pipeline secret detection as part of your pipeline. If you enable pipeline secret detection with a [scan execution policy](../../policies/scan_execution_policies.md#scanner-behavior), by default the first scheduled scan is a historic scan. To run a historic scan: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Build > Pipelines**. 1. Select **New pipeline**. 1. Add a CI/CD variable: 1. From the dropdown list, select **Variable**. 1. In the **Input variable key** box, enter `SECRET_DETECTION_HISTORIC_SCAN`. 1. In the **Input variable value** box, enter `true`. 1. Select **New pipeline**. ### Advanced vulnerability tracking {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/434096) in GitLab 17.0. {{< /history >}} When developers make changes to a file with identified secrets, it's likely that the positions of these secrets will also change. Pipeline secret detection may have already flagged these secrets as vulnerabilities, tracked in the [vulnerability report](../../vulnerability_report/_index.md). 
These vulnerabilities are associated with specific secrets for easy identification and action. However, if the detected secrets aren't accurately tracked as they shift, managing vulnerabilities becomes challenging, potentially resulting in duplicate vulnerability reports.

Pipeline secret detection uses an advanced vulnerability tracking algorithm to more accurately identify when the same secret has moved within a file due to refactoring or unrelated changes.

For more information, see the confidential project `https://gitlab.com/gitlab-org/security-products/post-analyzers/tracking-calculator`. The content of this project is available only to GitLab team members.

#### Unsupported workflows

- The algorithm does not support the workflow where the existing finding lacks a tracking signature and does not share the same location as the newly detected finding.
- For some rule types, such as cryptographic keys, pipeline secret detection identifies leaks by matching the prefix of the secret rather than the entire secret value. In this scenario, the algorithm consolidates different secrets of the same rule type in a file into a single finding, rather than treating each distinct secret as a separate finding. For example, the [SSH Private Key rule type](https://gitlab.com/gitlab-org/security-products/analyzers/secrets/-/blob/d2919f65f1d8001755015b5d790af620676b97ea/gitleaks.toml#L138) matches only the `-----BEGIN OPENSSH PRIVATE KEY-----` prefix of a value to confirm the presence of an SSH private key. If there are two distinct SSH private keys within the same file, the algorithm considers both values as identical and reports only one finding instead of two.
- The algorithm's scope is limited to a per-file basis, meaning that the same secret appearing in two different files is treated as two distinct findings.

### Detected secrets

Pipeline secret detection scans the repository's content for specific patterns.
Each pattern matches a specific type of secret and is specified in a rule by using TOML syntax. GitLab maintains the default set of rules. With GitLab Ultimate you can extend these rules to suit your needs. For example, while personal access tokens that use a custom prefix are not detected by default, you can customize the rules to identify these tokens. For details, see [Customize analyzer rulesets](configure.md#customize-analyzer-rulesets).

To confirm which secrets are detected by pipeline secret detection, see [Detected secrets](../detected_secrets.md). To provide reliable, high-confidence results, pipeline secret detection only looks for passwords or other unstructured secrets in specific contexts like URLs.

When a secret is detected, a vulnerability is created for it. The vulnerability remains as "Still detected" even if the secret is removed from the scanned file and pipeline secret detection has been run again. This is because the leaked secret continues to be a security risk until it has been revoked. Removed secrets also persist in the Git history. To remove a secret from the Git repository's history, see [Redact text from repository](../../../project/merge_requests/revert_changes.md#redact-text-from-repository).

## Secret detection results

Pipeline secret detection outputs the file `gl-secret-detection-report.json` as a job artifact. The file contains detected secrets. You can [download](../../../../ci/jobs/job_artifacts.md#download-job-artifacts) the file for processing outside GitLab.

For more information, see the [report file schema](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/secret-detection-report-format.json) and the [example report file](https://gitlab.com/gitlab-org/security-products/analyzers/secrets/-/blob/master/qa/expect/secrets/gl-secret-detection-report.json).
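Because the report is plain JSON, it is straightforward to post-process outside GitLab. The following sketch summarizes a downloaded report. The field names used here (`vulnerabilities`, `location.file`, `location.start_line`) are assumptions based on the report schema linked above, so verify them against the schema before building automation on top of this.

```python
import json


def summarize_report(path):
    """Summarize findings from a downloaded gl-secret-detection-report.json.

    Assumes the schema linked above: a top-level "vulnerabilities" array
    whose entries carry "name", "severity", and a "location" object with
    "file" and "start_line". Verify against the schema before relying on it.
    """
    with open(path) as handle:
        report = json.load(handle)

    # Keep only the fields a triage workflow typically needs.
    return [
        {
            "name": vuln.get("name"),
            "severity": vuln.get("severity"),
            "file": vuln.get("location", {}).get("file"),
            "line": vuln.get("location", {}).get("start_line"),
        }
        for vuln in report.get("vulnerabilities", [])
    ]
```

For example, `summarize_report("gl-secret-detection-report.json")` returns one entry per detected secret, which you could feed into a ticketing system or a chat notification.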
### Additional output

{{< details >}}

- Tier: Ultimate

{{< /details >}}

Job results are also reported on the:

- [Merge request widget](../../detect/security_scanning_results.md#merge-request-security-widget): shows new findings introduced in the merge request.
- [Pipeline security report](../../vulnerability_report/pipeline.md): displays all findings from the latest pipeline run.
- [Vulnerability report](../../vulnerability_report/_index.md): provides centralized management of all security findings.
- Security dashboard: offers organization-wide visibility into all vulnerabilities across projects and groups.

## Understanding the results

Pipeline secret detection provides detailed information about potential secrets found in your repository. Each secret includes the type of secret leaked and remediation guidelines.

When reviewing results:

1. Look at the surrounding code to determine if the detected pattern is actually a secret.
1. Test whether the detected value is a working credential.
1. Consider the repository's visibility and the secret's scope.
1. Address active, high-privilege secrets first.

### Common detection categories

Detections by pipeline secret detection often fall into one of three categories:

- **True positives**: Legitimate secrets that should be rotated and removed. For example:
  - Active API keys, database passwords, authentication tokens
  - Private keys and certificates
  - Service account credentials
- **False positives**: Detected patterns that aren't actual secrets. For example:
  - Example values in documentation
  - Test data or mock credentials
  - Configuration templates with placeholder values
- **Historical findings**: Secrets that were previously committed but might not be active. These detections:
  - Require investigation to determine current status
  - Should still be rotated as a precaution

## Remediate a leaked secret

When a secret is detected, you should rotate it immediately.
GitLab attempts to [automatically revoke](../automatic_response.md) some types of leaked secrets. For those that are not automatically revoked, you must do so manually.

[Purging a secret from the repository's history](../../../project/repository/repository_size.md#purge-files-from-repository-history) does not fully address the leak. The original secret remains in any existing forks or clones of the repository.

For instructions on how to respond to a leaked secret, select the vulnerability in the vulnerability report.

## Optimization

Before deploying pipeline secret detection across your organization, optimize the configuration to reduce false positives and improve accuracy for your specific environment. False positives can create alert fatigue and reduce trust in the tool.

Consider using custom ruleset configuration (Ultimate only):

- Exclude known safe patterns specific to your codebase.
- Adjust sensitivity for rules that frequently trigger on non-secrets.
- Add custom rules for organization-specific secret formats.

To optimize performance in large repositories or organizations with many projects, review your:

- Scan scope management:
  - Turn off historic scanning after you run a historic scan in a project.
  - Schedule historic scans during low-usage periods.
- Resource allocation:
  - Allocate sufficient runner resources for larger repositories.
  - Consider dedicated runners for security scanning workloads.
  - Monitor scan duration and optimize based on repository size.

### Testing optimization changes

Before applying optimizations organization-wide:

1. Validate that optimizations don't miss legitimate secrets.
1. Track false positive reduction and scan performance improvements.
1. Maintain records of effective optimization patterns.

## Roll out

You should implement pipeline secret detection incrementally. Start with a small-scale pilot to understand the tool's behavior before rolling out the feature across your organization.

Follow these guidelines when you roll out pipeline secret detection:

1. Choose a pilot project. Suitable projects have:
   - Active development with regular commits.
   - A manageable codebase size.
   - A team familiar with GitLab CI/CD.
   - Willingness to iterate on configuration.
1. Start simple. Enable pipeline secret detection with default settings on your pilot project.
1. Monitor results. Run the analyzer for one or two weeks to understand typical findings.
1. Address detected secrets. Remediate any legitimate secrets found.
1. Tune your configuration. Adjust settings based on initial results.
1. Document the implementation. Record common false positives and remediation patterns.

## FIPS-enabled images

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/6479) in GitLab 14.10.

{{< /history >}}

The default scanner images are built off a base Alpine image for size and maintainability. GitLab offers [Red Hat UBI](https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image) versions of the images that are FIPS-enabled.

To use the FIPS-enabled images, either:

- Set the `SECRET_DETECTION_IMAGE_SUFFIX` CI/CD variable to `-fips`.
- Add the `-fips` extension to the default image name.

For example:

```yaml
variables:
  SECRET_DETECTION_IMAGE_SUFFIX: '-fips'

include:
  - template: Jobs/Secret-Detection.gitlab-ci.yml
```

## Troubleshooting

### Debug-level logging

Debug-level logging can help when troubleshooting. For details, see [debug-level logging](../../troubleshooting_application_security.md#debug-level-logging).

#### Warning: `gl-secret-detection-report.json: no matching files`

For information on this, see the [general Application Security troubleshooting section](../../../../ci/jobs/job_artifacts_troubleshooting.md#error-message-no-files-to-upload).

#### Error: `Couldn't run the gitleaks command: exit status 2`

The pipeline secret detection analyzer relies on generating patches between commits to scan content for secrets.
If the number of commits in a merge request is greater than the value of the [`GIT_DEPTH` CI/CD variable](../../../../ci/runners/configure_runners.md#shallow-cloning), Secret Detection [fails to detect secrets](#error-couldnt-run-the-gitleaks-command-exit-status-2).

For example, you could have a pipeline triggered from a merge request containing 60 commits and the `GIT_DEPTH` variable set to less than 60. In that case, the pipeline secret detection job fails because the clone is not deep enough to contain all of the relevant commits. To verify the current value, see [pipeline configuration](../../../../ci/pipelines/settings.md#limit-the-number-of-changes-fetched-during-clone).

To confirm this as the cause of the error, enable [debug-level logging](../../troubleshooting_application_security.md#debug-level-logging), then rerun the pipeline. The logs should look similar to the following example. The text "object not found" is a symptom of this error.

```plaintext
ERRO[2020-11-18T18:05:52Z] object not found
[ERRO] [secrets] [2020-11-18T18:05:52Z] ▶ Couldn't run the gitleaks command: exit status 2
[ERRO] [secrets] [2020-11-18T18:05:52Z] ▶ Gitleaks analysis failed: exit status 2
```

To resolve the issue, set the [`GIT_DEPTH` CI/CD variable](../../../../ci/runners/configure_runners.md#shallow-cloning) to a higher value. To apply this only to the pipeline secret detection job, add the following to your `.gitlab-ci.yml` file:

```yaml
secret_detection:
  variables:
    GIT_DEPTH: 100
```

#### Error: `ERR fatal: ambiguous argument`

Pipeline secret detection can fail with the `ERR fatal: ambiguous argument` error if your repository's default branch is unrelated to the branch the job was triggered for. See issue [!352014](https://gitlab.com/gitlab-org/gitlab/-/issues/352014) for more details.

To resolve the issue, make sure to correctly [set your default branch](../../../project/repository/branches/default.md#change-the-default-branch-name-for-a-project) on your repository. You should set it to a branch that has related history with the branch you run the `secret-detection` job on.

#### `exec /bin/sh: exec format error` message in job log

The GitLab pipeline secret detection analyzer [only supports](#getting-started) running on the `amd64` CPU architecture. This message indicates that the job is being run on a different architecture, such as `arm`.

#### Error: `fatal: detected dubious ownership in repository at '/builds/<project dir>'`

Secret detection might fail with an exit status of 128. This can be caused by a change to the user on the Docker image. For example:

```shell
$ /analyzer run
[INFO] [secrets] [2024-06-06T07:28:13Z] ▶ GitLab secrets analyzer v6.0.1
[INFO] [secrets] [2024-06-06T07:28:13Z] ▶ Detecting project
[INFO] [secrets] [2024-06-06T07:28:13Z] ▶ Analyzer will attempt to analyze all projects in the repository
[INFO] [secrets] [2024-06-06T07:28:13Z] ▶ Loading ruleset for /builds....
[WARN] [secrets] [2024-06-06T07:28:13Z] ▶ /builds/....secret-detection-ruleset.toml not found, ruleset support will be disabled.
[INFO] [secrets] [2024-06-06T07:28:13Z] ▶ Running analyzer
[FATA] [secrets] [2024-06-06T07:28:13Z] ▶ get commit count: exit status 128
```

To work around this issue, add a `before_script` with the following:

```yaml
before_script:
  - git config --global --add safe.directory "$CI_PROJECT_DIR"
```

For more information about this issue, see [issue 465974](https://gitlab.com/gitlab-org/gitlab/-/issues/465974).

<!-- markdownlint-enable MD025 -->
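When several of the fixes in this section apply at once, they can be combined in a single job override. The following is a sketch that merges the `GIT_DEPTH` and `safe.directory` workarounds described above, not a canonical configuration; values like `GIT_DEPTH: 100` are starting points to adjust for your repository.

```yaml
include:
  - template: Jobs/Secret-Detection.gitlab-ci.yml

secret_detection:
  variables:
    # Deep enough to cover large merge requests (see "exit status 2" above).
    GIT_DEPTH: 100
  before_script:
    # Works around the "dubious ownership" failure (see issue 465974 above).
    - git config --global --add safe.directory "$CI_PROJECT_DIR"
```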

---
title: 'Tutorial: Protect your project with pipeline secret detection'
stage: Application Security Testing
group: Secret Detection
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
<!-- vale gitlab_base.FutureTense = NO -->

If your application uses external resources, you usually need to authenticate your application with a secret, like a token or key. If a secret is pushed to a remote repository, anyone with access to the repository can impersonate you or your application.

Pipeline secret detection uses a CI/CD job to check your GitLab project for secrets. In this tutorial, you'll create a project, configure pipeline secret detection, and learn how to analyze its results:

1. [Create a project](#create-a-project)
1. [Check the job output](#check-the-job-output)
1. [Enable merge request pipelines](#enable-merge-request-pipelines)
1. [Add a fake secret](#add-a-fake-secret)
1. [Triage the secret](#triage-the-secret)
1. [Remediate a leak](#remediate-a-leak)

## Before you begin

Before you begin this tutorial, make sure you have the following:

- A GitLab.com account. To take advantage of all the features of pipeline secret detection, you should use an account with Ultimate if you have one.
- Some familiarity with CI/CD.

## Create a project

First, create a project and enable secret detection:

1. On the left sidebar, at the top, select **Create new** ({{< icon name="plus" >}}) > **New project/repository**.
1. Select **Create blank project**.
1. Enter the project details:
   1. Enter a name and project slug.
   1. From the **Project deployment target (optional)** dropdown list, select **No deployment planned**.
   1. Select the **Initialize repository with a README** checkbox. This will give you a place to add content to the project later.
   1. Select the **Enable Secret Detection** checkbox.
1. Select **Create project**.

A new project is created and initialized with a README and `.gitlab-ci.yml` file. The CI/CD configuration includes the `Security/Secret-Detection.gitlab-ci.yml` template, which enables pipeline secret detection in the project.

## Check the job output

Pipeline secret detection runs in a CI/CD job called `secret_detection`.
Scan results are written to the CI/CD job log. Each scan also produces a comprehensive report as a job artifact.

To check the results of the most recent scan:

1. On the left sidebar, select **Build** > **Jobs**.
1. Select the most recent `secret_detection` job. If you haven't run a new pipeline, there should be only one job.
1. Check the log output for the following:
   - Information about the scan, including the analyzer version and ruleset. Your project uses the default ruleset because you enabled secret detection automatically.
   - Whether any secrets were detected. You should see `no leaks found`.
1. To download the full report, under **Job artifacts**, select **Download**.

## Enable merge request pipelines

So far, we've used pipeline secret detection to scan commits in the default branch. But to analyze commits in merge requests before you merge them to the default branch, you need to enable merge request pipelines. To do this:

1. Add the following lines to your `.gitlab-ci.yml` file:

   ```yaml
   variables:
     AST_ENABLE_MR_PIPELINES: "true"
   ```

1. Save the changes and commit them to the `main` branch of your project.

## Add a fake secret

Next, let's complicate the output of the job by "leaking" a fake secret in a merge request:

1. Check out a new branch:

   ```shell
   git checkout -b pipeline-sd-tutorial
   ```

1. Edit your project README and add the following lines. Be sure to remove the spaces before and after the `-` to match the exact format of a personal access token:

   ```markdown
   # To make the example work, remove
   # the spaces before and after the dash:
   glpat - 12345678901234567890
   ```

1. Commit and push your changes, then open a merge request to merge them to the default branch. A merge request pipeline is automatically run.
1. Wait for the pipeline to finish, then check the job log. You should see `WRN leaks found: 1`.
1. Download the job artifact and check to make sure it contains the following information:
   - The secret type.
     In this example, the type is `"GitLab personal access token"`.
   - A description of what the secret is used for, with some steps you can take to remediate the leak.
   - The severity of the leak. Because personal access tokens can be used to impersonate users on GitLab.com, this leak is `Critical`.
   - The raw text of the secret.
   - Some information about where the secret is located:

     ```json
     "file": "README.md",
     "line_start": 97,
     "line_end": 97,
     ```

     In this example, the secret is on line 97 of the file `README.md`.

### Using the merge request security widget

{{< details >}}

- Tier: Ultimate

{{< /details >}}

A secret detected on a non-default branch is called a "finding." When a finding is merged to the default branch, it becomes a "vulnerability."

The merge request security widget displays a list of findings that could become vulnerabilities if the merge request is merged. To view the widget:

1. Select the merge request you created in the previous step.
1. Find the merge request security widget, which starts with **Security scanning**.
1. On the widget, select **Show details** ({{< icon name="chevron-down" >}}).
1. Review the displayed information. You should see **Secret detection detected 1 new potential vulnerability**.

For a detailed view of all the findings in a merge request, select **View all pipeline findings**.

## Triage the secret

{{< details >}}

- Tier: Ultimate

{{< /details >}}

On GitLab Ultimate, job output is also written to:

- The pipeline's **Security** tab.
- If a finding becomes a vulnerability, the vulnerability report.

To demonstrate how you can triage a secret by using the UI, let's create a vulnerability and change its status in the vulnerability report:

1. Merge the MR you created in the last step, then wait for the pipeline to finish. The fake secret is added to `main`, which causes the finding to become a vulnerability.
1. On the left sidebar, select **Secure** > **Vulnerability report**.
1.
   Select the vulnerability's **Description** to view:
   - Details about the secret type.
   - Remediation guidance.
   - Information about when and where the vulnerability was detected.
1. Select **Edit vulnerability** > **Change status**.
1. From the **Status** dropdown list, select **Dismiss as... Used in tests**.
1. Add a comment that explains why you added the fake secret to your project.
1. Select **Change status**.

The vulnerability no longer appears on the front page of the vulnerability report.

## Remediate a leak

If you add a secret to a remote repository, that secret is no longer secure and must be revoked as soon as possible. You should revoke and replace secrets even if they haven't been merged to your default branch.

The exact steps you take to remediate a leak will depend on your organization's security policies, but at a minimum, you should:

1. Revoke the secret. When a secret is revoked, it is no longer valid and cannot be used to impersonate legitimate activity.
1. Remove the secret from your repository.

Specific remediation guidance is written to the `secret_detection` job log, and is available on the vulnerability report details page.
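The revocation step can often be scripted. The following sketch builds, but does not send, the API call that revokes the leaked token itself. It assumes the GitLab REST API's `DELETE /personal_access_tokens/self` endpoint and the `PRIVATE-TOKEN` header; check the API documentation for your GitLab version before relying on it.

```python
import urllib.request

GITLAB_API = "https://gitlab.com/api/v4"  # adjust for self-managed instances


def build_revoke_request(leaked_token):
    """Build (but do not send) the request that revokes the given personal
    access token, assuming the GitLab token API's
    DELETE /personal_access_tokens/self endpoint."""
    return urllib.request.Request(
        f"{GITLAB_API}/personal_access_tokens/self",
        method="DELETE",
        headers={"PRIVATE-TOKEN": leaked_token},
    )


# To actually revoke the token, send the request:
# urllib.request.urlopen(build_revoke_request("glpat-..."))
```

Send the request only when you are certain you want to revoke the token; revocation cannot be undone.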

---
title: Customize pipeline secret detection
stage: Application Security Testing
group: Secret Detection
---
<!-- markdownlint-disable MD025 -->

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Depending on your [subscription tier](_index.md#availability) and configuration method, you can change how pipeline secret detection works.

[Customize analyzer behavior](#customize-analyzer-behavior) to:

- Change what types of secrets the analyzer detects.
- Use a different analyzer version.
- Scan your project with a specific method.

[Customize analyzer rulesets](#customize-analyzer-rulesets) to:

- Detect custom secret types.
- Override default scanner rules.

## Customize analyzer behavior

To change how the analyzer behaves, define variables using the [`variables`](../../../../ci/yaml/_index.md#variables) parameter in `.gitlab-ci.yml`.

{{< alert type="warning" >}}

All configuration of GitLab security scanning tools should be tested in a merge request before merging these changes to the default branch. Failure to do so can give unexpected results, including a large number of false positives.

{{< /alert >}}

### Add new patterns

To search for other types of secrets in your repositories, you can [customize analyzer rulesets](#customize-analyzer-rulesets).

To propose a new detection rule for all users of pipeline secret detection, [see our single source of truth for our rules](https://gitlab.com/gitlab-org/security-products/secret-detection/secret-detection-rules/-/blob/main/README.md) and follow the guidance to create a merge request.

If you operate a cloud or SaaS product and you're interested in partnering with GitLab to better protect your users, learn more about our [partner program for leaked credential notifications](../automatic_response.md#partner-program-for-leaked-credential-notifications).

### Pin to specific analyzer version

The GitLab-managed CI/CD template specifies a major version and automatically pulls the latest analyzer release within that major version.
In some cases, you may need to use a specific version. For example, you might need to avoid a regression in a later release.

To override the automatic update behavior, set the `SECRETS_ANALYZER_VERSION` CI/CD variable in your CI/CD configuration file after you include the [`Secret-Detection.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Secret-Detection.gitlab-ci.yml).

You can set the tag to:

- A major version, like `4`. Your pipelines use any minor or patch updates that are released within this major version.
- A minor version, like `4.5`. Your pipelines use any patch updates that are released within this minor version.
- A patch version, like `4.5.0`. Your pipelines don't receive any updates.

This example uses a specific minor version of the analyzer:

```yaml
include:
  - template: Jobs/Secret-Detection.gitlab-ci.yml

secret_detection:
  variables:
    SECRETS_ANALYZER_VERSION: "4.5"
```

### Enable historic scan

To enable a historic scan, set the variable `SECRET_DETECTION_HISTORIC_SCAN` to `true` in your `.gitlab-ci.yml` file.

### Run jobs in merge request pipelines

See [Use security scanning tools with merge request pipelines](../../detect/security_configuration.md#use-security-scanning-tools-with-merge-request-pipelines).

### Override the analyzer jobs

To override a job definition (for example, to change properties like `variables` or `dependencies`), declare a job with the same name as the `secret_detection` job to override. Place this new job after the template inclusion and specify any additional keys under it.

In the following example extract of a `.gitlab-ci.yml` file:

- The `Jobs/Secret-Detection` CI template is [included](../../../../ci/yaml/_index.md#include).
- In the `secret_detection` job, the CI/CD variable `SECRET_DETECTION_HISTORIC_SCAN` is set to `true`.
Because the template is evaluated before the pipeline configuration, the last mention of the variable takes precedence, so a historic scan is performed.

```yaml
include:
  - template: Jobs/Secret-Detection.gitlab-ci.yml

secret_detection:
  variables:
    SECRET_DETECTION_HISTORIC_SCAN: "true"
```

### Available CI/CD variables

Change the behavior of pipeline secret detection by defining available CI/CD variables:

| CI/CD variable                    | Default value | Description |
|-----------------------------------|---------------|-------------|
| `SECRET_DETECTION_EXCLUDED_PATHS` | "" | Exclude vulnerabilities from output based on the paths. The paths are a comma-separated list of patterns. Patterns can be globs (see [`doublestar.Match`](https://pkg.go.dev/github.com/bmatcuk/doublestar/v4@v4.0.2#Match) for supported patterns), or file or folder paths (for example, `doc,spec`). Parent directories also match patterns. Detected secrets previously added to the vulnerability report are not removed. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/225273) in GitLab 13.3. |
| `SECRET_DETECTION_HISTORIC_SCAN`  | false | Flag to enable a historic Gitleaks scan. |
| `SECRET_DETECTION_IMAGE_SUFFIX`   | "" | Suffix added to the image name. If set to `-fips`, FIPS-enabled images are used for the scan. See [Use FIPS-enabled images](_index.md#fips-enabled-images) for more details. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/355519) in GitLab 14.10. |
| `SECRET_DETECTION_LOG_OPTIONS`    | "" | Flag to specify a commit range to scan. Gitleaks uses [`git log`](https://git-scm.com/docs/git-log) to determine the commit range. When defined, pipeline secret detection attempts to fetch all commits in the branch. If the analyzer can't access every commit, it continues with the already checked out repository. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/350660) in GitLab 15.1. |
In previous GitLab versions, the following variables were also available:

| CI/CD variable                 | Default value | Description |
|--------------------------------|---------------|-------------|
| `SECRET_DETECTION_COMMIT_FROM` | - | The commit a Gitleaks scan starts at. [Removed](https://gitlab.com/gitlab-org/gitlab/-/issues/243564) in GitLab 13.5. Replaced with `SECRET_DETECTION_COMMITS`. |
| `SECRET_DETECTION_COMMIT_TO`   | - | The commit a Gitleaks scan ends at. [Removed](https://gitlab.com/gitlab-org/gitlab/-/issues/243564) in GitLab 13.5. Replaced with `SECRET_DETECTION_COMMITS`. |
| `SECRET_DETECTION_COMMITS`     | - | The list of commits that Gitleaks should scan. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/243564) in GitLab 13.5. [Removed](https://gitlab.com/gitlab-org/gitlab/-/issues/352565) in GitLab 15.0. |

## Customize analyzer rulesets

{{< details >}}

- Tier: Ultimate

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/211387) in GitLab 13.5.
- Expanded to include additional passthrough types of `file` and `raw` in GitLab 14.6.
- [Enabled](https://gitlab.com/gitlab-org/gitlab/-/issues/235359) support for overriding rules in GitLab 14.8.
- [Enabled](https://gitlab.com/gitlab-org/gitlab/-/issues/336395) support for passthrough chains and included additional passthrough types of `git` and `url` in GitLab 17.2.

{{< /history >}}

You can customize the types of secrets detected by pipeline secret detection by [creating a ruleset configuration file](#create-a-ruleset-configuration-file), either in the repository being scanned or in a remote repository. Customization enables you to modify, replace, or extend the default ruleset.

There are multiple kinds of customizations available:

- Modify the behavior of **rules predefined in the default ruleset**. This includes:
  - [Override a rule from the default ruleset](#override-a-rule).
  - [Disable a rule from the default ruleset](#disable-a-rule).
  - [Disable or override a rule with a remote ruleset](#with-a-remote-ruleset).
- Replace the default ruleset with a custom ruleset using passthroughs. This includes:
  - [Use configuration from an inline ruleset](#with-an-inline-ruleset).
  - [Use configuration from a local ruleset](#with-a-local-ruleset).
  - [Use configuration from a remote ruleset](#with-a-remote-ruleset-1).
  - [Use configuration from a private remote ruleset](#with-a-private-remote-ruleset).
- Extend the behavior of the default ruleset using passthroughs. This includes:
  - [Use configuration from a local ruleset](#with-a-local-ruleset-1).
  - [Use configuration from a remote ruleset](#with-a-remote-ruleset-2).
- Ignore secrets and paths using Gitleaks-native functionality. This includes:
  - Use [Gitleaks' native `[allowlist]` directive](https://github.com/gitleaks/gitleaks#configuration) to [ignore patterns and paths](#ignore-patterns-and-paths).
  - Use the `gitleaks:allow` comment to [ignore secrets inline](#ignore-secrets-inline).

### Create a ruleset configuration file

To create a ruleset configuration file:

1. Create a `.gitlab` directory at the root of your project, if one doesn't already exist.
1. Create a file named `secret-detection-ruleset.toml` in the `.gitlab` directory.

### Modify rules from the default ruleset

You can modify rules predefined in the [default ruleset](../detected_secrets.md). Modifying rules can help you adapt pipeline secret detection to an existing workflow or tool. For example, you may want to override the severity of a detected secret, or disable a rule entirely.

You can also use a ruleset configuration file stored remotely (that is, in a remote Git repository or on a website) to modify predefined rules.

New rules must use the [custom rule format](custom_rulesets_schema.md#custom-rule-format).

#### Disable a rule

{{< history >}}

- Ability to disable a rule with a remote ruleset was [enabled](https://gitlab.com/gitlab-org/gitlab/-/issues/425251) in GitLab 16.0 and later.
{{< /history >}}

You can disable rules that you don't want active.

To disable rules from the analyzer default ruleset:

1. [Create a ruleset configuration file](#create-a-ruleset-configuration-file), if one doesn't exist already.
1. Set the `disable` flag to `true` in the context of a [`ruleset` section](custom_rulesets_schema.md#the-secretsruleset-section).
1. In one or more `ruleset.identifier` subsections, list the rules to disable. Every [`ruleset.identifier` section](custom_rulesets_schema.md#the-secretsrulesetidentifier-section) has:
   - A `type` field for the predefined rule identifier.
   - A `value` field for the rule name.

In the following example `secret-detection-ruleset.toml` file, the disabled rules are matched by the `type` and `value` of identifiers:

```toml
[secrets]
  [[secrets.ruleset]]
    disable = true

    [secrets.ruleset.identifier]
      type = "gitleaks_rule_id"
      value = "RSA private key"
```

#### Override a rule

{{< history >}}

- Ability to override a rule with a remote ruleset was [enabled](https://gitlab.com/gitlab-org/gitlab/-/issues/425251) in GitLab 16.0 and later.

{{< /history >}}

If there are specific rules to customize, you can override them. For example, you might increase the severity of a specific type of secret because leaking it would have a higher impact on your workflow.

To override rules from the analyzer default ruleset:

1. [Create a ruleset configuration file](#create-a-ruleset-configuration-file), if one doesn't exist already.
1. In one or more `ruleset.identifier` subsections, list the rules to override. Every [`ruleset.identifier` section](custom_rulesets_schema.md#the-secretsrulesetidentifier-section) has:
   - A `type` field for the predefined rule identifier.
   - A `value` field for the rule name.
1. In the [`ruleset.override` context](custom_rulesets_schema.md#the-secretsrulesetoverride-section) of a [`ruleset` section](custom_rulesets_schema.md#the-secretsruleset-section), provide the keys to override.
Any combination of keys can be overridden. Valid keys are:

- `description`
- `message`
- `name`
- `severity` (valid options are: `Critical`, `High`, `Medium`, `Low`, `Unknown`, `Info`)

In the following `secret-detection-ruleset.toml` file, rules are matched by the `type` and `value` of identifiers and then overridden:

```toml
[secrets]
  [[secrets.ruleset]]
    [secrets.ruleset.identifier]
      type = "gitleaks_rule_id"
      value = "RSA private key"

    [secrets.ruleset.override]
      description = "OVERRIDDEN description"
      message = "OVERRIDDEN message"
      name = "OVERRIDDEN name"
      severity = "Info"
```

#### With a remote ruleset

A **remote ruleset** is a configuration file stored outside the current repository. It can be used to modify rules across multiple projects.

To modify a predefined rule with a remote ruleset, you can use the `SECRET_DETECTION_RULESET_GIT_REFERENCE` [CI/CD variable](../../../../ci/variables/_index.md):

```yaml
include:
  - template: Jobs/Secret-Detection.gitlab-ci.yml

variables:
  SECRET_DETECTION_RULESET_GIT_REFERENCE: "gitlab.com/example-group/remote-ruleset-project"
```

Pipeline secret detection assumes the configuration is defined in the `.gitlab/secret-detection-ruleset.toml` file in the repository referenced by the CI/CD variable where the remote ruleset is stored. If that file doesn't exist, make sure to [create one](#create-a-ruleset-configuration-file) and follow the steps to [override](#override-a-rule) or [disable](#disable-a-rule) a predefined rule as previously outlined.

{{< alert type="note" >}}

A local `.gitlab/secret-detection-ruleset.toml` file in the project takes precedence over `SECRET_DETECTION_RULESET_GIT_REFERENCE` by default because `SECURE_ENABLE_LOCAL_CONFIGURATION` is set to `true`. If you set `SECURE_ENABLE_LOCAL_CONFIGURATION` to `false`, the local file is ignored and the default configuration or `SECRET_DETECTION_RULESET_GIT_REFERENCE` (if set) is used.
{{< /alert >}} The `SECRET_DETECTION_RULESET_GIT_REFERENCE` variable uses a format similar to [Git URLs](https://git-scm.com/docs/git-clone#_git_urls) for specifying a URI, optional authentication, and optional Git SHA. The variable uses the following format: ```plaintext <AUTH_USER>:<AUTH_PASSWORD>@<PROJECT_PATH>@<GIT_SHA> ``` If the configuration file is stored in a private project that requires authentication, you may use a [Group Access Token](../../../group/settings/group_access_tokens.md) securely stored in a CI variable to load the remote ruleset: ```yaml include: - template: Jobs/Secret-Detection.gitlab-ci.yml variables: SECRET_DETECTION_RULESET_GIT_REFERENCE: "group_2504721_bot_7c9311ffb83f2850e794d478ccee36f5:$GROUP_ACCESS_TOKEN@gitlab.com/example-group/remote-ruleset-project" ``` The group access token must have the `read_repository` scope and at least the Reporter role. For details, see [Repository permissions](../../../permissions.md#repository). See [bot users for groups](../../../group/settings/group_access_tokens.md#bot-users-for-groups) to learn how to find the username associated with a group access token. ### Replace the default ruleset You can replace the default ruleset configuration using a number of [customizations](custom_rulesets_schema.md). Those can be combined using [passthroughs](custom_rulesets_schema.md#passthrough-types) into a single configuration. Using passthroughs, you can: - Chain up to [20 passthroughs](custom_rulesets_schema.md#the-secretspassthrough-section) into a single configuration to replace or extend predefined rules. - Include [environment variables in passthroughs](custom_rulesets_schema.md#interpolate). - Set a [timeout](custom_rulesets_schema.md#the-secrets-configuration-section) for evaluating passthroughs. - [Validate](custom_rulesets_schema.md#the-secrets-configuration-section) TOML syntax used in each defined passthrough. 
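For example, the following sketch chains a `file` passthrough with a `raw` passthrough that appends an extra rule to the same `gitleaks.toml` target. The file path and the rule shown are illustrative only; see the [`secrets.passthrough` section](custom_rulesets_schema.md#the-secretspassthrough-section) for the `mode` setting and the other available keys:

```toml
# .gitlab/secret-detection-ruleset.toml
[secrets]
  [[secrets.passthrough]]
    type = "file"
    target = "gitleaks.toml"
    value = "config/base-gitleaks.toml"

  # Appends to the target produced by the previous passthrough
  # instead of overwriting it.
  [[secrets.passthrough]]
    type = "raw"
    mode = "append"
    target = "gitleaks.toml"
    value = """
[[rules]]
id = "example_internal_token"
description = "Example internal service token"
regex = '''internal_token_[0-9a-f]{16}'''
"""
```

Because passthroughs are evaluated in order, the first passthrough provides the base configuration, and each subsequent passthrough with `mode = "append"` adds to it.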
#### With an inline ruleset

You can use a [`raw` passthrough](custom_rulesets_schema.md#passthrough-types) to replace the default ruleset with configuration provided inline. To do so, add the following in the `.gitlab/secret-detection-ruleset.toml` configuration file stored in the same repository, and adjust the rule defined under `[[rules]]` as appropriate:

```toml
[secrets]
  [[secrets.passthrough]]
    type = "raw"
    target = "gitleaks.toml"
    value = """
title = "replace default ruleset with a raw passthrough"

[[rules]]
description = "Test for Raw Custom Rulesets"
regex = '''Custom Raw Ruleset T[est]{3}'''
"""
```

The previous example replaces the default ruleset with a single rule that matches the literal string `Custom Raw Ruleset T` followed by exactly three characters, each of which is `e`, `s`, or `t`.

For more information on the passthrough syntax to use, see [Schema](custom_rulesets_schema.md#schema).

#### With a local ruleset

You can use a [`file` passthrough](custom_rulesets_schema.md#passthrough-types) to replace the default ruleset with another file committed to the current repository. To do so, add the following in the `.gitlab/secret-detection-ruleset.toml` configuration file stored in the same repository, and adjust the `value` as appropriate to point to the path of the file with the local ruleset configuration:

```toml
[secrets]
  [[secrets.passthrough]]
    type = "file"
    target = "gitleaks.toml"
    value = "config/gitleaks.toml"
```

This replaces the default ruleset with the configuration defined in the `config/gitleaks.toml` file.

For more information on the passthrough syntax to use, see [Schema](custom_rulesets_schema.md#schema).

#### With a remote ruleset

You can replace the default ruleset with configuration defined in a remote Git repository or a file stored somewhere online, using the `git` and `url` passthroughs. A remote ruleset can be used across multiple projects.
For example, you might want to apply the same ruleset to several projects in one of your namespaces. In that case, you can use either type of passthrough to load the remote ruleset in each of those projects. A remote ruleset also enables centralized management, with only authorized people able to edit it.

To use a `git` passthrough, add the following to the `.gitlab/secret-detection-ruleset.toml` configuration file stored in a repository, and adjust the `value` to point to the address of the Git repository:

```toml
# .gitlab/secret-detection-ruleset.toml in https://gitlab.com/user_group/basic_repository
[secrets]
  [[secrets.passthrough]]
    type = "git"
    ref = "main"
    subdir = "config"
    value = "https://gitlab.com/user_group/central_repository_with_shared_ruleset"
```

In this configuration, the analyzer loads the ruleset from the `gitleaks.toml` file inside the `config` directory in the `main` branch of the repository stored at `user_group/central_repository_with_shared_ruleset`. You can then include the same configuration in projects other than `user_group/basic_repository`.

Alternatively, you can use the `url` passthrough to replace the default ruleset with a remote ruleset configuration. To use the `url` passthrough, add the following to the `.gitlab/secret-detection-ruleset.toml` configuration file stored in a repository, and adjust the `value` to point to the address of the remote file:

```toml
# .gitlab/secret-detection-ruleset.toml in https://gitlab.com/user_group/basic_repository
[secrets]
  [[secrets.passthrough]]
    type = "url"
    target = "gitleaks.toml"
    value = "https://example.com/gitleaks.toml"
```

In this configuration, the analyzer loads the ruleset configuration from the `gitleaks.toml` file stored at the address provided.

For more information on the passthrough syntax to use, see [Schema](custom_rulesets_schema.md#schema).
#### With a private remote ruleset

If a ruleset configuration is stored in a private repository, you must provide the credentials to access the repository by using the passthrough's [`auth` setting](custom_rulesets_schema.md#the-secretspassthrough-section).

{{< alert type="note" >}}

The `auth` setting only works with the `git` passthrough.

{{< /alert >}}

To use a remote ruleset stored in a private repository, add the following to the `.gitlab/secret-detection-ruleset.toml` configuration file stored in a repository, adjust the `value` to point to the address of the Git repository, and update `auth` to use the appropriate credentials:

```toml
[secrets]
  [[secrets.passthrough]]
    type = "git"
    ref = "main"
    auth = "USERNAME:PASSWORD" # replace USERNAME and PASSWORD as appropriate
    subdir = "config"
    value = "https://gitlab.com/user_group/central_repository_with_shared_ruleset"
```

{{< alert type="warning" >}}

Beware of leaking credentials when using this feature. See [this section](custom_rulesets_schema.md#interpolate) for an example of how to use environment variables to minimize the risk.

{{< /alert >}}

For more information on the passthrough syntax to use, see [Schema](custom_rulesets_schema.md#schema).

### Extend the default ruleset

You can also extend the [default ruleset](../detected_secrets.md) configuration with additional rules as appropriate. This can be helpful when you still want to benefit from the high-confidence predefined rules maintained by GitLab in the default ruleset, but also want to add rules for types of secrets that may be used in your own projects and namespaces.

New rules must follow the [custom rule format](custom_rulesets_schema.md#custom-rule-format).

#### With a local ruleset

You can use a `file` passthrough to extend the default ruleset with additional rules.
Add the following to the `.gitlab/secret-detection-ruleset.toml` configuration file stored in the same repository, and adjust the `value` as appropriate to point to the path of the extended configuration file:

```toml
# .gitlab/secret-detection-ruleset.toml
[secrets]
  [[secrets.passthrough]]
    type = "file"
    target = "gitleaks.toml"
    value = "extended-gitleaks-config.toml"
```

The extended configuration stored in `extended-gitleaks-config.toml` is included in the configuration used by the analyzer in the CI/CD pipeline. In the following example, two new `[[rules]]` sections each define a regular expression to be detected:

```toml
# extended-gitleaks-config.toml
[extend]
# Extends default packaged ruleset, NOTE: do not change the path.
path = "/gitleaks.toml"

[[rules]]
id = "example_api_key"
description = "Example Service API Key"
regex = '''example_api_key'''

[[rules]]
id = "example_api_secret"
description = "Example Service API Secret"
regex = '''example_api_secret'''
```

With this ruleset configuration, the analyzer detects any strings that match either of the two defined regex patterns.

For more information on the passthrough syntax to use, see [Schema](custom_rulesets_schema.md#schema).

#### With a remote ruleset

Similar to how you can replace the default ruleset with a remote ruleset, you can also extend the default ruleset with configuration stored in a remote Git repository, or in a file stored outside the repository that contains the `.gitlab/secret-detection-ruleset.toml` configuration file. You can do this by using either the `git` or `url` passthrough, as discussed previously.
To do that with a `git` passthrough, add the following to the `.gitlab/secret-detection-ruleset.toml` configuration file stored in the same repository, and adjust the `value`, `ref`, and `subdir` as appropriate to point to the path of the extended configuration file:

```toml
# .gitlab/secret-detection-ruleset.toml in https://gitlab.com/user_group/basic_repository
[secrets]
  [[secrets.passthrough]]
    type = "git"
    ref = "main"
    subdir = "config"
    value = "https://gitlab.com/user_group/central_repository_with_shared_ruleset"
```

Pipeline secret detection assumes the remote ruleset configuration file is called `gitleaks.toml`, and is stored in the `config` directory on the `main` branch of the referenced repository. To extend the default ruleset, the `gitleaks.toml` file should use the `[extend]` directive, similar to the previous example:

```toml
# https://gitlab.com/user_group/central_repository_with_shared_ruleset/-/raw/main/config/gitleaks.toml
[extend]
# Extends default packaged ruleset, NOTE: do not change the path.
path = "/gitleaks.toml"

[[rules]]
id = "example_api_key"
description = "Example Service API Key"
regex = '''example_api_key'''

[[rules]]
id = "example_api_secret"
description = "Example Service API Secret"
regex = '''example_api_secret'''
```

To use a `url` passthrough, add the following to the `.gitlab/secret-detection-ruleset.toml` configuration file stored in the same repository, and adjust the `value` as appropriate to point to the path of the extended configuration file:

```toml
# .gitlab/secret-detection-ruleset.toml in https://gitlab.com/user_group/basic_repository
[secrets]
  [[secrets.passthrough]]
    type = "url"
    target = "gitleaks.toml"
    value = "https://example.com/gitleaks.toml"
```

For more information on the passthrough syntax to use, see [Schema](custom_rulesets_schema.md#schema).
#### With a scan execution policy

To extend and enforce the ruleset with a scan execution policy:

- Follow the steps in [Set up a pipeline secret detection configuration with a scan execution policy](https://support.gitlab.com/hc/en-us/articles/18863735262364-How-to-set-up-a-centrally-managed-pipeline-secret-detection-configuration-applied-via-Scan-Execution-Policy).

### Ignore patterns and paths

There may be situations in which you need to prevent a certain pattern or path from being detected by pipeline secret detection. For example, you may have a file that includes fake secrets used in a test suite. In that case, you can use the [Gitleaks' native `[allowlist]`](https://github.com/gitleaks/gitleaks#configuration) directive to ignore specific patterns or paths.

{{< alert type="note" >}}

This feature works regardless of whether you're using a local or a remote ruleset configuration file. However, the following examples use a local ruleset with a `file` passthrough.

{{< /alert >}}

To ignore a pattern, add the following to the `.gitlab/secret-detection-ruleset.toml` configuration file stored in the same repository, and adjust the `value` as appropriate to point to the path of the extended configuration file:

```toml
# .gitlab/secret-detection-ruleset.toml
[secrets]
  [[secrets.passthrough]]
    type = "file"
    target = "gitleaks.toml"
    value = "extended-gitleaks-config.toml"
```

The extended configuration stored in `extended-gitleaks-config.toml` is included in the configuration used by the analyzer. In the following example, an `[allowlist]` directive defines a regex that matches the secret to be ignored ("allowed"):

```toml
# extended-gitleaks-config.toml
[extend]
# Extends default packaged ruleset, NOTE: do not change the path.
path = "/gitleaks.toml"

[allowlist]
description = "allowlist of patterns to ignore in detection"
regexTarget = "match"
regexes = [
  '''glpat-[0-9a-zA-Z_\\-]{20}'''
]
```

This ignores any string that matches `glpat-` followed by a suffix of 20 characters drawn from digits, letters, underscores, and hyphens.

Similarly, you can exclude specific paths from being scanned. In the following example, an array of paths to ignore is defined under the `[allowlist]` directive. A path can be either a regular expression or a specific file path:

```toml
# extended-gitleaks-config.toml
[extend]
# Extends default packaged ruleset, NOTE: do not change the path.
path = "/gitleaks.toml"

[allowlist]
description = "allowlist of patterns to ignore in detection"
paths = [
  '''/gitleaks.toml''',
  '''(.*?)(jpg|gif|doc|pdf|bin|svg|socket)'''
]
```

This ignores any secrets detected in either the `/gitleaks.toml` file or any file ending with one of the specified extensions.

From [Gitleaks v8.20.0](https://github.com/gitleaks/gitleaks/releases/tag/v8.20.0), you can also use `regexTarget` with `[allowlist]`. This means you can configure a [personal access token prefix](../../../../administration/settings/account_and_limit_settings.md#personal-access-token-prefix) or a [custom instance prefix](../../../../administration/settings/account_and_limit_settings.md#instance-token-prefix) by overriding existing rules. For example, for personal access tokens, you could configure:

```toml
# extended-gitleaks-config.toml
[extend]
# Extends default packaged ruleset, NOTE: do not change the path.
path = "/gitleaks.toml"

[[rules]]
# Rule id you want to override:
id = "gitlab_personal_access_token"
# All the other attributes from the default rule are inherited.

[[rules.allowlists]]
regexTarget = "line"
regexes = [
  '''CUSTOMglpat-'''
]

[[rules]]
id = "gitlab_personal_access_token_with_custom_prefix"
regex = '<Regex that matches a personal access token starting with your CUSTOM prefix>'
```

Keep in mind that you need to account for all rules configured in the [default ruleset](https://gitlab.com/gitlab-org/security-products/secret-detection/secret-detection-rules/-/blob/main/rules/mit/gitlab/gitlab.toml).

For more information on the passthrough syntax to use, see [Schema](custom_rulesets_schema.md#schema).

### Ignore secrets inline

In some instances, you might want to ignore a secret inline. For example, you may have a fake secret in an example or a test suite. In these instances, you want to ignore the secret instead of having it reported as a vulnerability.

To ignore a secret, add `gitleaks:allow` as a comment to the line that contains the secret. For example:

```ruby
"A personal token for GitLab will look like glpat-JUST20LETTERSANDNUMB" # gitleaks:allow
```

### Detecting complex strings

The [default ruleset](_index.md#detected-secrets) provides patterns to detect structured strings with a low rate of false positives. However, you might want to detect more complex strings like passwords. Because [Gitleaks doesn't support lookahead or lookbehind](https://github.com/google/re2/issues/411), writing a high-confidence, general rule to detect unstructured strings is not possible. Although you can't detect every complex string, you can extend your ruleset to meet specific use cases.
For example, this rule modifies the [`generic-api-key` rule](https://github.com/gitleaks/gitleaks/blob/4e43d1109303568509596ef5ef576fbdc0509891/config/gitleaks.toml#L507-L514) from the Gitleaks default ruleset: ```regex (?i)(?:pwd|passwd|password)(?:[0-9a-z\-_\t .]{0,20})(?:[\s|']|[\s|"]){0,3}(?:=|>|=:|:{1,3}=|\|\|:|<=|=>|:|\?=)(?:'|\"|\s|=|\x60){0,5}([0-9a-z\-_.=\S_]{3,50})(?:['|\"|\n|\r|\s|\x60|;]|$) ``` This regular expression matches: 1. A case-insensitive identifier that starts with `pwd`, or `passwd` or `password`. You can adjust this with other variations like `secret` or `key`. 1. A suffix that follows the identifier. The suffix is a combination of digits, letters, and symbols, and is between zero and 23 characters long. 1. Commonly used assignment operators, like `=`, `:=`, `:`, or `=>`. 1. A secret prefix, often used as a boundary to help with detecting the secret. 1. A string of digits, letters, and symbols, which is between three and 50 characters long. This is the secret itself. If you expect longer strings, you can adjust the length. 1. A secret suffix, often used as a boundary. This matches common endings like ticks, line breaks, and new lines. Here are example strings which are matched by this regular expression: ```plaintext pwd = password1234 passwd = 'p@ssW0rd1234' password = thisismyverylongpassword password => mypassword password := mypassword password: password1234 "password" = "p%ssward1234" 'password': 'p@ssW0rd1234' ``` To use this regex, extend your ruleset with one of the methods documented on this page. For example, imagine you wish to extend the default ruleset [with a local ruleset](#with-a-local-ruleset-1) that includes this rule. Add the following to a `.gitlab/secret-detection-ruleset.toml` configuration file stored in the same repository. 
Adjust the `value` to point to the path of the extended configuration file: ```toml # .gitlab/secret-detection-ruleset.toml [secrets] [[secrets.passthrough]] type = "file" target = "gitleaks.toml" value = "extended-gitleaks-config.toml" ``` In `extended-gitleaks-config.toml` file, add a new `[[rules]]` section with the regular expression you want to use: ```toml # extended-gitleaks-config.toml [extend] # Extends default packaged ruleset, NOTE: do not change the path. path = "/gitleaks.toml" [[rules]] description = "Generic Password Rule" id = "generic-password" regex = '''(?i)(?:pwd|passwd|password)(?:[0-9a-z\-_\t .]{0,20})(?:[\s|']|[\s|"]){0,3}(?:=|>|=:|:{1,3}=|\|\|:|<=|=>|:|\?=)(?:'|\"|\s|=|\x60){0,5}([0-9a-z\-_.=\S_]{3,50})(?:['|\"|\n|\r|\s|\x60|;]|$)''' entropy = 3.5 keywords = ["pwd", "passwd", "password"] ``` {{< alert type="note" >}} This example configuration is provided only for convenience, and might not work for all use cases. If you configure your ruleset to detect complex strings, you might create a large number of false positives, or fail to capture certain patterns. {{< /alert >}} ### Demonstrations There are [demonstration projects](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection) that illustrate some of these configuration options. 
Below is a table with the demonstration projects and their associated workflows:

| Action/Workflow | Applies to/via | With inline or local ruleset | With remote ruleset |
|-----------------|----------------|------------------------------|---------------------|
| Disable a rule | Predefined rules | [Local Ruleset](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/modify-default-ruleset/local-ruleset/disable-rule-project/-/blob/main/.gitlab/secret-detection-ruleset.toml?ref_type=heads) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/modify-default-ruleset/local-ruleset/disable-rule-project) | [Remote Ruleset](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/modify-default-ruleset/remote-ruleset/disable-rule-ruleset) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/modify-default-ruleset/remote-ruleset/disable-rule-project) |
| Override a rule | Predefined rules | [Local Ruleset](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/modify-default-ruleset/local-ruleset/override-rule-project/-/blob/main/.gitlab/secret-detection-ruleset.toml?ref_type=heads) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/modify-default-ruleset/local-ruleset/override-rule-project) | [Remote Ruleset](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/modify-default-ruleset/remote-ruleset/override-rule-ruleset) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/modify-default-ruleset/remote-ruleset/override-rule-project) |
| Replace default ruleset | File Passthrough | [Local Ruleset](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/replace-default-ruleset/file-passthrough/-/blob/main/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/replace-default-ruleset/file-passthrough) | Not applicable |
| Replace default ruleset | Raw Passthrough | [Inline Ruleset](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/replace-default-ruleset/raw-passthrough/-/blob/main/.gitlab/secret-detection-ruleset.toml?ref_type=heads) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/replace-default-ruleset/raw-passthrough) | Not applicable |
| Replace default ruleset | Git Passthrough | Not applicable | [Remote Ruleset](https://gitlab.com/gitlab-org/security-products/tests/secrets-passthrough-git-and-url-test/-/blob/config-demos-replace/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/replace-default-ruleset/git-passthrough) |
| Replace default ruleset | URL Passthrough | Not applicable | [Remote Ruleset](https://gitlab.com/gitlab-org/security-products/tests/secrets-passthrough-git-and-url-test/-/blob/config-demos-replace/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/replace-default-ruleset/url-passthrough) |
| Extend default ruleset | File Passthrough | [Local Ruleset](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/extend-default-ruleset/file-passthrough/-/blob/main/config/extended-gitleaks-config.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/extend-default-ruleset/file-passthrough) | Not applicable |
| Extend default ruleset | Git Passthrough | Not applicable | [Remote Ruleset](https://gitlab.com/gitlab-org/security-products/tests/secrets-passthrough-git-and-url-test/-/blob/config-demos-extend/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/extend-default-ruleset/git-passthrough) |
| Extend default ruleset | URL Passthrough | Not applicable | [Remote Ruleset](https://gitlab.com/gitlab-org/security-products/tests/secrets-passthrough-git-and-url-test/-/blob/config-demos-extend/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/extend-default-ruleset/url-passthrough) |
| Ignore paths | File Passthrough | [Local Ruleset](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-paths/file-passthrough/-/blob/main/config/extended-gitleaks-config.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-paths/file-passthrough) | Not applicable |
| Ignore paths | Git Passthrough | Not applicable | [Remote Ruleset](https://gitlab.com/gitlab-org/security-products/tests/secrets-passthrough-git-and-url-test/-/blob/config-demos-ignore-paths/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-paths/git-passthrough) |
| Ignore paths | URL Passthrough | Not applicable | [Remote Ruleset](https://gitlab.com/gitlab-org/security-products/tests/secrets-passthrough-git-and-url-test/-/blob/config-demos-ignore-paths/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-paths/url-passthrough) |
| Ignore patterns | File Passthrough | [Local Ruleset](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-patterns/file-passthrough/-/blob/main/config/extended-gitleaks-config.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-patterns/file-passthrough) | Not applicable |
| Ignore patterns | Git Passthrough | Not applicable | [Remote Ruleset](https://gitlab.com/gitlab-org/security-products/tests/secrets-passthrough-git-and-url-test/-/blob/config-demos-ignore-patterns/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-patterns/git-passthrough) |
| Ignore patterns | URL Passthrough | Not applicable | [Remote Ruleset](https://gitlab.com/gitlab-org/security-products/tests/secrets-passthrough-git-and-url-test/-/blob/config-demos-ignore-patterns/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-patterns/url-passthrough) |
| Ignore values | File Passthrough | [Local Ruleset](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-values/file-passthrough/-/blob/main/config/extended-gitleaks-config.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-values/file-passthrough) | Not applicable |
| Ignore values | Git Passthrough | Not applicable | [Remote
Ruleset](https://gitlab.com/gitlab-org/security-products/tests/secrets-passthrough-git-and-url-test/-/blob/config-demos-ignore-values/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-values/git-passthrough) | | Ignore values | URL Passthrough | Not applicable | [Remote Ruleset](https://gitlab.com/gitlab-org/security-products/tests/secrets-passthrough-git-and-url-test/-/blob/config-demos-ignore-values/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-values/url-passthrough) | There are also some video demonstrations walking through setting up remote rulesets: - [Secret detection with local and remote ruleset](https://youtu.be/rsN1iDug5GU) ## Offline configuration {{< details >}} - Tier: Premium, Ultimate - Offering: GitLab Self-Managed {{< /details >}} An offline environment has limited, restricted, or intermittent access to external resources through the internet. For instances in such an environment, pipeline secret detection requires some configuration changes. The instructions in this section must be completed together with the instructions detailed in [offline environments](../../offline_deployments/_index.md). ### Configure GitLab Runner By default, a runner tries to pull Docker images from the GitLab container registry even if a local copy is available. You should use this default setting, to ensure Docker images remain current. However, if no network connectivity is available, you must change the default GitLab Runner `pull_policy` variable. Configure the GitLab Runner CI/CD variable `pull_policy` to [`if-not-present`](https://docs.gitlab.com/runner/executors/docker.html#using-the-if-not-present-pull-policy). 
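For a runner that uses the Docker executor, the pull policy can be set in the runner's `config.toml`. The following is a minimal sketch; the file location (`/etc/gitlab-runner/config.toml`) and the surrounding runner settings are illustrative and depend on your installation:

```toml
# /etc/gitlab-runner/config.toml (excerpt, illustrative)
[[runners]]
  [runners.docker]
    # Use the locally cached image when available; only pull when it is missing.
    pull_policy = "if-not-present"
```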
### Use local pipeline secret detection analyzer image

Use a local pipeline secret detection analyzer image if you want to obtain the image from a local Docker registry instead of the GitLab container registry.

Prerequisites:

- Importing Docker images into a local offline Docker registry depends on your network security policy. Consult your IT staff to find an accepted and approved process to import or temporarily access external resources.

1. Import the default pipeline secret detection analyzer image from `registry.gitlab.com` into your [local Docker container registry](../../../packages/container_registry/_index.md):

   ```plaintext
   registry.gitlab.com/security-products/secrets:6
   ```

   The pipeline secret detection analyzer's image is [periodically updated](../../detect/vulnerability_scanner_maintenance.md), so you should periodically update the local copy.

1. Set the CI/CD variable `SECURE_ANALYZERS_PREFIX` to the local Docker container registry.

   ```yaml
   include:
     - template: Jobs/Secret-Detection.gitlab-ci.yml

   variables:
     SECURE_ANALYZERS_PREFIX: "localhost:5000/analyzers"
   ```

The pipeline secret detection job should now use the local copy of the analyzer Docker image, without requiring internet access.

## Using a custom SSL CA certificate authority

To trust a custom Certificate Authority, set the `ADDITIONAL_CA_CERT_BUNDLE` variable to the bundle of CA certificates that you trust. Do this either in the `.gitlab-ci.yml` file, in a file variable, or as a CI/CD variable.

- In the `.gitlab-ci.yml` file, the `ADDITIONAL_CA_CERT_BUNDLE` value must contain the [text representation of the X.509 PEM public-key certificate](https://www.rfc-editor.org/rfc/rfc7468#section-5.1). For example:

  ```yaml
  variables:
    ADDITIONAL_CA_CERT_BUNDLE: |
      -----BEGIN CERTIFICATE-----
      MIIGqTCCBJGgAwIBAgIQI7AVxxVwg2kch4d56XNdDjANBgkqhkiG9w0BAQsFADCB
      ...
      jWgmPqF3vUbZE0EyScetPJquRFRKIesyJuBFMAs=
      -----END CERTIFICATE-----
  ```

- If using a file variable, set the value of `ADDITIONAL_CA_CERT_BUNDLE` to the path to the certificate.
- If using a variable, set the value of `ADDITIONAL_CA_CERT_BUNDLE` to the text representation of the certificate.
---
stage: Application Security Testing
group: Secret Detection
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Customize pipeline secret detection
breadcrumbs:
- doc
- user
- application_security
- secret_detection
- pipeline
---

<!-- markdownlint-disable MD025 -->

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Depending on your [subscription tier](_index.md#availability) and configuration method, you can change how pipeline secret detection works.

[Customize analyzer behavior](#customize-analyzer-behavior) to:

- Change what types of secrets the analyzer detects.
- Use a different analyzer version.
- Scan your project with a specific method.

[Customize analyzer rulesets](#customize-analyzer-rulesets) to:

- Detect custom secret types.
- Override default scanner rules.

## Customize analyzer behavior

To change how the analyzer behaves, define variables using the [`variables`](../../../../ci/yaml/_index.md#variables) parameter in `.gitlab-ci.yml`.

{{< alert type="warning" >}}

All configuration of GitLab security scanning tools should be tested in a merge request before merging these changes to the default branch. Failure to do so can give unexpected results, including a large number of false positives.

{{< /alert >}}

### Add new patterns

To search for other types of secrets in your repositories, you can [customize analyzer rulesets](#customize-analyzer-rulesets).

To propose a new detection rule for all users of pipeline secret detection, [see our single source of truth for our rules](https://gitlab.com/gitlab-org/security-products/secret-detection/secret-detection-rules/-/blob/main/README.md) and follow the guidance to create a merge request.
If you operate a cloud or SaaS product and you're interested in partnering with GitLab to better protect your users, learn more about our [partner program for leaked credential notifications](../automatic_response.md#partner-program-for-leaked-credential-notifications).

### Pin to specific analyzer version

The GitLab-managed CI/CD template specifies a major version and automatically pulls the latest analyzer release within that major version. In some cases, you may need to use a specific version. For example, you might need to avoid a regression in a later release.

To override the automatic update behavior, set the `SECRETS_ANALYZER_VERSION` CI/CD variable in your CI/CD configuration file after you include the [`Secret-Detection.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Secret-Detection.gitlab-ci.yml).

You can set the tag to:

- A major version, like `4`. Your pipelines use any minor or patch updates that are released within this major version.
- A minor version, like `4.5`. Your pipelines use any patch updates that are released within this minor version.
- A patch version, like `4.5.0`. Your pipelines don't receive any updates.

This example uses a specific minor version of the analyzer:

```yaml
include:
  - template: Jobs/Secret-Detection.gitlab-ci.yml

secret_detection:
  variables:
    SECRETS_ANALYZER_VERSION: "4.5"
```

### Enable historic scan

To enable a historic scan, set the variable `SECRET_DETECTION_HISTORIC_SCAN` to `true` in your `.gitlab-ci.yml` file.

### Run jobs in merge request pipelines

See [Use security scanning tools with merge request pipelines](../../detect/security_configuration.md#use-security-scanning-tools-with-merge-request-pipelines).

### Override the analyzer jobs

To override a job definition (for example, to change properties like `variables` or `dependencies`), declare a job with the same name as the `secret_detection` job to override.
Place this new job after the template inclusion and specify any additional keys under it.

In the following example extract of a `.gitlab-ci.yml` file:

- The `Jobs/Secret-Detection` CI template is [included](../../../../ci/yaml/_index.md#include).
- In the `secret_detection` job, the CI/CD variable `SECRET_DETECTION_HISTORIC_SCAN` is set to `true`. Because the template is evaluated before the pipeline configuration, the last mention of the variable takes precedence, so a historic scan is performed.

```yaml
include:
  - template: Jobs/Secret-Detection.gitlab-ci.yml

secret_detection:
  variables:
    SECRET_DETECTION_HISTORIC_SCAN: "true"
```

### Available CI/CD variables

Change the behavior of pipeline secret detection by defining available CI/CD variables:

| CI/CD variable | Default value | Description |
|-----------------------------------|---------------|-------------|
| `SECRET_DETECTION_EXCLUDED_PATHS` | "" | Exclude vulnerabilities from output based on the paths. The paths are a comma-separated list of patterns. Patterns can be globs (see [`doublestar.Match`](https://pkg.go.dev/github.com/bmatcuk/doublestar/v4@v4.0.2#Match) for supported patterns), or file or folder paths (for example, `doc,spec`). Parent directories also match patterns. Detected secrets previously added to the vulnerability report are not removed. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/225273) in GitLab 13.3. |
| `SECRET_DETECTION_HISTORIC_SCAN` | false | Flag to enable a historic Gitleaks scan. |
| `SECRET_DETECTION_IMAGE_SUFFIX` | "" | Suffix added to the image name. If set to `-fips`, `FIPS-enabled` images are used for the scan. See [Use FIPS-enabled images](_index.md#fips-enabled-images) for more details. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/355519) in GitLab 14.10. |
| `SECRET_DETECTION_LOG_OPTIONS` | "" | Flag to specify a commit range to scan. Gitleaks uses [`git log`](https://git-scm.com/docs/git-log) to determine the commit range. When defined, pipeline secret detection attempts to fetch all commits in the branch. If the analyzer can't access every commit, it continues with the already checked out repository. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/350660) in GitLab 15.1. |

In previous GitLab versions, the following variables were also available:

| CI/CD variable | Default value | Description |
|-----------------------------------|---------------|-------------|
| `SECRET_DETECTION_COMMIT_FROM` | - | The commit a Gitleaks scan starts at. [Removed](https://gitlab.com/gitlab-org/gitlab/-/issues/243564) in GitLab 13.5. Replaced with `SECRET_DETECTION_COMMITS`. |
| `SECRET_DETECTION_COMMIT_TO` | - | The commit a Gitleaks scan ends at. [Removed](https://gitlab.com/gitlab-org/gitlab/-/issues/243564) in GitLab 13.5. Replaced with `SECRET_DETECTION_COMMITS`. |
| `SECRET_DETECTION_COMMITS` | - | The list of commits that Gitleaks should scan. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/243564) in GitLab 13.5. [Removed](https://gitlab.com/gitlab-org/gitlab/-/issues/352565) in GitLab 15.0. |

## Customize analyzer rulesets

{{< details >}}

- Tier: Ultimate

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/211387) in GitLab 13.5.
- Expanded to include additional passthrough types of `file` and `raw` in GitLab 14.6.
- [Enabled](https://gitlab.com/gitlab-org/gitlab/-/issues/235359) support for overriding rules in GitLab 14.8.
- [Enabled](https://gitlab.com/gitlab-org/gitlab/-/issues/336395) support for passthrough chains and included additional passthrough types of `git` and `url` in GitLab 17.2.

{{< /history >}}

You can customize the types of secrets detected by pipeline secret detection by [creating a ruleset configuration file](#create-a-ruleset-configuration-file), either in the repository being scanned or a remote repository. Customization enables you to modify, replace, or extend the default ruleset.
There are multiple kinds of customizations available:

- Modify the behavior of **rules predefined in the default ruleset**. This includes:
  - [Override a rule from the default ruleset](#override-a-rule).
  - [Disable a rule from the default ruleset](#disable-a-rule).
  - [Disable or override a rule with a remote ruleset](#with-a-remote-ruleset).
- Replace the default ruleset with a custom ruleset using passthroughs. This includes:
  - [Use configuration from an inline ruleset](#with-an-inline-ruleset).
  - [Use configuration from a local ruleset](#with-a-local-ruleset).
  - [Use configuration from a remote ruleset](#with-a-remote-ruleset-1).
  - [Use configuration from a private remote ruleset](#with-a-private-remote-ruleset).
- Extend the behavior of the default ruleset using passthroughs. This includes:
  - [Use configuration from a local ruleset](#with-a-local-ruleset-1).
  - [Use configuration from a remote ruleset](#with-a-remote-ruleset-2).
- Ignore secrets and paths using Gitleaks-native functionality. This includes:
  - Use [Gitleaks' `[allowlist]` directive](https://github.com/gitleaks/gitleaks#configuration) to [ignore patterns and paths](#ignore-patterns-and-paths).
  - Use the `gitleaks:allow` comment to [ignore secrets inline](#ignore-secrets-inline).

### Create a ruleset configuration file

To create a ruleset configuration file:

1. Create a `.gitlab` directory at the root of your project, if one doesn't already exist.
1. Create a file named `secret-detection-ruleset.toml` in the `.gitlab` directory.

### Modify rules from the default ruleset

You can modify rules predefined in the [default ruleset](../detected_secrets.md). Modifying rules can help you adapt pipeline secret detection to an existing workflow or tool. For example, you may want to override the severity of a detected secret, or disable a rule from being detected at all.

You can also use a ruleset configuration file stored remotely (that is, in a remote Git repository or website) to modify predefined rules.
New rules must use the [custom rule format](custom_rulesets_schema.md#custom-rule-format).

#### Disable a rule

{{< history >}}

- Ability to disable a rule with a remote ruleset was [enabled](https://gitlab.com/gitlab-org/gitlab/-/issues/425251) in GitLab 16.0 and later.

{{< /history >}}

You can disable rules that you don't want active.

To disable rules from the analyzer default ruleset:

1. [Create a ruleset configuration file](#create-a-ruleset-configuration-file), if one doesn't exist already.
1. Set the `disabled` flag to `true` in the context of a [`ruleset` section](custom_rulesets_schema.md#the-secretsruleset-section).
1. In one or more `ruleset.identifier` subsections, list the rules to disable. Every [`ruleset.identifier` section](custom_rulesets_schema.md#the-secretsrulesetidentifier-section) has:
   - A `type` field for the predefined rule identifier.
   - A `value` field for the rule name.

In the following example `secret-detection-ruleset.toml` file, the disabled rules are matched by the `type` and `value` of identifiers:

```toml
[secrets]
  [[secrets.ruleset]]
    disable = true
    [secrets.ruleset.identifier]
      type = "gitleaks_rule_id"
      value = "RSA private key"
```

#### Override a rule

{{< history >}}

- Ability to override a rule with a remote ruleset was [enabled](https://gitlab.com/gitlab-org/gitlab/-/issues/425251) in GitLab 16.0 and later.

{{< /history >}}

If there are specific rules to customize, you can override them. For example, you may increase the severity of a specific type of secret because leaking it would have a higher impact on your workflow.

To override rules from the analyzer default ruleset:

1. [Create a ruleset configuration file](#create-a-ruleset-configuration-file), if one doesn't exist already.
1. In one or more `ruleset.identifier` subsections, list the rules to override. Every [`ruleset.identifier` section](custom_rulesets_schema.md#the-secretsrulesetidentifier-section) has:
   - A `type` field for the predefined rule identifier.
   - A `value` field for the rule name.
1. In the [`ruleset.override` context](custom_rulesets_schema.md#the-secretsrulesetoverride-section) of a [`ruleset` section](custom_rulesets_schema.md#the-secretsruleset-section), provide the keys to override. Any combination of keys can be overridden. Valid keys are:
   - `description`
   - `message`
   - `name`
   - `severity` (valid options are: `Critical`, `High`, `Medium`, `Low`, `Unknown`, `Info`)

In the following `secret-detection-ruleset.toml` file, rules are matched by the `type` and `value` of identifiers and then overridden:

```toml
[secrets]
  [[secrets.ruleset]]
    [secrets.ruleset.identifier]
      type = "gitleaks_rule_id"
      value = "RSA private key"
    [secrets.ruleset.override]
      description = "OVERRIDDEN description"
      message = "OVERRIDDEN message"
      name = "OVERRIDDEN name"
      severity = "Info"
```

#### With a remote ruleset

A **remote ruleset** is a configuration file stored outside the current repository. It can be used to modify rules across multiple projects.

To modify a predefined rule with a remote ruleset, you can use the `SECRET_DETECTION_RULESET_GIT_REFERENCE` [CI/CD variable](../../../../ci/variables/_index.md):

```yaml
include:
  - template: Jobs/Secret-Detection.gitlab-ci.yml

variables:
  SECRET_DETECTION_RULESET_GIT_REFERENCE: "gitlab.com/example-group/remote-ruleset-project"
```

Pipeline secret detection assumes the configuration is defined in the `.gitlab/secret-detection-ruleset.toml` file in the repository referenced by the CI variable where the remote ruleset is stored. If that file doesn't exist, make sure to [create one](#create-a-ruleset-configuration-file) and follow the steps to [override](#override-a-rule) or [disable](#disable-a-rule) a predefined rule as previously outlined.

{{< alert type="note" >}}

A local `.gitlab/secret-detection-ruleset.toml` file in the project takes precedence over `SECRET_DETECTION_RULESET_GIT_REFERENCE` by default because `SECURE_ENABLE_LOCAL_CONFIGURATION` is set to `true`. If you set `SECURE_ENABLE_LOCAL_CONFIGURATION` to `false`, the local file is ignored and the default configuration or `SECRET_DETECTION_RULESET_GIT_REFERENCE` (if set) is used.

{{< /alert >}}

The `SECRET_DETECTION_RULESET_GIT_REFERENCE` variable uses a format similar to [Git URLs](https://git-scm.com/docs/git-clone#_git_urls) for specifying a URI, optional authentication, and optional Git SHA. The variable uses the following format:

```plaintext
<AUTH_USER>:<AUTH_PASSWORD>@<PROJECT_PATH>@<GIT_SHA>
```

If the configuration file is stored in a private project that requires authentication, you may use a [group access token](../../../group/settings/group_access_tokens.md) securely stored in a CI variable to load the remote ruleset:

```yaml
include:
  - template: Jobs/Secret-Detection.gitlab-ci.yml

variables:
  SECRET_DETECTION_RULESET_GIT_REFERENCE: "group_2504721_bot_7c9311ffb83f2850e794d478ccee36f5:$GROUP_ACCESS_TOKEN@gitlab.com/example-group/remote-ruleset-project"
```

The group access token must have the `read_repository` scope and at least the Reporter role. For details, see [Repository permissions](../../../permissions.md#repository). See [bot users for groups](../../../group/settings/group_access_tokens.md#bot-users-for-groups) to learn how to find the username associated with a group access token.

### Replace the default ruleset

You can replace the default ruleset configuration using a number of [customizations](custom_rulesets_schema.md). Those can be combined using [passthroughs](custom_rulesets_schema.md#passthrough-types) into a single configuration.

Using passthroughs, you can:

- Chain up to [20 passthroughs](custom_rulesets_schema.md#the-secretspassthrough-section) into a single configuration to replace or extend predefined rules.
- Include [environment variables in passthroughs](custom_rulesets_schema.md#interpolate).
- Set a [timeout](custom_rulesets_schema.md#the-secrets-configuration-section) for evaluating passthroughs.
- [Validate](custom_rulesets_schema.md#the-secrets-configuration-section) TOML syntax used in each defined passthrough.

#### With an inline ruleset

You can use a [`raw` passthrough](custom_rulesets_schema.md#passthrough-types) to replace the default ruleset with configuration provided inline. To do so, add the following to the `.gitlab/secret-detection-ruleset.toml` configuration file stored in the same repository, and adjust the rule defined under `[[rules]]` as appropriate:

```toml
[secrets]
  [[secrets.passthrough]]
    type = "raw"
    target = "gitleaks.toml"
    value = """
title = "replace default ruleset with a raw passthrough"

[[rules]]
description = "Test for Raw Custom Rulesets"
regex = '''Custom Raw Ruleset T[est]{3}'''
"""
```

The previous example replaces the default ruleset with a single rule that matches the regex `Custom Raw Ruleset T[est]{3}`: the literal string `Custom Raw Ruleset T` followed by exactly three characters, each of which is `e`, `s`, or `t`.

For more information on the passthrough syntax to use, see [Schema](custom_rulesets_schema.md#schema).

#### With a local ruleset

You can use a [`file` passthrough](custom_rulesets_schema.md#passthrough-types) to replace the default ruleset with another file committed to the current repository. To do so, add the following to the `.gitlab/secret-detection-ruleset.toml` configuration file stored in the same repository, and adjust the `value` as appropriate to point to the path of the file with the local ruleset configuration:

```toml
[secrets]
  [[secrets.passthrough]]
    type = "file"
    target = "gitleaks.toml"
    value = "config/gitleaks.toml"
```

This replaces the default ruleset with the configuration defined in the `config/gitleaks.toml` file.

For more information on the passthrough syntax to use, see [Schema](custom_rulesets_schema.md#schema).

#### With a remote ruleset

You can replace the default ruleset with configuration defined in a remote Git repository, or in a file stored somewhere online, using the `git` and `url` passthroughs.
A remote ruleset can be used across multiple projects. For example, you may want to apply the same ruleset to a number of projects in one of your namespaces. In that case, you can use either type of passthrough to load that remote ruleset and have it used by multiple projects. It also enables centralized management of a ruleset, with only authorized people able to edit it.

To use the `git` passthrough, add the following to the `.gitlab/secret-detection-ruleset.toml` configuration file stored in a repository, and adjust the `value` to point to the address of the Git repository:

```toml
# .gitlab/secret-detection-ruleset.toml in https://gitlab.com/user_group/basic_repository
[secrets]
  [[secrets.passthrough]]
    type = "git"
    ref = "main"
    subdir = "config"
    value = "https://gitlab.com/user_group/central_repository_with_shared_ruleset"
```

In this configuration, the analyzer loads the ruleset from the `gitleaks.toml` file inside the `config` directory in the `main` branch of the repository stored at `user_group/central_repository_with_shared_ruleset`. You can then include the same configuration in projects other than `user_group/basic_repository`.

Alternatively, you may use the `url` passthrough to replace the default ruleset with a remote ruleset configuration. To use the `url` passthrough, add the following to the `.gitlab/secret-detection-ruleset.toml` configuration file stored in a repository, and adjust the `value` to point to the address of the remote file:

```toml
# .gitlab/secret-detection-ruleset.toml in https://gitlab.com/user_group/basic_repository
[secrets]
  [[secrets.passthrough]]
    type = "url"
    target = "gitleaks.toml"
    value = "https://example.com/gitleaks.toml"
```

In this configuration, the analyzer loads the ruleset configuration from the `gitleaks.toml` file stored at the address provided.

For more information on the passthrough syntax to use, see [Schema](custom_rulesets_schema.md#schema).
#### With a private remote ruleset

If a ruleset configuration is stored in a private repository, you must provide the credentials to access the repository by using the passthrough's [`auth` setting](custom_rulesets_schema.md#the-secretspassthrough-section).

{{< alert type="note" >}}

The `auth` setting only works with the `git` passthrough.

{{< /alert >}}

To use a remote ruleset stored in a private repository, add the following to the `.gitlab/secret-detection-ruleset.toml` configuration file stored in a repository, adjust the `value` to point to the address of the Git repository, and update `auth` to use the appropriate credentials:

```toml
[secrets]
  [[secrets.passthrough]]
    type = "git"
    ref = "main"
    auth = "USERNAME:PASSWORD" # replace USERNAME and PASSWORD as appropriate
    subdir = "config"
    value = "https://gitlab.com/user_group/central_repository_with_shared_ruleset"
```

{{< alert type="warning" >}}

Beware of leaking credentials when using this feature. See [Interpolate](custom_rulesets_schema.md#interpolate) for an example of how to use environment variables to minimize the risk.

{{< /alert >}}

For more information on the passthrough syntax to use, see [Schema](custom_rulesets_schema.md#schema).

### Extend the default ruleset

You can also extend the [default ruleset](../detected_secrets.md) configuration with additional rules as appropriate. This can be helpful when you still want to benefit from the high-confidence predefined rules maintained by GitLab in the default ruleset, but also want to add rules for types of secrets that may be used in your own projects and namespaces.

New rules must follow the [custom rule format](custom_rulesets_schema.md#custom-rule-format).

#### With a local ruleset

You can use a `file` passthrough to extend the default ruleset with additional rules.
Add the following to the `.gitlab/secret-detection-ruleset.toml` configuration file stored in the same repository, and adjust the `value` as appropriate to point to the path of the extended configuration file:

```toml
# .gitlab/secret-detection-ruleset.toml
[secrets]
  [[secrets.passthrough]]
    type = "file"
    target = "gitleaks.toml"
    value = "extended-gitleaks-config.toml"
```

The extended configuration stored in `extended-gitleaks-config.toml` is included in the configuration used by the analyzer in the CI/CD pipeline. In the example below, we add two new `[[rules]]` sections that define regular expressions to be detected:

```toml
# extended-gitleaks-config.toml
[extend]
# Extends default packaged ruleset, NOTE: do not change the path.
path = "/gitleaks.toml"

[[rules]]
id = "example_api_key"
description = "Example Service API Key"
regex = '''example_api_key'''

[[rules]]
id = "example_api_secret"
description = "Example Service API Secret"
regex = '''example_api_secret'''
```

With this ruleset configuration, the analyzer detects any strings matching those two defined regex patterns.

For more information on the passthrough syntax to use, see [Schema](custom_rulesets_schema.md#schema).

#### With a remote ruleset

Similar to how you can replace the default ruleset with a remote ruleset, you can also extend the default ruleset with configuration stored in a remote Git repository, or in a file stored externally to the repository that contains the `.gitlab/secret-detection-ruleset.toml` configuration file. This can be achieved by using either the `git` or `url` passthrough, as discussed previously.
To do that with a `git` passthrough, add the following to the `.gitlab/secret-detection-ruleset.toml` configuration file stored in the same repository, and adjust the `value`, `ref`, and `subdir` as appropriate to point to the path of the extended configuration file:

```toml
# .gitlab/secret-detection-ruleset.toml in https://gitlab.com/user_group/basic_repository
[secrets]
  [[secrets.passthrough]]
    type = "git"
    ref = "main"
    subdir = "config"
    value = "https://gitlab.com/user_group/central_repository_with_shared_ruleset"
```

Pipeline secret detection assumes the remote ruleset configuration file is called `gitleaks.toml`, and is stored in the `config` directory on the `main` branch of the referenced repository. To extend the default ruleset, the `gitleaks.toml` file should use the `[extend]` directive, similar to the previous example:

```toml
# https://gitlab.com/user_group/central_repository_with_shared_ruleset/-/raw/main/config/gitleaks.toml
[extend]
# Extends default packaged ruleset, NOTE: do not change the path.
path = "/gitleaks.toml"

[[rules]]
id = "example_api_key"
description = "Example Service API Key"
regex = '''example_api_key'''

[[rules]]
id = "example_api_secret"
description = "Example Service API Secret"
regex = '''example_api_secret'''
```

To use a `url` passthrough, add the following to the `.gitlab/secret-detection-ruleset.toml` configuration file stored in the same repository, and adjust the `value` as appropriate to point to the path of the extended configuration file:

```toml
# .gitlab/secret-detection-ruleset.toml in https://gitlab.com/user_group/basic_repository
[secrets]
  [[secrets.passthrough]]
    type = "url"
    target = "gitleaks.toml"
    value = "https://example.com/gitleaks.toml"
```

For more information on the passthrough syntax to use, see [Schema](custom_rulesets_schema.md#schema).
#### With a scan execution policy

To extend and enforce the ruleset with a scan execution policy:

- Follow the steps in [Set up a pipeline secret detection configuration with a scan execution policy](https://support.gitlab.com/hc/en-us/articles/18863735262364-How-to-set-up-a-centrally-managed-pipeline-secret-detection-configuration-applied-via-Scan-Execution-Policy).

### Ignore patterns and paths

There may be situations in which you need to prevent a certain pattern or path from being detected by pipeline secret detection. For example, you may have a file including fake secrets to be used in a test suite. In that case, you can use [Gitleaks' native `[allowlist]`](https://github.com/gitleaks/gitleaks#configuration) directive to ignore specific patterns or paths.

{{< alert type="note" >}}

This feature works regardless of whether you're using a local or a remote ruleset configuration file. The examples below use a local ruleset with a `file` passthrough.

{{< /alert >}}

To ignore a pattern, add the following to the `.gitlab/secret-detection-ruleset.toml` configuration file stored in the same repository, and adjust the `value` as appropriate to point to the path of the extended configuration file:

```toml
# .gitlab/secret-detection-ruleset.toml

[secrets]

[[secrets.passthrough]]
type = "file"
target = "gitleaks.toml"
value = "extended-gitleaks-config.toml"
```

The extended configuration stored in `extended-gitleaks-config.toml` is included in the configuration used by the analyzer. In the example below, we add an `[allowlist]` directive that defines a regex matching the secret to be ignored ("allowed"):

```toml
# extended-gitleaks-config.toml

[extend]
# Extends default packaged ruleset, NOTE: do not change the path.
path = "/gitleaks.toml"

[allowlist]
description = "allowlist of patterns to ignore in detection"
regexTarget = "match"
regexes = [
  '''glpat-[0-9a-zA-Z_\\-]{20}'''
]
```

This ignores any string matching `glpat-` followed by a 20-character suffix of digits, letters, underscores, or hyphens.

Similarly, you can exclude specific paths from being scanned. In the example below, we define an array of paths to ignore under the `[allowlist]` directive. A path can be either a regular expression or a specific file path:

```toml
# extended-gitleaks-config.toml

[extend]
# Extends default packaged ruleset, NOTE: do not change the path.
path = "/gitleaks.toml"

[allowlist]
description = "allowlist of patterns to ignore in detection"
paths = [
  '''/gitleaks.toml''',
  '''(.*?)(jpg|gif|doc|pdf|bin|svg|socket)'''
]
```

This ignores any secrets detected in either the `/gitleaks.toml` file or any file ending with one of the specified extensions.

From [Gitleaks v8.20.0](https://github.com/gitleaks/gitleaks/releases/tag/v8.20.0), you can also use `regexTarget` with `[allowlist]`. This means you can configure a [personal access token prefix](../../../../administration/settings/account_and_limit_settings.md#personal-access-token-prefix) or a [custom instance prefix](../../../../administration/settings/account_and_limit_settings.md#instance-token-prefix) by overriding existing rules.

For example, for personal access tokens, you could configure:

```toml
# extended-gitleaks-config.toml

[extend]
# Extends default packaged ruleset, NOTE: do not change the path.
path = "/gitleaks.toml"

[[rules]]
# Rule id you want to override:
id = "gitlab_personal_access_token"
# all the other attributes from the default rule are inherited

[[rules.allowlists]]
regexTarget = "line"
regexes = [
  '''CUSTOMglpat-'''
]

[[rules]]
id = "gitlab_personal_access_token_with_custom_prefix"
regex = '<Regex that matches a personal access token starting with your CUSTOM prefix>'
```

Keep in mind that you need to account for all rules configured in the [default ruleset](https://gitlab.com/gitlab-org/security-products/secret-detection/secret-detection-rules/-/blob/main/rules/mit/gitlab/gitlab.toml).

For more information on the passthrough syntax to use, see [Schema](custom_rulesets_schema.md#schema).

### Ignore secrets inline

In some instances, you might want to ignore a secret inline. For example, you may have a fake secret in an example or a test suite. In these instances, you want to ignore the secret instead of having it reported as a vulnerability.

To ignore a secret, add `gitleaks:allow` as a comment to the line that contains the secret. For example:

```ruby
"A personal token for GitLab will look like glpat-JUST20LETTERSANDNUMB" # gitleaks:allow
```

### Detecting complex strings

The [default ruleset](_index.md#detected-secrets) provides patterns to detect structured strings with a low rate of false positives. However, you might want to detect more complex strings like passwords. Because [Gitleaks doesn't support lookahead or lookbehind](https://github.com/google/re2/issues/411), writing a high-confidence, general rule to detect unstructured strings is not possible. Although you can't detect every complex string, you can extend your ruleset to meet specific use cases.
For example, this rule modifies the [`generic-api-key` rule](https://github.com/gitleaks/gitleaks/blob/4e43d1109303568509596ef5ef576fbdc0509891/config/gitleaks.toml#L507-L514) from the Gitleaks default ruleset:

```regex
(?i)(?:pwd|passwd|password)(?:[0-9a-z\-_\t .]{0,20})(?:[\s|']|[\s|"]){0,3}(?:=|>|=:|:{1,3}=|\|\|:|<=|=>|:|\?=)(?:'|\"|\s|=|\x60){0,5}([0-9a-z\-_.=\S_]{3,50})(?:['|\"|\n|\r|\s|\x60|;]|$)
```

This regular expression matches:

1. A case-insensitive identifier that starts with `pwd`, `passwd`, or `password`. You can adjust this with other variations like `secret` or `key`.
1. A suffix that follows the identifier. The suffix is a combination of digits, letters, and symbols, and is between zero and 23 characters long.
1. Commonly used assignment operators, like `=`, `:=`, `:`, or `=>`.
1. A secret prefix, often used as a boundary to help with detecting the secret.
1. A string of digits, letters, and symbols, which is between three and 50 characters long. This is the secret itself. If you expect longer strings, you can adjust the length.
1. A secret suffix, often used as a boundary. This matches common endings like ticks, line breaks, and new lines.

Here are example strings that are matched by this regular expression:

```plaintext
pwd = password1234
passwd = 'p@ssW0rd1234'
password = thisismyverylongpassword
password => mypassword
password := mypassword
password: password1234
"password" = "p%ssward1234"
'password': 'p@ssW0rd1234'
```

To use this regex, extend your ruleset with one of the methods documented on this page. For example, imagine you wish to extend the default ruleset [with a local ruleset](#with-a-local-ruleset-1) that includes this rule. Add the following to a `.gitlab/secret-detection-ruleset.toml` configuration file stored in the same repository.
Adjust the `value` to point to the path of the extended configuration file:

```toml
# .gitlab/secret-detection-ruleset.toml

[secrets]

[[secrets.passthrough]]
type = "file"
target = "gitleaks.toml"
value = "extended-gitleaks-config.toml"
```

In the `extended-gitleaks-config.toml` file, add a new `[[rules]]` section with the regular expression you want to use:

```toml
# extended-gitleaks-config.toml

[extend]
# Extends default packaged ruleset, NOTE: do not change the path.
path = "/gitleaks.toml"

[[rules]]
description = "Generic Password Rule"
id = "generic-password"
regex = '''(?i)(?:pwd|passwd|password)(?:[0-9a-z\-_\t .]{0,20})(?:[\s|']|[\s|"]){0,3}(?:=|>|=:|:{1,3}=|\|\|:|<=|=>|:|\?=)(?:'|\"|\s|=|\x60){0,5}([0-9a-z\-_.=\S_]{3,50})(?:['|\"|\n|\r|\s|\x60|;]|$)'''
entropy = 3.5
keywords = ["pwd", "passwd", "password"]
```

{{< alert type="note" >}}

This example configuration is provided only for convenience, and might not work for all use cases. If you configure your ruleset to detect complex strings, you might create a large number of false positives, or fail to capture certain patterns.

{{< /alert >}}

### Demonstrations

There are [demonstration projects](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection) that illustrate some of these configuration options.
Below is a table with the demonstration projects and their associated workflows: | Action/Workflow | Applies to/via | With inline or local ruleset | With remote ruleset | |-------------------------|------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------| | Disable a rule | Predefined rules | [Local Ruleset](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/modify-default-ruleset/local-ruleset/disable-rule-project/-/blob/main/.gitlab/secret-detection-ruleset.toml?ref_type=heads) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/modify-default-ruleset/local-ruleset/disable-rule-project) | [Remote Ruleset](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/modify-default-ruleset/remote-ruleset/disable-rule-ruleset) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/modify-default-ruleset/remote-ruleset/disable-rule-project) | | Override a rule | Predefined rules | [Local Ruleset](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/modify-default-ruleset/local-ruleset/override-rule-project/-/blob/main/.gitlab/secret-detection-ruleset.toml?ref_type=heads) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/modify-default-ruleset/local-ruleset/override-rule-project) | [Remote 
Ruleset](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/modify-default-ruleset/remote-ruleset/override-rule-ruleset) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/modify-default-ruleset/remote-ruleset/override-rule-project) | | Replace default ruleset | File Passthrough | [Local Ruleset](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/replace-default-ruleset/file-passthrough/-/blob/main/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/replace-default-ruleset/file-passthrough) | Not applicable | | Replace default ruleset | Raw Passthrough | [Inline Ruleset](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/replace-default-ruleset/raw-passthrough/-/blob/main/.gitlab/secret-detection-ruleset.toml?ref_type=heads) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/replace-default-ruleset/raw-passthrough) | Not applicable | | Replace default ruleset | Git Passthrough | Not applicable | [Remote Ruleset](https://gitlab.com/gitlab-org/security-products/tests/secrets-passthrough-git-and-url-test/-/blob/config-demos-replace/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/replace-default-ruleset/git-passthrough) | | Replace default ruleset | URL Passthrough | Not applicable | [Remote Ruleset](https://gitlab.com/gitlab-org/security-products/tests/secrets-passthrough-git-and-url-test/-/blob/config-demos-replace/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/replace-default-ruleset/url-passthrough) | | Extend default ruleset | File Passthrough | [Local 
Ruleset](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/extend-default-ruleset/file-passthrough/-/blob/main/config/extended-gitleaks-config.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/extend-default-ruleset/file-passthrough) | Not applicable | | Extend default ruleset | Git Passthrough | Not applicable | [Remote Ruleset](https://gitlab.com/gitlab-org/security-products/tests/secrets-passthrough-git-and-url-test/-/blob/config-demos-extend/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/extend-default-ruleset/git-passthrough) | | Extend default ruleset | URL Passthrough | Not applicable | [Remote Ruleset](https://gitlab.com/gitlab-org/security-products/tests/secrets-passthrough-git-and-url-test/-/blob/config-demos-extend/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/extend-default-ruleset/url-passthrough) | | Ignore paths | File Passthrough | [Local Ruleset](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-paths/file-passthrough/-/blob/main/config/extended-gitleaks-config.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-paths/file-passthrough) | Not applicable | | Ignore paths | Git Passthrough | Not applicable | [Remote Ruleset](https://gitlab.com/gitlab-org/security-products/tests/secrets-passthrough-git-and-url-test/-/blob/config-demos-ignore-paths/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-paths/git-passthrough) | | Ignore paths | URL Passthrough | Not applicable | [Remote 
Ruleset](https://gitlab.com/gitlab-org/security-products/tests/secrets-passthrough-git-and-url-test/-/blob/config-demos-ignore-paths/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-paths/url-passthrough) | | Ignore patterns | File Passthrough | [Local Ruleset](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-patterns/file-passthrough/-/blob/main/config/extended-gitleaks-config.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-patterns/file-passthrough) | Not applicable | | Ignore patterns | Git Passthrough | Not applicable | [Remote Ruleset](https://gitlab.com/gitlab-org/security-products/tests/secrets-passthrough-git-and-url-test/-/blob/config-demos-ignore-patterns/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-patterns/git-passthrough) | | Ignore patterns | URL Passthrough | Not applicable | [Remote Ruleset](https://gitlab.com/gitlab-org/security-products/tests/secrets-passthrough-git-and-url-test/-/blob/config-demos-ignore-patterns/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-patterns/url-passthrough) | | Ignore values | File Passthrough | [Local Ruleset](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-values/file-passthrough/-/blob/main/config/extended-gitleaks-config.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-values/file-passthrough) | Not applicable | | Ignore values | Git Passthrough | Not applicable | [Remote 
Ruleset](https://gitlab.com/gitlab-org/security-products/tests/secrets-passthrough-git-and-url-test/-/blob/config-demos-ignore-values/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-values/git-passthrough) |
| Ignore values | URL Passthrough | Not applicable | [Remote Ruleset](https://gitlab.com/gitlab-org/security-products/tests/secrets-passthrough-git-and-url-test/-/blob/config-demos-ignore-values/config/gitleaks.toml) / [Project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/secret-detection/ignore-values/url-passthrough) |

There is also a video demonstration walking through setting up remote rulesets:

- [Secret detection with local and remote ruleset](https://youtu.be/rsN1iDug5GU)

## Offline configuration

{{< details >}}

- Tier: Premium, Ultimate
- Offering: GitLab Self-Managed

{{< /details >}}

An offline environment has limited, restricted, or intermittent access to external resources through the internet. For instances in such an environment, pipeline secret detection requires some configuration changes. The instructions in this section must be completed together with the instructions detailed in [offline environments](../../offline_deployments/_index.md).

### Configure GitLab Runner

By default, a runner tries to pull Docker images from the GitLab container registry even if a local copy is available. You should use this default setting to ensure Docker images remain current. However, if no network connectivity is available, you must change the default GitLab Runner `pull_policy` variable.

Configure the GitLab Runner CI/CD variable `pull_policy` to [`if-not-present`](https://docs.gitlab.com/runner/executors/docker.html#using-the-if-not-present-pull-policy).
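The `pull_policy` change is made in the runner's `config.toml`, not in your pipeline definition. A sketch of the relevant excerpt, assuming a Docker executor; the runner name is hypothetical, and the file location (commonly `/etc/gitlab-runner/config.toml`) varies by installation:

```toml
# Excerpt from the runner's config.toml; only pull_policy is the point here.
[[runners]]
  name = "offline-runner"  # hypothetical runner name
  executor = "docker"
  [runners.docker]
    pull_policy = "if-not-present"
```

After editing the file, the runner picks up the change on its next configuration reload.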
### Use local pipeline secret detection analyzer image

Use a local pipeline secret detection analyzer image if you want to obtain the image from a local Docker registry instead of the GitLab container registry.

Prerequisites:

- Importing Docker images into a local offline Docker registry depends on your network security policy. Consult your IT staff to find an accepted and approved process to import or temporarily access external resources.

1. Import the default pipeline secret detection analyzer image from `registry.gitlab.com` into your [local Docker container registry](../../../packages/container_registry/_index.md):

   ```plaintext
   registry.gitlab.com/security-products/secrets:6
   ```

   The pipeline secret detection analyzer's image is [periodically updated](../../detect/vulnerability_scanner_maintenance.md), so you should update the local copy on a regular basis.

1. Set the CI/CD variable `SECURE_ANALYZERS_PREFIX` to the local Docker container registry.

   ```yaml
   include:
     - template: Jobs/Secret-Detection.gitlab-ci.yml

   variables:
     SECURE_ANALYZERS_PREFIX: "localhost:5000/analyzers"
   ```

The pipeline secret detection job should now use the local copy of the analyzer Docker image, without requiring internet access.

## Using a custom SSL CA certificate authority

To trust a custom Certificate Authority, set the `ADDITIONAL_CA_CERT_BUNDLE` variable to the bundle of CA certificates that you trust. Do this either in the `.gitlab-ci.yml` file, in a file variable, or as a CI/CD variable.

- In the `.gitlab-ci.yml` file, the `ADDITIONAL_CA_CERT_BUNDLE` value must contain the [text representation of the X.509 PEM public-key certificate](https://www.rfc-editor.org/rfc/rfc7468#section-5.1). For example:

  ```yaml
  variables:
    ADDITIONAL_CA_CERT_BUNDLE: |
      -----BEGIN CERTIFICATE-----
      MIIGqTCCBJGgAwIBAgIQI7AVxxVwg2kch4d56XNdDjANBgkqhkiG9w0BAQsFADCB
      ...
      jWgmPqF3vUbZE0EyScetPJquRFRKIesyJuBFMAs=
      -----END CERTIFICATE-----
  ```

- If using a file variable, set the value of `ADDITIONAL_CA_CERT_BUNDLE` to the path to the certificate.
- If using a variable, set the value of `ADDITIONAL_CA_CERT_BUNDLE` to the text representation of the certificate.
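If you only need the bundle for the analyzer, you can scope the variable to the `secret_detection` job defined by the template instead of setting it globally. A sketch, with the certificate body elided (keep your real PEM text in place of the placeholder line):

```yaml
include:
  - template: Jobs/Secret-Detection.gitlab-ci.yml

secret_detection:
  variables:
    ADDITIONAL_CA_CERT_BUNDLE: |
      -----BEGIN CERTIFICATE-----
      ...your CA certificate chain in PEM text form...
      -----END CERTIFICATE-----
```

Job-level variables take precedence over global ones, so this limits the custom CA to the secret detection job without affecting other jobs in the pipeline.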
---

# Custom rulesets schema

- Published page: https://docs.gitlab.com/user/application_security/secret_detection/custom_rulesets_schema
- Source file: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/secret_detection/pipeline/custom_rulesets_schema.md (extracted 2025-08-13)
- Stage: Application Security Testing; Group: Secret Detection
{{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} You can use [different kinds of ruleset customizations](configure.md#customize-analyzer-rulesets) to customize the behavior of pipeline secret detection. ## Schema Customization of pipeline secret detection rulesets must adhere to a strict schema. The following sections describe each of the available options and the schema that applies to that section. ### The top-level section The top-level section contains one or more configuration sections, defined as [TOML tables](https://toml.io/en/v1.0.0#table). | Setting | Description | |-------------|----------------------------------------------------| | `[secrets]` | Declares a configuration section for the analyzer. | Configuration example: ```toml [secrets] ... ``` ### The `[secrets]` configuration section The `[secrets]` section lets you customize the behavior of the analyzer. Valid properties differ based on the kind of configuration you're making. | Setting | Applies to | Description | |-----------------------|------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | `[[secrets.ruleset]]` | Predefined rules | Defines modifications to an existing rule. | | `interpolate` | All | If set to `true`, you can use `$VAR` in the configuration to evaluate environment variables. Use this feature with caution, so you don't leak secrets or tokens. (Default: `false`) | | `description` | Passthroughs | Description of the custom ruleset. | | `targetdir` | Passthroughs | The directory where the final configuration should be persisted. If empty, a directory with a random name is created. The directory can contain up to 100 MB of files. 
|
| `validate` | Passthroughs | If set to `true`, the content of each passthrough is validated. The validation works for `yaml`, `xml`, `json`, and `toml` content. The proper validator is identified based on the extension used in the `target` parameter of the `[[secrets.passthrough]]` section. (Default: `false`) |
| `timeout` | Passthroughs | The maximum time to spend evaluating the passthrough chain before timing out. The timeout cannot exceed 300 seconds. (Default: 60) |

#### `interpolate`

{{< alert type="warning" >}}

To reduce the risk of leaking secrets, use this feature with caution.

{{< /alert >}}

The example below shows a configuration that uses the `$GITURL` environment variable to access a private repository. The variable contains a username and token (for example `https://user:token@url`), so they're not explicitly stored in the configuration file.

```toml
[secrets]
description = "My private remote ruleset"
interpolate = true

[[secrets.passthrough]]
type = "git"
value = "$GITURL"
ref = "main"
```

### The `[[secrets.ruleset]]` section

The `[[secrets.ruleset]]` section targets and modifies a single predefined rule. You can define one or more of these sections for the analyzer.

| Setting | Description |
|--------------------------------|---------------------------------------------------------|
| `disable` | Whether the rule should be disabled. (Default: `false`) |
| `[secrets.ruleset.identifier]` | Selects the predefined rule to be modified. |
| `[secrets.ruleset.override]` | Defines the overrides for the rule. |

Configuration example:

```toml
[secrets]

[[secrets.ruleset]]
disable = true
...
```

### The `[secrets.ruleset.identifier]` section

The `[secrets.ruleset.identifier]` section defines the identifiers of the predefined rule that you wish to modify.

| Setting | Description |
| --------| ----------- |
| `type` | The type of identifier used by the predefined rule. |
| `value` | The value of the identifier used by the predefined rule.
| To determine the correct values for `type` and `value`, view the [`gl-secret-detection-report.json`](_index.md#secret-detection-results) produced by the analyzer. You can download this file as a job artifact from the analyzer's CI/CD job. For example, the snippet below shows a finding from a `gitlab_personal_access_token` rule with one identifier. The `type` and `value` keys in the JSON object correspond to the values you should provide in this section. ```json ... "vulnerabilities": [ { "id": "fccb407005c0fb58ad6cfcae01bea86093953ed1ae9f9623ecc3e4117675c91a", "category": "secret_detection", "name": "GitLab personal access token", "description": "GitLab personal access token has been found in commit 5c124166", ... "identifiers": [ { "type": "gitleaks_rule_id", "name": "Gitleaks rule ID gitlab_personal_access_token", "value": "gitlab_personal_access_token" } ] } ... ] ... ``` Configuration example: ```toml [secrets] [[secrets.ruleset]] [secrets.ruleset.identifier] type = "gitleaks_rule_id" value = "gitlab_personal_access_token" ... ``` ### The `[secrets.ruleset.override]` section The `[secrets.ruleset.override]` section allows you to override attributes of a predefined rule. | Setting | Description | |---------------|-----------------------------------------------------------------------------------------------------| | `description` | A detailed description of the issue. | | `message` | (Deprecated) A description of the issue. | | `name` | The name of the rule. | | `severity` | The severity of the rule. Valid options are: `Critical`, `High`, `Medium`, `Low`, `Unknown`, `Info` | {{< alert type="note" >}} Although `message` is still populated by the analyzers, it has been [deprecated](https://gitlab.com/gitlab-org/security-products/analyzers/report/-/blob/1d86d5f2e61dc38c775fb0490ee27a45eee4b8b3/vulnerability.go#L22) and replaced by `name` and `description`. 
{{< /alert >}} Configuration example: ```toml [secrets] [[secrets.ruleset]] [secrets.ruleset.override] severity = "Medium" name = "systemd machine-id" ... ``` ### Custom rule format {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/511321) in GitLab 17.9. {{< /history >}} When creating custom rules, you can use both [Gitleaks' standard rule format](https://github.com/gitleaks/gitleaks?tab=readme-ov-file#configuration) and additional GitLab-specific fields. The following settings are available for each rule: | Setting | Required | Description | |---------|-------------|-------------| | `title` | No | A GitLab-specific field that sets a custom title for the rule. | | `description` | Yes | A detailed description of what the rule detects. | | `remediation` | No | A GitLab-specific field that provides remediation guidance when the rule is triggered. | | `regex` | Yes | The regular expression pattern used to detect secrets. | | `keywords` | No | A list of keywords to pre-filter content before applying the regex. | | `id` | Yes | A unique identifier for the rule. | Example of a custom rule with all available fields: ```toml [[rules]] title = "API Key Detection Rule" description = "Detects potential API keys in the codebase" remediation = "Rotate the exposed API key and store it in a secure credential manager" id = "custom_api_key" keywords = ["apikey", "api_key"] regex = '''api[_-]key[_-][a-zA-Z0-9]{16,}''' ``` When you create a custom rule that shares the same ID as a rule in the extended ruleset, your custom rule takes precedence. All properties of your custom rule replace the corresponding values from the extended rule. 
Example of extending default rules with a custom rule: ```toml title = "Extension of GitLab's default Gitleaks config" [extend] path = "/gitleaks.toml" [[rules]] title = "Custom API Key Rule" description = "Detects custom API key format" remediation = "Rotate the exposed API key" id = "custom_api_123" keywords = ["testing"] regex = '''testing-key-[1-9]{3}''' ``` ### The `[[secrets.passthrough]]` section The `[[secrets.passthrough]]` section allows you to synthesize a custom configuration for an analyzer. You can define up to 20 of these sections per analyzer. Passthroughs are then composed into a _passthrough chain_ that evaluates into a complete configuration that can be used to replace or extend the predefined rules of the analyzer. Passthroughs are evaluated in order. Passthroughs listed later in the chain have a higher precedence and can overwrite or append to data yielded by previous passthroughs (depending on the `mode`). Use passthroughs when you need to use or modify an existing configuration. The size of the configuration generated by a single passthrough is limited to 10 MB. | Setting | Applies to | Description | |-------------|----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | `type` | All | One of `file`, `raw`, `git`, or `url`. | | `target` | All | The target file to contain the data written by the passthrough evaluation. If empty, a random filename is used. | | `mode` | All | If `overwrite`, the `target` file is overwritten. If `append`, new content is appended to the `target` file. The `git` type only supports `overwrite`. (Default: `overwrite`) | | `ref` | `type = "git"` | Contains the name of the branch, tag, or the SHA to pull. | | `subdir` | `type = "git"` | Used to select a subdirectory of the Git repository as the configuration source. 
| | `auth` | `type = "git"` | Used to provide credentials to use when using a [configuration stored in a private Git repository](configure.md#with-a-private-remote-ruleset). | | `value` | All | For the `file`, `url`, and `git` types, defines the location of the file or Git repository. For the `raw` type, contains the inline configuration. | | `validator` | All | Used to explicitly invoke validators (`xml`, `yaml`, `json`, `toml`) on the target file after the evaluation of a passthrough. | #### Passthrough types | Type | Description | |--------|-------------------------------------------------------| | `file` | Use a file that is stored in the same Git repository. | | `raw` | Provide the ruleset configuration inline. | | `git` | Pull the configuration from a remote Git repository. | | `url` | Fetch the configuration using HTTP. |
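To illustrate the chain semantics described above, the sketch below builds the final `gitleaks.toml` from two passthroughs: a `file` passthrough provides the base, and a later `raw` passthrough with `mode = "append"` adds to its output. The file name and the appended rule are illustrative, not prescribed:

```toml
[secrets]
description = "Base configuration plus an appended local rule"

# Evaluated first: copies the base file into the target.
[[secrets.passthrough]]
type = "file"
target = "gitleaks.toml"
value = "base-gitleaks-config.toml"

# Evaluated second: later passthroughs have higher precedence, and with
# mode = "append" this one adds to the target produced above.
[[secrets.passthrough]]
type = "raw"
target = "gitleaks.toml"
mode = "append"
value = """
[[rules]]
id = "example_api_key"
description = "Example Service API Key"
regex = '''example_api_key'''
"""
```

Both passthroughs write to the same `target`, which is what makes the second one an extension of the first rather than a replacement.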
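As an illustrative sketch of a passthrough chain (the file path, target name, and appended rule below are placeholders, not a recommended configuration), the first passthrough writes a base file and the second appends an inline snippet to it:

```toml
[secrets]
  description = "Compose a configuration from a file and an inline snippet"

  [[secrets.passthrough]]
    type   = "file"
    target = "gitleaks.toml"
    value  = "config/gitleaks-base.toml"

  [[secrets.passthrough]]
    type   = "raw"
    target = "gitleaks.toml"
    mode   = "append"
    value  = """
[[rules]]
id = "illustrative_rule"
description = "Illustrative appended rule"
regex = '''example-token-[0-9a-f]{8}'''
"""
```

Because the second passthrough uses `mode = "append"` with the same `target`, and passthroughs later in the chain have higher precedence, its content is added to the end of the file produced by the first passthrough.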
---
stage: Application Security Testing
group: Secret Detection
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Secret push protection
---
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/11439) in GitLab 16.7 as an [experiment](../../../../policy/development_stages_support.md) for GitLab Dedicated customers.
- [Changed](https://gitlab.com/groups/gitlab-org/-/epics/12729) to Beta and made available on GitLab.com in GitLab 17.1.
- [Enabled on GitLab Self-Managed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/156907) in GitLab 17.2 [with flags](../../../../administration/feature_flags/_index.md) named `pre_receive_secret_detection_beta_release` and `pre_receive_secret_detection_push_check`.
- Feature flag `pre_receive_secret_detection_beta_release` [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/472418) in GitLab 17.4.
- [Generally available](https://gitlab.com/groups/gitlab-org/-/epics/13107) in GitLab 17.5.
- Feature flag `pre_receive_secret_detection_push_check` [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/472419) in GitLab 17.7.

{{< /history >}}

Secret push protection blocks secrets such as keys and API tokens from being pushed to GitLab.

<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an overview, see the playlist [Get Started with Secret Push Protection](https://www.youtube.com/playlist?list=PL05JrBw4t0KoADm-g2vxfyR0m6QLphTv-).

Use [pipeline secret detection](../_index.md) together with secret push protection to further strengthen your security.

## Secret push protection workflow

Secret push protection takes place in the pre-receive hook. When you push changes to GitLab, push protection checks each [file or commit](#coverage) for secrets. By default, if a secret is detected, the push is blocked.
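The decision made in the pre-receive hook can be sketched as follows. This is an illustrative model of the workflow described above, not GitLab's implementation; the function and parameter names are invented:

```python
def pre_receive_check(commits, scan, skip_requested=False):
    """Illustrative model of the push-protection decision in the pre-receive hook.

    `scan` stands in for the secret scanner: it takes one commit and returns
    a list of findings (empty when the commit is clean).
    """
    # A push option or commit-message marker can skip the check entirely.
    if skip_requested:
        return "accepted"
    findings = [finding for commit in commits for finding in scan(commit)]
    # By default, any detected secret blocks the whole push.
    return "blocked" if findings else "accepted"

# A scanner stub that "finds" a secret in anything containing the word "token":
scan = lambda commit: ["secret"] if "token" in commit else []
print(pre_receive_check(["add docs", "add token abc"], scan))        # blocked
print(pre_receive_check(["add docs", "add token abc"], scan, True))  # accepted
```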
<!-- To edit the diagram, use either Draw.io or the VS Code extension "Draw.io Integration" -->
![A flowchart showing how secret protection can block a push](img/spp_workflow_v17_9.drawio.svg)

When a push is blocked, GitLab displays a message that includes:

- Commit ID containing the secret.
- Filename and line containing the secret.
- Type of secret.

For example, the following is an extract of the message returned when a push using the Git CLI is blocked. When using other clients, including the GitLab Web IDE, the format of the message is different but the content is the same.

```plain
remote: PUSH BLOCKED: Secrets detected in code changes
remote: Secret push protection found the following secrets in commit: 37e54de5e78c31d9e3c3821fd15f7069e3d375b6
remote:
remote: -- test.txt:2 GitLab Personal Access Token
remote:
remote: To push your changes you must remove the identified secrets.
```

If secret push protection does not detect any secrets in your commits, no message is displayed.

## Detected secrets

Secret push protection scans [files or commits](#coverage) for specific patterns. Each pattern matches a specific type of secret. To confirm which secrets are detected by secret push protection, see [Detected secrets](../detected_secrets.md).

Only high-confidence patterns were chosen for secret push protection, to minimize the delay when pushing your commits and minimize the number of false alerts. For example, personal access tokens that use a custom prefix are not detected by secret push protection.

You can [exclude](../exclusions.md) selected secrets from detection by secret push protection.

## Getting started

On GitLab Dedicated and GitLab Self-Managed instances, you must:

1. [Allow secret push protection on the entire instance](#allow-the-use-of-secret-push-protection-in-your-gitlab-instance).
1. Enable secret push protection. You can either:
   - [Enable secret push protection in a specific project](#enable-secret-push-protection-in-a-project).
   - Use the API to [enable secret push protection for all projects in a group](../../../../api/group_security_settings.md#update-secret_push_protection_enabled-setting).

### Allow the use of secret push protection in your GitLab instance

On GitLab Dedicated and GitLab Self-Managed instances, you must allow secret push protection before you can enable it in a project.

Prerequisites:

- You must be an administrator for your GitLab instance.

To allow the use of secret push protection in your GitLab instance:

1. Sign in to your GitLab instance as an administrator.
1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings > Security and compliance**.
1. Under **Secret detection**, select or clear **Allow secret push protection**.

Secret push protection is allowed on the instance. To use this feature, you must enable it per project.

### Enable secret push protection in a project

Prerequisites:

- You must have at least the Maintainer role for the project.
- On GitLab Dedicated and GitLab Self-Managed, you must allow secret push protection on the instance.

To enable secret push protection in a project:

1. On the left sidebar, select **Search or go to** and find your project.
1. On the left sidebar, select **Secure > Security configuration**.
1. Turn on the **Secret push protection** toggle.

You can also enable secret push protection for all projects in a group [with the API](../../../../api/group_security_settings.md#update-secret_push_protection_enabled-setting).

## Coverage

{{< history >}}

- [Changed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/185882) to diff-only scanning in GitLab 17.11.

{{< /history >}}

Secret push protection does not block a secret when:

- You used the [skip secret push protection](#skip-secret-push-protection) option when you pushed the commits.
- The secret is [excluded](../exclusions.md) from secret push protection.
- The secret is in a path defined as an [exclusion](../exclusions.md).
Secret push protection does not check a file in a commit when:

- The file is a binary file.
- The file is larger than 1 MiB.
- The diff patch for the file is larger than 1 MiB (when using [diff scanning](#diff-scanning)).
- The file was renamed, deleted, or moved without changes to the content.
- The content of the file is identical to the content of another file in the source code.
- The file is contained in the initial push that created the repository.

### Diff scanning

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/469161) in GitLab 17.5 [with a flag](../../../../administration/feature_flags/_index.md) named `spp_scan_diffs`. Disabled by default.
- [Enabled on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues/480092) in GitLab 17.6.
- [Added](https://gitlab.com/gitlab-org/gitlab/-/issues/491282) support for Web IDE pushes in GitLab 17.10 [with a flag](../../../../administration/feature_flags/_index.md) named `secret_checks_for_web_requests`. Disabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/525627) in GitLab 17.11. Feature flag `spp_scan_diffs` removed.
- [Removed](https://gitlab.com/gitlab-org/gitlab/-/issues/525629) `secret_checks_for_web_requests` feature flag in GitLab 17.11.

{{< /history >}}

Secret push protection scans only the diffs of commits pushed over HTTP(S) and SSH. If a secret is already present in a file and not part of the changes, it is not detected.
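The per-file coverage rules above can be approximated in a few lines. This is an illustrative sketch only — the function name and the NUL-byte binary heuristic are assumptions, not GitLab's implementation:

```python
MAX_FILE_SIZE = 1 * 1024 * 1024  # 1 MiB limit, per the coverage rules above

def is_checked(content: bytes, is_new_repo: bool = False) -> bool:
    """Rough model of whether a file in a commit would be checked."""
    if is_new_repo:                    # initial push that created the repository
        return False
    if len(content) > MAX_FILE_SIZE:   # files larger than 1 MiB are skipped
        return False
    if b"\x00" in content:             # NUL byte: a common heuristic for binary files
        return False
    return True

print(is_checked(b"password = hunter2"))     # True
print(is_checked(b"\x00\x01binary blob"))    # False
print(is_checked(b"x" * (2 * 1024 * 1024)))  # False
```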
## Understanding the results

Secret push protection can identify various categories of secrets:

- **API keys and tokens**: Service-specific authentication credentials
- **Database connection strings**: URLs containing embedded credentials
- **Private keys**: Cryptographic keys for authentication or encryption
- **Generic high-entropy strings**: Patterns that appear to be randomly generated secrets

When a push is blocked, secret push protection provides detailed information to help you locate and address the detected secrets:

- **Commit ID**: The specific commit containing the secret. Useful for tracking changes in your Git history.
- **File path and line number**: The exact location of the detected pattern for quick navigation.
- **Secret type**: The classification of the detected pattern. For example, `GitLab Personal Access Token` or `AWS Access Key`.

### Common detection categories

Not all detections require immediate action. Consider the following when evaluating results:

- **True positives**: Legitimate secrets that should be rotated and removed. For example:
  - [Valid](../../vulnerabilities/validity_check.md) API keys or tokens
  - Production database credentials
  - Private cryptographic keys
  - Any credentials that could grant unauthorized access
- **False positives**: Detected patterns that aren't actual secrets. For example:
  - Test data that resembles secrets but has no real-world value
  - Placeholder values in configuration templates
  - Example credentials in documentation
  - Hash values or checksums that match secret patterns

Document common false positive patterns in your organization to streamline future evaluations.

## Optimization

Before deploying secret push protection widely, optimize the configuration to reduce false positives and improve accuracy for your specific environment.

### Reduce false positives

False positives can significantly impact developer productivity and lead to security fatigue.
To reduce false positives:

- Configure [exclusions](../exclusions.md) strategically:
  - Create path-based exclusions for test directories, documentation, and third party dependencies.
  - Use pattern-based exclusions for known false positive patterns specific to your codebase.
  - Document your exclusion rules and review them regularly.
- Create standards for placeholder values and test credentials.
- Monitor false positive rates and continue to adjust exclusions accordingly.

### Optimize performance

Large repositories or frequent pushes can have performance impacts. To optimize the performance of secret push protection:

- Monitor push times and establish baseline metrics before deployment.
- Use diff scanning to reduce the amount of content scanned on each push.
- Consider file size limits for repositories with large binary assets.
- Implement exclusions for directories that are unlikely to contain secrets.

### Integration with existing workflows

Ensure secret push protection complements your existing development practices:

- Configure pipeline secret detection and secret push protection to be sure you have defense in depth.
- Update developer documentation to include secret push protection procedures.
- Align with security training to educate developers on secure coding practices to minimize leaked secrets.

## Roll out

Successfully deploying secret push protection at scale requires careful planning and a phased implementation:

1. Choose two or three non-critical projects with active development to test the feature and understand its impact on developer workflows.
1. Turn on secret push protection for your selected test projects and monitor developer feedback.
1. Document processes for handling blocked pushes and train your development teams on the new workflows.
1. Track the number of secrets detected, false positive rates, and developer experience feedback during the pilot phase.

You should run the pilot phase for two to four weeks to gather sufficient data and identify any workflow adjustments needed before broader deployment. Once you have completed the pilot, consider the next three phases for a scaled rollout:

1. Early adopters (weeks 3-6)
   - Enable on 10-20% of active projects, prioritizing security-sensitive repositories.
   - Focus on teams with strong security awareness and buy-in.
   - Monitor performance impacts and developer experience.
   - Refine processes based on real-world usage.
1. Broad deployment (weeks 7-12)
   - Gradually enable across remaining projects in batches.
   - Provide ongoing support and training to development teams.
   - Monitor system performance and scale infrastructure if needed.
   - Continue optimizing exclusion rules based on usage patterns.
1. Full coverage (weeks 13-16)
   - Enable secret push protection on all remaining projects.
   - Establish ongoing maintenance and review processes.
   - Implement regular audits of exclusion rules and detected patterns.

## Resolve a blocked push

When secret push protection blocks a push, you can either:

- [Remove the secret](#remove-the-secret)
- [Skip secret push protection](#skip-secret-push-protection)

### Remove the secret

Remove a blocked secret to allow the commit to be pushed to GitLab. The method of removing the secret depends on how recently it was committed. The instructions below use the Git CLI client, but you can achieve the same result by using another Git client.

If the blocked secret was added with the most recent commit on your branch:

1. Remove the secrets from the files.
1. Stage the changes with `git add <file-name>`.
1. Modify the most recent commit to include the changed files with `git commit --amend`.
1. Push your changes with `git push`.

If the blocked secret appears earlier in your Git history:

1. Optional. Watch a short demo of [removing secrets from your commits](https://www.youtube.com/watch?v=2jBC3uBUlyU).
1. Identify the commit SHA from the push error message.
If there are multiple, find the earliest using `git log`.
1. Create a copy branch to work from with `git switch --create copy-branch` so you can reset to the original branch if the rebase encounters issues.
1. Use `git rebase -i <commit-sha>~1` to start an interactive rebase.
1. Mark the offending commits for editing by changing the `pick` command to `edit` in the editor.
1. Remove the secrets from the files.
1. Stage the changes with `git add <file-name>`.
1. Commit the changed files with `git commit --amend`.
1. Continue the rebase with `git rebase --continue` until all secrets are removed.
1. Push your changes from the copy branch to your original remote branch with `git push --force --set-upstream origin copy-branch:<original-branch>`.
1. When you are satisfied with the changes, consider the following optional cleanup steps.
1. Optional. Delete the original branch with `git branch --delete --force <original-branch>`.
1. Optional. Replace the original branch by renaming the copy branch with `git branch --move copy-branch <original-branch>`.

### Skip secret push protection

In some cases, it may be necessary to skip secret push protection. For example, a developer may need to commit a placeholder secret for testing, or a user may want to skip secret push protection due to a Git operation timeout.

[Audit events](../../../compliance/audit_event_types.md#secret-detection) are logged when secret push protection is skipped. Audit event details include:

- Skip method used.
- GitLab account name.
- Date and time at which secret push protection was skipped.
- Name of project that the secret was pushed to.
- Target branch. (Introduced in GitLab 17.4)
- Commits that skipped secret push protection. (Introduced in GitLab 17.9)

If [pipeline secret detection](../pipeline/_index.md) is enabled, the content of all commits is scanned after they are pushed to the repository.

To skip secret push protection for all commits in a push, either:

- If you're using the Git CLI client, [instruct Git to skip secret push protection](#skip-when-using-the-git-cli-client).
- If you're using any other client, [add `[skip secret push protection]` to one of the commit messages](#skip-when-using-any-git-client).

#### Skip when using the Git CLI client

To skip secret push protection when using the Git CLI client:

- Use the [push option](../../../../topics/git/commit.md#push-options-for-secret-push-protection).

For example, you have several commits that are blocked from being pushed because one of them contains a secret. To skip secret push protection, you append the push option to the Git command.

```shell
git push -o secret_push_protection.skip_all
```

#### Skip when using any Git client

To skip secret push protection when using any Git client:

- Add `[skip secret push protection]` to one of the commit messages, on either an existing line or a new line, then push the commits.

For example, you are using the GitLab Web IDE and have several commits that are blocked from being pushed because one of them contains a secret. To skip secret push protection, edit the latest commit message and add `[skip secret push protection]`, then push the commits.

## Troubleshooting

When working with secret push protection, you may encounter the following situations.

### Push blocked unexpectedly

Before GitLab 17.11, secret push protection scanned the contents of all modified files. This can cause a push to be unexpectedly blocked if a modified file contains a secret, even if the secret is not part of the diff.

On GitLab 17.11 and earlier, [enable the `spp_scan_diffs` feature flag](#diff-scanning) to ensure that only newly committed changes are scanned. To push a Web IDE change to a file that contains a secret, you need to additionally enable the `secret_checks_for_web_requests` feature flag.

### File was not scanned

Some files are excluded from scanning.
For details see [coverage](#coverage).
--- stage: Application Security Testing group: Secret Detection info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Secret push protection breadcrumbs: - doc - user - application_security - secret_detection - secret_push_protection --- {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/11439) in GitLab 16.7 as an [experiment](../../../../policy/development_stages_support.md) for GitLab Dedicated customers. - [Changed](https://gitlab.com/groups/gitlab-org/-/epics/12729) to Beta and made available on GitLab.com in GitLab 17.1. - [Enabled on GitLab Self-Managed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/156907) in GitLab 17.2 [with flags](../../../../administration/feature_flags/_index.md) named `pre_receive_secret_detection_beta_release` and `pre_receive_secret_detection_push_check`. - Feature flag `pre_receive_secret_detection_beta_release` [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/472418) in GitLab 17.4. - [Generally available](https://gitlab.com/groups/gitlab-org/-/epics/13107) in GitLab 17.5. - Feature flag `pre_receive_secret_detection_push_check` [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/472419) in GitLab 17.7. {{< /history >}} Secret push protection blocks secrets such as keys and API tokens from being pushed to GitLab. <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For an overview, see the playlist [Get Started with Secret Push Protection](https://www.youtube.com/playlist?list=PL05JrBw4t0KoADm-g2vxfyR0m6QLphTv-). Use [pipeline secret detection](../_index.md) together with secret push protection to further strengthen your security. ## Secret push protection workflow Secret push protection takes place in the pre-receive hook. 
When you push changes to GitLab, push protection checks each [file or commit](#coverage) for secrets. By default, if a secret is detected, the push is blocked. <!-- To edit the diagram, use either Draw.io or the VS Code extension "Draw.io Integration" --> ![A flowchart showing how secret protection can block a push](img/spp_workflow_v17_9.drawio.svg) When a push is blocked, GitLab prompts a message that includes: - Commit ID containing the secret. - Filename and line containing the secret. - Type of secret. For example, the following is an extract of the message returned when a push using the Git CLI is blocked. When using other clients, including the GitLab Web IDE, the format of the message is different but the content is the same. ```plain remote: PUSH BLOCKED: Secrets detected in code changes remote: Secret push protection found the following secrets in commit: 37e54de5e78c31d9e3c3821fd15f7069e3d375b6 remote: remote: -- test.txt:2 GitLab Personal Access Token remote: remote: To push your changes you must remove the identified secrets. ``` If secret push protection does not detect any secrets in your commits, no message is displayed. ## Detected secrets Secret push protection scans [files or commits](#coverage) for specific patterns. Each pattern matches a specific type of secret. To confirm which secrets are detected by secret push protection, see [Detected secrets](../detected_secrets.md). Only high-confidence patterns were chosen for secret push protection, to minimize the delay when pushing your commits and minimize the number of false alerts. For example, personal access tokens that use a custom prefix are not detected by secret push protection. You can [exclude](../exclusions.md) selected secrets from detection by secret push protection. ## Getting started On GitLab Dedicated and GitLab Self-Managed instances, you must: 1. [Allow secret push protection on the entire instance](#allow-the-use-of-secret-push-protection-in-your-gitlab-instance). 1. 
Enable secret push protection. You can either: - [Enable secret push protection in a specific project](#enable-secret-push-protection-in-a-project). - Use the API to [enable secret push protection for all projects in group](../../../../api/group_security_settings.md#update-secret_push_protection_enabled-setting). ### Allow the use of secret push protection in your GitLab instance On GitLab Dedicated and GitLab Self-Managed instances, you must allow secret push protection before you can enable it in a project. Prerequisites: - You must be an administrator for your GitLab instance. To allow the use of secret push protection in your GitLab instance: 1. Sign in to your GitLab instance as an administrator. 1. On the left sidebar, at the bottom, select **Admin**. 1. Select **Settings > Security and compliance**. 1. Under **Secret detection**, select or clear **Allow secret push protection**. Secret push protection is allowed on the instance. To use this feature, you must enable it per project. ### Enable secret push protection in a project Prerequisites: - You must have at least the Maintainer role for the project. - On GitLab Dedicated and GitLab Self-Managed, you must allow secret push protection on the instance. To enable secret push protection in a project: 1. On the left sidebar, select **Search or go to** and find your project. 1. On the left sidebar, select **Secure > Security configuration**. 1. Turn on the **Secret push protection** toggle. You can also enable secret push protection for all projects in a group [with the API](../../../../api/group_security_settings.md#update-secret_push_protection_enabled-setting). ## Coverage {{< history >}} - [Changed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/185882) to diff-only scanning in GitLab 17.11. {{< /history >}} Secret push protection does not block a secret when: - You used the [skip secret push protection](#skip-secret-push-protection) option when you pushed the commits. 
- The secret is [excluded](../exclusions.md) from secret push protection.
- The secret is in a path defined as an [exclusion](../exclusions.md).

Secret push protection does not check a file in a commit when:

- The file is a binary file.
- The file is larger than 1 MiB.
- The diff patch for the file is larger than 1 MiB (when using [diff scanning](#diff-scanning)).
- The file was renamed, deleted, or moved without changes to the content.
- The content of the file is identical to the content of another file in the source code.
- The file is contained in the initial push that created the repository.

### Diff scanning

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/469161) in GitLab 17.5 [with a flag](../../../../administration/feature_flags/_index.md) named `spp_scan_diffs`. Disabled by default.
- [Enabled on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues/480092) in GitLab 17.6.
- [Added](https://gitlab.com/gitlab-org/gitlab/-/issues/491282) support for Web IDE pushes in GitLab 17.10 [with a flag](../../../../administration/feature_flags/_index.md) named `secret_checks_for_web_requests`. Disabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/525627) in GitLab 17.11. Feature flag `spp_scan_diffs` removed.
- [Removed](https://gitlab.com/gitlab-org/gitlab/-/issues/525629) `secret_checks_for_web_requests` feature flag in GitLab 17.11.

{{< /history >}}

Secret push protection scans only the diffs of commits pushed over HTTP(S) and SSH. If a secret is already present in a file and not part of the changes, it is not detected.

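Conceptually, diff-only scanning inspects just the lines added by the pushed commits. The following sketch approximates that behavior with plain Git and `grep` in a throwaway repository. It is illustrative only: the `glpat-` prefix is the default personal access token prefix, and the real analyzer uses its own curated high-confidence patterns, not this `grep`.

```shell
# Illustrative sketch only: approximate diff-only scanning with git + grep.
# The token value below is fake; the real analyzer does not use this logic.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q

echo 'token = "glpat-FAKEFAKEFAKEFAKEFAKE"' > config.txt
git add config.txt

# Inspect only the added lines of the staged diff, as diff scanning would:
matches=$(git diff --cached --unified=0 | grep -c '^+.*glpat-' || true)
echo "matches=$matches"
```

A secret that already exists in the file but is not part of the diff would not appear in the added (`+`) lines, which is why diff-only scanning does not flag it.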
## Understanding the results

Secret push protection can identify various categories of secrets:

- **API keys and tokens**: Service-specific authentication credentials
- **Database connection strings**: URLs containing embedded credentials
- **Private keys**: Cryptographic keys for authentication or encryption
- **Generic high-entropy strings**: Patterns that appear to be randomly generated secrets

When a push is blocked, secret push protection provides detailed information to help you locate and address the detected secrets:

- **Commit ID**: The specific commit containing the secret. Useful for tracking changes in your Git history.
- **File path and line number**: The exact location of the detected pattern, for quick navigation.
- **Secret type**: The classification of the detected pattern. For example, `GitLab Personal Access Token` or `AWS Access Key`.

### Common detection categories

Not all detections require immediate action. Consider the following when evaluating results:

- **True positives**: Legitimate secrets that should be rotated and removed. For example:
  - [Valid](../../vulnerabilities/validity_check.md) API keys or tokens
  - Production database credentials
  - Private cryptographic keys
  - Any credentials that could grant unauthorized access
- **False positives**: Detected patterns that aren't actual secrets. For example:
  - Test data that resembles secrets but has no real-world value
  - Placeholder values in configuration templates
  - Example credentials in documentation
  - Hash values or checksums that match secret patterns

Document common false positive patterns in your organization to streamline future evaluations.

## Optimization

Before deploying secret push protection widely, optimize the configuration to reduce false positives and improve accuracy for your specific environment.

### Reduce false positives

False positives can significantly impact developer productivity and lead to security fatigue.

To reduce false positives:

- Configure [exclusions](../exclusions.md) strategically:
  - Create path-based exclusions for test directories, documentation, and third-party dependencies.
  - Use pattern-based exclusions for known false positive patterns specific to your codebase.
  - Document your exclusion rules and review them regularly.
- Create standards for placeholder values and test credentials.
- Monitor false positive rates and continue to adjust exclusions accordingly.

### Optimize performance

Large repositories or frequent pushes can have performance impacts. To optimize the performance of secret push protection:

- Monitor push times and establish baseline metrics before deployment.
- Use diff scanning to reduce the amount of content scanned on each push.
- Consider file size limits for repositories with large binary assets.
- Implement exclusions for directories that are unlikely to contain secrets.

### Integration with existing workflows

Ensure secret push protection complements your existing development practices:

- Configure both pipeline secret detection and secret push protection to provide defense in depth.
- Update developer documentation to include secret push protection procedures.
- Align with security training to educate developers on secure coding practices and minimize leaked secrets.

## Roll out

Successfully deploying secret push protection at scale requires careful planning and a phased implementation:

1. Choose two or three non-critical projects with active development to test the feature and understand its impact on developer workflows.
1. Turn on secret push protection for your selected test projects and monitor developer feedback.
1. Document processes for handling blocked pushes and train your development teams on the new workflows.
1. Track the number of secrets detected, false positive rates, and developer experience feedback during the pilot phase.

You should run the pilot phase for two to four weeks to gather sufficient data and identify any workflow adjustments needed before broader deployment.

Once you have completed the pilot, consider the next three phases for a scaled rollout:

1. Early adopters (weeks 3-6)
   - Enable on 10-20% of active projects, prioritizing security-sensitive repositories.
   - Focus on teams with strong security awareness and buy-in.
   - Monitor performance impacts and developer experience.
   - Refine processes based on real-world usage.
1. Broad deployment (weeks 7-12)
   - Gradually enable across remaining projects in batches.
   - Provide ongoing support and training to development teams.
   - Monitor system performance and scale infrastructure if needed.
   - Continue optimizing exclusion rules based on usage patterns.
1. Full coverage (weeks 13-16)
   - Enable secret push protection on all remaining projects.
   - Establish ongoing maintenance and review processes.
   - Implement regular audits of exclusion rules and detected patterns.

## Resolve a blocked push

When secret push protection blocks a push, you can either:

- [Remove the secret](#remove-the-secret)
- [Skip secret push protection](#skip-secret-push-protection)

### Remove the secret

Remove a blocked secret to allow the commit to be pushed to GitLab. The method of removing the secret depends on how recently it was committed. The instructions below use the Git CLI client, but you can achieve the same result by using another Git client.

If the blocked secret was added with the most recent commit on your branch:

1. Remove the secrets from the files.
1. Stage the changes with `git add <file-name>`.
1. Modify the most recent commit to include the changed files with `git commit --amend`.
1. Push your changes with `git push`.

If the blocked secret appears earlier in your Git history:

1. Optional. Watch a short demo of [removing secrets from your commits](https://www.youtube.com/watch?v=2jBC3uBUlyU).
1. Identify the commit SHA from the push error message.
   If there are multiple, find the earliest using `git log`.
1. Create a copy branch to work from with `git switch --create copy-branch` so you can reset to the original branch if the rebase encounters issues.
1. Use `git rebase -i <commit-sha>~1` to start an interactive rebase.
1. Mark the offending commits for editing by changing the `pick` command to `edit` in the editor.
1. Remove the secrets from the files.
1. Stage the changes with `git add <file-name>`.
1. Commit the changed files with `git commit --amend`.
1. Continue the rebase with `git rebase --continue` until all secrets are removed.
1. Push your changes from the copy branch to your original remote branch with `git push --force --set-upstream origin copy-branch:<original-branch>`.
1. When you are satisfied with the changes, consider the following optional cleanup steps:
   - Delete the original branch with `git branch --delete --force <original-branch>`.
   - Replace the original branch by renaming the copy branch with `git branch --move copy-branch <original-branch>`.

### Skip secret push protection

In some cases, it may be necessary to skip secret push protection. For example, a developer may need to commit a placeholder secret for testing, or a user may want to skip secret push protection due to a Git operation timeout.

[Audit events](../../../compliance/audit_event_types.md#secret-detection) are logged when secret push protection is skipped. Audit event details include:

- Skip method used.
- GitLab account name.
- Date and time at which secret push protection was skipped.
- Name of project that the secret was pushed to.
- Target branch. (Introduced in GitLab 17.4)
- Commits that skipped secret push protection. (Introduced in GitLab 17.9)

If [pipeline secret detection](../pipeline/_index.md) is enabled, the content of all commits is scanned after they are pushed to the repository.

To skip secret push protection for all commits in a push, either:

- If you're using the Git CLI client, [instruct Git to skip secret push protection](#skip-when-using-the-git-cli-client).
- If you're using any other client, [add `[skip secret push protection]` to one of the commit messages](#skip-when-using-any-git-client).

#### Skip when using the Git CLI client

To skip secret push protection when using the Git CLI client:

- Use the [push option](../../../../topics/git/commit.md#push-options-for-secret-push-protection).

For example, you have several commits that are blocked from being pushed because one of them contains a secret. To skip secret push protection, you append the push option to the Git command.

```shell
git push -o secret_push_protection.skip_all
```

#### Skip when using any Git client

To skip secret push protection when using any Git client:

- Add `[skip secret push protection]` to one of the commit messages, on either an existing line or a new line, then push the commits.

For example, you are using the GitLab Web IDE and have several commits that are blocked from being pushed because one of them contains a secret. To skip secret push protection, edit the latest commit message to add `[skip secret push protection]`, then push the commits.

## Troubleshooting

When working with secret push protection, you may encounter the following situations.

### Push blocked unexpectedly

Before GitLab 17.11, secret push protection scanned the contents of all modified files. This can cause a push to be unexpectedly blocked if a modified file contains a secret, even if the secret is not part of the diff.

On GitLab 17.10 and earlier, [enable the `spp_scan_diffs` feature flag](#diff-scanning) to ensure that only newly committed changes are scanned. To push a Web IDE change to a file that contains a secret, you must also enable the `secret_checks_for_web_requests` feature flag.

### File was not scanned

Some files are excluded from scanning.
For details, see [coverage](#coverage).
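One exclusion that is easy to verify locally is the 1 MiB file-size limit. The following sketch (assuming GNU `find`; the file names are illustrative) lists files that exceed the limit and would therefore be skipped by the scanner:

```shell
# Illustrative sketch: list working-tree files above the 1 MiB limit that
# secret push protection skips. Assumes GNU find; run from a repository root.
set -e
repo=$(mktemp -d) && cd "$repo"
mkdir .git                                         # stand-in for a real repository
printf 'small file\n' > notes.txt                  # well under 1 MiB: scanned
head -c $((2 * 1024 * 1024)) /dev/zero > big.bin   # 2 MiB: skipped
skipped=$(find . -path ./.git -prune -o -type f -size +1M -print)
echo "skipped: $skipped"
```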
# Troubleshooting Dependency Scanning

Source: <https://docs.gitlab.com/user/application_security/troubleshooting_dependency_scanning> (extracted 2025-08-13)
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

When working with dependency scanning, you might encounter the following issues.

## Debug-level logging

Debug-level logging can help when troubleshooting. For details, see [debug-level logging](../troubleshooting_application_security.md#debug-level-logging).

## Run the analyzer in a local environment

You can run a dependency scanning analyzer locally to debug issues or verify behavior without running a pipeline. For example, to run the Python analyzer:

```shell
cd project-git-repository
docker run \
  --interactive --tty --rm \
  --volume "$PWD":/tmp/app \
  --env CI_PROJECT_DIR=/tmp/app \
  --env SECURE_LOG_LEVEL=debug \
  -w /tmp/app \
  registry.gitlab.com/security-products/gemnasium-python:5 /analyzer run
```

This command runs the analyzer with debug-level logging and mounts your local repository to analyze the dependencies. You can replace `registry.gitlab.com/security-products/gemnasium-python:5` with the appropriate scanner `image:tag` combination for your project's language and dependency manager.

### Working around missing support for certain languages or package managers

As noted in the ["Supported languages" section](_index.md#supported-languages-and-package-managers), some dependency definition files are not yet supported. However, Dependency Scanning can be achieved if the language, a package manager, or a third-party tool can convert the definition file into a supported format.

Generally, the approach is the following:

1. Define a dedicated converter job in your `.gitlab-ci.yml` file. Use a suitable Docker image, script, or both to facilitate the conversion.
1. Let that job upload the converted, supported file as an artifact.
1. Add [`dependencies: [<your-converter-job>]`](../../../ci/yaml/_index.md#dependencies) to your `dependency_scanning` job to make use of the converted definition files.

For example, Poetry projects that only have a `pyproject.toml` file can generate the `poetry.lock` file as follows.

```yaml
include:
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml

stages:
  - test

gemnasium-python-dependency_scanning:
  # Work around https://gitlab.com/gitlab-org/gitlab/-/issues/32774
  before_script:
    - pip install "poetry>=1,<2" # Or via another method: https://python-poetry.org/docs/#installation
    - poetry update --lock # Generates the lock file to be analyzed.
```

## `Error response from daemon: error processing tar file: docker-tar: relocation error`

This error occurs when the Docker version that runs the dependency scanning job is `19.03.0`. Consider updating to Docker `19.03.1` or greater. Older versions are not affected. Read more in [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/13830#note_211354992 "Current SAST container fails").

## Getting warning message `gl-dependency-scanning-report.json: no matching files`

For information on this, see the [general Application Security troubleshooting section](../troubleshooting_application_security.md#getting-warning-messages--reportjson-no-matching-files).

## Dependency scanning jobs are running unexpectedly

The [dependency scanning CI template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Dependency-Scanning.gitlab-ci.yml) uses the [`rules:exists`](../../../ci/yaml/_index.md#rulesexists) syntax. This directive is limited to 10000 checks and always returns `true` after reaching this number.
Because of this, and depending on the number of files in your repository, a dependency scanning job might be triggered even if the scanner doesn't support your project. For more details about this limitation, see [the `rules:exists` documentation](../../../ci/yaml/_index.md#rulesexists).

## Error: `dependency_scanning is used for configuration only, and its script should not be executed`

For information, see the [GitLab Secure troubleshooting section](../troubleshooting_application_security.md#error-job-is-used-for-configuration-only-and-its-script-should-not-be-executed).

## Import multiple certificates for Java-based projects

The `gemnasium-maven` analyzer reads the contents of the `ADDITIONAL_CA_CERT_BUNDLE` variable using `keytool`, which imports either a single certificate or a certificate chain. Multiple unrelated certificates are ignored, and only the first one is imported by `keytool`.

To add multiple unrelated certificates to the analyzer, you can declare a `before_script` such as this in the definition of the `gemnasium-maven-dependency_scanning` job:

```yaml
gemnasium-maven-dependency_scanning:
  before_script:
    - . $HOME/.bashrc # make the java tools available to the script
    - OIFS="$IFS"; IFS=""; echo $ADDITIONAL_CA_CERT_BUNDLE > multi.pem; IFS="$OIFS" # write ADDITIONAL_CA_CERT_BUNDLE variable to a PEM file
    - csplit -z --digits=2 --prefix=cert multi.pem "/-----END CERTIFICATE-----/+1" "{*}" # split the file into individual certificates
    - for i in `ls cert*`; do keytool -v -importcert -alias "custom-cert-$i" -file $i -trustcacerts -noprompt -storepass changeit -keystore /opt/asdf/installs/java/adoptopenjdk-11.0.7+10.1/lib/security/cacerts 1>/dev/null 2>&1 || true; done # import each certificate using keytool (note the keystore location is related to the Java version being used and should be changed accordingly for other versions)
    - unset ADDITIONAL_CA_CERT_BUNDLE # unset the variable so that the analyzer doesn't duplicate the import
```

## Dependency Scanning job fails with message `strconv.ParseUint: parsing "0.0": invalid syntax`

Docker-in-Docker is unsupported, and attempting to invoke it is the likely cause of this error. To fix this error, disable Docker-in-Docker for dependency scanning. Individual `<analyzer-name>-dependency_scanning` jobs are created for each analyzer that runs in your CI/CD pipeline.

```yaml
include:
  - template: Dependency-Scanning.gitlab-ci.yml

variables:
  DS_DISABLE_DIND: "true"
```

## Message `<file> does not exist in <commit SHA>`

When the `Location` of a dependency in a file is shown, the path in the link goes to a specific Git SHA. If the lock file that our dependency scanning tools reviewed was cached, however, selecting that link redirects you to the repository root, with the message: `<file> does not exist in <commit SHA>`.

The lock file is cached during the build phase and passed to the dependency scanning job before the scan occurs. Because the cache is downloaded before the analyzer run occurs, the existence of a lock file in the `CI_BUILDS_DIR` directory triggers the dependency scanning job.

To prevent this warning, lock files should be committed.

## You no longer get the latest Docker image after setting `DS_MAJOR_VERSION` or `DS_ANALYZER_IMAGE`

If you have manually set `DS_MAJOR_VERSION` or `DS_ANALYZER_IMAGE` for specific reasons, and now must update your configuration to again get the latest patched versions of our analyzers, edit your `.gitlab-ci.yml` file and either:

- Set your `DS_MAJOR_VERSION` to match the latest version as seen in [our current Dependency Scanning template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Dependency-Scanning.gitlab-ci.yml#L17).
- If you hardcoded the `DS_ANALYZER_IMAGE` variable directly, change it to match the latest line as found in our [current Dependency Scanning template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Dependency-Scanning.gitlab-ci.yml). The line number varies depending on which scanning job you edited.

For example, the `gemnasium-maven-dependency_scanning` job pulls the latest `gemnasium-maven` Docker image because `DS_ANALYZER_IMAGE` is set to `"$SECURE_ANALYZERS_PREFIX/gemnasium-maven:$DS_MAJOR_VERSION"`.

## Dependency Scanning of setuptools project fails with `use_2to3 is invalid` error

Support for [2to3](https://docs.python.org/3/library/2to3.html) was [removed](https://setuptools.pypa.io/en/latest/history.html#v58-0-0) in `setuptools` version `v58.0.0`. Dependency Scanning (running `python 3.9`) uses `setuptools` version `58.1.0+`, which doesn't support `2to3`.
Therefore, a `setuptools` dependency relying on `lib2to3` fails with this message:

```plaintext
error in <dependency name> setup command: use_2to3 is invalid
```

To work around this error, downgrade the analyzer's version of `setuptools` (for example, `v57.5.0`):

```yaml
gemnasium-python-dependency_scanning:
  before_script:
    - pip install setuptools==57.5.0
```

## Dependency Scanning of projects using psycopg2 fails with `pg_config executable not found` error

Scanning a Python project that depends on `psycopg2` can fail with this message:

```plaintext
Error: pg_config executable not found.
```

[psycopg2](https://pypi.org/project/psycopg2/) depends on the `libpq-dev` Debian package, which is not installed in the `gemnasium-python` Docker image. To work around this error, install the `libpq-dev` package in a `before_script`:

```yaml
gemnasium-python-dependency_scanning:
  before_script:
    - apt-get update && apt-get install -y libpq-dev
```

## `NoSuchOptionException` when using `poetry config http-basic` with `CI_JOB_TOKEN`

This error can occur when the automatically generated `CI_JOB_TOKEN` starts with a hyphen (`-`). To avoid this error, follow [Poetry's configuration advice](https://python-poetry.org/docs/repositories/#configuring-credentials).

## Error: project has unresolved dependencies

The following error messages indicate a Gradle dependency resolution issue caused by your `build.gradle` or `build.gradle.kts` file:

- `Project has <number> unresolved dependencies` (GitLab 16.7 to 16.9)
- `project has unresolved dependencies: ["dependency_name:version"]` (GitLab 17.0 and later)

In GitLab 16.7 to 16.9, `gemnasium-maven` cannot continue processing when an unresolved dependency is encountered. In GitLab 17.0 and later, `gemnasium-maven` supports the `DS_GRADLE_RESOLUTION_POLICY` environment variable, which you can use to control how unresolved dependencies are handled. By default, the scan fails when unresolved dependencies are encountered.
However, you can set the environment variable `DS_GRADLE_RESOLUTION_POLICY` to `"none"` to allow the scan to continue and produce partial results.

Consult the [Gradle dependency resolution documentation](https://docs.gradle.org/current/userguide/dependency_resolution.html) for guidance on fixing your `build.gradle` file. For more details, refer to [issue 482650](https://gitlab.com/gitlab-org/gitlab/-/issues/482650).

Additionally, there is a known issue in `Kotlin 2.0.0` affecting dependency resolution, which is scheduled to be fixed in `Kotlin 2.0.20`. For more information, refer to [this issue](https://github.com/gradle/github-dependency-graph-gradle-plugin/issues/140#issuecomment-2230255380).

## Setting build constraints when scanning Go projects

Dependency scanning runs in a `linux/amd64` container. As a result, the build list generated for a Go project contains dependencies that are compatible with this environment. If your deployment environment is not `linux/amd64`, the final list of dependencies might contain additional incompatible modules. The dependency list might also omit modules that are only compatible with your deployment environment.

To prevent this issue, you can configure the build process to target the operating system and architecture of the deployment environment by setting the `GOOS` and `GOARCH` [environment variables](https://go.dev/ref/mod#minimal-version-selection) in your `.gitlab-ci.yml` file. For example:

```yaml
variables:
  GOOS: "darwin"
  GOARCH: "arm64"
```

You can also supply build tag constraints by using the `GOFLAGS` variable:

```yaml
variables:
  GOFLAGS: "-tags=test_feature"
```

## Dependency Scanning of Go projects returns false positives

The `go.sum` file contains an entry of every module that was considered while generating the project's [build list](https://go.dev/ref/mod#glos-build-list).
Multiple versions of a module are included in the `go.sum` file, but the [MVS](https://go.dev/ref/mod#minimal-version-selection) algorithm used by `go build` only selects one. As a result, when dependency scanning uses `go.sum`, it might report false positives.

To prevent false positives, Gemnasium only uses `go.sum` if it is unable to generate the build list for the Go project. If `go.sum` is selected, a warning occurs:

```shell
[WARN] [Gemnasium] [2022-09-14T20:59:38Z] ▶ Selecting "go.sum" parser for "/test-projects/gitlab-shell/go.sum". False positives may occur. See https://gitlab.com/gitlab-org/gitlab/-/issues/321081.
```

## `Host key verification failed` when trying to use `ssh`

After installing `openssh-client` on any `gemnasium` image, using `ssh` might lead to a `Host key verification failed` message. This can occur if you use `~` to represent the user directory during setup, due to setting `$HOME` to `/tmp` when building the image. This issue is described in [Cloning project over SSH fails when using `gemnasium-python` image](https://gitlab.com/gitlab-org/gitlab/-/issues/374571). `openssh-client` expects to find `/root/.ssh/known_hosts`, but this path does not exist; `/tmp/.ssh/known_hosts` exists instead. This has been resolved in `gemnasium-python`, where `openssh-client` is pre-installed, but the issue could occur when installing `openssh-client` from scratch on other images.

To resolve this, you may either:

1. Use absolute paths (`/root/.ssh/known_hosts` instead of `~/.ssh/known_hosts`) when setting up keys and hosts.
1. Add `UserKnownHostsFile` to your `ssh` config specifying the relevant `known_hosts` files, for example: `echo 'UserKnownHostsFile /tmp/.ssh/known_hosts' >> /etc/ssh/ssh_config`.

## `ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE`

This error occurs when the hash for a package in a `requirements.txt` file does not match the hash of the downloaded package.
As a security measure, `pip` assumes that the package has been tampered with and refuses to install it. To remediate this, ensure that the hash contained in the requirements file is correct. For requirements files generated by [`pip-compile`](https://pip-tools.readthedocs.io/en/stable/), run `pip-compile --generate-hashes` to ensure that the hashes are up to date. If using a `Pipfile.lock` generated by [`pipenv`](https://pipenv.pypa.io/), run `pipenv verify` to verify that the lock file contains the latest package hashes.

## `ERROR: In --require-hashes mode, all requirements must have their versions pinned with ==`

This error occurs if the requirements file was generated on a different platform than the one used by the GitLab Runner. Support for targeting other platforms is tracked in [issue 416376](https://gitlab.com/gitlab-org/gitlab/-/issues/416376).

## Editable flags can cause dependency scanning for Python to hang

If you use the [`-e/--editable`](https://pip.pypa.io/en/stable/cli/pip_install/#install-editable) flag in the `requirements.txt` file to target the current directory, you might encounter an issue that causes the Gemnasium Python dependency scanner to hang when it runs `pip3 download`. This command is required to build the target project. To resolve this issue, don't use the `-e/--editable` flag when you run dependency scanning for Python.

## Handling out of memory errors with SBT

If you encounter out of memory errors with SBT while using dependency scanning on a Scala project, you can address this by setting the [`SBT_CLI_OPTS`](_index.md#analyzer-specific-settings) environment variable. An example configuration is:

```yaml
variables:
  SBT_CLI_OPTS: "-J-Xmx8192m -J-Xms4192m -J-Xss2M"
```

If you're using the Kubernetes executor, you may need to override the default Kubernetes resource settings.
Refer to the [Kubernetes executor documentation](https://docs.gitlab.com/runner/executors/kubernetes/#overwrite-container-resources) for details on how to adjust container resources to prevent memory issues.

## No `package-lock.json` file in NPM projects

By default, the Dependency Scanning job runs only when there is a `package-lock.json` file in the repository. However, some NPM projects generate the `package-lock.json` file during the build process, instead of storing it in the Git repository. To scan dependencies in these projects:

1. Generate the `package-lock.json` file in a build job.
1. Store the generated file as an artifact.
1. Modify the Dependency Scanning job to use the artifact and adjust its rules.

For example, your configuration might look like this:

```yaml
include:
  - template: Dependency-Scanning.gitlab-ci.yml

build:
  script:
    - npm i
  artifacts:
    paths:
      - package-lock.json # Store the generated package-lock.json as an artifact

gemnasium-dependency_scanning:
  needs: ["build"]
  rules:
    - if: "$DEPENDENCY_SCANNING_DISABLED == 'true' || $DEPENDENCY_SCANNING_DISABLED == '1'"
      when: never
    - if: "$DS_EXCLUDED_ANALYZERS =~ /gemnasium([^-]|$)/"
      when: never
    - if: $CI_COMMIT_BRANCH && $GITLAB_FEATURES =~ /\bdependency_scanning\b/ && $CI_GITLAB_FIPS_MODE == "true"
      variables:
        DS_IMAGE_SUFFIX: "-fips"
        DS_REMEDIATE: 'false'
    - if: "$CI_COMMIT_BRANCH && $GITLAB_FEATURES =~ /\\bdependency_scanning\\b/"
```

## No Dependency Scanning job added to the pipeline

The Dependency Scanning job uses rules to check if either lockfiles with dependencies or build-tool related files exist. If none of these files are detected, the job is not added to the pipeline, even if the lockfile is generated by another job in the pipeline. If you experience this situation, ensure your repository contains [a supported file](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning#supported-files), or a file indicating that a supported file is generated at runtime.
Consider whether such files can be added to your repository to trigger the Dependency Scanning job. If you believe that your repository does contain such files and the job is still not triggered, [open an issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new) with the following information:

- The language and build tool you use.
- What kind of lockfile you provide and where it gets generated.

You can also contribute directly to the [Dependency Scanning template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Dependency-Scanning.latest.gitlab-ci.yml#L269-270).

## Dependency Scanning fails with `gradlew: permission denied`

The `permission denied` error on `gradlew` typically indicates that `gradlew` was checked into the repository without the executable bit set. The error might appear in your job with this message:

```plaintext
[FATA] [gemnasium-maven] [2024-11-14T21:55:59Z] [/go/src/app/cmd/gemnasium-maven/main.go:65] ▶ fork/exec /builds/path/to/gradlew: permission denied
```

Make the file executable by running `chmod +x gradlew` locally and pushing it to your Git repository.

## Dependency Scanning scanner is no longer `Gemnasium`

Historically, the scanner used by Dependency Scanning is `Gemnasium`, and this is what users see on the [vulnerability page](../vulnerabilities/_index.md). With the rollout of [Dependency Scanning by using SBOM](dependency_scanning_sbom/_index.md), we are replacing the `Gemnasium` scanner with the built-in `GitLab SBoM Vulnerability Scanner`. This new scanner is no longer executed in a CI/CD job but rather within the GitLab platform. While the two scanners are expected to provide the same results, because the SBOM scan happens after the existing Dependency Scanning CI/CD job, existing vulnerabilities have their scanner value updated with the new `GitLab SBoM Vulnerability Scanner`.
As we move forward with the rollout and ultimately replace the existing Gemnasium analyzer, the `GitLab SBoM Vulnerability Scanner` will be the only expected value for the GitLab built-in Dependency Scanning feature.

## Dependency List for project not being updated based on latest SBOM

When a pipeline has a failing job that would generate an SBOM, the `DeleteNotPresentOccurrencesService` does not execute, which prevents the dependency list from being changed or updated. This can occur even if there are other successful jobs that upload an SBOM, and the pipeline overall is successful. This is designed to prevent accidentally removing dependencies from the dependency list when related security scanning jobs fail.

If the project dependency list is not updating as expected, check for any SBOM-related jobs that may have failed in the pipeline, and fix them or remove them.
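If a persistently failing analyzer job is not needed for your project, one way to remove it is to exclude the analyzer, so its failure no longer blocks the dependency list update. A minimal sketch, assuming (hypothetically) that the failing job comes from the `gemnasium-python` analyzer:

```yaml
include:
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml

variables:
  # Hypothetical example: skip the gemnasium-python analyzer job so a
  # persistent failure no longer blocks the dependency list update.
  DS_EXCLUDED_ANALYZERS: "gemnasium-python"
```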
---
stage: Application Security Testing
group: Composition Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Static reachability analysis
---
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
- Status: Beta

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/14177) as an [experiment](../../../policy/development_stages_support.md) in GitLab 17.5.
- [Changed](https://gitlab.com/groups/gitlab-org/-/epics/15781) from experiment to beta in GitLab 17.11.
- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/502334) support for JavaScript and TypeScript in GitLab 18.2 and Dependency Scanning Analyzer v0.32.0.

{{< /history >}}

Static reachability analysis (SRA) helps you prioritize remediation of vulnerabilities in dependencies. SRA identifies which dependencies your application actually uses. While dependency scanning finds all vulnerable dependencies, SRA focuses on those that are reachable and pose higher security risks, helping you prioritize remediation based on actual threat exposure.

## Getting started

If you are new to static reachability analysis, the following steps show how to enable it for your project.

Prerequisites:

- Ensure the project uses [supported languages and package managers](#supported-languages-and-package-managers).
- [Dependency Scanning analyzer](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning) version 0.32.0 or later.
- Enable [Dependency Scanning by using SBOM](dependency_scanning_sbom/_index.md#getting-started). [Gemnasium](https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium) analyzers are not supported.
- Language-specific prerequisites:
  - For Python, follow the [pip](dependency_scanning_sbom/_index.md#pip) or [pipenv](dependency_scanning_sbom/_index.md#pipenv) related instructions for dependency scanning using SBOM. You can also use any other Python package manager that is [supported](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning#supported-files) by the DS analyzer.
  - For JavaScript and TypeScript, ensure your repository has lock files [supported](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning#supported-files) by the DS analyzer.

Exclusions:

- SRA cannot be used together with either a scan execution policy or pipeline execution policy.

To enable SRA:

- On the left sidebar, select **Search or go to** and find your project.
- Edit the `.gitlab-ci.yml` file, and add one of the following.

  If you're using the CI/CD template, add the following (ensure there is only one `variables:` line):

  ```yaml
  variables:
    DS_STATIC_REACHABILITY_ENABLED: true
  ```

  If you're using the [Dependency Scanning component](https://gitlab.com/components/dependency-scanning), add the following (ensure there is only one `include:` line):

  ```yaml
  include:
    - component: ${CI_SERVER_FQDN}/components/dependency-scanning/main@0
      inputs:
        enable_static_reachability: true
      rules:
        - if: $CI_SERVER_HOST == "gitlab.com"
  ```

At this point, SRA is enabled in your pipeline. When dependency scanning runs and outputs an SBOM, the results are supplemented by static reachability analysis.

## Understanding the results

To identify vulnerable dependencies that are reachable, either:

- In the vulnerability report, hover over the **Severity** value of a vulnerability.
- In a vulnerability's details page, check the **Reachable** value.
- Use a GraphQL query to list those vulnerabilities that are reachable.

A dependency can have one of the following reachability values:

Yes
: The package linked to this vulnerability is confirmed reachable in code.

Not Found
: SRA ran successfully but did not detect usage of the vulnerable package. If a vulnerable dependency's reachability value is shown as **Not Found**, exercise caution rather than completely dismissing it, because the beta version of SRA may produce false negatives.

Not Available
: SRA was not executed, so no reachability data exists.
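For the GraphQL option, a query along the following lines can list reachable vulnerabilities. This is a sketch: the `reachability` filter argument and the `IN_USE` enum value on the project `vulnerabilities` field are assumptions here, so verify the exact names against your instance's GraphQL schema:

```graphql
{
  project(fullPath: "<full-path-to-project>") {
    # Assumed filter: return only vulnerabilities whose package is reachable.
    vulnerabilities(reachability: IN_USE) {
      nodes {
        title
        severity
        reachability
      }
    }
  }
}
```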
When a direct dependency is marked as **in use**, all its transitive dependencies are also marked as **in use**.

## Supported languages and package managers

Static reachability analysis is available only for Python, JavaScript, and TypeScript projects. Frontend frameworks are not supported.

SRA supplements the SBOMs generated by the new dependency scanning analyzer and so supports the same package managers. If a package manager without dependency graph support is used, all indirect dependencies are marked as [not found](#understanding-the-results).

| Language              | Supported package managers                  | Supported file suffix |
|-----------------------|---------------------------------------------|-----------------------|
| Python<sup>1</sup>    | `pip`, `pipenv`<sup>2</sup>, `poetry`, `uv` | `.py`                 |
| JavaScript/TypeScript | `npm`, `pnpm`, `yarn`                       | `.js`, `.ts`          |

**Footnotes**:

1. When using Dependency Scanning with `pipdeptree`, [optional dependencies](https://setuptools.pypa.io/en/latest/userguide/dependency_management.html#optional-dependencies) are marked as direct dependencies instead of as transitive dependencies. Static reachability analysis might not identify those packages as in use. For example, requiring `passlib[bcrypt]` may result in `passlib` being marked as `in_use` and `bcrypt` as `not_found`. For more details, see [pip](dependency_scanning_sbom/_index.md#pip).
1. For Python `pipenv`, static reachability analysis doesn't support `Pipfile.lock` files. Support is available only for `pipenv.graph.json` because it supports a dependency graph.
Static reachability analysis checks each dependency in the SBOM report and adds a reachability value to the SBOM report. The enriched SBOM is then ingested by the GitLab instance. The following are marked as not found: - Dependencies that are found in the project's lock files but are not imported in the code. - Tools that are included in the project's lock files for local usage but are not imported in the code. For example, tools such as coverage testing or linting packages are marked as not found even if used locally.
--- stage: Application Security Testing group: Composition Analysis info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Static reachability analysis breadcrumbs: - doc - user - application_security - dependency_scanning --- {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated - Status: Beta {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/14177) as an [experiment](../../../policy/development_stages_support.md) in GitLab 17.5. - [Changed](https://gitlab.com/groups/gitlab-org/-/epics/15781) from experiment to beta in GitLab 17.11. - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/502334) support for JavaScript and TypeScript in GitLab 18.2 and Dependency Scanning Analyzer v0.32.0. {{< /history >}} Static reachability analysis (SRA) helps you prioritize remediation of vulnerabilities in dependencies. SRA identifies which dependencies your application actually uses. While dependency scanning finds all vulnerable dependencies, SRA focuses on those that are reachable and pose higher security risks, helping you prioritize remediation based on actual threat exposure. ## Getting started If you are new to static reachability analysis, the following steps show how to enable it for your project. Prerequisites: - Ensure the project uses [supported languages and package managers](#supported-languages-and-package-managers). - [Dependency Scanning analyzer](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning) version 0.32.0 and later. - Enable [Dependency Scanning by using SBOM](dependency_scanning_sbom/_index.md#getting-started). [Gemnasium](https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium) analyzers are not supported. 
- Language-specific prerequisites: - For Python, follow the [pip](dependency_scanning_sbom/_index.md#pip) or [pipenv](dependency_scanning_sbom/_index.md#pipenv) related instructions for dependency scanning using SBOM. You can also use any other Python package manager that is [supported](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning#supported-files) by the DS analyzer. - For JavaScript and TypeScript, ensure your repository has lock files [supported](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning#supported-files) by the DS analyzer. Exclusions: - SRA cannot be used together with either a scan execution policy or pipeline execution policy. To enable SRA: - On the left sidebar, select **Search or go to** and find your project. - Edit the `.gitlab-ci.yml` file, and add one of the following. If you're using the CI/CD template, add the following (ensure there is only one `variables:` line): ```yaml variables: DS_STATIC_REACHABILITY_ENABLED: true ``` If you're using the [Dependency Scanning component](https://gitlab.com/components/dependency-scanning), add the following (ensuring there is only one `include:` line.): ```yaml include: - component: ${CI_SERVER_FQDN}/components/dependency-scanning/main@0 inputs: enable_static_reachability: true rules: - if: $CI_SERVER_HOST == "gitlab.com" ``` At this point, SRA is enabled in your pipeline. When dependency scanning runs and outputs an SBOM, the results are supplemented by static reachability analysis. ## Understanding the results To identify vulnerable dependencies that are reachable, either: - In the vulnerability report, hover over the **Severity** value of a vulnerability. - In a vulnerability's details page, check the **Reachable** value. - Use a GraphQL query to list those vulnerabilities that are reachable. A dependency can have one of the following reachability values: Yes : The package linked to this vulnerability is confirmed reachable in code. 
Not Found
: SRA ran successfully but did not detect usage of the vulnerable package. If a vulnerable dependency's reachability value is shown as **Not Found**, exercise caution rather than completely dismissing it, because the beta version of SRA may produce false negatives.

Not Available
: SRA was not executed, so no reachability data exists.

When a direct dependency is marked as **in use**, all its transitive dependencies are also marked as **in use**.

## Supported languages and package managers

Static reachability analysis is available only for Python, JavaScript, and TypeScript projects. Frontend frameworks are not supported.

SRA supplements the SBOMs generated by the new dependency scanner analyzer and so supports the same package managers. If a package manager without dependency graph support is used, all indirect dependencies are marked as [not found](#understanding-the-results).

| Language              | Supported package managers                  | Supported file suffix |
|-----------------------|---------------------------------------------|-----------------------|
| Python<sup>1</sup>    | `pip`, `pipenv`<sup>2</sup>, `poetry`, `uv` | `.py`                 |
| JavaScript/TypeScript | `npm`, `pnpm`, `yarn`                       | `.js`, `.ts`          |

**Footnotes**:

1. When using Dependency Scanning with `pipdeptree`, [optional dependencies](https://setuptools.pypa.io/en/latest/userguide/dependency_management.html#optional-dependencies) are marked as direct dependencies instead of as transitive dependencies. Static reachability analysis might not identify those packages as in use. For example, requiring `passlib[bcrypt]` may result in `passlib` being marked as `in_use` and `bcrypt` being marked as `not_found`. For more details, see [pip](dependency_scanning_sbom/_index.md#pip).
1. For Python `pipenv`, static reachability analysis doesn't support `Pipfile.lock` files. Support is available only for `pipenv.graph.json` because it supports a dependency graph.
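Bringing the getting-started snippets together, a minimal template-based `.gitlab-ci.yml` might look like the following sketch. Which template you include (stable or `.latest`) depends on your setup; this is an illustration, not a required layout:

```yaml
# Sketch: dependency scanning via the CI/CD template, with static
# reachability analysis enabled through the documented opt-in variable.
include:
  - template: Jobs/Dependency-Scanning.latest.gitlab-ci.yml

variables:
  DS_STATIC_REACHABILITY_ENABLED: true
```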
## Running SRA in an offline environment

To use the dependency scanning component in an offline environment, you must first [mirror the component project](../../../ci/components/_index.md#use-a-gitlabcom-component-on-gitlab-self-managed).

## How static reachability analysis works

Dependency scanning generates an SBOM report that identifies all components and their transitive dependencies. Static reachability analysis checks each dependency in the SBOM report and adds a reachability value to it. The enriched SBOM is then ingested by the GitLab instance.

The following are marked as not found:

- Dependencies that are found in the project's lock files but are not imported in the code.
- Tools that are included in the project's lock files for local usage but are not imported in the code. For example, tools such as coverage testing or linting packages are marked as not found even if used locally.
https://docs.gitlab.com/user/application_security/experiment_libbehave_dependency
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/experiment_libbehave_dependency.md
2025-08-13

---
stage: Application Security Testing
group: Composition Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Analyze dependency for behaviors
description: Libbehave scans new dependencies added in merge requests for risky behaviors and assigns each behavior a risk score. Results are shown in the job output, merge request comments, and job artifacts.
breadcrumbs:
- doc
- user
- application_security
- dependency_scanning
---
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
- Status: Experiment

{{< /details >}}

Libbehave is an experimental feature that scans your dependencies during merge request pipelines to identify newly added libraries and their potentially risky behaviors. While traditional dependency scanning looks for known vulnerabilities, Libbehave gives insight into what features and behaviors your dependencies exhibit.

Each feature detected by Libbehave is assigned one of the following "riskiness" scores:

- Informational: No risk, but may assist in cataloguing features of a dependency (for example, uses JSON).
- Low: Small risk. Can highlight that a dependency is doing a security-sensitive action, such as using encryption.
- Medium: Moderate risk. The behavior can be used to interact with the file system or read environment variables where sensitive data may be stored or accessed.
- High: Highest risk. These behaviors are commonly abused in security vulnerabilities, such as executing OS commands or dynamically evaluating code.

Features that Libbehave detects include:

- Executing OS commands
- Executing dynamic code (eval)
- Reading/writing files
- Opening network sockets
- Reading/expanding archives (ZIP/tar/Gzip)
- Interacting with external services by using HTTP clients, Redis, Elastic Cache, relational database management system (RDBMS) servers, SSH, Git
- Serializing data in various formats: XML, YAML, MessagePack, Protocol Buffers, JSON, and language-specific formats
- Templating
- Popular frameworks
- Upload/download of files

For demos of Libbehave for each supported package manager type, see [our Libbehave demo projects](https://gitlab.com/gitlab-org/security-products/demos/experiments/libbehave).
## Supported languages and package managers

The following languages and package managers are supported by Libbehave:

- C# ([NuGet](https://www.nuget.org/))
  - Reads `Directory.Build.props` files (replacing property values if found)
  - Reads `*.deps.json` files
  - Reads `**/*.dll` and `**/*.exe` files
- Go
  - Reads `go.mod` files
- Java ([Maven](https://maven.org))
  - Reads `pom.xml` files (replacing property values if found)
  - Reads `**/gradle.lockfile*` files
- JavaScript/TypeScript ([npmjs](https://npmjs.com))
  - Reads `**/package-lock.json` files
  - Reads `**/yarn.lock` files
  - Reads `**/pnpm-lock.yaml` files
- Python ([pypi](https://pypi.org))
  - Reads `**/*requirements*.txt` files
  - Reads `**/poetry.lock` files
  - Reads `**/Pipfile.lock` files
  - Reads `**/setup.py` files
  - Reads packages in egg or wheel installation directories:
    - Reads `**/*dist-info/METADATA`, `**/*egg-info/PKG-INFO`, `**/*DIST-INFO/METADATA`, and `**/*EGG-INFO/PKG-INFO` files
- PHP ([Composer/Packagist](https://packagist.org/))
  - Reads `**/installed.json` files
  - Reads `**/composer.lock` files
  - Reads `**/php/.registry/.channel.*/*.reg` files
- Ruby ([Rubygems](https://rubygems.org))
  - Reads `**/Gemfile.lock` files
  - Reads `**/specifications/**/*.gemspec` files
  - Reads `**/*.gemspec` files

The previous files are analyzed for new dependencies only if the files have been modified in the source branch.

## Configuration

Prerequisites:

- Pipeline is part of an active [merge request pipeline](../../../ci/pipelines/merge_request_pipelines.md) that has a defined source and target Git branch.
- Project includes one of the [supported languages](#supported-languages-and-package-managers).
- Project is adding new dependencies to the source or feature branch.
- For merge request (MR) comments, ensure a Guest level [project access token](../../project/settings/project_access_tokens.md) exists, and the source branch is either a protected branch or the **Protect variable** CI/CD variable [option is unchecked](../../../ci/variables/_index.md#for-a-project).

Libbehave is exposed through [CI/CD components](../../../ci/components/_index.md). To enable it, configure your project's `.gitlab-ci.yml` file as follows:

```yaml
include:
  - component: $CI_SERVER_FQDN/security-products/experiments/libbehave/libbehave@v0.1.0
    inputs:
      stage: test
```

The previous configuration enables the Libbehave CI component for the test stage. This creates a new job called `libbehave-experiment`.

### Configuring MR comments

To configure MR comments for Libbehave:

1. Create a [project access token](../../project/settings/project_access_tokens.md) with the following attributes:
   - Guest level access
   - Enter a name for the token, for example, `libbehave-bot`.
   - Select the scope `api`.

   Copy the project access token to your clipboard. It's required in the next step.
1. Add the token as a [project CI/CD variable](../../../ci/variables/_index.md):
   - Set **Visibility** to "Masked".
   - Uncheck the "Protect variable" option under **Flags**, to allow access from non-protected branches.
   - Set the key variable name to `BEHAVE_TOKEN`.
   - Set the value to your newly created project access token.
1. The CI/CD component automatically uses the `BEHAVE_TOKEN`, so you do not need to specify it in the component inputs.

   ```yaml
   include:
     - component: gitlab.com/security-products/experiments/libbehave/libbehave@v0.1.0
       inputs:
         stage: test
   ```

With this configuration, Libbehave can create MR comments with the analysis results.

### Available CI/CD inputs and variables

You can use CI/CD variables to customize the [CI component](https://gitlab.com/security-products/experiments/libbehave) of Libbehave. The following variables configure how Libbehave runs.
| CI/CD variable                        | CLI Argument | Default | Description                                                          |
|---------------------------------------|--------------|---------|----------------------------------------------------------------------|
| `CI_MERGE_REQUEST_SOURCE_BRANCH_NAME` | `-source`    | `""`    | Source branch to diff against (for example, feature-branch)          |
| `CI_MERGE_REQUEST_TARGET_BRANCH_NAME` | `-target`    | `""`    | Target branch to diff against (for example, main)                    |
| `BEHAVE_TIMEOUT`                      | `-timeout`   | `"30m"` | Maximum time allowed to analyze and download packages (example: 30m) |
| `BEHAVE_TOKEN`                        | `-token`     | `""`    | Optional. Access token (required to create an MR comment)            |
| `CI_PROJECT_ID`                       | `-project`   | `""`    | Optional. Project ID to create MR note with results                  |
| `CI_MERGE_REQUEST_IID`                | `-mrid`      | `""`    | Optional. Merge request ID to create MR note with results            |

The following flags are available, but are untested and should be left at their default values:

| CI/CD variable         | CLI Argument     | Default       | Description |
|------------------------|------------------|---------------|-------------|
| `BEHAVE_RULE_PATHS`    | `-rules`         | `"/dist"`     | The path to the rule files. |
| `BEHAVE_TARGET_DIR`    | `-dir`           | `""`          | The target directory to run behave against. |
| `BEHAVE_NO_GIT_IGNORE` | `-no-git-ignore` | `true`        | Whether to scan files in `.gitignore`. Providing the argument skips them; by default they are scanned. |
| `BEHAVE_OUTPUT_PATH`   | `-output`        | `"behaveout"` | The path to store scan results, extracted artifacts, and report results. |
| `BEHAVE_INCLUDE_LANG`  | `-include-lang`  | `""`          | Include a language, one of: `csharp`, `go`, `java`, `js`, `php`, `python`, or `ruby`, separated by ','; excludes all others not specified. |
| `BEHAVE_EXCLUDE_LANG`  | `-exclude-lang`  | `""`          | Exclude a language, one of: `csharp`, `go`, `java`, `js`, `php`, `python`, or `ruby`, separated by ','; includes all others not specified. |
| `BEHAVE_EXCLUDE_FILES` | `-exclude-`      | `""`          | Exclude files or paths by regular expressions; individual regular expressions are separated by ','. |

Because we have not tested all variables, you may find some work and others do not. If one does not work and you need it, we suggest [submitting a feature request](https://gitlab.com/gitlab-org/gitlab/-/issues/new?issuable_template=Feature%20proposal%20-%20detailed&issue[title]=Docs%20feedback%20-%20feature%20proposal:%20Write%20your%20title) or contributing to the code to enable it.

## Dependency detection and analysis

Libbehave analyzes and reports findings on any newly added dependencies and is meant to run in [merge request pipelines](../../../ci/pipelines/merge_request_pipelines.md). That means if your merge request does not include any new dependencies, Libbehave returns zero results.

Detection works differently depending on the language and package manager used. By default, the supported package managers have their package-manager-related files parsed to identify which dependencies are being added. This information is gathered and then used to call out to the respective package manager API to download the identified package's artifacts. After they're downloaded, the dependencies are extracted and analyzed using static analysis methods based on Semgrep, with a configured set of checks. In the case of Java and C#, an additional step is taken to decompile the binary artifacts prior to running static analysis.

### Known issues

Each language has its own known issues. All package files such as `Gemfile.lock` and `requirements.txt` must provide explicit versions. Version ranges are not supported.

<!-- markdownlint-disable MD003 -->
<!-- markdownlint-disable MD020 -->

#### C\#

<!-- markdownlint-disable MD020 -->
<!-- markdownlint-enable MD003 -->

- Property or variable replacement in `.props` or `.csproj` files does not account for nested project files.
  It replaces any variable that matches a global set of extracted variables and their values.
- Decompiles downloaded dependencies, so source-to-line translation may not be 1:1.
- Libbehave decompiles all .NET versions that exist in a NuGet package. This may be optimized in the future.
  - For example, some dependencies will package multiple DLLs in a single archive targeting different framework versions (example: net20/Some.dll, net45/Some.dll).

#### Java

- Does not support [inheritance](https://maven.apache.org/pom.html#inheritance) for `pom.xml` files.
- Only supports Maven, not custom JFrog or other artifact repositories.
- Decompiles downloaded dependencies, so source-to-line translation may not be 1:1.

#### Python

- Attempts to download source packages from PyPI for analysis. If there is no source package, Libbehave downloads the first available `bdist_wheel` package, which may not match the target OS.

## Output

Libbehave produces the following output:

- **Job summary**: The summary of findings is output directly into the CI/CD job console for a quick view of which features were detected in a dependency.
- **MR comment summary**: The summary of findings is output as an MR comment note for easier review. This requires an access token to be configured to give the job access to write to the MR note section.
- **HTML artifact**: An HTML artifact that contains a searchable set of libraries and identified features, as well as the exact lines of code that triggered the finding.

### Job summary

The job summary requires no extra configuration and is always presented after a successful analysis.

Example of what the Job Summary output looks like:

```plaintext
# Job output #

[=== libbehave: New packages detected ===]

🔺 4 new packages have been detected in this MR.
[= java - open-vulnerability-clients 6.1.7 =]
The https://mvnrepository.com/artifact/io.github.jeremylong/open-vulnerability-clients package was found to exhibit the following behaviors:

- 🟧 GzipReadArchive (Risk: Medium)

-----------------

[= java - jdiagnostics 1.0.7 =]
The https://mvnrepository.com/artifact/org.anarres.jdiagnostics/jdiagnostics package was found to exhibit the following behaviors:

- 🟥 CryptoMD5 (Risk: High)
- 🟧 WriteFile (Risk: Medium)
- 🟧 ReadFile (Risk: Medium)
- 🟧 ReadEnvVars (Risk: Medium)

-----------------

[= java - commons-dbcp2 2.12.0 =]
The https://mvnrepository.com/artifact/org.apache.commons/commons-dbcp2 package was found to exhibit the following behaviors:

- 🟥 JavaObjectSerialization (Risk: High)
- 🟧 Passwords (Risk: Medium)

-----------------

[= java - jmockit 1.49 =]
The https://mvnrepository.com/artifact/org.jmockit/jmockit package was found to exhibit the following behaviors:

- 🟥 JavaObjectSerialization (Risk: High)
- 🟧 WriteFile (Risk: Medium)
- 🟧 ReadFile (Risk: Medium)
- 🟨 CryptoRAND (Risk: Low)

-----------------
```

### MR comment summary

The MR Comment Summary output requires an access token with Guest level access to be created for the project that the Libbehave component has been configured for. The access token should then be [configured for the project](../../../ci/variables/_index.md#for-a-project). Because feature branches are not protected by default, ensure the **Protect variable** setting is unchecked, otherwise the Libbehave job cannot read the access token's value.
![HTML Artifact Summary output](img/libbehave_html_artifact_v17_4.png) ## Offline environment (not supported) Libbehave does not work in offline environments as it pulls down dependencies directly from the various package managers. ## Troubleshooting ### Job is not run If the Libbehave job is not run, ensure your project is configured to run [merge request pipelines](../../../ci/pipelines/merge_request_pipelines.md). ### MR comment is not being added This is usually due to the `BEHAVE_TOKEN` not being set. Ensure the access token has Guest level access and the **Protect variable** option is unchecked in the **Settings > CI/CD** variables settings. #### I'm getting error "{401 Permission Denied}" This is usually due to the `BEHAVE_TOKEN` not containing the correct value. Ensure the access token has Guest level access.
# Migrating to Dependency Scanning using SBOM
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- The legacy [Dependency Scanning feature based on the Gemnasium analyzer](_index.md) was [deprecated](../../../update/deprecations.md#dependency-scanning-upgrades-to-the-gitlab-sbom-vulnerability-scanner) in GitLab 17.9 and is planned for removal in 19.0.

{{< /history >}}

The Dependency Scanning feature is upgrading to the GitLab SBOM Vulnerability Scanner. As part of this change, the [Dependency Scanning using SBOM](dependency_scanning_sbom/_index.md) feature and the [new Dependency Scanning analyzer](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning) replace the legacy Dependency Scanning feature based on the Gemnasium analyzer. However, because this transition introduces significant changes, it is not implemented automatically, and this document serves as a migration guide.

Follow this migration guide if you use GitLab Dependency Scanning and any of the following conditions apply:

- The Dependency Scanning CI/CD jobs are configured by including one of the Dependency Scanning CI/CD templates:

  ```yaml
  include:
    - template: Jobs/Dependency-Scanning.gitlab-ci.yml
    - template: Jobs/Dependency-Scanning.latest.gitlab-ci.yml
  ```

- The Dependency Scanning CI/CD jobs are configured by using [Scan Execution Policies](../policies/scan_execution_policies.md).
- The Dependency Scanning CI/CD jobs are configured by using [Pipeline Execution Policies](../policies/pipeline_execution_policies.md).

## Understand the changes

Before you migrate your project to Dependency Scanning using SBOM, you should understand the fundamental changes being introduced. The transition represents a technical evolution, a new approach to how Dependency Scanning works in GitLab, and various improvements to the user experience, including but not limited to the following:

- Increased language support.
  The deprecated Gemnasium analyzers are constrained to a small subset of Python and Java versions. The new analyzer gives organizations the flexibility to use older versions of these toolchains with older projects, and the option to try newer versions without waiting for a major update to the analyzer's image. Additionally, the new analyzer benefits from increased [file coverage](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning#supported-files).
- Increased performance.

  Depending on the application, builds invoked by the Gemnasium analyzers can last almost an hour and duplicate work done elsewhere in the pipeline. The new analyzer no longer invokes build systems directly. Instead, it reuses previously defined build jobs to improve overall scan performance.
- Smaller attack surface.

  To support its build capabilities, the Gemnasium analyzers are preloaded with a variety of dependencies. The new analyzer removes many of these dependencies, which results in a smaller attack surface.
- Simpler configuration.

  The deprecated Gemnasium analyzers frequently require the configuration of proxies, Certificate Authority (CA) certificate bundles, and various other utilities to function correctly. The new solution removes many of these requirements, resulting in a robust tool that is simpler to configure.

### A new approach to security scanning

When using the legacy Dependency Scanning feature, all scanning work happens within your CI/CD pipeline. When running a scan, the Gemnasium analyzer handles two critical tasks simultaneously: it identifies your project's dependencies and immediately performs a security analysis of those dependencies, all within the same CI/CD job.

The Dependency Scanning using SBOM approach separates these tasks into two distinct phases:

- First, when you run the new Dependency Scanning analyzer in the CI/CD pipeline, it focuses solely on creating a comprehensive inventory of your project's dependencies.
  This inventory is captured in a CycloneDX SBOM (Software Bill of Materials) report.
- Second, the detected components are sent to the GitLab platform, which performs a thorough security analysis using the built-in GitLab SBOM Vulnerability Scanner. You already benefit from this scanner with the [Continuous Vulnerability Scanning](../continuous_vulnerability_scanning/_index.md) feature.

This separation of concerns brings several advantages for future enhancements, but it also means some changes are necessary because the security analysis happens outside the CI/CD pipeline. This impacts the availability of some functionality that depends on the security analysis running in the CI/CD pipeline. Review [the deprecation announcement](../../../update/deprecations.md#dependency-scanning-upgrades-to-the-gitlab-sbom-vulnerability-scanner) for a complete description.

### CI/CD configuration

To prevent disruption to your CI/CD pipelines, the new approach is not yet applied to the stable Dependency Scanning CI/CD template (`Dependency-Scanning.gitlab-ci.yml`). As of GitLab 17.9, you must use the `latest` template (`Dependency-Scanning.latest.gitlab-ci.yml`) to enable it. Other migration paths might be considered as the feature gains maturity.

The latest Dependency Scanning CI/CD template (`Dependency-Scanning.latest.gitlab-ci.yml`) still maintains backward compatibility by default. It continues to run existing Gemnasium analyzer jobs, while the new Dependency Scanning analyzer only activates for newly supported languages and package managers. You can opt in to using the new Dependency Scanning analyzer for all projects by setting the `DS_ENFORCE_NEW_ANALYZER` CI/CD variable to `true`.

If you're using [Scan Execution Policies](../policies/scan_execution_policies.md), these changes apply in the same way because they build upon the CI/CD templates.
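For example, a minimal `.gitlab-ci.yml` that opts in to the new analyzer for all supported package managers might look like the following sketch. The template path and variable name come from this guide; adapt the rest to your pipeline:

```yaml
include:
  - template: Jobs/Dependency-Scanning.latest.gitlab-ci.yml

variables:
  # Run the new Dependency Scanning analyzer for all projects,
  # instead of keeping the Gemnasium analyzer jobs as the default.
  DS_ENFORCE_NEW_ANALYZER: 'true'
```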
If you're using the [main Dependency Scanning CI/CD component](https://gitlab.com/components/dependency-scanning/-/tree/main/templates/main), you won't see any changes because it already employs the new analyzer. However, if you're using the specialized components for Android, Rust, Swift, or Cocoapods, you'll need to migrate to the main component, which now covers all supported languages and package managers.

### Build support for Java and Python

One significant change affects how dependencies are discovered, particularly for Java and Python projects. The new analyzer takes a different approach: instead of attempting to build your application to determine dependencies, it requires explicit dependency information through lockfiles or dependency graph files.

This change means you'll need to ensure these files are available, either by committing them to your repository or by generating them dynamically during the CI/CD pipeline. While this requires some initial setup, it provides more reliable and consistent results across different environments. The following sections guide you through the specific steps needed to adapt your projects to this new approach, where necessary.

### Accessing scan results (during Beta only)

{{< alert type="warning" >}}

ADDENDUM: Based on customer feedback, we have decided to reinstate the generation of the Dependency Scanning report artifact for the Generally Available release. However, it will not be available in the Beta release. See [this epic](https://gitlab.com/groups/gitlab-org/-/epics/17150) for more details.

{{< /alert >}}

<details>
<summary>See Beta behavior</summary>

When you migrate to Dependency Scanning using SBOM, you'll notice a fundamental change in how security scan results are handled. The new approach moves the security analysis out of the CI/CD pipeline and into the GitLab platform, which changes how you access and work with the results.
With the legacy Dependency Scanning feature, CI/CD jobs using the Gemnasium analyzer generate a [Dependency Scanning report artifact](../../../ci/yaml/artifacts_reports.md#artifactsreportsdependency_scanning) containing the scan results, and upload it to the platform. You can access these results in any of the ways available for job artifacts. This means you can process or modify the results within your CI/CD pipeline before they reach the GitLab platform.

The Dependency Scanning using SBOM approach works differently. The security analysis now happens within the GitLab platform using the built-in GitLab SBOM Vulnerability Scanner, so you won't find the scan results in your job artifacts anymore. Instead, GitLab analyzes the [CycloneDX SBOM report artifact](../../../ci/yaml/artifacts_reports.md#artifactsreportscyclonedx) that your CI/CD pipeline generates, creating security findings directly in the GitLab platform.

To help you transition smoothly, GitLab maintains some backward compatibility. While using the Gemnasium analyzer, you still get a standard artifact (using `artifacts:paths`) that contains the scan results. This means that if you have subsequent CI/CD jobs that need these results, they can still access them. However, keep in mind that as the GitLab SBOM Vulnerability Scanner evolves and improves, these artifact-based results won't reflect the latest enhancements.

When you're ready to fully migrate to the new Dependency Scanning analyzer, you'll need to adjust how you programmatically access scan results. Instead of reading job artifacts, you'll use the GitLab GraphQL API, specifically the [`Pipeline.securityReportFindings` resource](../../../api/graphql/reference/_index.md#pipelinesecurityreportfindings).

</details>

## Identify affected projects

Understanding which of your projects need attention for this migration is an important first step.
The most significant impact will be on your Java and Python projects, because the way they handle dependencies is changing fundamentally. To help you identify affected projects, GitLab provides the [Dependency Scanning Build Support Detection Helper](https://gitlab.com/security-products/tooling/build-support-detection-helper) tool. This tool examines your GitLab group or GitLab Self-Managed instance and identifies projects that currently use the Dependency Scanning feature with either the `gemnasium-maven-dependency_scanning` or `gemnasium-python-dependency_scanning` CI/CD jobs.

When you run this tool, it creates a comprehensive report of projects that will need your attention during the migration. Having this information early helps you plan your migration strategy effectively, especially if you manage multiple projects across your organization.

## Migrate to Dependency Scanning using SBOM

To migrate to the Dependency Scanning using SBOM method, perform the following steps for each project:

1. Remove existing customization for Dependency Scanning based on the Gemnasium analyzer.
   - If you have manually overridden the `gemnasium-dependency_scanning`, `gemnasium-maven-dependency_scanning`, or `gemnasium-python-dependency_scanning` CI/CD jobs to customize them in a project's `.gitlab-ci.yml` or in the CI/CD configuration for a Pipeline Execution Policy, remove them.
   - If you have configured any of [the impacted CI/CD variables](#changes-to-cicd-variables), adjust your configuration accordingly.
1. Enable the Dependency Scanning using SBOM feature with one of the following options:
   - Use the `latest` Dependency Scanning CI/CD template `Dependency-Scanning.latest.gitlab-ci.yml` to run the new Dependency Scanning analyzer:
     1. Ensure your `.gitlab-ci.yml` CI/CD configuration includes the latest Dependency Scanning CI/CD template.
     1. Add the CI/CD variable `DS_ENFORCE_NEW_ANALYZER` and set it to `true`.
        This variable can be set in many different places, while observing the [CI/CD variable precedence](../../../ci/variables/_index.md#cicd-variable-precedence).
     1. Adjust your project and your CI/CD configuration if needed by following the language-specific instructions below.
   - Use [Scan Execution Policies](../policies/scan_execution_policies.md) to run the new Dependency Scanning analyzer:
     1. Edit the configured scan execution policy for Dependency Scanning and ensure it uses the `latest` template.
     1. Add the CI/CD variable `DS_ENFORCE_NEW_ANALYZER` and set it to `true`. This variable can be set in many different places, while observing the [CI/CD variable precedence](../../../ci/variables/_index.md#cicd-variable-precedence).
     1. Adjust your project and your CI/CD configuration if needed by following the language-specific instructions below.
   - Use the [Dependency Scanning CI/CD component](https://gitlab.com/explore/catalog/components/dependency-scanning) to run the new Dependency Scanning analyzer:
     1. Replace the Dependency Scanning CI/CD template's `include` statement with the Dependency Scanning CI/CD component in your `.gitlab-ci.yml` CI/CD configuration.
     1. Adjust your project and your CI/CD configuration if needed by following the language-specific instructions below.
   - Use your own CycloneDX SBOM document:
     1. Remove the Dependency Scanning CI/CD template's `include` statement from your `.gitlab-ci.yml` CI/CD configuration.
     1. Ensure a compatible SBOM document is generated by a CI/CD job and uploaded as a [CycloneDX SBOM report artifact](../../../ci/yaml/artifacts_reports.md#artifactsreportscyclonedx).
     1. Adjust any workflow that depends on the [dependency_scanning report artifacts](../../../ci/yaml/artifacts_reports.md#artifactsreportsdependency_scanning), which are no longer uploaded.

For multi-language projects, complete all relevant language-specific migration steps.
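If you choose the CI/CD component option, the template `include` statement might be replaced as in the following sketch. The component path comes from the CI/CD catalog linked above; the `~latest` version ref is an assumption, so pin a version appropriate for your instance:

```yaml
include:
  # Replaces: - template: Jobs/Dependency-Scanning.latest.gitlab-ci.yml
  - component: gitlab.com/components/dependency-scanning/main@~latest
```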
{{< alert type="note" >}}

If you decide to migrate from the CI/CD template to the CI/CD component, review the [current limitations](../../../ci/components/_index.md#use-a-gitlabcom-component-on-gitlab-self-managed) for GitLab Self-Managed.

{{< /alert >}}

## Language-specific instructions

As you migrate to the new Dependency Scanning analyzer, you'll need to make specific adjustments based on your project's programming languages and package managers. These instructions apply whenever you use the new Dependency Scanning analyzer, regardless of how you've configured it to run, whether through CI/CD templates, Scan Execution Policies, or the Dependency Scanning CI/CD component.

In the following sections, you'll find detailed instructions for each supported language and package manager. For each one, we explain:

- How dependency detection is changing
- What specific files you need to provide
- How to generate these files if they're not already part of your workflow

Share any feedback on the new Dependency Scanning analyzer in this [feedback issue](https://gitlab.com/gitlab-org/gitlab/-/issues/523458).

### Bundler

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports Bundler projects using the `gemnasium-dependency_scanning` CI/CD job and its ability to extract the project dependencies by parsing the `Gemfile.lock` file (the `gems.locked` alternate filename is also supported). The combinations of supported versions of Bundler and the `Gemfile.lock` file are detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-parsing-lockfiles).

**New behavior**: The new Dependency Scanning analyzer also extracts the project dependencies by parsing the `Gemfile.lock` file (the `gems.locked` alternate filename is also supported) and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job.
#### Migrate a Bundler project

Migrate a Bundler project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

There are no additional steps needed to migrate a Bundler project to use the Dependency Scanning analyzer.

### CocoaPods

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer does not support CocoaPods projects when using the CI/CD templates or Scan Execution Policies. Support for CocoaPods is only available in the experimental Cocoapods CI/CD component.

**New behavior**: The new Dependency Scanning analyzer extracts the project dependencies by parsing the `Podfile.lock` file and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job.

#### Migrate a CocoaPods project

Migrate a CocoaPods project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

There are no additional steps to migrate a CocoaPods project to use the Dependency Scanning analyzer.

### Composer

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports Composer projects using the `gemnasium-dependency_scanning` CI/CD job and its ability to extract the project dependencies by parsing the `composer.lock` file. The combinations of supported versions of Composer and the `composer.lock` file are detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-parsing-lockfiles).

**New behavior**: The new Dependency Scanning analyzer also extracts the project dependencies by parsing the `composer.lock` file and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job.

#### Migrate a Composer project

Migrate a Composer project to use the new Dependency Scanning analyzer.
Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

There are no additional steps to migrate a Composer project to use the Dependency Scanning analyzer.

### Conan

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports Conan projects using the `gemnasium-dependency_scanning` CI/CD job and its ability to extract the project dependencies by parsing the `conan.lock` file. The combinations of supported versions of Conan and the `conan.lock` file are detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-parsing-lockfiles).

**New behavior**: The new Dependency Scanning analyzer also extracts the project dependencies by parsing the `conan.lock` file and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job.

#### Migrate a Conan project

Migrate a Conan project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

There are no additional steps to migrate a Conan project to use the Dependency Scanning analyzer.

### Go

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports Go projects using the `gemnasium-dependency_scanning` CI/CD job and its ability to extract the project dependencies by using the `go.mod` and `go.sum` files. This analyzer attempts to execute the `go list` command to increase the accuracy of the detected dependencies, which requires a functional Go environment. In case of failure, it falls back to parsing the `go.sum` file. The combinations of supported versions of Go and the `go.mod` and `go.sum` files are detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-parsing-lockfiles).
**New behavior**: The new Dependency Scanning analyzer does not attempt to execute the `go list` command in the project to extract the dependencies, and it no longer falls back to parsing the `go.sum` file. Instead, the project must provide at least a `go.mod` file, and ideally a `go.graph` file generated with the [`go mod graph` command](https://go.dev/ref/mod#go-mod-graph) from the Go toolchain. The `go.graph` file is required to increase the accuracy of the detected components and to generate the dependency graph that enables features like the [dependency path](../dependency_list/_index.md#dependency-paths). These files are processed by the `dependency-scanning` CI/CD job to generate a CycloneDX SBOM report artifact. This approach does not require GitLab to support specific versions of Go.

#### Migrate a Go project

Migrate a Go project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

To migrate a Go project:

- Ensure that your project provides `go.mod` and `go.graph` files. Configure the [`go mod graph` command](https://go.dev/ref/mod#go-mod-graph) from the Go toolchain in a preceding CI/CD job (for example: `build`) to dynamically generate the `go.graph` file and export it as an [artifact](../../../ci/jobs/job_artifacts.md) prior to running the Dependency Scanning job.

See the [enablement instructions for Go](dependency_scanning_sbom/_index.md#go) for more details and examples.

### Gradle

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports Gradle projects using the `gemnasium-maven-dependency_scanning` CI/CD job to extract the project dependencies by building the application from the `build.gradle` and `build.gradle.kts` files.
The combinations of supported versions for Java, Kotlin, and Gradle are complex, as detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-running-a-package-manager-to-generate-a-parsable-file).

**New behavior**: The new Dependency Scanning analyzer does not build the project to extract the dependencies. Instead, the project must provide a `dependencies.lock` file generated with the [Gradle Dependency Lock Plugin](https://github.com/nebula-plugins/gradle-dependency-lock-plugin). This file is processed by the `dependency-scanning` CI/CD job to generate a CycloneDX SBOM report artifact. This approach does not require GitLab to support specific versions of Java, Kotlin, and Gradle.

#### Migrate a Gradle project

Migrate a Gradle project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

To migrate a Gradle project:

- Ensure that your project provides a `dependencies.lock` file. Configure the [Gradle Dependency Lock Plugin](https://github.com/nebula-plugins/gradle-dependency-lock-plugin) in your project and either:
  - Permanently integrate the plugin into your development workflow. This means committing the `dependencies.lock` file into your repository and updating it as you're making changes to your project dependencies.
  - Use the plugin in a preceding CI/CD job (for example: `build`) to dynamically generate the `dependencies.lock` file and export it as an [artifact](../../../ci/jobs/job_artifacts.md) prior to running the Dependency Scanning job.

See the [enablement instructions for Gradle](dependency_scanning_sbom/_index.md#gradle) for more details and examples.
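As a rough sketch, a build job that generates the lockfile in the pipeline could look like the following. The `generateLock` and `saveLock` tasks come from the Nebula Gradle Dependency Lock Plugin; the image tag is an assumption, so pick one matching your toolchain and use your project's Gradle wrapper if it has one:

```yaml
build:
  stage: build
  image: gradle:8-jdk17  # assumption: choose an image matching your Java version
  script:
    # Generate and save dependencies.lock with the Nebula dependency-lock plugin
    - gradle generateLock saveLock
  artifacts:
    paths:
      - dependencies.lock
```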
### Maven

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports Maven projects using the `gemnasium-maven-dependency_scanning` CI/CD job to extract the project dependencies by building the application from the `pom.xml` file. The combinations of supported versions for Java, Kotlin, and Maven are complex, as detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-running-a-package-manager-to-generate-a-parsable-file).

**New behavior**: The new Dependency Scanning analyzer does not build the project to extract the dependencies. Instead, the project must provide a `maven.graph.json` file generated with the [Maven dependency plugin](https://maven.apache.org/plugins/maven-dependency-plugin/index.html). This file is processed by the `dependency-scanning` CI/CD job to generate a CycloneDX SBOM report artifact. This approach does not require GitLab to support specific versions of Java, Kotlin, and Maven.

#### Migrate a Maven project

Migrate a Maven project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

To migrate a Maven project:

- Ensure that your project provides a `maven.graph.json` file. Configure the [Maven dependency plugin](https://maven.apache.org/plugins/maven-dependency-plugin/index.html) in a preceding CI/CD job (for example: `build`) to dynamically generate the `maven.graph.json` file and export it as an [artifact](../../../ci/jobs/job_artifacts.md) prior to running the Dependency Scanning job.

See the [enablement instructions for Maven](dependency_scanning_sbom/_index.md#maven) for more details and examples.
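One way to sketch such a build job is shown below. The `dependency:tree` goal with `-DoutputType` and `-DoutputFile` belongs to the Maven dependency plugin, but verify that the plugin version resolved by your build supports JSON output; the image tag is an assumption:

```yaml
build:
  stage: build
  image: maven:3.9-eclipse-temurin-17  # assumption: match your Java version
  script:
    # Write the project's dependency graph in JSON for the Dependency Scanning job
    - mvn dependency:tree -DoutputType=json -DoutputFile=maven.graph.json
  artifacts:
    paths:
      - maven.graph.json
```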
### npm

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports npm projects using the `gemnasium-dependency_scanning` CI/CD job and its ability to extract the project dependencies by parsing the `package-lock.json` or `npm-shrinkwrap.json` files. The combinations of supported versions of npm and the `package-lock.json` or `npm-shrinkwrap.json` files are detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-parsing-lockfiles). This analyzer may scan JavaScript files vendored in an npm project using the `Retire.JS` scanner.

**New behavior**: The new Dependency Scanning analyzer also extracts the project dependencies by parsing the `package-lock.json` or `npm-shrinkwrap.json` files and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job. This analyzer does not scan vendored JavaScript files. Support for a replacement feature is proposed in [epic 7186](https://gitlab.com/groups/gitlab-org/-/epics/7186).

#### Migrate an npm project

Migrate an npm project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

There are no additional steps to migrate an npm project to use the Dependency Scanning analyzer.

### NuGet

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports NuGet projects using the `gemnasium-dependency_scanning` CI/CD job and its ability to extract the project dependencies by parsing the `packages.lock.json` file. The combinations of supported versions of NuGet and the `packages.lock.json` file are detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-parsing-lockfiles).
**New behavior**: The new Dependency Scanning analyzer also extracts the project dependencies by parsing the `packages.lock.json` file and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job.

#### Migrate a NuGet project

Migrate a NuGet project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

There are no additional steps to migrate a NuGet project to use the Dependency Scanning analyzer.

### pip

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports pip projects using the `gemnasium-python-dependency_scanning` CI/CD job to extract the project dependencies by building the application from the `requirements.txt` file (the `requirements.pip` and `requires.txt` alternate filenames are also supported). The `PIP_REQUIREMENTS_FILE` environment variable can also be used to specify a custom filename. The combinations of supported versions for Python and pip are detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-running-a-package-manager-to-generate-a-parsable-file).

**New behavior**: The new Dependency Scanning analyzer does not build the project to extract the dependencies. Instead, the project must provide a `requirements.txt` lockfile generated by the [pip-compile command line tool](https://pip-tools.readthedocs.io/en/latest/cli/pip-compile/). This file is processed by the `dependency-scanning` CI/CD job to generate a CycloneDX SBOM report artifact. This approach does not require GitLab to support specific versions of Python and pip. The `DS_PIPCOMPILE_REQUIREMENTS_FILE_NAME_PATTERN` environment variable can also be used to specify custom filenames for pip-compile lockfiles. Alternatively, the project can provide a `pipdeptree.json` file generated with the [pipdeptree command line utility](https://pypi.org/project/pipdeptree/).
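A build job that produces the lockfile dynamically might be sketched as follows. The `requirements.in` filename is an assumption about where your unpinned requirements live; `pip-compile` is provided by the `pip-tools` package:

```yaml
build:
  stage: build
  image: python:3.12  # assumption: match your project's Python version
  script:
    - pip install pip-tools
    # Resolve requirements.in into a fully pinned requirements.txt lockfile
    - pip-compile --output-file requirements.txt requirements.in
  artifacts:
    paths:
      - requirements.txt
```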
#### Migrate a pip project

Migrate a pip project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

To migrate a pip project:

- Ensure that your project provides a `requirements.txt` lockfile. Configure the [pip-compile command line tool](https://pip-tools.readthedocs.io/en/latest/cli/pip-compile/) in your project and either:
  - Permanently integrate the command line tool into your development workflow. This means committing the `requirements.txt` file into your repository and updating it as you're making changes to your project dependencies.
  - Use the command line tool in a preceding CI/CD job (for example: `build`) to dynamically generate the `requirements.txt` file and export it as an [artifact](../../../ci/jobs/job_artifacts.md) prior to running the Dependency Scanning job.

  OR

- Ensure that your project provides a `pipdeptree.json` lockfile. Configure the [pipdeptree command line utility](https://pypi.org/project/pipdeptree/) in a preceding CI/CD job (for example: `build`) to dynamically generate the `pipdeptree.json` file and export it as an [artifact](../../../ci/jobs/job_artifacts.md) prior to running the Dependency Scanning job.

See the [enablement instructions for pip](dependency_scanning_sbom/_index.md#pip) for more details and examples.

### Pipenv

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports Pipenv projects using the `gemnasium-python-dependency_scanning` CI/CD job to extract the project dependencies by building the application from the `Pipfile` file, or from a `Pipfile.lock` file if present. The combinations of supported versions for Python and Pipenv are detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-running-a-package-manager-to-generate-a-parsable-file).
**New behavior**: The new Dependency Scanning analyzer does not build the Pipenv project to extract the dependencies. Instead, the project must provide at least a `Pipfile.lock` file, and ideally a `pipenv.graph.json` file generated by the [`pipenv graph` command](https://pipenv.pypa.io/en/latest/cli.html#graph). The `pipenv.graph.json` file is required to generate the dependency graph and enable features like the [dependency path](../dependency_list/_index.md#dependency-paths). These files are processed by the `dependency-scanning` CI/CD job to generate a CycloneDX SBOM report artifact. This approach does not require GitLab to support specific versions of Python and Pipenv.

#### Migrate a Pipenv project

Migrate a Pipenv project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

To migrate a Pipenv project:

- Ensure that your project provides a `Pipfile.lock` file. Configure the [`pipenv lock` command](https://pipenv.pypa.io/en/latest/cli.html#lock) in your project and either:
  - Permanently integrate the command into your development workflow. This means committing the `Pipfile.lock` file into your repository and updating it as you're making changes to your project dependencies.
  - Use the command in a preceding CI/CD job (for example: `build`) to dynamically generate the `Pipfile.lock` file and export it as an [artifact](../../../ci/jobs/job_artifacts.md) prior to running the Dependency Scanning job.

  OR

- Ensure that your project provides a `pipenv.graph.json` file. Configure the [`pipenv graph` command](https://pipenv.pypa.io/en/latest/cli.html#graph) in a preceding CI/CD job (for example: `build`) to dynamically generate the `pipenv.graph.json` file and export it as an [artifact](../../../ci/jobs/job_artifacts.md) prior to running the Dependency Scanning job.
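The two options above could be combined in one build job, sketched as follows. The `--json-tree` flag is an assumption about your Pipenv version (it mirrors the underlying pipdeptree option), and installing the environment first is assumed necessary so the graph can be computed:

```yaml
build:
  stage: build
  image: python:3.12  # assumption: match your project's Python version
  script:
    - pip install pipenv
    - pipenv lock                    # produces Pipfile.lock
    - pipenv install --dev           # assumption: populate the virtualenv for the graph
    - pipenv graph --json-tree > pipenv.graph.json
  artifacts:
    paths:
      - Pipfile.lock
      - pipenv.graph.json
```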
See the [enablement instructions for Pipenv](dependency_scanning_sbom/_index.md#pipenv) for more details and examples.

### Poetry

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports Poetry projects using the `gemnasium-python-dependency_scanning` CI/CD job and its ability to extract the project dependencies by parsing the `poetry.lock` file.
The combination of supported versions of Poetry and the `poetry.lock` file are detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-parsing-lockfiles).

**New behavior**: The new Dependency Scanning analyzer also extracts the project dependencies by parsing the `poetry.lock` file and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job.

#### Migrate a Poetry project

Migrate a Poetry project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

There are no additional steps to migrate a Poetry project to use the Dependency Scanning analyzer.

### pnpm

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports pnpm projects using the `gemnasium-dependency_scanning` CI/CD job and its ability to extract the project dependencies by parsing the `pnpm-lock.yaml` file.
The combination of supported versions of pnpm and the `pnpm-lock.yaml` file are detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-parsing-lockfiles).
This analyzer may scan JavaScript files vendored in a pnpm project using the `Retire.JS` scanner.

**New behavior**: The new Dependency Scanning analyzer also extracts the project dependencies by parsing the `pnpm-lock.yaml` file and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job.
This analyzer does not scan vendored JavaScript files.
Support for a replacement feature is proposed in [epic 7186](https://gitlab.com/groups/gitlab-org/-/epics/7186).

#### Migrate a pnpm project

Migrate a pnpm project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

There are no additional steps to migrate a pnpm project to use the Dependency Scanning analyzer.

### sbt

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports sbt projects using the `gemnasium-maven-dependency_scanning` CI/CD job to extract the project dependencies by building the application from the `build.sbt` file.
The combinations of supported versions for Java, Scala, and sbt are complex, as detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-running-a-package-manager-to-generate-a-parsable-file).

**New behavior**: The new Dependency Scanning analyzer does not build the project to extract the dependencies.
Instead, the project must provide a `dependencies-compile.dot` file generated with the [sbt-dependency-graph plugin](https://github.com/sbt/sbt-dependency-graph) ([included in sbt >= 1.4.0](https://www.scala-sbt.org/1.x/docs/sbt-1.4-Release-Notes.html#sbt-dependency-graph+is+in-sourced)).
This file is processed by the `dependency-scanning` CI/CD job to generate a CycloneDX SBOM report artifact.
This approach does not require GitLab to support specific versions of Java, Scala, and sbt.

#### Migrate an sbt project

Migrate an sbt project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

To migrate an sbt project:

- Ensure that your project provides a `dependencies-compile.dot` file.
  Configure the [sbt-dependency-graph plugin](https://github.com/sbt/sbt-dependency-graph) in a preceding CI/CD job (for example: `build`) to dynamically generate the `dependencies-compile.dot` file and export it as an [artifact](../../../ci/jobs/job_artifacts.md) prior to running the Dependency Scanning job.

See the [enablement instructions for sbt](dependency_scanning_sbom/_index.md#sbt) for more details and examples.

### setuptools

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports setuptools projects using the `gemnasium-python-dependency_scanning` CI/CD job to extract the project dependencies by building the application from the `setup.py` file.
The combinations of supported versions for Python and setuptools are detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-running-a-package-manager-to-generate-a-parsable-file).

**New behavior**: The new Dependency Scanning analyzer does not support building a setuptools project to extract the dependencies.
Instead, you should configure the [pip-compile command line tool](https://pip-tools.readthedocs.io/en/latest/cli/pip-compile/) to generate a compatible `requirements.txt` lockfile.
Alternatively, you can provide your own CycloneDX SBOM document.

#### Migrate a setuptools project

Migrate a setuptools project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

To migrate a setuptools project:

- Ensure that your project provides a `requirements.txt` lockfile.
  Configure the [pip-compile command line tool](https://pip-tools.readthedocs.io/en/latest/cli/pip-compile/) in your project and either:
  - Permanently integrate the command line tool into your development workflow. This means committing the `requirements.txt` file into your repository and updating it as you're making changes to your project dependencies.
  - Use the command line tool in a preceding CI/CD job (for example: `build`) to dynamically generate the `requirements.txt` file and export it as an [artifact](../../../ci/jobs/job_artifacts.md) prior to running the Dependency Scanning job.

See the [enablement instructions for pip](dependency_scanning_sbom/_index.md#pip) for more details and examples.

### Swift

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer does not support Swift projects when using the CI/CD templates or the Scan Execution Policies.
Support for Swift is only available on the experimental Swift CI/CD component.

**New behavior**: The new Dependency Scanning analyzer extracts the project dependencies by parsing the `Package.resolved` file and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job.

#### Migrate a Swift project

Migrate a Swift project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

There are no additional steps to migrate a Swift project to use the Dependency Scanning analyzer.

### uv

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports uv projects using the `gemnasium-dependency_scanning` CI/CD job and its ability to extract the project dependencies by parsing the `uv.lock` file.
The combination of supported versions of uv and the `uv.lock` file are detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-parsing-lockfiles).

**New behavior**: The new Dependency Scanning analyzer also extracts the project dependencies by parsing the `uv.lock` file and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job.

#### Migrate a uv project

Migrate a uv project to use the new Dependency Scanning analyzer.
Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

There are no additional steps to migrate a uv project to use the Dependency Scanning analyzer.

### Yarn

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports Yarn projects using the `gemnasium-dependency_scanning` CI/CD job and its ability to extract the project dependencies by parsing the `yarn.lock` file.
The combination of supported versions of Yarn and the `yarn.lock` file are detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-parsing-lockfiles).
This analyzer may provide remediation data to [resolve a vulnerability via merge request](../vulnerabilities/_index.md#resolve-a-vulnerability) for Yarn dependencies.
This analyzer may scan JavaScript files vendored in a Yarn project using the `Retire.JS` scanner.

**New behavior**: The new Dependency Scanning analyzer also extracts the project dependencies by parsing the `yarn.lock` file and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job.
This analyzer does not provide remediation data for Yarn dependencies.
Support for a replacement feature is proposed in [epic 759](https://gitlab.com/groups/gitlab-org/-/epics/759).
This analyzer does not scan vendored JavaScript files.
Support for a replacement feature is proposed in [epic 7186](https://gitlab.com/groups/gitlab-org/-/epics/7186).

#### Migrate a Yarn project

Migrate a Yarn project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

There are no additional steps to migrate a Yarn project to use the Dependency Scanning analyzer.
If you use the Resolve a vulnerability via merge request feature, check [the deprecation announcement](../../../update/deprecations.md#resolve-a-vulnerability-for-dependency-scanning-on-yarn-projects) for available actions.

If you use the JavaScript vendored files scan feature, check the [deprecation announcement](../../../update/deprecations.md#dependency-scanning-for-javascript-vendored-libraries) for available actions.

## Changes to CI/CD variables

Most of the existing CI/CD variables are no longer relevant with the new Dependency Scanning analyzer, so their values are ignored.
Unless these are also used to configure other security analyzers (for example: `ADDITIONAL_CA_CERT_BUNDLE`), you should remove them from your CI/CD configuration.

Remove the following CI/CD variables from your CI/CD configuration:

- `ADDITIONAL_CA_CERT_BUNDLE`
- `DS_GRADLE_RESOLUTION_POLICY`
- `DS_IMAGE_SUFFIX`
- `DS_JAVA_VERSION`
- `DS_PIP_DEPENDENCY_PATH`
- `DS_PIP_VERSION`
- `DS_REMEDIATE_TIMEOUT`
- `DS_REMEDIATE`
- `GEMNASIUM_DB_LOCAL_PATH`
- `GEMNASIUM_DB_REF_NAME`
- `GEMNASIUM_DB_REMOTE_URL`
- `GEMNASIUM_DB_UPDATE_DISABLED`
- `GEMNASIUM_LIBRARY_SCAN_ENABLED`
- `GOARCH`
- `GOFLAGS`
- `GOOS`
- `GOPRIVATE`
- `GRADLE_CLI_OPTS`
- `GRADLE_PLUGIN_INIT_PATH`
- `MAVEN_CLI_OPTS`
- `PIP_EXTRA_INDEX_URL`
- `PIP_INDEX_URL`
- `PIP_REQUIREMENTS_FILE`
- `PIPENV_PYPI_MIRROR`
- `SBT_CLI_OPTS`

Keep the following CI/CD variables as they are applicable to the new Dependency Scanning analyzer:

- `DS_EXCLUDED_ANALYZERS`
- `DS_EXCLUDED_PATHS`
- `DS_INCLUDE_DEV_DEPENDENCIES`
- `DS_MAX_DEPTH`
- `SECURE_ANALYZERS_PREFIX`

{{< alert type="note" >}}

The `PIP_REQUIREMENTS_FILE` variable is replaced with `DS_PIPCOMPILE_REQUIREMENTS_FILE_NAME_PATTERN` in the new Dependency Scanning analyzer.
The `DS_EXCLUDED_ANALYZERS` variable can now contain a new value, `dependency-scanning`, to prevent the new Dependency Scanning analyzer job from running.
{{< /alert >}}

## Continue with the Gemnasium analyzer

You can continue using the deprecated Gemnasium analyzer with your existing CI/CD configuration, including all your current CI/CD variables.
GitLab will continue to support it until the [Dependency Scanning using SBOM](dependency_scanning_sbom/_index.md) feature and the [new Dependency Scanning analyzer](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning) are Generally Available.
This work is tracked in [this epic](https://gitlab.com/groups/gitlab-org/-/epics/15961).
---
stage: Application Security Testing
group: Composition Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Migrating to Dependency Scanning using SBOM
---

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- The legacy [Dependency Scanning feature based on the Gemnasium analyzer](_index.md) was [deprecated](../../../update/deprecations.md#dependency-scanning-upgrades-to-the-gitlab-sbom-vulnerability-scanner) in GitLab 17.9 and planned for removal in 19.0.

{{< /history >}}

The Dependency Scanning feature is upgrading to the GitLab SBOM Vulnerability Scanner.
As part of this change, the [Dependency Scanning using SBOM](dependency_scanning_sbom/_index.md) feature and the [new Dependency Scanning analyzer](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning) replace the legacy Dependency Scanning feature based on the Gemnasium analyzer.
However, due to the significant changes this transition introduces, it is not implemented automatically and this document serves as a migration guide.

Follow this migration guide if you use GitLab Dependency Scanning and any of the following conditions apply:

- The Dependency Scanning CI/CD jobs are configured by including one of the Dependency Scanning CI/CD templates:

  ```yaml
  include:
    - template: Jobs/Dependency-Scanning.gitlab-ci.yml
    - template: Jobs/Dependency-Scanning.latest.gitlab-ci.yml
  ```

- The Dependency Scanning CI/CD jobs are configured by using [Scan Execution Policies](../policies/scan_execution_policies.md).
- The Dependency Scanning CI/CD jobs are configured by using [Pipeline Execution Policies](../policies/pipeline_execution_policies.md).
## Understand the changes Before you migrate your project to Dependency Scanning using SBOM, you should understand the fundamental changes being introduced. The transition represents a technical evolution, a new approach to how Dependency Scanning works in GitLab, and various improvements to the user experience, some of which include, but are not limited to, the following: - Increased language support. The deprecated Gemnasium analyzers are constrained to a small subset of Python and Java versions. The new analyzer gives organizations the necessary flexibility to use older versions of these toolchains with older projects, and the option to try newer versions without waiting on a major update to the analyzer's image. Additionally, the new analyzer benefits from increased [file coverage](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning#supported-files). - Increased performance. Depending on the application, builds invoked by the Gemnasium analyzers can last for almost an hour, and be a duplicate effort. The new analyzer no longer invokes build systems directly. Instead, it re-uses previously defined build jobs to improve overall scan performance. - Smaller attack surface. To support its build capabilities, the Gemnasium analyzers are preloaded with a variety of dependencies. The new analyzer removes a large amount of these dependencies which results in a smaller attack surface. - Simpler configuration. The deprecated Gemnasium analyzers frequently require the configuration of proxies, Certificate Authority (CA) certificate bundles, and various other utilities to function correctly. The new solution removes many of these requirements, resulting in a robust tool that is simpler to configure. ### A new approach to security scanning When using the legacy Dependency Scanning feature, all scanning work happens within your CI/CD pipeline. 
When running a scan, the Gemnasium analyzer handles two critical tasks simultaneously: it identifies your project's dependencies and immediately performs a security analysis of those dependencies, all within the same CI/CD job. The Dependency Scanning using SBOM approach separates these tasks into two distinct phases: - First, when you run the new Dependency Scanning analyzer in the CI/CD pipeline, it focuses solely on creating a comprehensive inventory of your project's dependencies. This inventory is captured in a CycloneDX SBOM (Software Bill of Materials) report. - Second, the detected components are sent to the GitLab platform to perform a thorough security analysis using the built-in GitLab SBOM Vulnerability Scanner. You're already benefiting from this scanner with the [Continuous Vulnerability Scanning](../continuous_vulnerability_scanning/_index.md) feature. This separation of concerns brings several advantages for future enhancements, but it also means some changes are necessary because the security analysis happens outside the CI/CD pipeline. This impacts the availability of some functionalities that depend on the security analysis to run in the CI/CD pipeline. Review [the deprecation announcement](../../../update/deprecations.md#dependency-scanning-upgrades-to-the-gitlab-sbom-vulnerability-scanner) for a complete description. ### CI/CD configuration To prevent disruption to your CI/CD pipelines, the new approach is not yet applied to the stable Dependency Scanning CI/CD template (`Dependency-Scanning.gitlab-ci.yml`) and as of GitLab 17.9, you must use the `latest` template (`Dependency-Scanning.latest.gitlab-ci.yml`) to enable it. Other migration paths might be considered as the feature gains maturity. The latest Dependency Scanning CI/CD template (`Dependency-Scanning.latest.gitlab-ci.yml`) still maintains backward compatibility by default. 
It continues to run existing Gemnasium analyzer jobs, while the new Dependency Scanning analyzer only activates for newly supported languages and package managers.
You can opt in to the new Dependency Scanning analyzer for all projects by configuring the `DS_ENFORCE_NEW_ANALYZER` CI/CD variable to `true`.

If you're using [Scan Execution Policies](../policies/scan_execution_policies.md), these changes apply in the same way because they build upon the CI/CD templates.

If you're using the [main Dependency Scanning CI/CD component](https://gitlab.com/components/dependency-scanning/-/tree/main/templates/main), you won't see any changes as it already employs the new analyzer.
However, if you're using the specialized components for Android, Rust, Swift, or CocoaPods, you'll need to migrate to the main component that now covers all supported languages and package managers.

### Build support for Java and Python

One significant change affects how dependencies are discovered, particularly for Java and Python projects.
The new analyzer takes a different approach: instead of attempting to build your application to determine dependencies, it requires explicit dependency information through lockfiles or dependency graph files.

This change means you'll need to ensure these files are available, either by committing them to your repository or generating them dynamically during the CI/CD pipeline.
While this requires some initial setup, it provides more reliable and consistent results across different environments.
The following sections will guide you through the specific steps needed to adapt your projects to this new approach if that's necessary.

### Accessing scan results (during Beta only)

{{< alert type="warning" >}}

ADDENDUM: Based on customer feedback, we have decided to reinstate the generation of the Dependency Scanning report artifact for the Generally Available release.
However, it will not be available in the Beta release.
See [this epic](https://gitlab.com/groups/gitlab-org/-/epics/17150) for more details. {{< /alert >}} <details> <summary>See Beta behavior</summary> When you migrate to Dependency Scanning using SBOM, you'll notice a fundamental change in how security scan results are handled. The new approach moves the security analysis out of the CI/CD pipeline and into the GitLab platform, which changes how you access and work with the results. With the legacy Dependency Scanning feature, CI/CD jobs using the Gemnasium analyzer generate a [Dependency Scanning report artifact](../../../ci/yaml/artifacts_reports.md#artifactsreportsdependency_scanning) containing the scan results, and upload it to the platform. You can access these results by all possible ways offered to job artifacts. This means you can process or modify the results within your CI/CD pipeline before they reach the GitLab platform. The Dependency Scanning using SBOM approach works differently. The security analysis now happens within the GitLab platform using the built-in GitLab SBOM Vulnerability Scanner, so you won't find the scan results in your job artifacts anymore. Instead, GitLab analyzes the [CycloneDX SBOM report artifact](../../../ci/yaml/artifacts_reports.md#artifactsreportscyclonedx) that your CI/CD pipeline generates, creating security findings directly in the GitLab platform. To help you transition smoothly, GitLab maintains some backward compatibility. While using the Gemnasium analyzer, you'll still get a standard artifact (using `artifacts:paths`) that contains the scan results. This means if you have succeeding CI/CD jobs that need these results, they can still access them. However, keep in mind that as the GitLab SBOM Vulnerability Scanner evolves and improves, these artifact-based results won't reflect the latest enhancements. When you're ready to fully migrate to the new Dependency Scanning analyzer, you'll need to adjust how you programmatically access scan results. 
Instead of reading job artifacts, you'll use the GitLab GraphQL API, specifically the [`Pipeline.securityReportFindings` resource](../../../api/graphql/reference/_index.md#pipelinesecurityreportfindings).

</details>

## Identify affected projects

Understanding which of your projects need attention for this migration is an important first step.
The most significant impact will be on your Java and Python projects, because the way they handle dependencies is changing fundamentally.

To help you identify affected projects, GitLab provides the [Dependency Scanning Build Support Detection Helper](https://gitlab.com/security-products/tooling/build-support-detection-helper) tool.
This tool examines your GitLab group or GitLab Self-Managed instance and identifies projects that currently use the Dependency Scanning feature with either the `gemnasium-maven-dependency_scanning` or `gemnasium-python-dependency_scanning` CI/CD jobs.

When you run this tool, it creates a comprehensive report of projects that will need your attention during the migration.
Having this information early helps you plan your migration strategy effectively, especially if you manage multiple projects across your organization.

## Migrate to Dependency Scanning using SBOM

To migrate to the Dependency Scanning using SBOM method, perform the following steps for each project:

1. Remove existing customization for Dependency Scanning based on the Gemnasium analyzer.
   - If you have manually overridden the `gemnasium-dependency_scanning`, `gemnasium-maven-dependency_scanning`, or `gemnasium-python-dependency_scanning` CI/CD jobs to customize them in a project's `.gitlab-ci.yml` or in the CI/CD configuration for a Pipeline Execution Policy, remove them.
   - If you have configured any of [the impacted CI/CD variables](#changes-to-cicd-variables), adjust your configuration accordingly.
1.
Enable the Dependency Scanning using SBOM feature with one of the following options: - Use the `latest` Dependency Scanning CI/CD template `Dependency-Scanning.latest.gitlab-ci.yml` to run the new Dependency Scanning analyzer: 1. Ensure your `.gitlab-ci.yml` CI/CD configuration includes the latest Dependency Scanning CI/CD template. 1. Add the CI/CD variable `DS_ENFORCE_NEW_ANALYZER` and set it to `true`. This variable can be set in many different places, while observing the [CI/CD variable precedence](../../../ci/variables/_index.md#cicd-variable-precedence). 1. Adjust your project and your CI/CD configuration if needed by following the language-specific instructions below. - Use the [Scan Execution Policies](../policies/scan_execution_policies.md) to run the new Dependency Scanning analyzer: 1. Edit the configured scan execution policy for Dependency Scanning and ensure it uses the `latest` template. 1. Add the CI/CD variable `DS_ENFORCE_NEW_ANALYZER` and set it to `true`. This variable can be set in many different places, while observing the [CI/CD variable precedence](../../../ci/variables/_index.md#cicd-variable-precedence). 1. Adjust your project and your CI/CD configuration if needed by following the language-specific instructions below. - Use the [Dependency Scanning CI/CD component](https://gitlab.com/explore/catalog/components/dependency-scanning) to run the new Dependency Scanning analyzer: 1. Replace the Dependency Scanning CI/CD template's `include` statement with the Dependency Scanning CI/CD component in your `.gitlab-ci.yml` CI/CD configuration. 1. Adjust your project and your CI/CD configuration if needed by following the language-specific instructions below. - Use your own CycloneDX SBOM document: 1. Remove the Dependency Scanning CI/CD template's `include` statement from your `.gitlab-ci.yml` CI/CD configuration. 1. 
Ensure a compatible SBOM document is generated by a CI/CD job and uploaded as a [CycloneDX SBOM report artifact](../../../ci/yaml/artifacts_reports.md#artifactsreportscyclonedx). 1. Adjust any workflow depending on the [dependency_scanning report artifacts](../../../ci/yaml/artifacts_reports.md#artifactsreportsdependency_scanning) which are no longer uploaded. For multi-language projects, complete all relevant language-specific migration steps. {{< alert type="note" >}} If you decide to migrate from the CI/CD template to the CI/CD component, review the [current limitations](../../../ci/components/_index.md#use-a-gitlabcom-component-on-gitlab-self-managed) for GitLab Self-Managed. {{< /alert >}} ## Language-specific instructions As you migrate to the new Dependency Scanning analyzer, you'll need to make specific adjustments based on your project's programming languages and package managers. These instructions apply whenever you use the new Dependency Scanning analyzer, regardless of how you've configured it to run - whether through CI/CD templates, Scan Execution Policies, or the Dependency Scanning CI/CD component. In the following sections, you'll find detailed instructions for each supported language and package manager. For each one, we'll explain: - How dependency detection is changing - What specific files you need to provide - How to generate these files if they're not already part of your workflow Share any feedback on the new Dependency Scanning analyzer in this [feedback issue](https://gitlab.com/gitlab-org/gitlab/-/issues/523458). ### Bundler **Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports Bundler projects using the `gemnasium-dependency_scanning` CI/CD job and its ability to extract the project dependencies by parsing the `Gemfile.lock` file (`gems.locked` alternate filename is also supported). 
The combination of supported versions of Bundler and the `Gemfile.lock` file are detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-parsing-lockfiles). **New behavior**: The new Dependency Scanning analyzer also extracts the project dependencies by parsing the `Gemfile.lock` file (`gems.locked` alternate filename is also supported) and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job. #### Migrate a Bundler project Migrate a Bundler project to use the new Dependency Scanning analyzer. Prerequisites: - Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects. There are no additional steps needed to migrate a Bundler project to use the Dependency Scanning analyzer. ### CocoaPods **Previous behavior**: Dependency Scanning based on the Gemnasium analyzer does not support CocoaPods projects when using the CI/CD templates or the Scan Execution Policies. Support for CocoaPods is only available on the experimental Cocoapods CI/CD component. **New behavior**: The new Dependency Scanning analyzer extracts the project dependencies by parsing the `Podfile.lock` file and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job. #### Migrate a CocoaPods project Migrate a CocoaPods project to use the new Dependency Scanning analyzer. Prerequisites: - Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects. There are no additional steps to migrate a CocoaPods project to use the Dependency Scanning analyzer. ### Composer **Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports Composer projects using the `gemnasium-dependency_scanning` CI/CD job and its ability to extract the project dependencies by parsing the `composer.lock` file. 
The combination of supported versions of Composer and the `composer.lock` file are detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-parsing-lockfiles). **New behavior**: The new Dependency Scanning analyzer also extracts the project dependencies by parsing the `composer.lock` file and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job. #### Migrate a Composer project Migrate a Composer project to use the new Dependency Scanning analyzer. Prerequisites: - Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects. There are no additional steps to migrate a Composer project to use the Dependency Scanning analyzer. ### Conan **Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports Conan projects using the `gemnasium-dependency_scanning` CI/CD job and its ability to extract the project dependencies by parsing the `conan.lock` file. The combination of supported versions of Conan and the `conan.lock` file are detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-parsing-lockfiles). **New behavior**: The new Dependency Scanning analyzer also extracts the project dependencies by parsing the `conan.lock` file and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job. #### Migrate a Conan project Migrate a Conan project to use the new Dependency Scanning analyzer. Prerequisites: - Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects. There are no additional steps to migrate a Conan project to use the Dependency Scanning analyzer. 
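If a project does not already commit its `conan.lock` file, the file can be generated in a preceding CI/CD job and exported as an artifact, following the same pattern as the other package managers in this guide. The sketch below assumes Conan 2.x installed through pip; the job name, image tag, and recipe path are illustrative assumptions:

```yaml
# Illustrative job that produces conan.lock before the dependency-scanning
# job runs. The python image and pip-based Conan install are assumptions;
# use whatever matches your toolchain.
generate-conan-lock:
  stage: build
  image: python:3.12
  script:
    - pip install conan
    - conan profile detect             # create a default profile if none exists
    - conan lock create conanfile.py   # writes conan.lock next to the recipe
  artifacts:
    paths:
      - conan.lock
```

Committing the generated `conan.lock` to the repository instead also works, and keeps the scan reproducible without the extra job.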
### Go

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports Go projects using the `gemnasium-dependency_scanning` CI/CD job and its ability to extract the project dependencies by using the `go.mod` and `go.sum` files.
This analyzer attempts to execute the `go list` command to increase the accuracy of the detected dependencies, which requires a functional Go environment.
In case of failure, it falls back to parsing the `go.sum` file.
The combination of supported versions of Go, the `go.mod`, and the `go.sum` files are detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-parsing-lockfiles).

**New behavior**: The new Dependency Scanning analyzer does not attempt to execute the `go list` command in the project to extract the dependencies and it no longer falls back to parsing the `go.sum` file.
Instead, the project must provide at least a `go.mod` file and ideally a `go.graph` file generated with the [`go mod graph` command](https://go.dev/ref/mod#go-mod-graph) from the Go Toolchains.
The `go.graph` file is required to increase the accuracy of the detected components and to generate the dependency graph to enable features like the [dependency path](../dependency_list/_index.md#dependency-paths).
These files are processed by the `dependency-scanning` CI/CD job to generate a CycloneDX SBOM report artifact.
This approach does not require GitLab to support specific versions of Go.

#### Migrate a Go project

Migrate a Go project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

To migrate a Go project:

- Ensure that your project provides a `go.mod` file and a `go.graph` file.
Configure the [`go mod graph` command](https://go.dev/ref/mod#go-mod-graph) from the Go toolchain in a preceding CI/CD job (for example: `build`) to dynamically generate the `go.graph` file and export it as an [artifact](../../../ci/jobs/job_artifacts.md) prior to running the Dependency Scanning job.

See the [enablement instructions for Go](dependency_scanning_sbom/_index.md#go) for more details and examples.

### Gradle

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports Gradle projects using the `gemnasium-maven-dependency_scanning` CI/CD job to extract the project dependencies by building the application from the `build.gradle` and `build.gradle.kts` files.
The combinations of supported versions for Java, Kotlin, and Gradle are complex, as detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-running-a-package-manager-to-generate-a-parsable-file).

**New behavior**: The new Dependency Scanning analyzer does not build the project to extract the dependencies. Instead, the project must provide a `dependencies.lock` file generated with the [Gradle Dependency Lock Plugin](https://github.com/nebula-plugins/gradle-dependency-lock-plugin).
This file is processed by the `dependency-scanning` CI/CD job to generate a CycloneDX SBOM report artifact. This approach does not require GitLab to support specific versions of Java, Kotlin, and Gradle.

#### Migrate a Gradle project

Migrate a Gradle project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

To migrate a Gradle project:

- Ensure that your project provides a `dependencies.lock` file. Configure the [Gradle Dependency Lock Plugin](https://github.com/nebula-plugins/gradle-dependency-lock-plugin) in your project and either:
  - Permanently integrate the plugin into your development workflow.
This means committing the `dependencies.lock` file into your repository and updating it as you're making changes to your project dependencies. - Use the command in a preceding CI/CD job (for example: `build`) to dynamically generate the `dependencies.lock` file and export it as an [artifact](../../../ci/jobs/job_artifacts.md) prior to running the Dependency Scanning job. See the [enablement instructions for Gradle](dependency_scanning_sbom/_index.md#gradle) for more details and examples. ### Maven **Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports Maven projects using the `gemnasium-maven-dependency_scanning` CI/CD job to extract the project dependencies by building the application from the `pom.xml` file. The combinations of supported versions for Java, Kotlin, and Maven are complex, as detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-running-a-package-manager-to-generate-a-parsable-file). **New behavior**: The new Dependency Scanning analyzer does not build the project to extract the dependencies. Instead, the project must provide a `maven.graph.json` file generated with the [maven dependency plugin](https://maven.apache.org/plugins/maven-dependency-plugin/index.html). This file is processed by the `dependency-scanning` CI/CD job to generate a CycloneDX SBOM report artifact. This approach does not require GitLab to support specific versions of Java, Kotlin, and Maven. #### Migrate a Maven project Migrate a Maven project to use the new Dependency Scanning analyzer. Prerequisites: - Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects. To migrate a Maven project: - Ensure that your project provides a `maven.graph.json` file. 
Configure the [maven dependency plugin](https://maven.apache.org/plugins/maven-dependency-plugin/index.html) in a preceding CI/CD job (for example: `build`) to dynamically generate the `maven.graph.json` file and export it as an [artifact](../../../ci/jobs/job_artifacts.md) prior to running the Dependency Scanning job.

See the [enablement instructions for Maven](dependency_scanning_sbom/_index.md#maven) for more details and examples.

### npm

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports npm projects using the `gemnasium-dependency_scanning` CI/CD job and its ability to extract the project dependencies by parsing the `package-lock.json` or `npm-shrinkwrap.json` files.
The combination of supported versions of npm and the `package-lock.json` or `npm-shrinkwrap.json` files is detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-parsing-lockfiles).
This analyzer may scan JavaScript files vendored in an npm project using the `Retire.JS` scanner.

**New behavior**: The new Dependency Scanning analyzer also extracts the project dependencies by parsing the `package-lock.json` or `npm-shrinkwrap.json` files and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job.
This analyzer does not scan vendored JavaScript files. Support for a replacement feature is proposed in [epic 7186](https://gitlab.com/groups/gitlab-org/-/epics/7186).

#### Migrate an npm project

Migrate an npm project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

There are no additional steps to migrate an npm project to use the Dependency Scanning analyzer.
### NuGet

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports NuGet projects using the `gemnasium-dependency_scanning` CI/CD job and its ability to extract the project dependencies by parsing the `packages.lock.json` file.
The combination of supported versions of NuGet and the `packages.lock.json` file is detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-parsing-lockfiles).

**New behavior**: The new Dependency Scanning analyzer also extracts the project dependencies by parsing the `packages.lock.json` file and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job.

#### Migrate a NuGet project

Migrate a NuGet project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

There are no additional steps to migrate a NuGet project to use the Dependency Scanning analyzer.

### pip

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports pip projects using the `gemnasium-python-dependency_scanning` CI/CD job to extract the project dependencies by building the application from the `requirements.txt` file (the alternate filenames `requirements.pip` and `requires.txt` are also supported).
The `PIP_REQUIREMENTS_FILE` environment variable can also be used to specify a custom filename.
The combinations of supported versions for Python and pip are detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-running-a-package-manager-to-generate-a-parsable-file).

**New behavior**: The new Dependency Scanning analyzer does not build the project to extract the dependencies. Instead, the project must provide a `requirements.txt` lockfile generated by the [pip-compile command line tool](https://pip-tools.readthedocs.io/en/latest/cli/pip-compile/).
This file is processed by the `dependency-scanning` CI/CD job to generate a CycloneDX SBOM report artifact. This approach does not require GitLab to support specific versions of Python and pip.
The `DS_PIPCOMPILE_REQUIREMENTS_FILE_NAME_PATTERN` environment variable can also be used to specify custom filenames for pip-compile lockfiles.
Alternatively, the project can provide a `pipdeptree.json` file generated with the [pipdeptree command line utility](https://pypi.org/project/pipdeptree/).

#### Migrate a pip project

Migrate a pip project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

To migrate a pip project:

- Ensure that your project provides a `requirements.txt` lockfile. Configure the [pip-compile command line tool](https://pip-tools.readthedocs.io/en/latest/cli/pip-compile/) in your project and either:
  - Permanently integrate the command line tool into your development workflow.
    This means committing the `requirements.txt` file into your repository and updating it as you're making changes to your project dependencies.
  - Use the command line tool in a preceding CI/CD job (for example: `build`) to dynamically generate the `requirements.txt` file and export it as an [artifact](../../../ci/jobs/job_artifacts.md) prior to running the Dependency Scanning job.

  OR

- Ensure that your project provides a `pipdeptree.json` lockfile. Configure the [pipdeptree command line utility](https://pypi.org/project/pipdeptree/) in a preceding CI/CD job (for example: `build`) to dynamically generate the `pipdeptree.json` file and export it as an [artifact](../../../ci/jobs/job_artifacts.md) prior to running the Dependency Scanning job.

See the [enablement instructions for pip](dependency_scanning_sbom/_index.md#pip) for more details and examples.
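For example, the `requirements.txt` lockfile can be produced in a preceding job and handed to the scanner as an artifact. A minimal sketch, assuming a `pyproject.toml` input file and the `python:3.12` image (adjust both to your project):

```yaml
generate-requirements:
  stage: build
  image: python:3.12
  script:
    - pip install pip-tools
    # Resolve and pin the full dependency tree into a pip-compile lockfile.
    - pip-compile --output-file=requirements.txt pyproject.toml
  artifacts:
    paths:
      - requirements.txt
```

Because the Dependency Scanning job runs in a later stage (typically `test`), the exported `requirements.txt` is available to it when the scan starts.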
### Pipenv

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports Pipenv projects using the `gemnasium-python-dependency_scanning` CI/CD job to extract the project dependencies by building the application from the `Pipfile` file or from a `Pipfile.lock` file if present.
The combinations of supported versions for Python and Pipenv are detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-running-a-package-manager-to-generate-a-parsable-file).

**New behavior**: The new Dependency Scanning analyzer does not build the Pipenv project to extract the dependencies. Instead, the project must provide at least a `Pipfile.lock` file and ideally a `pipenv.graph.json` file generated by the [`pipenv graph` command](https://pipenv.pypa.io/en/latest/cli.html#graph).
The `pipenv.graph.json` file is required to generate the dependency graph and enable features like the [dependency path](../dependency_list/_index.md#dependency-paths).
These files are processed by the `dependency-scanning` CI/CD job to generate a CycloneDX SBOM report artifact. This approach does not require GitLab to support specific versions of Python and Pipenv.

#### Migrate a Pipenv project

Migrate a Pipenv project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

To migrate a Pipenv project:

- Ensure that your project provides a `Pipfile.lock` file. Configure the [`pipenv lock` command](https://pipenv.pypa.io/en/latest/cli.html#lock) in your project and either:
  - Permanently integrate the command into your development workflow.
    This means committing the `Pipfile.lock` file into your repository and updating it as you're making changes to your project dependencies.
  - Use the command in a preceding CI/CD job (for example: `build`) to dynamically generate the `Pipfile.lock` file and export it as an [artifact](../../../ci/jobs/job_artifacts.md) prior to running the Dependency Scanning job.

  OR

- Ensure that your project provides a `pipenv.graph.json` file. Configure the [`pipenv graph` command](https://pipenv.pypa.io/en/latest/cli.html#graph) in a preceding CI/CD job (for example: `build`) to dynamically generate the `pipenv.graph.json` file and export it as an [artifact](../../../ci/jobs/job_artifacts.md) prior to running the Dependency Scanning job.

See the [enablement instructions for Pipenv](dependency_scanning_sbom/_index.md#pipenv) for more details and examples.

### Poetry

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports Poetry projects using the `gemnasium-python-dependency_scanning` CI/CD job and its ability to extract the project dependencies by parsing the `poetry.lock` file.
The combination of supported versions of Poetry and the `poetry.lock` file is detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-parsing-lockfiles).

**New behavior**: The new Dependency Scanning analyzer also extracts the project dependencies by parsing the `poetry.lock` file and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job.

#### Migrate a Poetry project

Migrate a Poetry project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

There are no additional steps to migrate a Poetry project to use the Dependency Scanning analyzer.

### pnpm

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports pnpm projects using the `gemnasium-dependency_scanning` CI/CD job and its ability to extract the project dependencies by parsing the `pnpm-lock.yaml` file.
The combination of supported versions of pnpm and the `pnpm-lock.yaml` file is detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-parsing-lockfiles).
This analyzer may scan JavaScript files vendored in a pnpm project using the `Retire.JS` scanner.

**New behavior**: The new Dependency Scanning analyzer also extracts the project dependencies by parsing the `pnpm-lock.yaml` file and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job.
This analyzer does not scan vendored JavaScript files. Support for a replacement feature is proposed in [epic 7186](https://gitlab.com/groups/gitlab-org/-/epics/7186).

#### Migrate a pnpm project

Migrate a pnpm project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

There are no additional steps to migrate a pnpm project to use the Dependency Scanning analyzer.

### sbt

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports sbt projects using the `gemnasium-maven-dependency_scanning` CI/CD job to extract the project dependencies by building the application from the `build.sbt` file.
The combinations of supported versions for Java, Scala, and sbt are complex, as detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-running-a-package-manager-to-generate-a-parsable-file).

**New behavior**: The new Dependency Scanning analyzer does not build the project to extract the dependencies. Instead, the project must provide a `dependencies-compile.dot` file generated with the [sbt-dependency-graph plugin](https://github.com/sbt/sbt-dependency-graph) ([included in sbt >= 1.4.0](https://www.scala-sbt.org/1.x/docs/sbt-1.4-Release-Notes.html#sbt-dependency-graph+is+in-sourced)).
This file is processed by the `dependency-scanning` CI/CD job to generate a CycloneDX SBOM report artifact. This approach does not require GitLab to support specific versions of Java, Scala, and sbt.

#### Migrate an sbt project

Migrate an sbt project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

To migrate an sbt project:

- Ensure that your project provides a `dependencies-compile.dot` file. Configure the [sbt-dependency-graph plugin](https://github.com/sbt/sbt-dependency-graph) in a preceding CI/CD job (for example: `build`) to dynamically generate the `dependencies-compile.dot` file and export it as an [artifact](../../../ci/jobs/job_artifacts.md) prior to running the Dependency Scanning job.

See the [enablement instructions for sbt](dependency_scanning_sbom/_index.md#sbt) for more details and examples.

### setuptools

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports setuptools projects using the `gemnasium-python-dependency_scanning` CI/CD job to extract the project dependencies by building the application from the `setup.py` file.
The combinations of supported versions for Python and setuptools are detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-running-a-package-manager-to-generate-a-parsable-file).

**New behavior**: The new Dependency Scanning analyzer does not support building a setuptools project to extract the dependencies. We recommend configuring the [pip-compile command line tool](https://pip-tools.readthedocs.io/en/latest/cli/pip-compile/) to generate a compatible `requirements.txt` lockfile. Alternatively, you can provide your own CycloneDX SBOM document.

#### Migrate a setuptools project

Migrate a setuptools project to use the new Dependency Scanning analyzer.
Prerequisites: - Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects. To migrate a setuptools project: - Ensure that your project provides a `requirements.txt` lockfile. Configure the [pip-compile command line tool](https://pip-tools.readthedocs.io/en/latest/cli/pip-compile/) in your project and either: - Permanently integrate the command line tool into your development workflow. This means committing the `requirements.txt` file into your repository and updating it as you're making changes to your project dependencies. - Use the command line tool in a `build` CI/CD job to dynamically generate the `requirements.txt` file and export it as an [artifact](../../../ci/jobs/job_artifacts.md) prior to running the Dependency Scanning job. See the [enablement instructions for pip](dependency_scanning_sbom/_index.md#pip) for more details and examples. ### Swift **Previous behavior**: Dependency Scanning based on the Gemnasium analyzer does not support Swift projects when using the CI/CD templates or the Scan Execution Policies. Support for Swift is only available on the experimental Swift CI/CD component. **New behavior**: The new Dependency Scanning analyzer also extracts the project dependencies by parsing the `Package.resolved` file and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job. #### Migrate a Swift project Migrate a Swift project to use the new Dependency Scanning analyzer. Prerequisites: - Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects. There are no additional steps to migrate a Swift project to use the Dependency Scanning analyzer. ### uv **Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports uv projects using the `gemnasium-dependency_scanning` CI/CD job and its ability to extract the project dependencies by parsing the `uv.lock` file. 
The combination of supported versions of uv and the `uv.lock` file is detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-parsing-lockfiles).

**New behavior**: The new Dependency Scanning analyzer also extracts the project dependencies by parsing the `uv.lock` file and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job.

#### Migrate a uv project

Migrate a uv project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

There are no additional steps to migrate a uv project to use the Dependency Scanning analyzer.

### Yarn

**Previous behavior**: Dependency Scanning based on the Gemnasium analyzer supports Yarn projects using the `gemnasium-dependency_scanning` CI/CD job and its ability to extract the project dependencies by parsing the `yarn.lock` file.
The combination of supported versions of Yarn and the `yarn.lock` file is detailed in the [Dependency Scanning (Gemnasium-based) documentation](_index.md#obtaining-dependency-information-by-parsing-lockfiles).
This analyzer may provide remediation data to [resolve a vulnerability via merge request](../vulnerabilities/_index.md#resolve-a-vulnerability) for Yarn dependencies.
This analyzer may scan JavaScript files vendored in a Yarn project using the `Retire.JS` scanner.

**New behavior**: The new Dependency Scanning analyzer also extracts the project dependencies by parsing the `yarn.lock` file and generates a CycloneDX SBOM report artifact with the `dependency-scanning` CI/CD job.
This analyzer does not provide remediation data for Yarn dependencies. Support for a replacement feature is proposed in [epic 759](https://gitlab.com/groups/gitlab-org/-/epics/759).
This analyzer does not scan vendored JavaScript files.
Support for a replacement feature is proposed in [epic 7186](https://gitlab.com/groups/gitlab-org/-/epics/7186).

#### Migrate a Yarn project

Migrate a Yarn project to use the new Dependency Scanning analyzer.

Prerequisites:

- Complete [the generic migration steps](#migrate-to-dependency-scanning-using-sbom) required for all projects.

There are no additional steps to migrate a Yarn project to use the Dependency Scanning analyzer.

If you use the Resolve a vulnerability via merge request feature, check [the deprecation announcement](../../../update/deprecations.md#resolve-a-vulnerability-for-dependency-scanning-on-yarn-projects) for available actions.
If you use the JavaScript vendored files scan feature, check the [deprecation announcement](../../../update/deprecations.md#dependency-scanning-for-javascript-vendored-libraries) for available actions.

## Changes to CI/CD variables

Most of the existing CI/CD variables are no longer relevant with the new Dependency Scanning analyzer, so their values are ignored. Unless these are also used to configure other security analyzers (for example: `ADDITIONAL_CA_CERT_BUNDLE`), you should remove them from your CI/CD configuration.
Remove the following CI/CD variables from your CI/CD configuration:

- `ADDITIONAL_CA_CERT_BUNDLE`
- `DS_GRADLE_RESOLUTION_POLICY`
- `DS_IMAGE_SUFFIX`
- `DS_JAVA_VERSION`
- `DS_PIP_DEPENDENCY_PATH`
- `DS_PIP_VERSION`
- `DS_REMEDIATE_TIMEOUT`
- `DS_REMEDIATE`
- `GEMNASIUM_DB_LOCAL_PATH`
- `GEMNASIUM_DB_REF_NAME`
- `GEMNASIUM_DB_REMOTE_URL`
- `GEMNASIUM_DB_UPDATE_DISABLED`
- `GEMNASIUM_LIBRARY_SCAN_ENABLED`
- `GOARCH`
- `GOFLAGS`
- `GOOS`
- `GOPRIVATE`
- `GRADLE_CLI_OPTS`
- `GRADLE_PLUGIN_INIT_PATH`
- `MAVEN_CLI_OPTS`
- `PIP_EXTRA_INDEX_URL`
- `PIP_INDEX_URL`
- `PIP_REQUIREMENTS_FILE`
- `PIPENV_PYPI_MIRROR`
- `SBT_CLI_OPTS`

Keep the following CI/CD variables as they are applicable to the new Dependency Scanning analyzer:

- `DS_EXCLUDED_ANALYZERS`
- `DS_EXCLUDED_PATHS`
- `DS_INCLUDE_DEV_DEPENDENCIES`
- `DS_MAX_DEPTH`
- `SECURE_ANALYZERS_PREFIX`

{{< alert type="note" >}}

The `PIP_REQUIREMENTS_FILE` variable is replaced with `DS_PIPCOMPILE_REQUIREMENTS_FILE_NAME_PATTERN` in the new Dependency Scanning analyzer.
The `DS_EXCLUDED_ANALYZERS` variable can now contain a new value, `dependency-scanning`, to prevent the new Dependency Scanning analyzer job from running.

{{< /alert >}}

## Continue with the Gemnasium analyzer

You can continue using the deprecated Gemnasium analyzer with your existing CI/CD configuration, including all your current CI/CD variables. GitLab will continue to support it until the [Dependency Scanning using SBOM](dependency_scanning_sbom/_index.md) feature and the [new Dependency Scanning analyzer](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning) are generally available. This work is tracked in [epic 15961](https://gitlab.com/groups/gitlab-org/-/epics/15961).
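As an illustration, a project that stays on the Gemnasium-based jobs for now can keep the stable template and, if the new analyzer job is also available in its pipeline, exclude it with the `dependency-scanning` value documented in the note above. A minimal sketch:

```yaml
include:
  # Stable template; still runs the Gemnasium-based jobs until removal.
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml

variables:
  # Prevent the new Dependency Scanning analyzer job from running.
  DS_EXCLUDED_ANALYZERS: "dependency-scanning"
```

Existing Gemnasium-era variables (for example, `DS_JAVA_VERSION` or `GEMNASIUM_DB_REMOTE_URL`) keep working with this configuration until the analyzer is removed.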
---
stage: Application Security Testing
group: Composition Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Dependency Scanning
description: Vulnerabilities, remediation, configuration, analyzers, and reports.
---

<!-- Source: https://docs.gitlab.com/user/application_security/dependency_scanning (extracted 2025-08-13) -->
<style> table.ds-table tr:nth-child(even) { background-color: transparent; } table.ds-table td { border-left: 1px solid #dbdbdb; border-right: 1px solid #dbdbdb; border-bottom: 1px solid #dbdbdb; } table.ds-table tr td:first-child { border-left: 0; } table.ds-table tr td:last-child { border-right: 0; } table.ds-table ul { font-size: 1em; list-style-type: none; padding-left: 0px; margin-bottom: 0px; } table.no-vertical-table-lines td { border-left: none; border-right: none; border-bottom: 1px solid #f0f0f0; } table.no-vertical-table-lines tr { border-top: none; } </style> {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< alert type="warning" >}} The Dependency Scanning feature based on the Gemnasium analyzer is deprecated in GitLab 17.9 and is planned for removal in GitLab 19.0. It is being replaced with [Dependency Scanning using SBOM](dependency_scanning_sbom/_index.md) and the [new Dependency Scanning analyzer](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning). For more information, see [epic 15961](https://gitlab.com/groups/gitlab-org/-/epics/15961). {{< /alert >}} Dependency Scanning identifies security vulnerabilities in your application's dependencies before they reach production. This identification protects your application from potential exploits and data breaches that could damage user trust and your business reputation. When vulnerabilities are found during pipeline runs, they appear directly in your merge request, giving you immediate visibility of security issues before code is committed. All dependencies in your code, including transitive (nested) dependencies, are automatically analyzed during pipelines. This analysis catches security issues that manual review processes might miss. 
Dependency Scanning integrates into your existing CI/CD workflow with minimal configuration changes, making it straightforward to implement secure development practices from day one. Vulnerabilities can also be identified outside a pipeline by [Continuous Vulnerability Scanning](../continuous_vulnerability_scanning/_index.md). GitLab offers both Dependency Scanning and [Container Scanning](../container_scanning/_index.md) to ensure coverage for all of these dependency types. To cover as much of your risk area as possible, we encourage you to use all of our security scanners. For a comparison of these features, see [Dependency Scanning compared to Container Scanning](../comparison_dependency_and_container_scanning.md). ![Dependency scanning Widget](img/dependency_scanning_v13_2.png) {{< alert type="warning" >}} Dependency Scanning does not support runtime installation of compilers and interpreters. {{< /alert >}} - <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For an overview, see [Dependency Scanning - Advanced Security Testing](https://www.youtube.com/watch?v=TBnfbGk4c4o) - <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For an interactive reading and how-to demo of this Dependency Scanning documentation, see [How to use dependency scanning tutorial hands-on GitLab Application Security part 3](https://youtu.be/ii05cMbJ4xQ?feature=shared) - <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For other interactive reading and how-to demos, see [Get Started With GitLab Application Security Playlist](https://www.youtube.com/playlist?list=PL05JrBw4t0KrUrjDoefSkgZLx5aJYFaF9) ## Getting started To get started with Dependency Scanning the following steps show how to enable Dependency Scanning for your project. Prerequisites: - The `test` stage is required in the `.gitlab-ci.yml` file. 
- With self-managed runners you need a GitLab Runner with the [`docker`](https://docs.gitlab.com/runner/executors/docker.html) or [`kubernetes`](https://docs.gitlab.com/runner/install/kubernetes.html) executor. - If you're using SaaS runners on GitLab.com, this is enabled by default. To enable the analyzer, either: - Enable [Auto DevOps](../../../topics/autodevops/_index.md), which includes dependency scanning. - Use a preconfigured merge request. - Create a [scan execution policy](../policies/scan_execution_policies.md) that enforces dependency scanning. - Edit the `.gitlab-ci.yml` file manually. - [Use CI/CD components](#use-cicd-components) ### Use a preconfigured merge request This method automatically prepares a merge request that includes the Dependency Scanning template in the `.gitlab-ci.yml` file. You then merge the merge request to enable Dependency Scanning. {{< alert type="note" >}} This method works best with no existing `.gitlab-ci.yml` file, or with a minimal configuration file. If you have a complex GitLab configuration file it might not be parsed successfully, and an error might occur. In that case, use the [manual](#edit-the-gitlab-ciyml-file-manually) method instead. {{< /alert >}} To enable Dependency Scanning: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Secure > Security configuration**. 1. In the **Dependency Scanning** row, select **Configure with a merge request**. 1. Select **Create merge request**. 1. Review the merge request, then select **Merge**. Pipelines now include a Dependency Scanning job. ### Edit the `.gitlab-ci.yml` file manually This method requires you to manually edit the existing `.gitlab-ci.yml` file. Use this method if your GitLab CI/CD configuration file is complex. To enable Dependency Scanning: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Build > Pipeline editor**. 1. 
If no `.gitlab-ci.yml` file exists, select **Configure pipeline**, then delete the example content. 1. Copy and paste the following to the bottom of the `.gitlab-ci.yml` file. If an `include` line already exists, add only the `template` line below it. ```yaml include: - template: Jobs/Dependency-Scanning.gitlab-ci.yml ``` 1. Select the **Validate** tab, then select **Validate pipeline**. The message **Simulation completed successfully** confirms the file is valid. 1. Select the **Edit** tab. 1. Complete the fields. Do not use the default branch for the **Branch** field. 1. Select the **Start a new merge request with these changes** checkbox, then select **Commit changes**. 1. Complete the fields according to your standard workflow, then select **Create merge request**. 1. Review and edit the merge request according to your standard workflow, then select **Merge**. Pipelines now include a Dependency Scanning job. ### Use CI/CD components {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/454143) in GitLab 17.0. This feature is an [experiment](../../../policy/development_stages_support.md). - The dependency scanning CI/CD component only supports Android projects. {{< /history >}} Use [CI/CD components](../../../ci/components/_index.md) to perform Dependency Scanning of your application. For instructions, see the respective component's README file. #### Available CI/CD components See <https://gitlab.com/explore/catalog/components/dependency-scanning> After completing these steps, you can: - Learn more about how to [understand the results](#understanding-the-results). - Plan a [roll out](#roll-out) to more projects. ## Understanding the results You can review vulnerabilities in a pipeline: 1. On the left sidebar, select **Search or go to** and find your project. 1. On the left sidebar, select **Build > Pipelines**. 1. Select the pipeline. 1. Select the **Security** tab. 1. 
Select a vulnerability to view its details, including: - Status: Indicates whether the vulnerability has been triaged or resolved. - Description: Explains the cause of the vulnerability, its potential impact, and recommended remediation steps. - Severity: Categorized into six levels based on impact. [Learn more about severity levels](../vulnerabilities/severities.md). - CVSS score: Provides a numeric value that maps to severity. - EPSS: Shows the likelihood of a vulnerability being exploited in the wild. - Has Known Exploit (KEV): Indicates that a given vulnerability has been exploited. - Project: Highlights the project where the vulnerability was identified. - Report type / Scanner: Explains the output type and scanner used to generate the output. - Reachable: Provides an indication whether the vulnerable dependency is used in code. - Scanner: Identifies which analyzer detected the vulnerability. - Location: Names the file where the vulnerable dependency is located. - Links: Evidence of the vulnerability being cataloged in various advisory databases. - Identifiers: A list of references used to classify the vulnerability, such as CVE identifiers. Dependency Scanning produces the following output: - **Dependency scanning report**: Contains details of all vulnerabilities detected in dependencies. - **CycloneDX Software Bill of Materials**: Software Bill of Materials (SBOM) for each supported lock or build file detected. ### Dependency scanning report Dependency scanning outputs a report containing details of all vulnerabilities. The report is processed internally and the results are shown in the UI. The report is also output as an artifact of the dependency scanning job, named `gl-dependency-scanning-report.json`. For more details of the dependency scanning report, see the [Dependency scanning report schema](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/dependency-scanning-report-format.json). 
### CycloneDX Software Bill of Materials Dependency Scanning outputs a [CycloneDX](https://cyclonedx.org/) Software Bill of Materials (SBOM) for each supported lock or build file it detects. The CycloneDX SBOMs are: - Named `gl-sbom-<package-type>-<package-manager>.cdx.json`. - Available as job artifacts of the dependency scanning job. - Saved in the same directory as the detected lock or build files. For example, if your project has the following structure: ```plaintext . ├── ruby-project/ │ └── Gemfile.lock ├── ruby-project-2/ │ └── Gemfile.lock ├── php-project/ │ └── composer.lock └── go-project/ └── go.sum ``` Then the Gemnasium scanner generates the following CycloneDX SBOMs: ```plaintext . ├── ruby-project/ │ ├── Gemfile.lock │ └── gl-sbom-gem-bundler.cdx.json ├── ruby-project-2/ │ ├── Gemfile.lock │ └── gl-sbom-gem-bundler.cdx.json ├── php-project/ │ ├── composer.lock │ └── gl-sbom-packagist-composer.cdx.json └── go-project/ ├── go.sum └── gl-sbom-go-go.cdx.json ``` #### Merging multiple CycloneDX SBOMs You can use a CI/CD job to merge the multiple CycloneDX SBOMs into a single SBOM. GitLab uses [CycloneDX Properties](https://cyclonedx.org/use-cases/#properties--name-value-store) to store implementation-specific details in the metadata of each CycloneDX SBOM, such as the location of build and lock files. If multiple CycloneDX SBOMs are merged together, this information is removed from the resulting merged file. For example, the following `.gitlab-ci.yml` extract demonstrates how the Cyclone SBOM files can be merged, and the resulting file validated. ```yaml stages: - test - merge-cyclonedx-sboms include: - template: Jobs/Dependency-Scanning.gitlab-ci.yml merge cyclonedx sboms: stage: merge-cyclonedx-sboms image: name: cyclonedx/cyclonedx-cli:0.25.1 entrypoint: [""] script: - find . 
-name "gl-sbom-*.cdx.json" -exec cyclonedx merge --output-file gl-sbom-all.cdx.json --input-files "{}" + # optional: validate the merged sbom - cyclonedx validate --input-version v1_4 --input-file gl-sbom-all.cdx.json artifacts: paths: - gl-sbom-all.cdx.json ``` ## Roll out After you are confident in the Dependency Scanning results for a single project, you can extend its implementation to additional projects: - Use [enforced scan execution](../detect/security_configuration.md#create-a-shared-configuration) to apply Dependency Scannign settings across groups. - If you have unique requirements, Dependency Scanning with SBOM can be run in [offline environments](../offline_deployments/_index.md). ## Supported languages and package managers The following languages and dependency managers are supported by Dependency Scanning: <!-- markdownlint-disable MD044 --> <table class="ds-table"> <thead> <tr> <th>Language</th> <th>Language versions</th> <th>Package manager</th> <th>Supported files</th> <th><a href="#how-multiple-files-are-processed">Processes multiple files?</a></th> </tr> </thead> <tbody> <tr> <td>.NET</td> <td rowspan="2">All versions</td> <td rowspan="2"><a href="https://www.nuget.org/">NuGet</a></td> <td rowspan="2"><a href="https://learn.microsoft.com/en-us/nuget/consume-packages/package-references-in-project-files#enabling-lock-file"><code>packages.lock.json</code></a></td> <td rowspan="2">Y</td> </tr> <tr> <td>C#</td> </tr> <tr> <td>C</td> <td rowspan="2">All versions</td> <td rowspan="2"><a href="https://conan.io/">Conan</a></td> <td rowspan="2"><a href="https://docs.conan.io/en/latest/versioning/lockfiles.html"><code>conan.lock</code></a></td> <td rowspan="2">Y</td> </tr> <tr> <td>C++</td> </tr> <tr> <td>Go</td> <td>All versions</td> <td><a href="https://go.dev/">Go</a></td> <td> <ul> <li><code>go.mod</code></li> </ul> </td> <td>Y</td> </tr> <tr> <td rowspan="2">Java and Kotlin</td> <td rowspan="2"> 8 LTS, 11 LTS, 17 LTS, or 21 LTS<sup><b><a 
href="#notes-regarding-supported-languages-and-package-managers-1">1</a></b></sup> </td> <td><a href="https://gradle.org/">Gradle</a><sup><b><a href="#notes-regarding-supported-languages-and-package-managers-2">2</a></b></sup></td> <td> <ul> <li><code>build.gradle</code></li> <li><code>build.gradle.kts</code></li> </ul> </td> <td>N</td> </tr> <tr> <td><a href="https://maven.apache.org/">Maven</a><sup><b><a href="#notes-regarding-supported-languages-and-package-managers-6">6</a></b></sup></td> <td><code>pom.xml</code></td> <td>N</td> </tr> <tr> <td rowspan="3">JavaScript and TypeScript</td> <td rowspan="3">All versions</td> <td><a href="https://www.npmjs.com/">npm</a></td> <td> <ul> <li><code>package-lock.json</code></li> <li><code>npm-shrinkwrap.json</code></li> </ul> </td> <td>Y</td> </tr> <tr> <td><a href="https://classic.yarnpkg.com/en/">yarn</a></td> <td><code>yarn.lock</code></td> <td>Y</td> </tr> <tr> <td><a href="https://pnpm.io/">pnpm</a><sup><b><a href="#notes-regarding-supported-languages-and-package-managers-3">3</a></b></sup></td> <td><code>pnpm-lock.yaml</code></td> <td>Y</td> </tr> <tr> <td>PHP</td> <td>All versions</td> <td><a href="https://getcomposer.org/">Composer</a></td> <td><code>composer.lock</code></td> <td>Y</td> </tr> <tr> <td rowspan="5">Python</td> <td rowspan="5">3.11<sup><b><a href="#notes-regarding-supported-languages-and-package-managers-7">7</a></b></sup></td> <td><a href="https://setuptools.readthedocs.io/en/latest/">setuptools</a><sup><b><a href="#notes-regarding-supported-languages-and-package-managers-8">8</a></b></sup></td> <td><code>setup.py</code></td> <td>N</td> </tr> <tr> <td><a href="https://pip.pypa.io/en/stable/">pip</a></td> <td> <ul> <li><code>requirements.txt</code></li> <li><code>requirements.pip</code></li> <li><code>requires.txt</code></li> </ul> </td> <td>N</td> </tr> <tr> <td><a href="https://pipenv.pypa.io/en/latest/">Pipenv</a></td> <td> <ul> <li><a 
href="https://pipenv.pypa.io/en/latest/pipfile.html#example-pipfile"><code>Pipfile</code></a></li> <li><a href="https://pipenv.pypa.io/en/latest/pipfile.html#example-pipfile-lock"><code>Pipfile.lock</code></a></li> </ul> </td> <td>N</td> </tr> <tr> <td><a href="https://python-poetry.org/">Poetry</a><sup><b><a href="#notes-regarding-supported-languages-and-package-managers-4">4</a></b></sup></td> <td><code>poetry.lock</code></td> <td>N</td> </tr> <tr> <td><a href="https://docs.astral.sh/uv/">uv</a></td> <td><code>uv.lock</code></td> <td>Y</td> </tr> <tr> <td>Ruby</td> <td>All versions</td> <td><a href="https://bundler.io/">Bundler</a></td> <td> <ul> <li><code>Gemfile.lock</code></li> <li><code>gems.locked</code></li> </ul> </td> <td>Y</td> </tr> <tr> <td>Scala</td> <td>All versions</td> <td><a href="https://www.scala-sbt.org/">sbt</a><sup><b><a href="#notes-regarding-supported-languages-and-package-managers-5">5</a></b></sup></td> <td><code>build.sbt</code></td> <td>N</td> </tr> <tr> <td>Swift</td> <td>All versions</td> <td><a href="https://swift.org/package-manager/">Swift Package Manager</a></td> <td><code>Package.resolved</code></td> <td>N</td> </tr> <tr> <td>Cocoapods<sup><b><a href="#notes-regarding-supported-languages-and-package-managers-9">9</a></b></sup></td> <td>All versions</td> <td><a href="https://cocoapods.org/">CocoaPods</a></td> <td><code>Podfile.lock</code></td> <td>N</td> </tr> <tr> <td>Dart<sup><b><a href="#notes-regarding-supported-languages-and-package-managers-10">10</a></b></sup></td> <td>All versions</td> <td><a href="https://pub.dev/">Pub</a></td> <td><code>pubspec.lock</code></td> <td>N</td> </tr> </tbody> </table> <ol> <li> <a id="notes-regarding-supported-languages-and-package-managers-1"></a> <p> Java 21 LTS for <a href="https://www.scala-sbt.org/">sbt</a> is limited to version 1.9.7. 
Support for more <a href="https://www.scala-sbt.org/">sbt</a> versions can be tracked in <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/430335">issue 430335</a>. It is not supported when <a href="https://docs.gitlab.com/ee/development/fips_compliance.html#enable-fips-mode">FIPS mode</a> is enabled. </p> </li> <li> <a id="notes-regarding-supported-languages-and-package-managers-2"></a> <p> Gradle is not supported when <a href="https://docs.gitlab.com/ee/development/fips_compliance.html#enable-fips-mode">FIPS mode</a> is enabled. </p> </li> <li> <a id="notes-regarding-supported-languages-and-package-managers-3"></a> <p> Support for <code>pnpm</code> lockfiles was <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/336809">introduced in GitLab 15.11</a>. <code>pnpm</code> lockfiles do not store bundled dependencies, so the reported dependencies may differ from <code>npm</code> or <code>yarn</code>. </p> </li> <li> <a id="notes-regarding-supported-languages-and-package-managers-4"></a> <p> Support for <a href="https://python-poetry.org/">Poetry</a> projects with a <code>poetry.lock</code> file was <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/7006">added in GitLab 15.0</a>. Support for projects without a <code>poetry.lock</code> file is tracked in issue: <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/32774">Poetry's pyproject.toml support for dependency scanning.</a> </p> </li> <li> <a id="notes-regarding-supported-languages-and-package-managers-5"></a> <p> Support for sbt 1.0.x was <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/415835">deprecated</a> in GitLab 16.8 and <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/436985">removed</a> in GitLab 17.0. </p> </li> <li> <a id="notes-regarding-supported-languages-and-package-managers-6"></a> <p> Support for Maven below 3.8.8 was <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/438772">deprecated</a> in GitLab 16.9 and will be removed in GitLab 17.0. 
</p> </li> <li> <a id="notes-regarding-supported-languages-and-package-managers-7"></a> <p> Support for prior Python versions was <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/441201">deprecated</a> in GitLab 16.9 and <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/441491">removed</a> in GitLab 17.0. </p> </li> <li> <a id="notes-regarding-supported-languages-and-package-managers-8"></a> <p> Excludes both <code>pip</code> and <code>setuptools</code> from the report as they are required by the installer. </p> </li> <li> <a id="notes-regarding-supported-languages-and-package-managers-9"></a> <p> Only SBOM, without advisories. See <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/468764">spike on CocoaPods advisories research</a>. </p> </li> <li> <a id="notes-regarding-supported-languages-and-package-managers-10"></a> <p> No license detection yet. See <a href="https://gitlab.com/groups/gitlab-org/-/epics/17037">epic on Dart license detection</a>. </p> </li> </ol> <!-- markdownlint-enable MD044 --> ### Running jobs in merge request pipelines See [Use security scanning tools with merge request pipelines](../detect/security_configuration.md#use-security-scanning-tools-with-merge-request-pipelines) ### Customizing analyzer behavior To customize Dependency Scanning, use [CI/CD variables](#available-cicd-variables). {{< alert type="warning" >}} Test all customization of GitLab analyzers in a merge request before merging these changes to the default branch. Failure to do so can give unexpected results, including a large number of false positives. {{< /alert >}} ### Overriding dependency scanning jobs To override a job definition (for example, to change properties like `variables` or `dependencies`), declare a new job with the same name as the one to override. Place this new job after the template inclusion and specify any additional keys under it. 
For example, this disables `DS_REMEDIATE` for the `gemnasium` analyzer: ```yaml include: - template: Jobs/Dependency-Scanning.gitlab-ci.yml gemnasium-dependency_scanning: variables: DS_REMEDIATE: "false" ``` To override the `dependencies: []` attribute, add an override job as described previously, targeting this attribute: ```yaml include: - template: Jobs/Dependency-Scanning.gitlab-ci.yml gemnasium-dependency_scanning: dependencies: ["build"] ``` ### Available CI/CD variables You can use CI/CD variables to [customize](#customizing-analyzer-behavior) dependency scanning behavior. #### Global analyzer settings The following variables allow configuration of global dependency scanning settings. | CI/CD variables | Description | | ----------------------------|------------ | | `ADDITIONAL_CA_CERT_BUNDLE` | Bundle of CA certificates to trust. The bundle of certificates provided here is also used by other tools during the scanning process, such as `git`, `yarn`, or `npm`. For more details, see [Custom TLS certificate authority](#custom-tls-certificate-authority). | | `DS_EXCLUDED_ANALYZERS` | Specify the analyzers (by name) to exclude from Dependency Scanning. For more information, see [Analyzers](#analyzers). | | `DS_EXCLUDED_PATHS` | Exclude files and directories from the scan based on the paths. A comma-separated list of patterns. Patterns can be globs (see [`doublestar.Match`](https://pkg.go.dev/github.com/bmatcuk/doublestar/v4@v4.0.2#Match) for supported patterns), or file or folder paths (for example, `doc,spec`). Parent directories also match patterns. This is a pre-filter which is applied before the scan is executed. Default: `"spec, test, tests, tmp"`. | | `DS_IMAGE_SUFFIX` | Suffix added to the image name. (GitLab team members can view more information in this confidential issue: `https://gitlab.com/gitlab-org/gitlab/-/issues/354796`). Automatically set to `"-fips"` when FIPS mode is enabled. 
| | `DS_MAX_DEPTH` | Defines how many directory levels deep that the analyzer should search for supported files to scan. A value of `-1` scans all directories regardless of depth. Default: `2`. | | `SECURE_ANALYZERS_PREFIX` | Override the name of the Docker registry providing the official default images (proxy). | #### Analyzer-specific settings The following variables configure the behavior of specific dependency scanning analyzers. | CI/CD variable | Analyzer | Default | Description | |--------------------------------------|--------------------|------------------------------|-------------| | `GEMNASIUM_DB_LOCAL_PATH` | `gemnasium` | `/gemnasium-db` | Path to local Gemnasium database. | | `GEMNASIUM_DB_UPDATE_DISABLED` | `gemnasium` | `"false"` | Disable automatic updates for the `gemnasium-db` advisory database. For usage see [Access to the GitLab Advisory Database](#access-to-the-gitlab-advisory-database). | | `GEMNASIUM_DB_REMOTE_URL` | `gemnasium` | `https://gitlab.com/gitlab-org/security-products/gemnasium-db.git` | Repository URL for fetching the GitLab Advisory Database. | | `GEMNASIUM_DB_REF_NAME` | `gemnasium` | `master` | Branch name for remote repository database. `GEMNASIUM_DB_REMOTE_URL` is required. | | `GEMNASIUM_IGNORED_SCOPES` | `gemnasium` | | Comma-separated list of Maven dependency scopes to ignore. For more details, see the [Maven dependency scope documentation](https://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html#Dependency_Scope) | | `DS_REMEDIATE` | `gemnasium` | `"true"`, `"false"` in FIPS mode | Enable automatic remediation of vulnerable dependencies. Not supported in FIPS mode. | | `DS_REMEDIATE_TIMEOUT` | `gemnasium` | `5m` | Timeout for auto-remediation. | | `GEMNASIUM_LIBRARY_SCAN_ENABLED` | `gemnasium` | `"true"` | Enable detecting vulnerabilities in vendored JavaScript libraries (libraries which are not managed by a package manager). 
This functionality requires a JavaScript lockfile to be present in a commit, otherwise Dependency Scanning is not executed and vendored files are not scanned.<br>Dependency scanning uses the [Retire.js](https://github.com/RetireJS/retire.js) scanner to detect a limited set of vulnerabilities. For details of which vulnerabilities are detected, see the [Retire.js repository](https://github.com/RetireJS/retire.js/blob/master/repository/jsrepository.json). | | `DS_INCLUDE_DEV_DEPENDENCIES` | `gemnasium` | `"true"` | When set to `"false"`, development dependencies and their vulnerabilities are not reported. Only projects using Composer, Maven, npm, pnpm, Pipenv or Poetry are supported. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/227861) in GitLab 15.1. | | `GOOS` | `gemnasium` | `"linux"` | The operating system for which to compile Go code. | | `GOARCH` | `gemnasium` | `"amd64"` | The architecture of the processor for which to compile Go code. | | `GOFLAGS` | `gemnasium` | | The flags passed to the `go build` tool. | | `GOPRIVATE` | `gemnasium` | | A list of glob patterns and prefixes to be fetched from source. For more information, see the Go private modules [documentation](https://go.dev/ref/mod#private-modules). | | `DS_JAVA_VERSION` | `gemnasium-maven` | `17` | Version of Java. Available versions: `8`, `11`, `17`, `21`. | | `MAVEN_CLI_OPTS` | `gemnasium-maven` | `"-DskipTests --batch-mode"` | List of command line arguments that are passed to `maven` by the analyzer. See an example for [using private repositories](#authenticate-with-a-private-maven-repository). | | `GRADLE_CLI_OPTS` | `gemnasium-maven` | | List of command line arguments that are passed to `gradle` by the analyzer. | | `GRADLE_PLUGIN_INIT_PATH` | `gemnasium-maven` | `"gemnasium-init.gradle"` | Specifies the path to the Gradle initialization script. The init script must include `allprojects { apply plugin: 'project-report' }` to ensure compatibility. 
|
| `DS_GRADLE_RESOLUTION_POLICY`        | `gemnasium-maven`  | `"failed"`                   | Controls Gradle dependency resolution strictness. Accepts `"none"` to allow partial results, or `"failed"` to fail the scan when any dependencies fail to resolve. |
| `SBT_CLI_OPTS`                       | `gemnasium-maven`  |                              | List of command-line arguments that the analyzer passes to `sbt`. |
| `PIP_INDEX_URL`                      | `gemnasium-python` | `https://pypi.org/simple`    | Base URL of Python Package Index. |
| `PIP_EXTRA_INDEX_URL`                | `gemnasium-python` |                              | Array of [extra URLs](https://pip.pypa.io/en/stable/reference/pip_install/#cmdoption-extra-index-url) of package indexes to use in addition to `PIP_INDEX_URL`. Comma-separated. **Warning**: Read [the following security consideration](#python-projects) when using this environment variable. |
| `PIP_REQUIREMENTS_FILE`              | `gemnasium-python` |                              | Pip requirements file to be scanned. This is a filename, not a path. When this environment variable is set, only the specified file is scanned. |
| `PIPENV_PYPI_MIRROR`                 | `gemnasium-python` |                              | If set, overrides the PyPi index used by Pipenv with a [mirror](https://github.com/pypa/pipenv/blob/v2022.1.8/pipenv/environments.py#L263). |
| `DS_PIP_VERSION`                     | `gemnasium-python` |                              | Force the install of a specific pip version (example: `"19.3"`), otherwise the pip installed in the Docker image is used. |
| `DS_PIP_DEPENDENCY_PATH`             | `gemnasium-python` |                              | Path to load Python pip dependencies from. |

#### Other variables

The previous tables are not an exhaustive list of all variables you can use; they contain only the GitLab-specific and analyzer-specific variables that we support and test. You can pass in many other variables, such as standard environment variables, and they do work. Because that list is large, and we may be unaware of many of the variables in it, it is not documented.
For example, to pass the non-GitLab environment variable `HTTPS_PROXY` to all Dependency Scanning jobs, set it as a [CI/CD variable in your `.gitlab-ci.yml`](../../../ci/variables/_index.md#define-a-cicd-variable-in-the-gitlab-ciyml-file) file like this:

```yaml
variables:
  HTTPS_PROXY: "https://squid-proxy:3128"
```

{{< alert type="note" >}}

Gradle projects require an [additional variable](#using-a-proxy-with-gradle-projects) to use a proxy.

{{< /alert >}}

Alternatively, you can set it in specific jobs, such as Dependency Scanning:

```yaml
dependency_scanning:
  variables:
    HTTPS_PROXY: $HTTPS_PROXY
```

Because we have not tested all variables, you may find that some work and others do not. If a variable you need does not work, we suggest [submitting a feature request](https://gitlab.com/gitlab-org/gitlab/-/issues/new?issuable_template=Feature%20proposal%20-%20detailed&issue[title]=Docs%20feedback%20-%20feature%20proposal:%20Write%20your%20title) or contributing to the code to enable it.

### Custom TLS certificate authority

Dependency Scanning allows the use of custom TLS certificates for SSL/TLS connections, instead of the default certificates shipped with the analyzer container image.

Support for custom certificate authorities was introduced in the following versions.
| Analyzer | Version | |--------------------|--------------------------------------------------------------------------------------------------------| | `gemnasium` | [v2.8.0](https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/releases/v2.8.0) | | `gemnasium-maven` | [v2.9.0](https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium-maven/-/releases/v2.9.0) | | `gemnasium-python` | [v2.7.0](https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium-python/-/releases/v2.7.0) | #### Using a custom TLS certificate authority To use a custom TLS certificate authority, assign the [text representation of the X.509 PEM public-key certificate](https://www.rfc-editor.org/rfc/rfc7468#section-5.1) to the CI/CD variable `ADDITIONAL_CA_CERT_BUNDLE`. For example, to configure the certificate in the `.gitlab-ci.yml` file: ```yaml variables: ADDITIONAL_CA_CERT_BUNDLE: | -----BEGIN CERTIFICATE----- MIIGqTCCBJGgAwIBAgIQI7AVxxVwg2kch4d56XNdDjANBgkqhkiG9w0BAQsFADCB ... jWgmPqF3vUbZE0EyScetPJquRFRKIesyJuBFMAs= -----END CERTIFICATE----- ``` ### Authenticate with a private Maven repository To use a private Maven repository that requires authentication, you should store your credentials in a CI/CD variable and reference them in your Maven settings file. Do not add the credentials to your `.gitlab-ci.yml` file. To authenticate with a private Maven repository: 1. Add the `MAVEN_CLI_OPTS` CI/CD variable to your [project's settings](../../../ci/variables/_index.md#for-a-project), setting the value to include your credentials. For example, if your username is `myuser` and the password is `verysecret`: | Type | Key | Value | |----------|------------------|-------| | Variable | `MAVEN_CLI_OPTS` | `--settings mysettings.xml -Drepository.password=verysecret -Drepository.user=myuser` | 1. Create a Maven settings file with your server configuration. For example, add the following to the settings file `mysettings.xml`. 
This file is referenced in the `MAVEN_CLI_OPTS` CI/CD variable. ```xml <!-- mysettings.xml --> <settings> ... <servers> <server> <id>private_server</id> <username>${repository.user}</username> <password>${repository.password}</password> </server> </servers> </settings> ``` ### FIPS-enabled images {{< history >}} - Introduced in GitLab 15.0 - Gemnasium uses FIPS-enabled images when FIPS mode is enabled. {{< /history >}} GitLab also offers [FIPS-enabled Red Hat UBI](https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image) versions of the Gemnasium images. When FIPS mode is enabled in the GitLab instance, Gemnasium scanning jobs automatically use the FIPS-enabled images. To manually switch to FIPS-enabled images, set the variable `DS_IMAGE_SUFFIX` to `"-fips"`. Dependency scanning for Gradle projects and auto-remediation for Yarn projects are not supported in FIPS mode. FIPS-enabled images are based on RedHat's UBI micro. They don't have package managers such as `dnf` or `microdnf` so it's not possible to install system packages at runtime. ### Offline environment {{< details >}} - Tier: Ultimate - Offering: GitLab Self-Managed {{< /details >}} For instances in an environment with limited, restricted, or intermittent access to external resources through the internet, some adjustments are required for dependency scanning jobs to run successfully. For more information, see [Offline environments](../offline_deployments/_index.md). 
#### Requirements To run dependency scanning in an offline environment you must have: - A GitLab Runner with the `docker` or `kubernetes` executor - Local copies of the dependency scanning analyzer images - Access to the [GitLab Advisory Database](https://gitlab.com/gitlab-org/security-products/gemnasium-db) - Access to the [Package Metadata Database](../../../topics/offline/quick_start_guide.md#enabling-the-package-metadata-database) #### Local copies of analyzer images To use dependency scanning with all [supported languages and frameworks](#supported-languages-and-package-managers): 1. Import the following default dependency scanning analyzer images from `registry.gitlab.com` into your [local Docker container registry](../../packages/container_registry/_index.md): ```plaintext registry.gitlab.com/security-products/gemnasium:6 registry.gitlab.com/security-products/gemnasium:6-fips registry.gitlab.com/security-products/gemnasium-maven:6 registry.gitlab.com/security-products/gemnasium-maven:6-fips registry.gitlab.com/security-products/gemnasium-python:6 registry.gitlab.com/security-products/gemnasium-python:6-fips ``` The process for importing Docker images into a local offline Docker registry depends on **your network security policy**. Consult your IT staff to find an accepted and approved process by which external resources can be imported or temporarily accessed. These scanners are [periodically updated](../detect/vulnerability_scanner_maintenance.md) with new definitions, and you may want to download them regularly. 1. Configure GitLab CI/CD to use the local analyzers. Set the value of the CI/CD variable `SECURE_ANALYZERS_PREFIX` to your local Docker registry - in this example, `docker-registry.example.com`. 
```yaml
include:
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml

variables:
  SECURE_ANALYZERS_PREFIX: "docker-registry.example.com/analyzers"
```

#### Access to the GitLab Advisory Database

The [GitLab Advisory Database](https://gitlab.com/gitlab-org/security-products/gemnasium-db) is the source of vulnerability data used by the `gemnasium`, `gemnasium-maven`, and `gemnasium-python` analyzers. The Docker images of these analyzers include a clone of the database. The clone is synchronized with the database before starting a scan, to ensure the analyzers have the latest vulnerability data.

In an offline environment, the default host of the GitLab Advisory Database can't be accessed. Instead, you must host the database somewhere accessible to the GitLab runners. You must also update the database manually on your own schedule.

Available options for hosting the database are:

- [Use a clone of the GitLab Advisory Database](#use-a-clone-of-the-gitlab-advisory-database).
- [Use a copy of the GitLab Advisory Database](#use-a-copy-of-the-gitlab-advisory-database).

##### Use a clone of the GitLab Advisory Database

Using a clone of the GitLab Advisory Database is recommended because it is the most efficient method.

To host a clone of the GitLab Advisory Database:

1. Clone the GitLab Advisory Database to a host that is accessible by HTTP from the GitLab runners.
1. In your `.gitlab-ci.yml` file, set the value of the CI/CD variable `GEMNASIUM_DB_REMOTE_URL` to the URL of the Git repository. For example:

   ```yaml
   variables:
     GEMNASIUM_DB_REMOTE_URL: https://users-own-copy.example.com/gemnasium-db.git
   ```

##### Use a copy of the GitLab Advisory Database

Using a copy of the GitLab Advisory Database requires you to host an archive file which is downloaded by the analyzers.

To use a copy of the GitLab Advisory Database:

1. Download an archive of the GitLab Advisory Database to a host that is accessible by HTTP from the GitLab runners.
The archive is located at `https://gitlab.com/gitlab-org/security-products/gemnasium-db/-/archive/master/gemnasium-db-master.tar.gz`. 1. Update your `.gitlab-ci.yml` file. - Set CI/CD variable `GEMNASIUM_DB_LOCAL_PATH` to use the local copy of the database. - Set CI/CD variable `GEMNASIUM_DB_UPDATE_DISABLED` to disable the database update. - Download and extract the advisory database before the scan begins. ```yaml variables: GEMNASIUM_DB_LOCAL_PATH: ./gemnasium-db-local GEMNASIUM_DB_UPDATE_DISABLED: "true" dependency_scanning: before_script: - wget https://local.example.com/gemnasium_db.tar.gz - mkdir -p $GEMNASIUM_DB_LOCAL_PATH - tar -xzvf gemnasium_db.tar.gz --strip-components=1 -C $GEMNASIUM_DB_LOCAL_PATH ``` ### Using a proxy with Gradle projects The Gradle wrapper script does not read the `HTTP(S)_PROXY` environment variables. See [this upstream issue](https://github.com/gradle/gradle/issues/11065). To make the Gradle wrapper script use a proxy, you can specify the options using the `GRADLE_CLI_OPTS` CI/CD variable: ```yaml variables: GRADLE_CLI_OPTS: "-Dhttps.proxyHost=squid-proxy -Dhttps.proxyPort=3128 -Dhttp.proxyHost=squid-proxy -Dhttp.proxyPort=3128 -Dhttp.nonProxyHosts=localhost" ``` ### Using a proxy with Maven projects Maven does not read the `HTTP(S)_PROXY` environment variables. To make the Maven dependency scanner use a proxy, you can configure it using a `settings.xml` file (see [Maven documentation](https://maven.apache.org/guides/mini/guide-proxies.html)) and instruct Maven to use this configuration by using the `MAVEN_CLI_OPTS` CI/CD variable: ```yaml variables: MAVEN_CLI_OPTS: "--settings mysettings.xml" ``` ### Specific settings for languages and package managers See the following sections for configuring specific languages and package managers. #### Python (pip) If you need to install Python packages before the analyzer runs, you should use `pip install --user` in the `before_script` of the scanning job. 
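As a sketch of such an installation step, the following job override assumes the template's Python analyzer job name (`gemnasium-python-dependency_scanning`) and a hypothetical requirements file name; adjust both to match your project:

```yaml
include:
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml

gemnasium-python-dependency_scanning:
  before_script:
    # `internal-requirements.txt` is an illustrative file name. The `--user`
    # flag installs the packages into the user directory so the scan picks them up.
    - pip install --user -r internal-requirements.txt
```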
The `--user` flag causes project dependencies to be installed in the user directory. If you do not pass the `--user` option, packages are installed globally, and they are not scanned and don't show up when listing project dependencies. #### Python (setuptools) If you need to install Python packages before the analyzer runs, you should use `python setup.py install --user` in the `before_script` of the scanning job. The `--user` flag causes project dependencies to be installed in the user directory. If you do not pass the `--user` option, packages are installed globally, and they are not scanned and don't show up when listing project dependencies. When using self-signed certificates for your private PyPi repository, no extra job configuration (aside from the previous `.gitlab-ci.yml` template) is needed. However, you must update your `setup.py` to ensure that it can reach your private repository. Here is an example configuration: 1. Update `setup.py` to create a `dependency_links` attribute pointing at your private repository for each dependency in the `install_requires` list: ```python install_requires=['pyparsing>=2.0.3'], dependency_links=['https://pypi.example.com/simple/pyparsing'], ``` 1. Fetch the certificate from your repository URL and add it to the project: ```shell printf "\n" | openssl s_client -connect pypi.example.com:443 -servername pypi.example.com | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > internal.crt ``` 1. Point `setup.py` at the newly downloaded certificate: ```python import setuptools.ssl_support setuptools.ssl_support.cert_paths = ['internal.crt'] ``` #### Python (Pipenv) If running in a limited network connectivity environment, you must configure the `PIPENV_PYPI_MIRROR` variable to use a private PyPi mirror. This mirror must contain both default and development dependencies. 
```yaml
variables:
  PIPENV_PYPI_MIRROR: https://pypi.example.com/simple
```

<!-- markdownlint-disable MD044 -->
Alternatively, if it's not possible to use a private registry, you can load the required packages into the Pipenv virtual environment cache. For this option, the project must check the `Pipfile.lock` file into the repository, and load both default and development packages into the cache. See the [python-pipenv](https://gitlab.com/gitlab-org/security-products/tests/python-pipenv/-/blob/41cc017bd1ed302f6edebcfa3bc2922f428e07b6/.gitlab-ci.yml#L20-42) project for an example of how this can be done.
<!-- markdownlint-enable MD044 -->

## Dependency detection

Dependency Scanning automatically detects the languages used in the repository. All analyzers matching the detected languages are run. There is usually no need to customize the selection of analyzers. We recommend not specifying the analyzers so you automatically use the full selection for best coverage, avoiding the need to make adjustments when there are deprecations or removals. However, you can override the selection using the variable `DS_EXCLUDED_ANALYZERS`.

The language detection relies on CI job [`rules`](../../../ci/yaml/_index.md#rules) to detect [supported dependency files](#how-analyzers-are-triggered).

For Java and Python, when a supported dependency file is detected, Dependency Scanning attempts to build the project and execute some Java or Python commands to get the list of dependencies. For all other projects, the lock file is parsed to obtain the list of dependencies without needing to build the project first.

All direct and transitive dependencies are analyzed, without a limit to the depth of transitive dependencies.
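If you do need to narrow the selection, exclude analyzers by name with `DS_EXCLUDED_ANALYZERS`. For example, to skip the Python analyzer while keeping the others (a sketch; leave the variable unset to use the full selection):

```yaml
include:
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml

variables:
  # Comma-separated list of analyzers to skip
  DS_EXCLUDED_ANALYZERS: "gemnasium-python"
```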
### Analyzers Dependency Scanning supports the following official [Gemnasium-based](https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium) analyzers: - `gemnasium` - `gemnasium-maven` - `gemnasium-python` The analyzers are published as Docker images, which Dependency Scanning uses to launch dedicated containers for each analysis. You can also integrate a custom security scanner. Each analyzer is updated as new versions of Gemnasium are released. ### How analyzers obtain dependency information GitLab analyzers obtain dependency information using one of the following two methods: 1. [Parsing lockfiles directly.](#obtaining-dependency-information-by-parsing-lockfiles) 1. [Running a package manager or build tool to generate a dependency information file which is then parsed.](#obtaining-dependency-information-by-running-a-package-manager-to-generate-a-parsable-file) #### Obtaining dependency information by parsing lockfiles The following package managers use lockfiles that GitLab analyzers are capable of parsing directly: <!-- markdownlint-disable MD044 --> <table class="ds-table no-vertical-table-lines"> <thead> <tr> <th>Package Manager</th> <th>Supported File Format Versions</th> <th>Tested Package Manager Versions</th> </tr> </thead> <tbody> <tr> <td>Bundler</td> <td>Not applicable</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/ruby-bundler/default/Gemfile.lock#L118">1.17.3</a>, <a href="https://gitlab.com/gitlab-org/security-products/tests/ruby-bundler/-/blob/bundler2-FREEZE/Gemfile.lock#L118">2.1.4</a> </td> </tr> <tr> <td>Composer</td> <td>Not applicable</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/php-composer/default/composer.lock">1.x</a> </td> </tr> <tr> <td>Conan</td> <td>0.4</td> <td> <a 
href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/c-conan/default/conan.lock#L38">1.x</a> </td> </tr> <tr> <td>Go</td> <td>Not applicable</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/go-modules/gosum/default/go.sum">1.x</a> </td> </tr> <tr> <td>NuGet</td> <td>v1, v2<sup><b><a href="#notes-regarding-parsing-lockfiles-1">1</a></b></sup></td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/csharp-nuget-dotnetcore/default/src/web.api/packages.lock.json#L2">4.9</a> </td> </tr> <tr> <td>npm</td> <td>v1, v2, v3<sup><b><a href="#notes-regarding-parsing-lockfiles-2">2</a></b></sup></td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/js-npm/default/package-lock.json#L4">6.x</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/js-npm/lockfileVersion2/package-lock.json#L4">7.x</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/scanner/parser/npm/fixtures/lockfile-v3/simple/package-lock.json#L4">9.x</a> </td> </tr> <tr> <td>pnpm</td> <td>v5, v6, v9</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/js-pnpm/default/pnpm-lock.yaml#L1">7.x</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/scanner/parser/pnpm/fixtures/v6/simple/pnpm-lock.yaml#L1">8.x</a> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/scanner/parser/pnpm/fixtures/v9/simple/pnpm-lock.yaml#L1">9.x</a> </td> </tr> <tr> <td>yarn</td> <td>versions 1, 2, 3, 4<sup><b><a href="#notes-regarding-parsing-lockfiles-3">3</a></b></sup></td> <td> <a 
href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/js-yarn/classic/default/yarn.lock#L2">1.x</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/js-yarn/berry/v2/default/yarn.lock">2.x</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/js-yarn/berry/v3/default/yarn.lock">3.x</a> </td> </tr> <tr> <td>Poetry</td> <td>v1</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/python-poetry/default/poetry.lock">1.x</a> </td> </tr> <tr> <td>uv</td> <td>v0.x</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/scanner/parser/uv/fixtures/simple/uv.lock">0.x</a> </td> </tr> </tbody> </table> <ol> <li> <a id="notes-regarding-parsing-lockfiles-1"></a> <p> Support for NuGet version 2 lock files was <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/398680">introduced</a> in GitLab 16.2. </p> </li> <li> <a id="notes-regarding-parsing-lockfiles-2"></a> <p> Support for <code>lockfileVersion = 3</code> was <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/365176">introduced</a> in GitLab 15.7. </p> </li> <li> <a id="notes-regarding-parsing-lockfiles-3"></a> <p> Support for Yarn version 4 was <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/431752">introduced</a> in GitLab 16.11. </p> <p> The following features are not supported for Yarn Berry: </p> <ul> <li> <a href="https://yarnpkg.com/features/workspaces">workspaces</a> </li> <li> <a href="https://yarnpkg.com/cli/patch">yarn patch</a> </li> </ul> <p> Yarn files that contain a patch, a workspace, or both, are still processed, but these features are ignored. 
</p> </li> </ol> <!-- markdownlint-enable MD044 --> #### Obtaining dependency information by running a package manager to generate a parsable file To support the following package managers, the GitLab analyzers proceed in two steps: 1. Execute the package manager or a specific task, to export the dependency information. 1. Parse the exported dependency information. <!-- markdownlint-disable MD044 --> <table class="ds-table no-vertical-table-lines"> <thead> <tr> <th>Package Manager</th> <th>Pre-installed Versions</th> <th>Tested Versions</th> </tr> </thead> <tbody> <tr> <td>sbt</td> <td><a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/build/gemnasium-maven/debian/config/.tool-versions#L4">1.6.2</a></td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L794-798">1.1.6</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L800-805">1.2.8</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L722-725">1.3.12</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L722-725">1.4.6</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L742-746">1.5.8</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L748-762">1.6.2</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L764-768">1.7.3</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L770-774">1.8.3</a>, <a 
href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L776-781">1.9.6</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/.gitlab/ci/gemnasium-maven.gitlab-ci.yml#L111-121">1.9.7</a> </td> </tr> <tr> <td>maven</td> <td><a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.3.1/build/gemnasium-maven/debian/config/.tool-versions#L3">3.9.8</a></td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.3.1/spec/gemnasium-maven_image_spec.rb#L92-94">3.9.8</a><sup><b><a href="#exported-dependency-information-notes-1">1</a></b></sup> </td> </tr> <tr> <td>Gradle</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/build/gemnasium-maven/debian/config/.tool-versions#L5">6.7.1</a><sup><b><a href="#exported-dependency-information-notes-2">2</a></b></sup>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/build/gemnasium-maven/debian/config/.tool-versions#L5">7.6.4</a><sup><b><a href="#exported-dependency-information-notes-2">2</a></b></sup>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/build/gemnasium-maven/debian/config/.tool-versions#L5">8.8</a><sup><b><a href="#exported-dependency-information-notes-2">2</a></b></sup> </td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L316-321">5.6</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L323-328">6.7</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L330-335">6.9</a>, <a 
href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L337-341">7.6</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L343-347">8.8</a> </td> </tr> <tr> <td>setuptools</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.4.1/build/gemnasium-python/requirements.txt#L41">70.3.0</a> </td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.4.1/spec/gemnasium-python_image_spec.rb#L294-316">&gt;= 70.3.0</a> </td> </tr> <tr> <td>pip</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/build/gemnasium-python/debian/Dockerfile#L21">24</a> </td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-python_image_spec.rb#L77-90">24</a> </td> </tr> <tr> <td>Pipenv</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/build/gemnasium-python/requirements.txt#L23">2023.11.15</a> </td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-python_image_spec.rb#L243-256">2023.11.15</a><sup><b><a href="#exported-dependency-information-notes-3">3</a></b></sup>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-python_image_spec.rb#L219-241">2023.11.15</a> </td> </tr> <tr> <td>Go</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/build/gemnasium/alpine/Dockerfile#L91-93">1.21</a> </td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/build/gemnasium/alpine/Dockerfile#L91-93">1.21</a><sup><strong><a href="#exported-dependency-information-notes-4">4</a></strong></sup> </td> </tr> </tbody> 
</table> <ol> <li> <a id="exported-dependency-information-notes-1"></a> <p> This test uses the default version of <code>maven</code> specified by the <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/build/gemnasium-maven/debian/config/.tool-versions#L3"><code>.tool-versions</code></a> file. </p> </li> <li> <a id="exported-dependency-information-notes-2"></a> <p> Different versions of Java require different versions of Gradle. The versions of Gradle listed in the previous table are pre-installed in the analyzer image. The version of Gradle used by the analyzer depends on whether your project uses a <code>gradlew</code> (Gradle wrapper) file or not: </p> <ul> <li> <p> If your project <i>does not use</i> a <code>gradlew</code> file, then the analyzer automatically switches to one of the pre-installed Gradle versions, based on the version of Java specified by the <a href="#analyzer-specific-settings"><code>DS_JAVA_VERSION</code></a> variable (default version is <code>17</code>). </p> <p> For Java versions <code>8</code> and <code>11</code>, Gradle <code>6.7.1</code> is automatically selected, Java <code>17</code> uses Gradle <code>7.6.4</code>, and Java <code>21</code> uses Gradle <code>8.8</code>. </p> </li> <li> <p> If your project <i>does use</i> a <code>gradlew</code> file, then the version of Gradle pre-installed in the analyzer image is ignored, and the version specified in your <code>gradlew</code> file is used instead. </p> </li> </ul> </li> <li> <a id="exported-dependency-information-notes-3"></a> <p> This test confirms that if a <code>Pipfile.lock</code> file is found, it is used by <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium">Gemnasium</a> to scan the exact package versions listed in this file. 
    </p>
  </li>
  <li>
    <a id="exported-dependency-information-notes-4"></a>
    <p>
      Because of the implementation of <code>go build</code>, the Go build process requires network access, a pre-loaded mod cache via <code>go mod download</code>, or vendored dependencies. For more information, refer to the Go documentation on <a href="https://pkg.go.dev/cmd/go#hdr-Compile_packages_and_dependencies">compiling packages and dependencies</a>.
    </p>
  </li>
</ol>
<!-- markdownlint-enable MD044 -->

## How analyzers are triggered

GitLab relies on [`rules:exists`](../../../ci/yaml/_index.md#rulesexists) to start the relevant analyzers for the languages detected by the presence of the [supported files](#supported-languages-and-package-managers) in the repository.

A maximum of two directory levels from the repository's root is searched. For example, the `gemnasium-dependency_scanning` job is enabled if a repository contains either `Gemfile`, `api/Gemfile`, or `api/client/Gemfile`, but not if the only supported dependency file is `api/v1/client/Gemfile`.

## How multiple files are processed

{{< alert type="note" >}}

If you've run into problems while scanning multiple files, contribute a comment to [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/337056).

{{< /alert >}}

### Python

We only execute one installation in the directory where either a requirements file or a lock file has been detected. Dependencies are only analyzed by `gemnasium-python` for the first file that is detected. Files are searched for in the following order:

1. `requirements.txt`, `requirements.pip`, or `requires.txt` for projects using Pip.
1. `Pipfile` or `Pipfile.lock` for projects using Pipenv.
1. `poetry.lock` for projects using Poetry.
1. `setup.py` for projects using Setuptools.

The search begins with the root directory and then continues with subdirectories if no builds are found in the root directory.
Consequently, a Poetry lock file in the root directory would be detected before a Pipenv file in a subdirectory.

### Java and Scala

We only execute one build in the directory where a build file has been detected. For large projects that include multiple Gradle, Maven, or sbt builds, or any combination of these, `gemnasium-maven` only analyzes dependencies for the first build file that is detected. Build files are searched for in the following order:

1. `pom.xml` for single or [multi-module](https://maven.apache.org/pom.html#Aggregation) Maven projects.
1. `build.gradle` or `build.gradle.kts` for single or [multi-project](https://docs.gradle.org/current/userguide/intro_multi_project_builds.html) Gradle builds.
1. `build.sbt` for single or [multi-project](https://www.scala-sbt.org/1.x/docs/Multi-Project.html) sbt builds.

The search begins with the root directory and then continues with subdirectories if no builds are found in the root directory. Consequently, an sbt build file in the root directory would be detected before a Gradle build file in a subdirectory.

For [multi-module](https://maven.apache.org/pom.html#Aggregation) Maven projects, and multi-project [Gradle](https://docs.gradle.org/current/userguide/intro_multi_project_builds.html) and [sbt](https://www.scala-sbt.org/1.x/docs/Multi-Project.html) builds, sub-module and sub-project files are analyzed if they are declared in the parent build file.

### JavaScript

The following analyzers are executed, each of which has different behavior when processing multiple files:

- [Gemnasium](https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium)

  Supports multiple lockfiles.

- [Retire.js](https://retirejs.github.io/retire.js/)

  Does not support multiple lockfiles. When multiple lockfiles exist, `Retire.js` analyzes the first lockfile discovered while traversing the directory tree in alphabetical order.
The `gemnasium` analyzer supports scanning JavaScript projects for vendored libraries (that is, those checked into the project but not managed by the package manager).

### Go

Multiple files are supported. When a `go.mod` file is detected, the analyzer attempts to generate a [build list](https://go.dev/ref/mod#glos-build-list) using [Minimal Version Selection](https://go.dev/ref/mod#glos-minimal-version-selection). If this fails, the analyzer instead attempts to parse the dependencies within the `go.mod` file.

As a requirement, the `go.mod` file should be cleaned up using the command `go mod tidy` to ensure proper management of dependencies.

The process is repeated for every detected `go.mod` file.

### PHP, C, C++, .NET, C&#35;, Ruby, JavaScript

The analyzer for these languages supports multiple lockfiles.

### Support for additional languages

Support for additional languages, dependency managers, and dependency files is tracked in the following issues:

| Package Managers | Languages | Supported files | Scan tools | Issue |
| ------------------- | --------- | --------------- | ---------- | ----- |
| [Poetry](https://python-poetry.org/) | Python | `pyproject.toml` | [Gemnasium](https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium) | [GitLab#32774](https://gitlab.com/gitlab-org/gitlab/-/issues/32774) |

## Warnings

We recommend that you use the most recent version of all containers, and the most recent supported version of all package managers and languages. Using previous versions carries an increased security risk because unsupported versions may no longer benefit from active security reporting and backporting of security fixes.

### Gradle projects

Do not override the `reports.html.destination` or `reports.html.outputLocation` properties when generating an HTML dependency report for Gradle projects. Doing so prevents Dependency Scanning from functioning correctly.
### Maven Projects In isolated networks, if the central repository is a private registry (explicitly set with the `<mirror>` directive), Maven builds may fail to find the `gemnasium-maven-plugin` dependency. This issue occurs because Maven doesn't search the local repository (`/root/.m2`) by default and attempts to fetch from the central repository. The result is an error about the missing dependency. #### Workaround To resolve this issue, add a `<pluginRepositories>` section to your `settings.xml` file. This allows Maven to find plugins in the local repository. Before you begin, consider the following: - This workaround is only for environments where the default Maven central repository is mirrored to a private registry. - After applying this workaround, Maven searches the local repository for plugins, which may have security implications in some environments. Make sure this aligns with your organization's security policies. Follow these steps to modify the `settings.xml` file: 1. Locate your Maven `settings.xml` file. This file is typically found in one of these locations: - `/root/.m2/settings.xml` for the root user. - `~/.m2/settings.xml` for a regular user. - `${maven.home}/conf/settings.xml` global settings. 1. Check if there's an existing `<pluginRepositories>` section in the file. 1. If a `<pluginRepositories>` section already exists, add only the following `<pluginRepository>` element inside it. Otherwise, add the entire `<pluginRepositories>` section: ```xml <pluginRepositories> <pluginRepository> <id>local2</id> <name>local repository</name> <url>file:///root/.m2/repository/</url> </pluginRepository> </pluginRepositories> ``` 1. Run your Maven build or dependency scanning process again. 
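After adding the `<pluginRepositories>` section, make sure the scanning job actually uses your modified settings file. As with the proxy configuration shown earlier, a settings file committed to the repository can be passed to the analyzer with the `MAVEN_CLI_OPTS` variable (a sketch; `mysettings.xml` is an example filename):

```yaml
variables:
  # Point Maven, including the dependency scanning build, at the committed settings file
  MAVEN_CLI_OPTS: "--settings mysettings.xml"
```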
### Python projects Extra care needs to be taken when using the [`PIP_EXTRA_INDEX_URL`](https://pipenv.pypa.io/en/latest/indexes.html) environment variable due to a possible exploit documented by [CVE-2018-20225](https://nvd.nist.gov/vuln/detail/CVE-2018-20225): {{< alert type="warning" >}} An issue was discovered in pip (all versions) because it installs the version with the highest version number, even if the user had intended to obtain a private package from a private index. This only affects use of the `PIP_EXTRA_INDEX_URL` option, and exploitation requires that the package does not already exist in the public index (and thus the attacker can put the package there with an arbitrary version number). {{< /alert >}} ### Version number parsing In some cases it's not possible to determine if the version of a project dependency is in the affected range of a security advisory. For example: - The version is unknown. - The version is invalid. - Parsing the version or comparing it to the range fails. - The version is a branch, like `dev-master` or `1.5.x`. - The compared versions are ambiguous. For example, `1.0.0-20241502` can't be compared to `1.0.0-2` because one version contains a timestamp while the other does not. In these cases, the analyzer skips the dependency and outputs a message to the log. The GitLab analyzers do not make assumptions as they could result in a false positive or false negative. For a discussion, see [issue 442027](https://gitlab.com/gitlab-org/gitlab/-/issues/442027). ## Build Swift projects Swift Package Manager (SPM) is the official tool for managing the distribution of Swift code. It's integrated with the Swift build system to automate the process of downloading, compiling, and linking dependencies. Follow these best practices when you build a Swift project with SPM. 1. Include a `Package.resolved` file. The `Package.resolved` file locks your dependencies to specific versions. 
   Always commit this file to your repository to ensure consistency across different environments.

   ```shell
   git add Package.resolved
   git commit -m "Add Package.resolved to lock dependencies"
   ```

1. To build your Swift project, use the following commands:

   ```shell
   # Update dependencies
   swift package update

   # Build the project
   swift build
   ```

1. To configure CI/CD, add these steps to your `.gitlab-ci.yml` file:

   ```yaml
   swift-build:
     stage: build
     script:
       - swift package update
       - swift build
   ```

1. Optional. If you use private Swift package repositories with self-signed certificates, you might need to make the certificate trusted on the build machine:

   1. Fetch the certificate:

      ```shell
      echo | openssl s_client -servername your.repo.url -connect your.repo.url:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > repo-cert.crt
      ```

   1. On macOS, add the certificate to the system keychain so that tools that use the system trust store, including Swift Package Manager, accept it:

      ```shell
      sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain repo-cert.crt
      ```

Always test your build process in a clean environment to ensure your dependencies are correctly specified and resolve automatically.

## Build CocoaPods projects

CocoaPods is a popular dependency manager for Swift and Objective-C Cocoa projects. It provides a standard format for managing external libraries in iOS, macOS, watchOS, and tvOS projects.

Follow these best practices when you build projects that use CocoaPods for dependency management.

1. Include a `Podfile.lock` file. The `Podfile.lock` file is crucial for locking your dependencies to specific versions.
Always commit this file to your repository to ensure consistency across different environments. ```shell git add Podfile.lock git commit -m "Add Podfile.lock to lock CocoaPods dependencies" ``` 1. You can build your project with one of the following: - The `xcodebuild` command-line tool: ```shell # Install CocoaPods dependencies pod install # Build the project xcodebuild -workspace YourWorkspace.xcworkspace -scheme YourScheme build ``` - The Xcode IDE: 1. Open your `.xcworkspace` file in Xcode. 1. Select your target scheme. 1. Select **Product > Build**. You can also press <kbd>⌘</kbd>+<kbd>B</kbd>. - [fastlane](https://fastlane.tools/), a tool for automating builds and releases for iOS and Android apps: 1. Install `fastlane`: ```shell sudo gem install fastlane ``` 1. In your project, configure `fastlane`: ```shell fastlane init ``` 1. Add a lane to your `fastfile`: ```ruby lane :build do cocoapods gym(scheme: "YourScheme") end ``` 1. Run the build: ```shell fastlane build ``` - If your project uses both CocoaPods and Carthage, you can use Carthage to build your dependencies: 1. Create a `Cartfile` that includes your CocoaPods dependencies. 1. Run the following: ```shell carthage update --platform iOS ``` 1. Configure CI/CD to build the project according to your preferred method. For example, using `xcodebuild`: ```yaml cocoapods-build: stage: build script: - pod install - xcodebuild -workspace YourWorkspace.xcworkspace -scheme YourScheme build ``` 1. Optional. If you use private CocoaPods repositories, you might need to configure your project to access them: 1. Add the private spec repo: ```shell pod repo add REPO_NAME SOURCE_URL ``` 1. In your Podfile, specify the source: ```ruby source 'https://github.com/CocoaPods/Specs.git' source 'SOURCE_URL' ``` 1. Optional. If your private CocoaPods repository uses SSL, ensure the SSL certificate is properly configured: - If you use a self-signed certificate, add it to your system's trusted certificates. 
You can also specify the SSL configuration in your `.netrc` file: ```netrc machine your.private.repo.url login your_username password your_password ``` 1. After you update your Podfile, run `pod install` to install dependencies and update your workspace. Remember to always run `pod install` after updating your Podfile to ensure all dependencies are properly installed and the workspace is updated. ## Contributing to the vulnerability database To find a vulnerability, you can search the [`GitLab Advisory Database`](https://advisories.gitlab.com/). You can also [submit new vulnerabilities](https://gitlab.com/gitlab-org/security-products/gemnasium-db/blob/master/CONTRIBUTING.md).
---
stage: Application Security Testing
group: Composition Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Dependency Scanning
description: Vulnerabilities, remediation, configuration, analyzers, and reports.
---

<style>
table.ds-table tr:nth-child(even) {
  background-color: transparent;
}

table.ds-table td {
  border-left: 1px solid #dbdbdb;
  border-right: 1px solid #dbdbdb;
  border-bottom: 1px solid #dbdbdb;
}

table.ds-table tr td:first-child {
  border-left: 0;
}

table.ds-table tr td:last-child {
  border-right: 0;
}

table.ds-table ul {
  font-size: 1em;
  list-style-type: none;
  padding-left: 0px;
  margin-bottom: 0px;
}

table.no-vertical-table-lines td {
  border-left: none;
  border-right: none;
  border-bottom: 1px solid #f0f0f0;
}

table.no-vertical-table-lines tr {
  border-top: none;
}
</style>

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< alert type="warning" >}}

The Dependency Scanning feature based on the Gemnasium analyzer is deprecated in GitLab 17.9 and is planned for removal in GitLab 19.0. It is being replaced with [Dependency Scanning using SBOM](dependency_scanning_sbom/_index.md) and the [new Dependency Scanning analyzer](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning). For more information, see [epic 15961](https://gitlab.com/groups/gitlab-org/-/epics/15961).

{{< /alert >}}

Dependency Scanning identifies security vulnerabilities in your application's dependencies before they reach production. This identification protects your application from potential exploits and data breaches that could damage user trust and your business reputation.
When vulnerabilities are found during pipeline runs, they appear directly in your merge request, giving you immediate visibility of security issues before changes are merged.

All dependencies in your code, including transitive (nested) dependencies, are automatically analyzed during pipelines. This analysis catches security issues that manual review processes might miss. Dependency Scanning integrates into your existing CI/CD workflow with minimal configuration changes, making it straightforward to implement secure development practices from day one.

Vulnerabilities can also be identified outside a pipeline by [Continuous Vulnerability Scanning](../continuous_vulnerability_scanning/_index.md).

GitLab offers both Dependency Scanning and [Container Scanning](../container_scanning/_index.md) to ensure coverage for all of these dependency types. To cover as much of your risk area as possible, we encourage you to use all of our security scanners. For a comparison of these features, see [Dependency Scanning compared to Container Scanning](../comparison_dependency_and_container_scanning.md).

![Dependency scanning Widget](img/dependency_scanning_v13_2.png)

{{< alert type="warning" >}}

Dependency Scanning does not support runtime installation of compilers and interpreters.
{{< /alert >}}

- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For an overview, see [Dependency Scanning - Advanced Security Testing](https://www.youtube.com/watch?v=TBnfbGk4c4o)
- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For an interactive reading and how-to demo of this Dependency Scanning documentation, see [How to use dependency scanning tutorial hands-on GitLab Application Security part 3](https://youtu.be/ii05cMbJ4xQ?feature=shared)
- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For other interactive reading and how-to demos, see [Get Started With GitLab Application Security Playlist](https://www.youtube.com/playlist?list=PL05JrBw4t0KrUrjDoefSkgZLx5aJYFaF9)

## Getting started

The following steps show how to enable Dependency Scanning for your project.

Prerequisites:

- The `test` stage is required in the `.gitlab-ci.yml` file.
- With self-managed runners you need a GitLab Runner with the [`docker`](https://docs.gitlab.com/runner/executors/docker.html) or [`kubernetes`](https://docs.gitlab.com/runner/install/kubernetes.html) executor.
- If you're using SaaS runners on GitLab.com, this is enabled by default.

To enable the analyzer, either:

- Enable [Auto DevOps](../../../topics/autodevops/_index.md), which includes dependency scanning.
- Use a preconfigured merge request.
- Create a [scan execution policy](../policies/scan_execution_policies.md) that enforces dependency scanning.
- Edit the `.gitlab-ci.yml` file manually.
- [Use CI/CD components](#use-cicd-components).

### Use a preconfigured merge request

This method automatically prepares a merge request that includes the Dependency Scanning template in the `.gitlab-ci.yml` file. You then merge the merge request to enable Dependency Scanning.

{{< alert type="note" >}}

This method works best with no existing `.gitlab-ci.yml` file, or with a minimal configuration file.
If you have a complex GitLab configuration file it might not be parsed successfully, and an error might occur. In that case, use the [manual](#edit-the-gitlab-ciyml-file-manually) method instead. {{< /alert >}} To enable Dependency Scanning: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Secure > Security configuration**. 1. In the **Dependency Scanning** row, select **Configure with a merge request**. 1. Select **Create merge request**. 1. Review the merge request, then select **Merge**. Pipelines now include a Dependency Scanning job. ### Edit the `.gitlab-ci.yml` file manually This method requires you to manually edit the existing `.gitlab-ci.yml` file. Use this method if your GitLab CI/CD configuration file is complex. To enable Dependency Scanning: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Build > Pipeline editor**. 1. If no `.gitlab-ci.yml` file exists, select **Configure pipeline**, then delete the example content. 1. Copy and paste the following to the bottom of the `.gitlab-ci.yml` file. If an `include` line already exists, add only the `template` line below it. ```yaml include: - template: Jobs/Dependency-Scanning.gitlab-ci.yml ``` 1. Select the **Validate** tab, then select **Validate pipeline**. The message **Simulation completed successfully** confirms the file is valid. 1. Select the **Edit** tab. 1. Complete the fields. Do not use the default branch for the **Branch** field. 1. Select the **Start a new merge request with these changes** checkbox, then select **Commit changes**. 1. Complete the fields according to your standard workflow, then select **Create merge request**. 1. Review and edit the merge request according to your standard workflow, then select **Merge**. Pipelines now include a Dependency Scanning job. ### Use CI/CD components {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/454143) in GitLab 17.0. 
This feature is an [experiment](../../../policy/development_stages_support.md). - The dependency scanning CI/CD component only supports Android projects. {{< /history >}} Use [CI/CD components](../../../ci/components/_index.md) to perform Dependency Scanning of your application. For instructions, see the respective component's README file. #### Available CI/CD components See <https://gitlab.com/explore/catalog/components/dependency-scanning> After completing these steps, you can: - Learn more about how to [understand the results](#understanding-the-results). - Plan a [roll out](#roll-out) to more projects. ## Understanding the results You can review vulnerabilities in a pipeline: 1. On the left sidebar, select **Search or go to** and find your project. 1. On the left sidebar, select **Build > Pipelines**. 1. Select the pipeline. 1. Select the **Security** tab. 1. Select a vulnerability to view its details, including: - Status: Indicates whether the vulnerability has been triaged or resolved. - Description: Explains the cause of the vulnerability, its potential impact, and recommended remediation steps. - Severity: Categorized into six levels based on impact. [Learn more about severity levels](../vulnerabilities/severities.md). - CVSS score: Provides a numeric value that maps to severity. - EPSS: Shows the likelihood of a vulnerability being exploited in the wild. - Has Known Exploit (KEV): Indicates that a given vulnerability has been exploited. - Project: Highlights the project where the vulnerability was identified. - Report type / Scanner: Explains the output type and scanner used to generate the output. - Reachable: Provides an indication whether the vulnerable dependency is used in code. - Scanner: Identifies which analyzer detected the vulnerability. - Location: Names the file where the vulnerable dependency is located. - Links: Evidence of the vulnerability being cataloged in various advisory databases. 
- Identifiers: A list of references used to classify the vulnerability, such as CVE identifiers. Dependency Scanning produces the following output: - **Dependency scanning report**: Contains details of all vulnerabilities detected in dependencies. - **CycloneDX Software Bill of Materials**: Software Bill of Materials (SBOM) for each supported lock or build file detected. ### Dependency scanning report Dependency scanning outputs a report containing details of all vulnerabilities. The report is processed internally and the results are shown in the UI. The report is also output as an artifact of the dependency scanning job, named `gl-dependency-scanning-report.json`. For more details of the dependency scanning report, see the [Dependency scanning report schema](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/dependency-scanning-report-format.json). ### CycloneDX Software Bill of Materials Dependency Scanning outputs a [CycloneDX](https://cyclonedx.org/) Software Bill of Materials (SBOM) for each supported lock or build file it detects. The CycloneDX SBOMs are: - Named `gl-sbom-<package-type>-<package-manager>.cdx.json`. - Available as job artifacts of the dependency scanning job. - Saved in the same directory as the detected lock or build files. For example, if your project has the following structure: ```plaintext . ├── ruby-project/ │ └── Gemfile.lock ├── ruby-project-2/ │ └── Gemfile.lock ├── php-project/ │ └── composer.lock └── go-project/ └── go.sum ``` Then the Gemnasium scanner generates the following CycloneDX SBOMs: ```plaintext . 
├── ruby-project/
│   ├── Gemfile.lock
│   └── gl-sbom-gem-bundler.cdx.json
├── ruby-project-2/
│   ├── Gemfile.lock
│   └── gl-sbom-gem-bundler.cdx.json
├── php-project/
│   ├── composer.lock
│   └── gl-sbom-packagist-composer.cdx.json
└── go-project/
    ├── go.sum
    └── gl-sbom-go-go.cdx.json
```

#### Merging multiple CycloneDX SBOMs

You can use a CI/CD job to merge the multiple CycloneDX SBOMs into a single SBOM. GitLab uses [CycloneDX Properties](https://cyclonedx.org/use-cases/#properties--name-value-store) to store implementation-specific details in the metadata of each CycloneDX SBOM, such as the location of build and lock files. If multiple CycloneDX SBOMs are merged together, this information is removed from the resulting merged file.

For example, the following `.gitlab-ci.yml` extract demonstrates how the CycloneDX SBOM files can be merged, and the resulting file validated.

```yaml
stages:
  - test
  - merge-cyclonedx-sboms

include:
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml

merge cyclonedx sboms:
  stage: merge-cyclonedx-sboms
  image:
    name: cyclonedx/cyclonedx-cli:0.25.1
    entrypoint: [""]
  script:
    - find . -name "gl-sbom-*.cdx.json" -exec cyclonedx merge --output-file gl-sbom-all.cdx.json --input-files "{}" +
    # optional: validate the merged sbom
    - cyclonedx validate --input-version v1_4 --input-file gl-sbom-all.cdx.json
  artifacts:
    paths:
      - gl-sbom-all.cdx.json
```

## Roll out

After you are confident in the Dependency Scanning results for a single project, you can extend its implementation to additional projects:

- Use [enforced scan execution](../detect/security_configuration.md#create-a-shared-configuration) to apply Dependency Scanning settings across groups.
- If you have unique requirements, Dependency Scanning with SBOM can be run in [offline environments](../offline_deployments/_index.md).
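A scan execution policy that enforces dependency scanning across a group can be sketched as follows. This is a hedged example, not a drop-in policy: the policy name, description, and branch list are illustrative, and the file lives in the linked security policy project (conventionally `.gitlab/security-policies/policy.yml`).

```yaml
scan_execution_policy:
  - name: Enforce dependency scanning   # illustrative name
    description: Run dependency scanning in every pipeline on the default branch
    enabled: true
    rules:
      - type: pipeline
        branches:
          - main                        # illustrative branch list
    actions:
      - scan: dependency_scanning
```

With a policy like this in place, projects in the group run the dependency scanning job even if their own `.gitlab-ci.yml` does not include the template.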
## Supported languages and package managers The following languages and dependency managers are supported by Dependency Scanning: <!-- markdownlint-disable MD044 --> <table class="ds-table"> <thead> <tr> <th>Language</th> <th>Language versions</th> <th>Package manager</th> <th>Supported files</th> <th><a href="#how-multiple-files-are-processed">Processes multiple files?</a></th> </tr> </thead> <tbody> <tr> <td>.NET</td> <td rowspan="2">All versions</td> <td rowspan="2"><a href="https://www.nuget.org/">NuGet</a></td> <td rowspan="2"><a href="https://learn.microsoft.com/en-us/nuget/consume-packages/package-references-in-project-files#enabling-lock-file"><code>packages.lock.json</code></a></td> <td rowspan="2">Y</td> </tr> <tr> <td>C#</td> </tr> <tr> <td>C</td> <td rowspan="2">All versions</td> <td rowspan="2"><a href="https://conan.io/">Conan</a></td> <td rowspan="2"><a href="https://docs.conan.io/en/latest/versioning/lockfiles.html"><code>conan.lock</code></a></td> <td rowspan="2">Y</td> </tr> <tr> <td>C++</td> </tr> <tr> <td>Go</td> <td>All versions</td> <td><a href="https://go.dev/">Go</a></td> <td> <ul> <li><code>go.mod</code></li> </ul> </td> <td>Y</td> </tr> <tr> <td rowspan="2">Java and Kotlin</td> <td rowspan="2"> 8 LTS, 11 LTS, 17 LTS, or 21 LTS<sup><b><a href="#notes-regarding-supported-languages-and-package-managers-1">1</a></b></sup> </td> <td><a href="https://gradle.org/">Gradle</a><sup><b><a href="#notes-regarding-supported-languages-and-package-managers-2">2</a></b></sup></td> <td> <ul> <li><code>build.gradle</code></li> <li><code>build.gradle.kts</code></li> </ul> </td> <td>N</td> </tr> <tr> <td><a href="https://maven.apache.org/">Maven</a><sup><b><a href="#notes-regarding-supported-languages-and-package-managers-6">6</a></b></sup></td> <td><code>pom.xml</code></td> <td>N</td> </tr> <tr> <td rowspan="3">JavaScript and TypeScript</td> <td rowspan="3">All versions</td> <td><a href="https://www.npmjs.com/">npm</a></td> <td> <ul> 
<li><code>package-lock.json</code></li> <li><code>npm-shrinkwrap.json</code></li> </ul> </td> <td>Y</td> </tr> <tr> <td><a href="https://classic.yarnpkg.com/en/">yarn</a></td> <td><code>yarn.lock</code></td> <td>Y</td> </tr> <tr> <td><a href="https://pnpm.io/">pnpm</a><sup><b><a href="#notes-regarding-supported-languages-and-package-managers-3">3</a></b></sup></td> <td><code>pnpm-lock.yaml</code></td> <td>Y</td> </tr> <tr> <td>PHP</td> <td>All versions</td> <td><a href="https://getcomposer.org/">Composer</a></td> <td><code>composer.lock</code></td> <td>Y</td> </tr> <tr> <td rowspan="5">Python</td> <td rowspan="5">3.11<sup><b><a href="#notes-regarding-supported-languages-and-package-managers-7">7</a></b></sup></td> <td><a href="https://setuptools.readthedocs.io/en/latest/">setuptools</a><sup><b><a href="#notes-regarding-supported-languages-and-package-managers-8">8</a></b></sup></td> <td><code>setup.py</code></td> <td>N</td> </tr> <tr> <td><a href="https://pip.pypa.io/en/stable/">pip</a></td> <td> <ul> <li><code>requirements.txt</code></li> <li><code>requirements.pip</code></li> <li><code>requires.txt</code></li> </ul> </td> <td>N</td> </tr> <tr> <td><a href="https://pipenv.pypa.io/en/latest/">Pipenv</a></td> <td> <ul> <li><a href="https://pipenv.pypa.io/en/latest/pipfile.html#example-pipfile"><code>Pipfile</code></a></li> <li><a href="https://pipenv.pypa.io/en/latest/pipfile.html#example-pipfile-lock"><code>Pipfile.lock</code></a></li> </ul> </td> <td>N</td> </tr> <tr> <td><a href="https://python-poetry.org/">Poetry</a><sup><b><a href="#notes-regarding-supported-languages-and-package-managers-4">4</a></b></sup></td> <td><code>poetry.lock</code></td> <td>N</td> </tr> <tr> <td><a href="https://docs.astral.sh/uv/">uv</a></td> <td><code>uv.lock</code></td> <td>Y</td> </tr> <tr> <td>Ruby</td> <td>All versions</td> <td><a href="https://bundler.io/">Bundler</a></td> <td> <ul> <li><code>Gemfile.lock</code></li> <li><code>gems.locked</code></li> </ul> </td> <td>Y</td> </tr> 
<tr> <td>Scala</td> <td>All versions</td> <td><a href="https://www.scala-sbt.org/">sbt</a><sup><b><a href="#notes-regarding-supported-languages-and-package-managers-5">5</a></b></sup></td> <td><code>build.sbt</code></td> <td>N</td> </tr> <tr> <td>Swift</td> <td>All versions</td> <td><a href="https://swift.org/package-manager/">Swift Package Manager</a></td> <td><code>Package.resolved</code></td> <td>N</td> </tr> <tr> <td>Cocoapods<sup><b><a href="#notes-regarding-supported-languages-and-package-managers-9">9</a></b></sup></td> <td>All versions</td> <td><a href="https://cocoapods.org/">CocoaPods</a></td> <td><code>Podfile.lock</code></td> <td>N</td> </tr> <tr> <td>Dart<sup><b><a href="#notes-regarding-supported-languages-and-package-managers-10">10</a></b></sup></td> <td>All versions</td> <td><a href="https://pub.dev/">Pub</a></td> <td><code>pubspec.lock</code></td> <td>N</td> </tr> </tbody> </table> <ol> <li> <a id="notes-regarding-supported-languages-and-package-managers-1"></a> <p> Java 21 LTS for <a href="https://www.scala-sbt.org/">sbt</a> is limited to version 1.9.7. Support for more <a href="https://www.scala-sbt.org/">sbt</a> versions can be tracked in <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/430335">issue 430335</a>. It is not supported when <a href="https://docs.gitlab.com/ee/development/fips_compliance.html#enable-fips-mode">FIPS mode</a> is enabled. </p> </li> <li> <a id="notes-regarding-supported-languages-and-package-managers-2"></a> <p> Gradle is not supported when <a href="https://docs.gitlab.com/ee/development/fips_compliance.html#enable-fips-mode">FIPS mode</a> is enabled. </p> </li> <li> <a id="notes-regarding-supported-languages-and-package-managers-3"></a> <p> Support for <code>pnpm</code> lockfiles was <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/336809">introduced in GitLab 15.11</a>. 
<code>pnpm</code> lockfiles do not store bundled dependencies, so the reported dependencies may differ from <code>npm</code> or <code>yarn</code>. </p> </li> <li> <a id="notes-regarding-supported-languages-and-package-managers-4"></a> <p> Support for <a href="https://python-poetry.org/">Poetry</a> projects with a <code>poetry.lock</code> file was <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/7006">added in GitLab 15.0</a>. Support for projects without a <code>poetry.lock</code> file is tracked in issue: <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/32774">Poetry's pyproject.toml support for dependency scanning.</a> </p> </li> <li> <a id="notes-regarding-supported-languages-and-package-managers-5"></a> <p> Support for sbt 1.0.x was <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/415835">deprecated</a> in GitLab 16.8 and <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/436985">removed</a> in GitLab 17.0. </p> </li> <li> <a id="notes-regarding-supported-languages-and-package-managers-6"></a> <p> Support for Maven below 3.8.8 was <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/438772">deprecated</a> in GitLab 16.9 and will be removed in GitLab 17.0. </p> </li> <li> <a id="notes-regarding-supported-languages-and-package-managers-7"></a> <p> Support for prior Python versions was <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/441201">deprecated</a> in GitLab 16.9 and <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/441491">removed</a> in GitLab 17.0. </p> </li> <li> <a id="notes-regarding-supported-languages-and-package-managers-8"></a> <p> Excludes both <code>pip</code> and <code>setuptools</code> from the report as they are required by the installer. </p> </li> <li> <a id="notes-regarding-supported-languages-and-package-managers-9"></a> <p> Only SBOM, without advisories. See <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/468764">spike on CocoaPods advisories research</a>. 
</p> </li> <li> <a id="notes-regarding-supported-languages-and-package-managers-10"></a> <p> No license detection yet. See <a href="https://gitlab.com/groups/gitlab-org/-/epics/17037">epic on Dart license detection</a>. </p> </li> </ol> <!-- markdownlint-enable MD044 --> ### Running jobs in merge request pipelines See [Use security scanning tools with merge request pipelines](../detect/security_configuration.md#use-security-scanning-tools-with-merge-request-pipelines) ### Customizing analyzer behavior To customize Dependency Scanning, use [CI/CD variables](#available-cicd-variables). {{< alert type="warning" >}} Test all customization of GitLab analyzers in a merge request before merging these changes to the default branch. Failure to do so can give unexpected results, including a large number of false positives. {{< /alert >}} ### Overriding dependency scanning jobs To override a job definition (for example, to change properties like `variables` or `dependencies`), declare a new job with the same name as the one to override. Place this new job after the template inclusion and specify any additional keys under it. For example, this disables `DS_REMEDIATE` for the `gemnasium` analyzer: ```yaml include: - template: Jobs/Dependency-Scanning.gitlab-ci.yml gemnasium-dependency_scanning: variables: DS_REMEDIATE: "false" ``` To override the `dependencies: []` attribute, add an override job as described previously, targeting this attribute: ```yaml include: - template: Jobs/Dependency-Scanning.gitlab-ci.yml gemnasium-dependency_scanning: dependencies: ["build"] ``` ### Available CI/CD variables You can use CI/CD variables to [customize](#customizing-analyzer-behavior) dependency scanning behavior. #### Global analyzer settings The following variables allow configuration of global dependency scanning settings. | CI/CD variables | Description | | ----------------------------|------------ | | `ADDITIONAL_CA_CERT_BUNDLE` | Bundle of CA certificates to trust. 
The bundle of certificates provided here is also used by other tools during the scanning process, such as `git`, `yarn`, or `npm`. For more details, see [Custom TLS certificate authority](#custom-tls-certificate-authority). | | `DS_EXCLUDED_ANALYZERS` | Specify the analyzers (by name) to exclude from Dependency Scanning. For more information, see [Analyzers](#analyzers). | | `DS_EXCLUDED_PATHS` | Exclude files and directories from the scan based on the paths. A comma-separated list of patterns. Patterns can be globs (see [`doublestar.Match`](https://pkg.go.dev/github.com/bmatcuk/doublestar/v4@v4.0.2#Match) for supported patterns), or file or folder paths (for example, `doc,spec`). Parent directories also match patterns. This is a pre-filter which is applied before the scan is executed. Default: `"spec, test, tests, tmp"`. | | `DS_IMAGE_SUFFIX` | Suffix added to the image name. (GitLab team members can view more information in this confidential issue: `https://gitlab.com/gitlab-org/gitlab/-/issues/354796`). Automatically set to `"-fips"` when FIPS mode is enabled. | | `DS_MAX_DEPTH` | Defines how many directory levels deep that the analyzer should search for supported files to scan. A value of `-1` scans all directories regardless of depth. Default: `2`. | | `SECURE_ANALYZERS_PREFIX` | Override the name of the Docker registry providing the official default images (proxy). | #### Analyzer-specific settings The following variables configure the behavior of specific dependency scanning analyzers. | CI/CD variable | Analyzer | Default | Description | |--------------------------------------|--------------------|------------------------------|-------------| | `GEMNASIUM_DB_LOCAL_PATH` | `gemnasium` | `/gemnasium-db` | Path to local Gemnasium database. | | `GEMNASIUM_DB_UPDATE_DISABLED` | `gemnasium` | `"false"` | Disable automatic updates for the `gemnasium-db` advisory database. 
For usage see [Access to the GitLab Advisory Database](#access-to-the-gitlab-advisory-database). | | `GEMNASIUM_DB_REMOTE_URL` | `gemnasium` | `https://gitlab.com/gitlab-org/security-products/gemnasium-db.git` | Repository URL for fetching the GitLab Advisory Database. | | `GEMNASIUM_DB_REF_NAME` | `gemnasium` | `master` | Branch name for remote repository database. `GEMNASIUM_DB_REMOTE_URL` is required. | | `GEMNASIUM_IGNORED_SCOPES` | `gemnasium` | | Comma-separated list of Maven dependency scopes to ignore. For more details, see the [Maven dependency scope documentation](https://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html#Dependency_Scope) | | `DS_REMEDIATE` | `gemnasium` | `"true"`, `"false"` in FIPS mode | Enable automatic remediation of vulnerable dependencies. Not supported in FIPS mode. | | `DS_REMEDIATE_TIMEOUT` | `gemnasium` | `5m` | Timeout for auto-remediation. | | `GEMNASIUM_LIBRARY_SCAN_ENABLED` | `gemnasium` | `"true"` | Enable detecting vulnerabilities in vendored JavaScript libraries (libraries which are not managed by a package manager). This functionality requires a JavaScript lockfile to be present in a commit, otherwise Dependency Scanning is not executed and vendored files are not scanned.<br>Dependency scanning uses the [Retire.js](https://github.com/RetireJS/retire.js) scanner to detect a limited set of vulnerabilities. For details of which vulnerabilities are detected, see the [Retire.js repository](https://github.com/RetireJS/retire.js/blob/master/repository/jsrepository.json). | | `DS_INCLUDE_DEV_DEPENDENCIES` | `gemnasium` | `"true"` | When set to `"false"`, development dependencies and their vulnerabilities are not reported. Only projects using Composer, Maven, npm, pnpm, Pipenv or Poetry are supported. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/227861) in GitLab 15.1. | | `GOOS` | `gemnasium` | `"linux"` | The operating system for which to compile Go code. 
| | `GOARCH` | `gemnasium` | `"amd64"` | The architecture of the processor for which to compile Go code. | | `GOFLAGS` | `gemnasium` | | The flags passed to the `go build` tool. | | `GOPRIVATE` | `gemnasium` | | A list of glob patterns and prefixes to be fetched from source. For more information, see the Go private modules [documentation](https://go.dev/ref/mod#private-modules). | | `DS_JAVA_VERSION` | `gemnasium-maven` | `17` | Version of Java. Available versions: `8`, `11`, `17`, `21`. | | `MAVEN_CLI_OPTS` | `gemnasium-maven` | `"-DskipTests --batch-mode"` | List of command line arguments that are passed to `maven` by the analyzer. See an example for [using private repositories](#authenticate-with-a-private-maven-repository). | | `GRADLE_CLI_OPTS` | `gemnasium-maven` | | List of command line arguments that are passed to `gradle` by the analyzer. | | `GRADLE_PLUGIN_INIT_PATH` | `gemnasium-maven` | `"gemnasium-init.gradle"` | Specifies the path to the Gradle initialization script. The init script must include `allprojects { apply plugin: 'project-report' }` to ensure compatibility. | | `DS_GRADLE_RESOLUTION_POLICY` | `gemnasium-maven` | `"failed"` | Controls Gradle dependency resolution strictness. Accepts `"none"` to allow partial results, or `"failed"` to fail the scan when any dependencies fail to resolve. | | `SBT_CLI_OPTS` | `gemnasium-maven` | | List of command-line arguments that the analyzer passes to `sbt`. | | `PIP_INDEX_URL` | `gemnasium-python` | `https://pypi.org/simple` | Base URL of Python Package Index. | | `PIP_EXTRA_INDEX_URL` | `gemnasium-python` | | Array of [extra URLs](https://pip.pypa.io/en/stable/reference/pip_install/#cmdoption-extra-index-url) of package indexes to use in addition to `PIP_INDEX_URL`. Comma-separated. **Warning**: Read [the following security consideration](#python-projects) when using this environment variable. | | `PIP_REQUIREMENTS_FILE` | `gemnasium-python` | | Pip requirements file to be scanned. 
This is a filename and not a path. When this environment variable is set, only the specified file is scanned. |
| `PIPENV_PYPI_MIRROR`                 | `gemnasium-python` |                              | If set, overrides the PyPi index used by Pipenv with a [mirror](https://github.com/pypa/pipenv/blob/v2022.1.8/pipenv/environments.py#L263). |
| `DS_PIP_VERSION`                     | `gemnasium-python` |                              | Force the installation of a specific pip version (example: `"19.3"`), otherwise the pip installed in the Docker image is used. |
| `DS_PIP_DEPENDENCY_PATH`             | `gemnasium-python` |                              | Path to load Python pip dependencies from. |

#### Other variables

The previous tables are not an exhaustive list of all variables that can be used. They contain all the GitLab-specific and analyzer-specific variables that are supported and tested. Many other variables, such as environment variables, can be passed through and do work, but because that list is large and largely untested, it is not documented here.

For example, to pass the non-GitLab environment variable `HTTPS_PROXY` to all Dependency Scanning jobs, set it as a [CI/CD variable in your `.gitlab-ci.yml`](../../../ci/variables/_index.md#define-a-cicd-variable-in-the-gitlab-ciyml-file) file like this:

```yaml
variables:
  HTTPS_PROXY: "https://squid-proxy:3128"
```

{{< alert type="note" >}}

Gradle projects require [an additional variable](#using-a-proxy-with-gradle-projects) setup to use a proxy.

{{< /alert >}}

Alternatively, you can set it in specific jobs, such as dependency scanning:

```yaml
dependency_scanning:
  variables:
    HTTPS_PROXY: $HTTPS_PROXY
```

Because not all variables have been tested, some may work and others may not. If a variable you need does not work, consider [submitting a feature request](https://gitlab.com/gitlab-org/gitlab/-/issues/new?issuable_template=Feature%20proposal%20-%20detailed&issue[title]=Docs%20feedback%20-%20feature%20proposal:%20Write%20your%20title) or contributing code to enable its use.
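As a combined sketch, several of the variables documented in the preceding tables can be set together in one configuration. The values below are illustrative only, not recommendations:

```yaml
include:
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml

variables:
  # Search three directory levels deep for supported files (default: 2).
  DS_MAX_DEPTH: 3
  # Pre-filter applied before the scan; paths here are illustrative.
  DS_EXCLUDED_PATHS: "spec, test, tests, tmp, vendor"
  # Skip an analyzer entirely, for example in a project with no Python code.
  DS_EXCLUDED_ANALYZERS: "gemnasium-python"
```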
### Custom TLS certificate authority Dependency Scanning allows for use of custom TLS certificates for SSL/TLS connections instead of the default shipped with the analyzer container image. Support for custom certificate authorities was introduced in the following versions. | Analyzer | Version | |--------------------|--------------------------------------------------------------------------------------------------------| | `gemnasium` | [v2.8.0](https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/releases/v2.8.0) | | `gemnasium-maven` | [v2.9.0](https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium-maven/-/releases/v2.9.0) | | `gemnasium-python` | [v2.7.0](https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium-python/-/releases/v2.7.0) | #### Using a custom TLS certificate authority To use a custom TLS certificate authority, assign the [text representation of the X.509 PEM public-key certificate](https://www.rfc-editor.org/rfc/rfc7468#section-5.1) to the CI/CD variable `ADDITIONAL_CA_CERT_BUNDLE`. For example, to configure the certificate in the `.gitlab-ci.yml` file: ```yaml variables: ADDITIONAL_CA_CERT_BUNDLE: | -----BEGIN CERTIFICATE----- MIIGqTCCBJGgAwIBAgIQI7AVxxVwg2kch4d56XNdDjANBgkqhkiG9w0BAQsFADCB ... jWgmPqF3vUbZE0EyScetPJquRFRKIesyJuBFMAs= -----END CERTIFICATE----- ``` ### Authenticate with a private Maven repository To use a private Maven repository that requires authentication, you should store your credentials in a CI/CD variable and reference them in your Maven settings file. Do not add the credentials to your `.gitlab-ci.yml` file. To authenticate with a private Maven repository: 1. Add the `MAVEN_CLI_OPTS` CI/CD variable to your [project's settings](../../../ci/variables/_index.md#for-a-project), setting the value to include your credentials. 
For example, if your username is `myuser` and the password is `verysecret`: | Type | Key | Value | |----------|------------------|-------| | Variable | `MAVEN_CLI_OPTS` | `--settings mysettings.xml -Drepository.password=verysecret -Drepository.user=myuser` | 1. Create a Maven settings file with your server configuration. For example, add the following to the settings file `mysettings.xml`. This file is referenced in the `MAVEN_CLI_OPTS` CI/CD variable. ```xml <!-- mysettings.xml --> <settings> ... <servers> <server> <id>private_server</id> <username>${repository.user}</username> <password>${repository.password}</password> </server> </servers> </settings> ``` ### FIPS-enabled images {{< history >}} - Introduced in GitLab 15.0 - Gemnasium uses FIPS-enabled images when FIPS mode is enabled. {{< /history >}} GitLab also offers [FIPS-enabled Red Hat UBI](https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image) versions of the Gemnasium images. When FIPS mode is enabled in the GitLab instance, Gemnasium scanning jobs automatically use the FIPS-enabled images. To manually switch to FIPS-enabled images, set the variable `DS_IMAGE_SUFFIX` to `"-fips"`. Dependency scanning for Gradle projects and auto-remediation for Yarn projects are not supported in FIPS mode. FIPS-enabled images are based on RedHat's UBI micro. They don't have package managers such as `dnf` or `microdnf` so it's not possible to install system packages at runtime. ### Offline environment {{< details >}} - Tier: Ultimate - Offering: GitLab Self-Managed {{< /details >}} For instances in an environment with limited, restricted, or intermittent access to external resources through the internet, some adjustments are required for dependency scanning jobs to run successfully. For more information, see [Offline environments](../offline_deployments/_index.md). 
#### Requirements To run dependency scanning in an offline environment you must have: - A GitLab Runner with the `docker` or `kubernetes` executor - Local copies of the dependency scanning analyzer images - Access to the [GitLab Advisory Database](https://gitlab.com/gitlab-org/security-products/gemnasium-db) - Access to the [Package Metadata Database](../../../topics/offline/quick_start_guide.md#enabling-the-package-metadata-database) #### Local copies of analyzer images To use dependency scanning with all [supported languages and frameworks](#supported-languages-and-package-managers): 1. Import the following default dependency scanning analyzer images from `registry.gitlab.com` into your [local Docker container registry](../../packages/container_registry/_index.md): ```plaintext registry.gitlab.com/security-products/gemnasium:6 registry.gitlab.com/security-products/gemnasium:6-fips registry.gitlab.com/security-products/gemnasium-maven:6 registry.gitlab.com/security-products/gemnasium-maven:6-fips registry.gitlab.com/security-products/gemnasium-python:6 registry.gitlab.com/security-products/gemnasium-python:6-fips ``` The process for importing Docker images into a local offline Docker registry depends on **your network security policy**. Consult your IT staff to find an accepted and approved process by which external resources can be imported or temporarily accessed. These scanners are [periodically updated](../detect/vulnerability_scanner_maintenance.md) with new definitions, and you may want to download them regularly. 1. Configure GitLab CI/CD to use the local analyzers. Set the value of the CI/CD variable `SECURE_ANALYZERS_PREFIX` to your local Docker registry - in this example, `docker-registry.example.com`. 
```yaml
include:
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml

variables:
  SECURE_ANALYZERS_PREFIX: "docker-registry.example.com/analyzers"
```

#### Access to the GitLab Advisory Database

The [GitLab Advisory Database](https://gitlab.com/gitlab-org/security-products/gemnasium-db) is the source of vulnerability data used by the `gemnasium`, `gemnasium-maven`, and `gemnasium-python` analyzers. The Docker images of these analyzers include a clone of the database. The clone is synchronized with the database before starting a scan, to ensure the analyzers have the latest vulnerability data.

In an offline environment, the default host of the GitLab Advisory Database can't be accessed. Instead, you must host the database somewhere accessible to the GitLab runners. You must also update the database manually at your own schedule.

Available options for hosting the database are:

- [Use a clone of the GitLab Advisory Database](#use-a-clone-of-the-gitlab-advisory-database).
- [Use a copy of the GitLab Advisory Database](#use-a-copy-of-the-gitlab-advisory-database).

##### Use a clone of the GitLab Advisory Database

Using a clone of the GitLab Advisory Database is recommended because it is the most efficient method.

To host a clone of the GitLab Advisory Database:

1. Clone the GitLab Advisory Database to a host that is accessible by HTTP from the GitLab runners.
1. In your `.gitlab-ci.yml` file, set the value of the CI/CD variable `GEMNASIUM_DB_REMOTE_URL` to the URL of the Git repository. For example:

   ```yaml
   variables:
     GEMNASIUM_DB_REMOTE_URL: https://users-own-copy.example.com/gemnasium-db.git
   ```

##### Use a copy of the GitLab Advisory Database

Using a copy of the GitLab Advisory Database requires you to host an archive file which is downloaded by the analyzers.

To use a copy of the GitLab Advisory Database:

1. Download an archive of the GitLab Advisory Database to a host that is accessible by HTTP from the GitLab runners.
The archive is located at `https://gitlab.com/gitlab-org/security-products/gemnasium-db/-/archive/master/gemnasium-db-master.tar.gz`. 1. Update your `.gitlab-ci.yml` file. - Set CI/CD variable `GEMNASIUM_DB_LOCAL_PATH` to use the local copy of the database. - Set CI/CD variable `GEMNASIUM_DB_UPDATE_DISABLED` to disable the database update. - Download and extract the advisory database before the scan begins. ```yaml variables: GEMNASIUM_DB_LOCAL_PATH: ./gemnasium-db-local GEMNASIUM_DB_UPDATE_DISABLED: "true" dependency_scanning: before_script: - wget https://local.example.com/gemnasium_db.tar.gz - mkdir -p $GEMNASIUM_DB_LOCAL_PATH - tar -xzvf gemnasium_db.tar.gz --strip-components=1 -C $GEMNASIUM_DB_LOCAL_PATH ``` ### Using a proxy with Gradle projects The Gradle wrapper script does not read the `HTTP(S)_PROXY` environment variables. See [this upstream issue](https://github.com/gradle/gradle/issues/11065). To make the Gradle wrapper script use a proxy, you can specify the options using the `GRADLE_CLI_OPTS` CI/CD variable: ```yaml variables: GRADLE_CLI_OPTS: "-Dhttps.proxyHost=squid-proxy -Dhttps.proxyPort=3128 -Dhttp.proxyHost=squid-proxy -Dhttp.proxyPort=3128 -Dhttp.nonProxyHosts=localhost" ``` ### Using a proxy with Maven projects Maven does not read the `HTTP(S)_PROXY` environment variables. To make the Maven dependency scanner use a proxy, you can configure it using a `settings.xml` file (see [Maven documentation](https://maven.apache.org/guides/mini/guide-proxies.html)) and instruct Maven to use this configuration by using the `MAVEN_CLI_OPTS` CI/CD variable: ```yaml variables: MAVEN_CLI_OPTS: "--settings mysettings.xml" ``` ### Specific settings for languages and package managers See the following sections for configuring specific languages and package managers. #### Python (pip) If you need to install Python packages before the analyzer runs, you should use `pip install --user` in the `before_script` of the scanning job. 
The `--user` flag causes project dependencies to be installed in the user directory. If you do not pass the `--user` option, packages are installed globally, and they are not scanned and don't show up when listing project dependencies.

#### Python (setuptools)

If you need to install Python packages before the analyzer runs, you should use `python setup.py install --user` in the `before_script` of the scanning job.

The `--user` flag causes project dependencies to be installed in the user directory. If you do not pass the `--user` option, packages are installed globally, and they are not scanned and don't show up when listing project dependencies.

When using self-signed certificates for your private PyPI repository, no extra job configuration (aside from the previous `.gitlab-ci.yml` template) is needed. However, you must update your `setup.py` to ensure that it can reach your private repository. Here is an example configuration:

1. Update `setup.py` to create a `dependency_links` attribute pointing at your private repository for each dependency in the `install_requires` list:

   ```python
   install_requires=['pyparsing>=2.0.3'],
   dependency_links=['https://pypi.example.com/simple/pyparsing'],
   ```

1. Fetch the certificate from your repository URL and add it to the project:

   ```shell
   printf "\n" | openssl s_client -connect pypi.example.com:443 -servername pypi.example.com | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > internal.crt
   ```

1. Point `setup.py` at the newly downloaded certificate:

   ```python
   import setuptools.ssl_support
   setuptools.ssl_support.cert_paths = ['internal.crt']
   ```

#### Python (Pipenv)

If running in a limited network connectivity environment, you must configure the `PIPENV_PYPI_MIRROR` variable to use a private PyPI mirror. This mirror must contain both default and development dependencies.
```yaml
variables:
  PIPENV_PYPI_MIRROR: https://pypi.example.com/simple
```

<!-- markdownlint-disable MD044 -->
Alternatively, if it's not possible to use a private registry, you can load the required packages into the Pipenv virtual environment cache. For this option, the project must check in the `Pipfile.lock` into the repository, and load both default and development packages into the cache. See the [python-pipenv](https://gitlab.com/gitlab-org/security-products/tests/python-pipenv/-/blob/41cc017bd1ed302f6edebcfa3bc2922f428e07b6/.gitlab-ci.yml#L20-42) example project for how this can be done.
<!-- markdownlint-enable MD044 -->

## Dependency detection

Dependency Scanning automatically detects the languages used in the repository. All analyzers matching the detected languages are run. There is usually no need to customize the selection of analyzers. We recommend not specifying the analyzers so you automatically use the full selection for best coverage, avoiding the need to make adjustments when there are deprecations or removals. However, you can override the selection using the variable `DS_EXCLUDED_ANALYZERS`.

The language detection relies on CI job [`rules`](../../../ci/yaml/_index.md#rules) to detect [supported dependency files](#how-analyzers-are-triggered).

For Java and Python, when a supported dependency file is detected, Dependency Scanning attempts to build the project and execute some Java or Python commands to get the list of dependencies. For all other projects, the lock file is parsed to obtain the list of dependencies without needing to build the project first.

All direct and transitive dependencies are analyzed, without a limit to the depth of transitive dependencies.
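For example, to skip a single analyzer while keeping automatic detection for the others, you can exclude it with `DS_EXCLUDED_ANALYZERS` in your `.gitlab-ci.yml`. A minimal sketch (excluding `gemnasium-python` here is only an illustration):

```yaml
include:
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml

variables:
  # Skip the Python analyzer; gemnasium and gemnasium-maven still run
  # when matching dependency files are detected.
  DS_EXCLUDED_ANALYZERS: "gemnasium-python"
```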
### Analyzers Dependency Scanning supports the following official [Gemnasium-based](https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium) analyzers: - `gemnasium` - `gemnasium-maven` - `gemnasium-python` The analyzers are published as Docker images, which Dependency Scanning uses to launch dedicated containers for each analysis. You can also integrate a custom security scanner. Each analyzer is updated as new versions of Gemnasium are released. ### How analyzers obtain dependency information GitLab analyzers obtain dependency information using one of the following two methods: 1. [Parsing lockfiles directly.](#obtaining-dependency-information-by-parsing-lockfiles) 1. [Running a package manager or build tool to generate a dependency information file which is then parsed.](#obtaining-dependency-information-by-running-a-package-manager-to-generate-a-parsable-file) #### Obtaining dependency information by parsing lockfiles The following package managers use lockfiles that GitLab analyzers are capable of parsing directly: <!-- markdownlint-disable MD044 --> <table class="ds-table no-vertical-table-lines"> <thead> <tr> <th>Package Manager</th> <th>Supported File Format Versions</th> <th>Tested Package Manager Versions</th> </tr> </thead> <tbody> <tr> <td>Bundler</td> <td>Not applicable</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/ruby-bundler/default/Gemfile.lock#L118">1.17.3</a>, <a href="https://gitlab.com/gitlab-org/security-products/tests/ruby-bundler/-/blob/bundler2-FREEZE/Gemfile.lock#L118">2.1.4</a> </td> </tr> <tr> <td>Composer</td> <td>Not applicable</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/php-composer/default/composer.lock">1.x</a> </td> </tr> <tr> <td>Conan</td> <td>0.4</td> <td> <a 
href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/c-conan/default/conan.lock#L38">1.x</a> </td> </tr> <tr> <td>Go</td> <td>Not applicable</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/go-modules/gosum/default/go.sum">1.x</a> </td> </tr> <tr> <td>NuGet</td> <td>v1, v2<sup><b><a href="#notes-regarding-parsing-lockfiles-1">1</a></b></sup></td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/csharp-nuget-dotnetcore/default/src/web.api/packages.lock.json#L2">4.9</a> </td> </tr> <tr> <td>npm</td> <td>v1, v2, v3<sup><b><a href="#notes-regarding-parsing-lockfiles-2">2</a></b></sup></td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/js-npm/default/package-lock.json#L4">6.x</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/js-npm/lockfileVersion2/package-lock.json#L4">7.x</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/scanner/parser/npm/fixtures/lockfile-v3/simple/package-lock.json#L4">9.x</a> </td> </tr> <tr> <td>pnpm</td> <td>v5, v6, v9</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/js-pnpm/default/pnpm-lock.yaml#L1">7.x</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/scanner/parser/pnpm/fixtures/v6/simple/pnpm-lock.yaml#L1">8.x</a> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/scanner/parser/pnpm/fixtures/v9/simple/pnpm-lock.yaml#L1">9.x</a> </td> </tr> <tr> <td>yarn</td> <td>versions 1, 2, 3, 4<sup><b><a href="#notes-regarding-parsing-lockfiles-3">3</a></b></sup></td> <td> <a 
href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/js-yarn/classic/default/yarn.lock#L2">1.x</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/js-yarn/berry/v2/default/yarn.lock">2.x</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/js-yarn/berry/v3/default/yarn.lock">3.x</a> </td> </tr> <tr> <td>Poetry</td> <td>v1</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/qa/fixtures/python-poetry/default/poetry.lock">1.x</a> </td> </tr> <tr> <td>uv</td> <td>v0.x</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/master/scanner/parser/uv/fixtures/simple/uv.lock">0.x</a> </td> </tr> </tbody> </table> <ol> <li> <a id="notes-regarding-parsing-lockfiles-1"></a> <p> Support for NuGet version 2 lock files was <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/398680">introduced</a> in GitLab 16.2. </p> </li> <li> <a id="notes-regarding-parsing-lockfiles-2"></a> <p> Support for <code>lockfileVersion = 3</code> was <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/365176">introduced</a> in GitLab 15.7. </p> </li> <li> <a id="notes-regarding-parsing-lockfiles-3"></a> <p> Support for Yarn version 4 was <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/431752">introduced</a> in GitLab 16.11. </p> <p> The following features are not supported for Yarn Berry: </p> <ul> <li> <a href="https://yarnpkg.com/features/workspaces">workspaces</a> </li> <li> <a href="https://yarnpkg.com/cli/patch">yarn patch</a> </li> </ul> <p> Yarn files that contain a patch, a workspace, or both, are still processed, but these features are ignored. 
</p> </li> </ol> <!-- markdownlint-enable MD044 --> #### Obtaining dependency information by running a package manager to generate a parsable file To support the following package managers, the GitLab analyzers proceed in two steps: 1. Execute the package manager or a specific task, to export the dependency information. 1. Parse the exported dependency information. <!-- markdownlint-disable MD044 --> <table class="ds-table no-vertical-table-lines"> <thead> <tr> <th>Package Manager</th> <th>Pre-installed Versions</th> <th>Tested Versions</th> </tr> </thead> <tbody> <tr> <td>sbt</td> <td><a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/build/gemnasium-maven/debian/config/.tool-versions#L4">1.6.2</a></td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L794-798">1.1.6</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L800-805">1.2.8</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L722-725">1.3.12</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L722-725">1.4.6</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L742-746">1.5.8</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L748-762">1.6.2</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L764-768">1.7.3</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L770-774">1.8.3</a>, <a 
href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L776-781">1.9.6</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/.gitlab/ci/gemnasium-maven.gitlab-ci.yml#L111-121">1.9.7</a> </td> </tr> <tr> <td>maven</td> <td><a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.3.1/build/gemnasium-maven/debian/config/.tool-versions#L3">3.9.8</a></td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.3.1/spec/gemnasium-maven_image_spec.rb#L92-94">3.9.8</a><sup><b><a href="#exported-dependency-information-notes-1">1</a></b></sup> </td> </tr> <tr> <td>Gradle</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/build/gemnasium-maven/debian/config/.tool-versions#L5">6.7.1</a><sup><b><a href="#exported-dependency-information-notes-2">2</a></b></sup>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/build/gemnasium-maven/debian/config/.tool-versions#L5">7.6.4</a><sup><b><a href="#exported-dependency-information-notes-2">2</a></b></sup>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/build/gemnasium-maven/debian/config/.tool-versions#L5">8.8</a><sup><b><a href="#exported-dependency-information-notes-2">2</a></b></sup> </td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L316-321">5.6</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L323-328">6.7</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L330-335">6.9</a>, <a 
href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L337-341">7.6</a>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-maven_image_spec.rb#L343-347">8.8</a> </td> </tr> <tr> <td>setuptools</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.4.1/build/gemnasium-python/requirements.txt#L41">70.3.0</a> </td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.4.1/spec/gemnasium-python_image_spec.rb#L294-316">&gt;= 70.3.0</a> </td> </tr> <tr> <td>pip</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/build/gemnasium-python/debian/Dockerfile#L21">24</a> </td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-python_image_spec.rb#L77-90">24</a> </td> </tr> <tr> <td>Pipenv</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/build/gemnasium-python/requirements.txt#L23">2023.11.15</a> </td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-python_image_spec.rb#L243-256">2023.11.15</a><sup><b><a href="#exported-dependency-information-notes-3">3</a></b></sup>, <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/spec/gemnasium-python_image_spec.rb#L219-241">2023.11.15</a> </td> </tr> <tr> <td>Go</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/build/gemnasium/alpine/Dockerfile#L91-93">1.21</a> </td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/build/gemnasium/alpine/Dockerfile#L91-93">1.21</a><sup><strong><a href="#exported-dependency-information-notes-4">4</a></strong></sup> </td> </tr> </tbody> 
</table> <ol> <li> <a id="exported-dependency-information-notes-1"></a> <p> This test uses the default version of <code>maven</code> specified by the <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium/-/blob/v5.2.14/build/gemnasium-maven/debian/config/.tool-versions#L3"><code>.tool-versions</code></a> file. </p> </li> <li> <a id="exported-dependency-information-notes-2"></a> <p> Different versions of Java require different versions of Gradle. The versions of Gradle listed in the previous table are pre-installed in the analyzer image. The version of Gradle used by the analyzer depends on whether your project uses a <code>gradlew</code> (Gradle wrapper) file or not: </p> <ul> <li> <p> If your project <i>does not use</i> a <code>gradlew</code> file, then the analyzer automatically switches to one of the pre-installed Gradle versions, based on the version of Java specified by the <a href="#analyzer-specific-settings"><code>DS_JAVA_VERSION</code></a> variable (default version is <code>17</code>). </p> <p> For Java versions <code>8</code> and <code>11</code>, Gradle <code>6.7.1</code> is automatically selected, Java <code>17</code> uses Gradle <code>7.6.4</code>, and Java <code>21</code> uses Gradle <code>8.8</code>. </p> </li> <li> <p> If your project <i>does use</i> a <code>gradlew</code> file, then the version of Gradle pre-installed in the analyzer image is ignored, and the version specified in your <code>gradlew</code> file is used instead. </p> </li> </ul> </li> <li> <a id="exported-dependency-information-notes-3"></a> <p> This test confirms that if a <code>Pipfile.lock</code> file is found, it is used by <a href="https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium">Gemnasium</a> to scan the exact package versions listed in this file. 
    </p>
  </li>
  <li>
    <a id="exported-dependency-information-notes-4"></a>
    <p>
      Because of the implementation of <code>go build</code>, the Go build process requires network access, a pre-loaded mod cache via <code>go mod download</code>, or vendored dependencies. For more information, refer to the Go documentation on <a href="https://pkg.go.dev/cmd/go#hdr-Compile_packages_and_dependencies">compiling packages and dependencies</a>.
    </p>
  </li>
</ol>
<!-- markdownlint-enable MD044 -->

## How analyzers are triggered

GitLab relies on [`rules:exists`](../../../ci/yaml/_index.md#rulesexists) to start the relevant analyzers for the languages detected by the presence of the [supported files](#supported-languages-and-package-managers) in the repository.

A maximum of two directory levels from the repository's root is searched. For example, the `gemnasium-dependency_scanning` job is enabled if a repository contains either `Gemfile`, `api/Gemfile`, or `api/client/Gemfile`, but not if the only supported dependency file is `api/v1/client/Gemfile`.

## How multiple files are processed

{{< alert type="note" >}}

If you've run into problems while scanning multiple files, contribute a comment to [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/337056).

{{< /alert >}}

### Python

We only execute one installation in the directory where either a requirements file or a lock file has been detected. Dependencies are only analyzed by `gemnasium-python` for the first file that is detected. Files are searched for in the following order:

1. `requirements.txt`, `requirements.pip`, or `requires.txt` for projects using Pip.
1. `Pipfile` or `Pipfile.lock` for projects using Pipenv.
1. `poetry.lock` for projects using Poetry.
1. `setup.py` for projects using Setuptools.

The search begins with the root directory and then continues with subdirectories if no builds are found in the root directory.
Consequently, a Poetry lock file in the root directory would be detected before a Pipenv file in a subdirectory.

### Java and Scala

We only execute one build in the directory where a build file has been detected. For large projects that include multiple Gradle, Maven, or sbt builds, or any combination of these, `gemnasium-maven` only analyzes dependencies for the first build file that is detected. Build files are searched for in the following order:

1. `pom.xml` for single or [multi-module](https://maven.apache.org/pom.html#Aggregation) Maven projects.
1. `build.gradle` or `build.gradle.kts` for single or [multi-project](https://docs.gradle.org/current/userguide/intro_multi_project_builds.html) Gradle builds.
1. `build.sbt` for single or [multi-project](https://www.scala-sbt.org/1.x/docs/Multi-Project.html) sbt builds.

The search begins with the root directory and then continues with subdirectories if no builds are found in the root directory. Consequently, an sbt build file in the root directory would be detected before a Gradle build file in a subdirectory.

For [multi-module](https://maven.apache.org/pom.html#Aggregation) Maven projects, and multi-project [Gradle](https://docs.gradle.org/current/userguide/intro_multi_project_builds.html) and [sbt](https://www.scala-sbt.org/1.x/docs/Multi-Project.html) builds, sub-module and sub-project files are analyzed if they are declared in the parent build file.

### JavaScript

The following analyzers are executed, each of which has different behavior when processing multiple files:

- [Gemnasium](https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium): supports multiple lockfiles.
- [Retire.js](https://retirejs.github.io/retire.js/): does not support multiple lockfiles. When multiple lockfiles exist, `Retire.js` analyzes the first lockfile discovered while traversing the directory tree in alphabetical order.
The `gemnasium` analyzer supports scanning JavaScript projects for vendored libraries (that is, those checked into the project but not managed by the package manager).

### Go

Multiple files are supported. When a `go.mod` file is detected, the analyzer attempts to generate a [build list](https://go.dev/ref/mod#glos-build-list) using [Minimal Version Selection](https://go.dev/ref/mod#glos-minimal-version-selection). If this fails, the analyzer instead attempts to parse the dependencies within the `go.mod` file.

As a requirement, the `go.mod` file should be cleaned up using the command `go mod tidy` to ensure proper management of dependencies. The process is repeated for every detected `go.mod` file.

### PHP, C, C++, .NET, C&#35;, Ruby, JavaScript

The analyzer for these languages supports multiple lockfiles.

### Support for additional languages

Support for additional languages, dependency managers, and dependency files is tracked in the following issues:

| Package Managers | Languages | Supported files | Scan tools | Issue |
| ------------------- | --------- | --------------- | ---------- | ----- |
| [Poetry](https://python-poetry.org/) | Python | `pyproject.toml` | [Gemnasium](https://gitlab.com/gitlab-org/security-products/analyzers/gemnasium) | [GitLab#32774](https://gitlab.com/gitlab-org/gitlab/-/issues/32774) |

## Warnings

We recommend that you use the most recent version of all containers, and the most recent supported version of all package managers and languages. Using previous versions carries an increased security risk because unsupported versions may no longer benefit from active security reporting and backporting of security fixes.

### Gradle projects

Do not override the `reports.html.destination` or `reports.html.outputLocation` properties when generating an HTML dependency report for Gradle projects. Doing so prevents Dependency Scanning from functioning correctly.
### Maven Projects In isolated networks, if the central repository is a private registry (explicitly set with the `<mirror>` directive), Maven builds may fail to find the `gemnasium-maven-plugin` dependency. This issue occurs because Maven doesn't search the local repository (`/root/.m2`) by default and attempts to fetch from the central repository. The result is an error about the missing dependency. #### Workaround To resolve this issue, add a `<pluginRepositories>` section to your `settings.xml` file. This allows Maven to find plugins in the local repository. Before you begin, consider the following: - This workaround is only for environments where the default Maven central repository is mirrored to a private registry. - After applying this workaround, Maven searches the local repository for plugins, which may have security implications in some environments. Make sure this aligns with your organization's security policies. Follow these steps to modify the `settings.xml` file: 1. Locate your Maven `settings.xml` file. This file is typically found in one of these locations: - `/root/.m2/settings.xml` for the root user. - `~/.m2/settings.xml` for a regular user. - `${maven.home}/conf/settings.xml` global settings. 1. Check if there's an existing `<pluginRepositories>` section in the file. 1. If a `<pluginRepositories>` section already exists, add only the following `<pluginRepository>` element inside it. Otherwise, add the entire `<pluginRepositories>` section: ```xml <pluginRepositories> <pluginRepository> <id>local2</id> <name>local repository</name> <url>file:///root/.m2/repository/</url> </pluginRepository> </pluginRepositories> ``` 1. Run your Maven build or dependency scanning process again. 
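The `<pluginRepositories>` workaround only takes effect if the scanning job actually uses your modified settings file. A minimal sketch, assuming the file is committed to the repository as `mysettings.xml`, wires it in through the `MAVEN_CLI_OPTS` variable described earlier:

```yaml
include:
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml

variables:
  # Point Maven at the settings file that contains the
  # <pluginRepositories> workaround.
  MAVEN_CLI_OPTS: "--settings mysettings.xml"
```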
### Python projects Extra care needs to be taken when using the [`PIP_EXTRA_INDEX_URL`](https://pipenv.pypa.io/en/latest/indexes.html) environment variable due to a possible exploit documented by [CVE-2018-20225](https://nvd.nist.gov/vuln/detail/CVE-2018-20225): {{< alert type="warning" >}} An issue was discovered in pip (all versions) because it installs the version with the highest version number, even if the user had intended to obtain a private package from a private index. This only affects use of the `PIP_EXTRA_INDEX_URL` option, and exploitation requires that the package does not already exist in the public index (and thus the attacker can put the package there with an arbitrary version number). {{< /alert >}} ### Version number parsing In some cases it's not possible to determine if the version of a project dependency is in the affected range of a security advisory. For example: - The version is unknown. - The version is invalid. - Parsing the version or comparing it to the range fails. - The version is a branch, like `dev-master` or `1.5.x`. - The compared versions are ambiguous. For example, `1.0.0-20241502` can't be compared to `1.0.0-2` because one version contains a timestamp while the other does not. In these cases, the analyzer skips the dependency and outputs a message to the log. The GitLab analyzers do not make assumptions as they could result in a false positive or false negative. For a discussion, see [issue 442027](https://gitlab.com/gitlab-org/gitlab/-/issues/442027). ## Build Swift projects Swift Package Manager (SPM) is the official tool for managing the distribution of Swift code. It's integrated with the Swift build system to automate the process of downloading, compiling, and linking dependencies. Follow these best practices when you build a Swift project with SPM. 1. Include a `Package.resolved` file. The `Package.resolved` file locks your dependencies to specific versions. 
Always commit this file to your repository to ensure consistency across different environments.

   ```shell
   git add Package.resolved
   git commit -m "Add Package.resolved to lock dependencies"
   ```

1. To build your Swift project, use the following commands:

   ```shell
   # Update dependencies
   swift package update

   # Build the project
   swift build
   ```

1. To configure CI/CD, add these steps to your `.gitlab-ci.yml` file:

   ```yaml
   swift-build:
     stage: build
     script:
       - swift package update
       - swift build
   ```

1. Optional. If you use private Swift package repositories with self-signed certificates, you might need to make the certificate trusted on the machine that runs the build so Swift can resolve packages from them:

   1. Fetch the certificate:

      ```shell
      echo | openssl s_client -servername your.repo.url -connect your.repo.url:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > repo-cert.crt
      ```

   1. Add the certificate to the system trust store. For example, on macOS:

      ```shell
      sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain repo-cert.crt
      ```

      On Linux-based CI images, copy the certificate to the system CA directory (for example, `/usr/local/share/ca-certificates/`) and run `update-ca-certificates` instead.

Always test your build process in a clean environment to ensure your dependencies are correctly specified and resolve automatically.

## Build CocoaPods projects

CocoaPods is a popular dependency manager for Swift and Objective-C Cocoa projects. It provides a standard format for managing external libraries in iOS, macOS, watchOS, and tvOS projects.

Follow these best practices when you build projects that use CocoaPods for dependency management.

1. Include a `Podfile.lock` file. The `Podfile.lock` file is crucial for locking your dependencies to specific versions.
Always commit this file to your repository to ensure consistency across different environments. ```shell git add Podfile.lock git commit -m "Add Podfile.lock to lock CocoaPods dependencies" ``` 1. You can build your project with one of the following: - The `xcodebuild` command-line tool: ```shell # Install CocoaPods dependencies pod install # Build the project xcodebuild -workspace YourWorkspace.xcworkspace -scheme YourScheme build ``` - The Xcode IDE: 1. Open your `.xcworkspace` file in Xcode. 1. Select your target scheme. 1. Select **Product > Build**. You can also press <kbd>⌘</kbd>+<kbd>B</kbd>. - [fastlane](https://fastlane.tools/), a tool for automating builds and releases for iOS and Android apps: 1. Install `fastlane`: ```shell sudo gem install fastlane ``` 1. In your project, configure `fastlane`: ```shell fastlane init ``` 1. Add a lane to your `fastfile`: ```ruby lane :build do cocoapods gym(scheme: "YourScheme") end ``` 1. Run the build: ```shell fastlane build ``` - If your project uses both CocoaPods and Carthage, you can use Carthage to build your dependencies: 1. Create a `Cartfile` that includes your CocoaPods dependencies. 1. Run the following: ```shell carthage update --platform iOS ``` 1. Configure CI/CD to build the project according to your preferred method. For example, using `xcodebuild`: ```yaml cocoapods-build: stage: build script: - pod install - xcodebuild -workspace YourWorkspace.xcworkspace -scheme YourScheme build ``` 1. Optional. If you use private CocoaPods repositories, you might need to configure your project to access them: 1. Add the private spec repo: ```shell pod repo add REPO_NAME SOURCE_URL ``` 1. In your Podfile, specify the source: ```ruby source 'https://github.com/CocoaPods/Specs.git' source 'SOURCE_URL' ``` 1. Optional. If your private CocoaPods repository uses SSL, ensure the SSL certificate is properly configured: - If you use a self-signed certificate, add it to your system's trusted certificates. 
You can also specify the SSL configuration in your `.netrc` file: ```netrc machine your.private.repo.url login your_username password your_password ``` 1. After you update your Podfile, run `pod install` to install dependencies and update your workspace. Remember to always run `pod install` after updating your Podfile to ensure all dependencies are properly installed and the workspace is updated. ## Contributing to the vulnerability database To find a vulnerability, you can search the [`GitLab Advisory Database`](https://advisories.gitlab.com/). You can also [submit new vulnerabilities](https://gitlab.com/gitlab-org/security-products/gemnasium-db/blob/master/CONTRIBUTING.md).
# Dependency scanning by using SBOM
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
- Status: Beta

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/395692) in GitLab 17.1 and officially released in GitLab 17.3 with a flag named `dependency_scanning_using_sbom_reports`.
- [Enabled on GitLab.com, GitLab Self-Managed, and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/issues/395692) in GitLab 17.5.
- Released [lockfile-based Dependency Scanning](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning/-/blob/main/README.md?ref_type=heads#supported-files) analyzer as an [Experiment](../../../../policy/development_stages_support.md#experiment) in GitLab 17.4.
- Released [Dependency Scanning CI/CD Component](https://gitlab.com/explore/catalog/components/dependency-scanning) version [`0.4.0`](https://gitlab.com/components/dependency-scanning/-/tags/0.4.0) in GitLab 17.5 with support for the [lockfile-based Dependency Scanning](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning/-/blob/main/README.md?ref_type=heads#supported-files) analyzer.
- [Enabled by default with the latest Dependency Scanning CI/CD templates](https://gitlab.com/gitlab-org/gitlab/-/issues/519597) for Cargo, Conda, Cocoapods, and Swift in GitLab 17.9.
- Feature flag `dependency_scanning_using_sbom_reports` removed in GitLab 17.10.

{{< /history >}}

Dependency scanning using CycloneDX SBOM analyzes your application's dependencies for known vulnerabilities. All dependencies are scanned, [including transitive dependencies](../_index.md).

Dependency scanning is often considered part of Software Composition Analysis (SCA). SCA covers inspecting the items your code uses. These items are typically application and system dependencies imported from external sources, rather than code you wrote yourself.
Dependency scanning can run in the development phase of your application's lifecycle. Every time a pipeline produces an SBOM report, security findings are identified and compared between the source and target branches. Findings and their severity are listed in the merge request, enabling you to proactively address the risk to your application, before the code change is committed. Security findings for reported SBOM components are also identified by [Continuous Vulnerability Scanning](../../continuous_vulnerability_scanning/_index.md) when new security advisories are published, independently from CI/CD pipelines. GitLab offers both dependency scanning and [container scanning](../../container_scanning/_index.md) to ensure coverage for all of these dependency types. To cover as much of your risk area as possible, we encourage you to use all of our security scanners. For a comparison of these features, see [Dependency Scanning compared to Container Scanning](../../comparison_dependency_and_container_scanning.md). ## Getting started Enable the Dependency Scanning using SBOM feature with one of the following options: - Use the `latest` Dependency Scanning CI/CD template `Dependency-Scanning.latest.gitlab-ci.yml` to enable a GitLab provided analyzer. - The (deprecated) Gemnasium analyzer is used by default. - To enable the new Dependency Scanning analyzer, set the CI/CD variable `DS_ENFORCE_NEW_ANALYZER` to `true`. - A [supported lock file, dependency graph](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning/#supported-files), or [trigger file](#trigger-files) must exist in the repository to create the `dependency-scanning` job in pipelines. - Use the [Scan Execution Policies](../../policies/scan_execution_policies.md) with the `latest` template to enable a GitLab provided analyzer. - The (deprecated) Gemnasium analyzer is used by default. 
- To enable the new Dependency Scanning analyzer, set the CI/CD variable `DS_ENFORCE_NEW_ANALYZER` to `true`. - A [supported lock file, dependency graph](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning/#supported-files), or [trigger file](#trigger-files) must exist in the repository to create the `dependency-scanning` job in pipelines. - Use the [Dependency Scanning CI/CD component](https://gitlab.com/explore/catalog/components/dependency-scanning) to enable the new Dependency Scanning analyzer. - Provide your own CycloneDX SBOM document as [a CI/CD artifact report](../../../../ci/yaml/artifacts_reports.md#artifactsreportscyclonedx) from a successful pipeline. You should use the new Dependency Scanning analyzer. For details, see [Enabling the analyzer](#enabling-the-analyzer). If instead you use the (deprecated) Gemnasium analyzer, refer to the enablement instructions for the [legacy Dependency Scanning feature](../_index.md#getting-started). ### Enabling the analyzer The Dependency Scanning analyzer produces a CycloneDX SBOM report compatible with GitLab. If your application can't generate such a report, you can use the GitLab analyzer to produce one. Share any feedback on the new Dependency Scanning analyzer in this [feedback issue](https://gitlab.com/gitlab-org/gitlab/-/issues/523458). Prerequisites: - A [supported lock file or dependency graph](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning/#supported-files) must exist in the repository or must be passed as an artifact to the `dependency-scanning` job. - The component's [stage](https://gitlab.com/explore/catalog/components/dependency-scanning) is required in the `.gitlab-ci.yml` file. - With self-managed runners you need a GitLab Runner with the [`docker`](https://docs.gitlab.com/runner/executors/docker.html) or [`kubernetes`](https://docs.gitlab.com/runner/install/kubernetes.html) executor. 
- If you're using SaaS runners on GitLab.com, this is enabled by default.

To enable the analyzer, you must:

- Use the `latest` Dependency Scanning CI/CD template `Dependency-Scanning.latest.gitlab-ci.yml` and enforce the new Dependency Scanning analyzer by setting the CI/CD variable `DS_ENFORCE_NEW_ANALYZER` to `true`.

  ```yaml
  include:
    - template: Jobs/Dependency-Scanning.latest.gitlab-ci.yml

  variables:
    DS_ENFORCE_NEW_ANALYZER: 'true'
  ```

- Use the [Scan Execution Policies](../../policies/scan_execution_policies.md) with the `latest` template and enforce the new Dependency Scanning analyzer by setting the CI/CD variable `DS_ENFORCE_NEW_ANALYZER` to `true`.
- Use the [Dependency Scanning CI/CD component](https://gitlab.com/explore/catalog/components/dependency-scanning):

  ```yaml
  include:
    - component: $CI_SERVER_FQDN/components/dependency-scanning/main@0
  ```

#### Trigger files

Trigger files create a `dependency-scanning` CI/CD job when using the [latest Dependency Scanning CI template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Dependency-Scanning.latest.gitlab-ci.yml). The analyzer does not scan these files. Your project can be supported if you use a trigger file to [build](#language-specific-instructions) a [supported lock file](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning/#supported-files).

| Language | Files |
| -------- | ------- |
| C#/Visual Basic | `*.csproj`, `*.vbproj` |
| Java | `pom.xml` |
| Java/Kotlin | `build.gradle`, `build.gradle.kts` |
| Python | `requirements.pip`, `Pipfile`, `requires.txt`, `setup.py` |
| Scala | `build.sbt` |

#### Language-specific instructions

If your project doesn't have a supported lock file or dependency graph committed to its repository, you need to provide one. The examples below show how to create a file that is supported by the GitLab analyzer for popular languages and package managers.
##### Go

If your project provides only a `go.mod` file, the Dependency Scanning analyzer can still extract the list of components. However, [dependency path](../../dependency_list/_index.md#dependency-paths) information is not available. Additionally, you might encounter false positives if there are multiple versions of the same module.

To benefit from improved component detection and feature coverage, you should provide a `go.graph` file generated using the [`go mod graph` command](https://go.dev/ref/mod#go-mod-graph) from the Go toolchain.

The following example `.gitlab-ci.yml` demonstrates how to enable the CI/CD component with [dependency path](../../dependency_list/_index.md#dependency-paths) support on a Go project. The dependency graph is output as a job artifact in the `build` stage, before dependency scanning runs.

```yaml
stages:
  - build
  - test

include:
  - component: $CI_SERVER_FQDN/components/dependency-scanning/main@0

go:build:
  stage: build
  image: "golang:latest"
  script:
    - "go mod tidy"
    - "go build ./..."
    - "go mod graph > go.graph"
  artifacts:
    when: on_success
    access: developer
    paths: ["**/go.graph"]
```

##### Gradle

For Gradle projects, use either of the following methods to create a dependency graph:

- Nebula Gradle Dependency Lock Plugin
- Gradle's HtmlDependencyReportTask

###### Dependency Lock Plugin

This method provides information about direct dependencies.

To enable the CI/CD component on a Gradle project:

1. Edit the `build.gradle` or `build.gradle.kts` to use the [gradle-dependency-lock-plugin](https://github.com/nebula-plugins/gradle-dependency-lock-plugin/wiki/Usage#example) or use an init script.
1. Configure the `.gitlab-ci.yml` file to generate the `dependencies.lock` and `dependencies.direct.lock` artifacts, and pass them to the `dependency-scanning` job.

The following example demonstrates how to configure the component for a Gradle project.
```yaml
stages:
  - build
  - test

image: gradle:8.0-jdk11

include:
  - component: $CI_SERVER_FQDN/components/dependency-scanning/main@0

generate nebula lockfile:
  # Running in the build stage ensures that the dependency-scanning job
  # receives the scannable artifacts.
  stage: build
  script:
    - |
      cat << EOF > nebula.gradle
      initscript {
        repositories {
          mavenCentral()
        }
        dependencies {
          classpath 'com.netflix.nebula:gradle-dependency-lock-plugin:12.7.1'
        }
      }

      allprojects {
        apply plugin: nebula.plugin.dependencylock.DependencyLockPlugin
      }
      EOF
      ./gradlew --init-script nebula.gradle -PdependencyLock.includeTransitives=true -PdependencyLock.lockFile=dependencies.lock generateLock saveLock
      ./gradlew --init-script nebula.gradle -PdependencyLock.includeTransitives=false -PdependencyLock.lockFile=dependencies.direct.lock generateLock saveLock
  # generateLock saves the lock file in the build/ directory of a project
  # and saveLock copies it into the root of a project. To avoid duplicates
  # and get an accurate location of the dependency, use find to remove the
  # lock files in the build/ directory only.
  after_script:
    - find . -path '*/build/dependencies*.lock' -print -delete
  # Collect all generated artifacts and pass them onto jobs in sequential stages.
  artifacts:
    paths:
      - '**/dependencies.lock'
      - '**/dependencies.direct.lock'
```

###### HtmlDependencyReportTask

This method provides information about both direct and transitive dependencies.

The [HtmlDependencyReportTask](https://docs.gradle.org/current/dsl/org.gradle.api.reporting.dependencies.HtmlDependencyReportTask.html) is an alternative way to get the list of dependencies for a Gradle project (tested with `gradle` versions 4 through 8). To use this method with dependency scanning, the artifact from running the `gradle htmlDependencyReport` task must be available.
```yaml
stages:
  - build
  - test

# Define the image that contains Java and Gradle
image: gradle:8.0-jdk11

include:
  - component: $CI_SERVER_FQDN/components/dependency-scanning/main@0

build:
  stage: build
  script:
    - gradle --init-script report.gradle htmlDependencyReport
  # The gradle task writes the dependency report as a javascript file under
  # build/reports/project/dependencies. Because the file has an un-standardized
  # name, the after_script finds and renames the file to
  # `gradle-html-dependency-report.js` copying it to the same directory as
  # `build.gradle`
  after_script:
    - |
      reports_dir=build/reports/project/dependencies
      while IFS= read -r -d '' src; do
        dest="${src%%/$reports_dir/*}/gradle-html-dependency-report.js"
        cp "$src" "$dest"
      done < <(find . -type f -path "*/${reports_dir}/*.js" -not -path "*/${reports_dir}/js/*" -print0)
  # Pass html report artifact to subsequent dependency scanning stage.
  artifacts:
    paths:
      - "**/gradle-html-dependency-report.js"
```

The command above uses a `report.gradle` init script, which can be supplied through `--init-script`, or whose contents can be added to `build.gradle` directly:

```groovy
allprojects {
    apply plugin: 'project-report'
}
```

{{< alert type="note" >}}

The dependency report may indicate that dependencies for some configurations `FAILED` to be resolved. In this case, dependency scanning logs a warning but does not fail the job. If you prefer to have the pipeline fail if resolution failures are reported, add the following extra steps to the `build` example above.

{{< /alert >}}

```shell
while IFS= read -r -d '' file; do
  grep --quiet -E '"resolvable":\s*"FAILED' "$file" && echo "Dependency report has dependencies with FAILED resolution status" && exit 1
done < <(find . -type f -path "*/gradle-html-dependency-report.js" -print0)
```

##### Maven

The following example `.gitlab-ci.yml` demonstrates how to enable the CI/CD component on a Maven project.
The dependency graph is output as a job artifact in the `build` stage, before dependency scanning runs. Requirement: use at least version `3.7.0` of the maven-dependency-plugin. ```yaml stages: - build - test image: maven:3.9.9-eclipse-temurin-21 include: - component: $CI_SERVER_FQDN/components/dependency-scanning/main@0 build: # Running in the build stage ensures that the dependency-scanning job # receives the maven.graph.json artifacts. stage: build script: - mvn install - mvn org.apache.maven.plugins:maven-dependency-plugin:3.8.1:tree -DoutputType=json -DoutputFile=maven.graph.json # Collect all maven.graph.json artifacts and pass them onto jobs # in sequential stages. artifacts: paths: - "**/*.jar" - "**/maven.graph.json" ``` ##### pip If your project provides a `requirements.txt` lock file generated by the [pip-compile command line tool](https://pip-tools.readthedocs.io/en/latest/cli/pip-compile/), the Dependency Scanning analyzer can extract the list of components and the dependency graph information, which provides support for the [dependency path](../../dependency_list/_index.md#dependency-paths) feature. Alternatively, your project can provide a `pipdeptree.json` dependency graph export generated by the [`pipdeptree --json` command line utility](https://pypi.org/project/pipdeptree/). The following example `.gitlab-ci.yml` demonstrates how to enable the CI/CD component with [dependency path](../../dependency_list/_index.md#dependency-paths) support on a pip project. The `build` stage outputs the dependency graph as a job artifact before dependency scanning runs. 
```yaml stages: - build - test include: - component: $CI_SERVER_FQDN/components/dependency-scanning/main@0 build: stage: build image: "python:latest" script: - "pip install -r requirements.txt" - "pip install pipdeptree" - "pipdeptree --json > pipdeptree.json" artifacts: when: on_success access: developer paths: ["**/pipdeptree.json"] ``` Because of a [known issue](https://github.com/tox-dev/pipdeptree/issues/107), `pipdeptree` does not mark [optional dependencies](https://setuptools.pypa.io/en/latest/userguide/dependency_management.html#optional-dependencies) as dependencies of the parent package. As a result, Dependency Scanning marks them as direct dependencies of the project, instead of as transitive dependencies. ##### Pipenv If your project provides only a `Pipfile.lock` file, the Dependency Scanning analyzer can still extract the list of components. However, [dependency path](../../dependency_list/_index.md#dependency-paths) information is not available. To benefit from improved feature coverage, you should provide a `pipenv.graph.json` file generated by the [`pipenv graph` command](https://pipenv.pypa.io/en/latest/cli.html#graph). The following example `.gitlab-ci.yml` demonstrates how to enable the CI/CD component with [dependency path](../../dependency_list/_index.md#dependency-paths) support on a Pipenv project. The `build` stage outputs the dependency graph as a job artifact before dependency scanning runs. ```yaml stages: - build - test include: - component: $CI_SERVER_FQDN/components/dependency-scanning/main@0 build: stage: build image: "python:3.12" script: - "pip install pipenv" - "pipenv install" - "pipenv graph --json-tree > pipenv.graph.json" artifacts: when: on_success access: developer paths: ["**/pipenv.graph.json"] ``` ##### sbt To enable the CI/CD component on an sbt project: - Edit the `plugins.sbt` to use the [sbt-dependency-graph plugin](https://github.com/sbt/sbt-dependency-graph/blob/master/README.md#usage-instructions). 
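The bullet above requires the plugin to be enabled in your build. As an illustrative sketch (check the plugin README for the mechanism matching your sbt version), on sbt 1.4 and later the dependency graph plugin ships with sbt, and a single line in `project/plugins.sbt` enables it:

```scala
// project/plugins.sbt
// On sbt 1.4+ the dependency graph plugin is bundled with sbt; this line
// enables it, which provides tasks such as `dependencyDot`.
addDependencyTreePlugin
```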
The following example `.gitlab-ci.yml` demonstrates how to enable the CI/CD component with [dependency path](../../dependency_list/_index.md#dependency-paths) support in an sbt project. The `build` stage outputs the dependency graph as a job artifact before dependency scanning runs.

```yaml
stages:
  - build
  - test

include:
  - component: $CI_SERVER_FQDN/components/dependency-scanning/main@0

build:
  stage: build
  image: "sbtscala/scala-sbt:eclipse-temurin-17.0.13_11_1.10.7_3.6.3"
  script:
    - "sbt dependencyDot"
  artifacts:
    when: on_success
    access: developer
    paths: ["**/dependencies-compile.dot"]
```

## Understanding the results

The dependency scanning analyzer produces a CycloneDX Software Bill of Materials (SBOM) for each supported lock file or dependency graph export detected.

### CycloneDX Software Bill of Materials

The dependency scanning analyzer outputs a [CycloneDX](https://cyclonedx.org/) Software Bill of Materials (SBOM) for each supported lock file or dependency graph export it detects. The CycloneDX SBOMs are created as job artifacts.

The CycloneDX SBOMs are:

- Named `gl-sbom-<package-type>-<package-manager>.cdx.json`.
- Available as job artifacts of the dependency scanning job.
- Uploaded as `cyclonedx` reports.
- Saved in the same directory as the detected lock files or dependency graph export files.

For example, if your project has the following structure:

```plaintext
.
├── ruby-project/
│   └── Gemfile.lock
├── ruby-project-2/
│   └── Gemfile.lock
└── php-project/
    └── composer.lock
```

The following CycloneDX SBOMs are created as job artifacts:

```plaintext
.
├── ruby-project/
│   ├── Gemfile.lock
│   └── gl-sbom-gem-bundler.cdx.json
├── ruby-project-2/
│   ├── Gemfile.lock
│   └── gl-sbom-gem-bundler.cdx.json
└── php-project/
    ├── composer.lock
    └── gl-sbom-packagist-composer.cdx.json
```

### Merging multiple CycloneDX SBOMs

You can use a CI/CD job to merge the multiple CycloneDX SBOMs into a single SBOM.
{{< alert type="note" >}} GitLab uses [CycloneDX Properties](https://cyclonedx.org/use-cases/#properties--name-value-store) to store implementation-specific details in the metadata of each CycloneDX SBOM, such as the location of dependency graph exports and lock files. If multiple CycloneDX SBOMs are merged together, this information is removed from the resulting merged file. {{< /alert >}} For example, the following `.gitlab-ci.yml` extract demonstrates how the Cyclone SBOM files can be merged, and the resulting file validated. ```yaml stages: - test - merge-cyclonedx-sboms include: - component: $CI_SERVER_FQDN/components/dependency-scanning/main@0 merge cyclonedx sboms: stage: merge-cyclonedx-sboms image: name: cyclonedx/cyclonedx-cli:0.27.1 entrypoint: [""] script: - find . -name "gl-sbom-*.cdx.json" -exec cyclonedx merge --output-file gl-sbom-all.cdx.json --input-files "{}" + # optional: validate the merged sbom - cyclonedx validate --input-version v1_6 --input-file gl-sbom-all.cdx.json artifacts: paths: - gl-sbom-all.cdx.json ``` ## Optimization To optimize Dependency Scanning with SBOM according to your requirements you can: - Exclude files and directories from the scan. - Define the max depth to look for files. ### Exclude files and directories from the scan To exclude files or directories from being scanned, use `DS_EXCLUDED_PATHS` with a comma-separated list of patterns in your `.gitlab-ci.yml`. This will prevent specified files and directories from being targeted by the scan. ### Define the max depth to look for files To optimize the analyzer behavior you may set a max depth value through the `DS_MAX_DEPTH` environment variable. A value of `-1` scans all directories regardless of depth. The default is `2`. 
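Combined, these options look like the following `.gitlab-ci.yml` extract. The excluded paths and depth value here are illustrative; adjust them to your repository layout:

```yaml
include:
  - template: Jobs/Dependency-Scanning.latest.gitlab-ci.yml

variables:
  DS_ENFORCE_NEW_ANALYZER: 'true'
  # Skip test fixtures and vendored code (illustrative values).
  DS_EXCLUDED_PATHS: 'spec, test, tests, tmp, vendor'
  # Search up to five directory levels for supported files.
  DS_MAX_DEPTH: '5'
```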
## Roll out After you are confident in the Dependency Scanning with SBOM results for a single project, you can extend its implementation to additional projects: - Use [enforced scan execution](../../detect/security_configuration.md#create-a-shared-configuration) to apply Dependency Scanning with SBOM settings across groups. - If you have unique requirements, Dependency Scanning with SBOM can be run in [offline environments](../../offline_deployments/_index.md). ## Supported package types For the security analysis to be effective, the components listed in your SBOM report must have corresponding entries in the [GitLab Advisory Database](../../gitlab_advisory_database/_index.md). The GitLab SBOM Vulnerability Scanner can report Dependency Scanning vulnerabilities for components with the following [PURL types](https://github.com/package-url/purl-spec/blob/346589846130317464b677bc4eab30bf5040183a/PURL-TYPES.rst): - `cargo` - `composer` - `conan` - `gem` - `golang` - `maven` - `npm` - `nuget` - `pypi` ## Customizing analyzer behavior How to customize the analyzer varies depending on the enablement solution. {{< alert type="warning" >}} Test all customization of GitLab analyzers in a merge request before merging these changes to the default branch. Failure to do so can give unexpected results, including a large number of false positives. {{< /alert >}} ### Customizing behavior with the CI/CD template When using the `latest` Dependency Scanning CI/CD template `Dependency-Scanning.latest.gitlab-ci.yml` or [Scan Execution Policies](../../policies/scan_execution_policies.md) use [CI/CD variables](#available-cicd-variables). #### Available CI/CD variables The following variables allow configuration of global dependency scanning settings. | CI/CD variables | Description | | ----------------------------|------------ | | `DS_EXCLUDED_ANALYZERS` | Specify the analyzers (by name) to exclude from Dependency Scanning. 
| | `DS_EXCLUDED_PATHS` | Exclude files and directories from the scan based on the paths. A comma-separated list of patterns. Patterns can be globs (see [`doublestar.Match`](https://pkg.go.dev/github.com/bmatcuk/doublestar/v4@v4.0.2#Match) for supported patterns), or file or folder paths (for example, `doc,spec`). Parent directories also match patterns. This is a pre-filter which is applied before the scan is executed. Default: `"spec, test, tests, tmp"`. | | `DS_MAX_DEPTH` | Defines how many directory levels deep that the analyzer should search for supported files to scan. A value of `-1` scans all directories regardless of depth. Default: `2`. | | `DS_INCLUDE_DEV_DEPENDENCIES` | When set to `"false"`, development dependencies are not reported. Only projects using Composer, Conda, Gradle, Maven, npm, pnpm, Pipenv, Poetry, or uv are supported. Default: `"true"` | | `DS_PIPCOMPILE_REQUIREMENTS_FILE_NAME_PATTERN` | Defines which requirement files to process using glob pattern matching (for example, `requirements*.txt` or `*-requirements.txt`). The pattern should match filenames only, not directory paths. See [glob pattern documentation](https://github.com/bmatcuk/doublestar/tree/v1?tab=readme-ov-file#patterns) for syntax details. | | `SECURE_ANALYZERS_PREFIX` | Override the name of the Docker registry providing the official default images (proxy). | | `DS_FF_LINK_COMPONENTS_TO_GIT_FILES` | Link components in the dependency list to files committed to the repository rather than lockfiles and graph files generated dynamically in a CI/CD pipeline. This ensures all components are linked to a source file in the repository. Default: `"false"`. | ##### Overriding dependency scanning jobs To override a job definition declare a new job with the same name as the one to override. Place this new job after the template inclusion and specify any additional keys under it. 
For example, this sets the `dependencies` attribute of the `dependency-scanning` job:

```yaml
include:
  - template: Jobs/Dependency-Scanning.latest.gitlab-ci.yml

dependency-scanning:
  dependencies: ["build"]
```

### Customizing behavior with the CI/CD component

When using the Dependency Scanning CI/CD component, the analyzer can be customized by configuring the [inputs](https://gitlab.com/explore/catalog/components/dependency-scanning).

## How it scans an application

The dependency scanning using SBOM approach relies on two distinct phases:

- First, the dependency detection phase that focuses solely on creating a comprehensive inventory of your project's dependencies and their relationships (dependency graph). This inventory is captured in an SBOM (Software Bill of Materials) document.
- Second, after the CI/CD pipeline completes, the GitLab platform processes your SBOM report and performs a thorough security analysis using the built-in GitLab SBOM Vulnerability Scanner. It is the same scanner that provides [Continuous Vulnerability Scanning](../../continuous_vulnerability_scanning/_index.md).

This separation of concerns and the modularity of this architecture allows GitLab to better support customers through expansion of language support, a tighter integration and experience within the GitLab platform, and a shift towards industry standard report types.

## Dependency detection

Dependency scanning using SBOM requires the detected dependencies to be captured in a CycloneDX SBOM document. However, the modular aspect of this functionality allows you to select how this document is generated:

- Using the Dependency Scanning analyzer provided by GitLab (recommended)
- Using the (deprecated) Gemnasium analyzer provided by GitLab
- Using a custom job with a third-party CycloneDX SBOM generator or a custom tool.
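For the third option, a custom job only needs to write a compliant CycloneDX document and upload it as a `cyclonedx` artifact report. The following is a hedged sketch, not a GitLab-provided configuration: the `cdxgen` generator, image, and output file name are assumptions, and the generated document must still satisfy the requirements described in this section.

```yaml
custom-sbom:
  stage: test
  image: node:20
  script:
    # Generate a CycloneDX SBOM with a third-party generator (assumed here).
    - npm install -g @cyclonedx/cdxgen
    - cdxgen -o gl-sbom.cdx.json .
  artifacts:
    reports:
      # Upload the document as a CycloneDX report so GitLab ingests it.
      cyclonedx: gl-sbom.cdx.json
```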
To activate dependency scanning using SBOM, the provided CycloneDX SBOM document must:

- Comply with [the CycloneDX specification](https://github.com/CycloneDX/specification) version `1.4`, `1.5`, or `1.6`. An online validator is available on the [CycloneDX Web Tool](https://cyclonedx.github.io/cyclonedx-web-tool/validate).
- Comply with [the GitLab CycloneDX property taxonomy](../../../../development/sec/cyclonedx_property_taxonomy.md).
- Be uploaded as [a CI/CD artifact report](../../../../ci/yaml/artifacts_reports.md#artifactsreportscyclonedx) from a successful pipeline.

When using GitLab-provided analyzers, these requirements are met.

## Security analysis

After a compatible CycloneDX SBOM document is uploaded, GitLab automatically performs the security analysis with the GitLab SBOM Vulnerability Scanner. Each component is checked against the GitLab Advisory Database and scan results are processed in the following ways:

- If the SBOM report is declared by a CI/CD job on the default branch, vulnerabilities are created and can be seen in the [vulnerability report](../../vulnerability_report/_index.md).
- If the SBOM report is declared by a CI/CD job on a non-default branch, security findings are created and can be seen in the [security tab of the pipeline view](../../vulnerability_report/pipeline.md) and the MR security widget. This functionality is behind a feature flag and tracked in [Epic 14636](https://gitlab.com/groups/gitlab-org/-/epics/14636).

## Offline support

{{< details >}}

- Tier: Ultimate
- Offering: GitLab Self-Managed

{{< /details >}}

For instances in an environment with limited, restricted, or intermittent access to external resources through the internet, you need to make some adjustments to run dependency scanning jobs successfully. For more information, see [offline environments](../../offline_deployments/_index.md).
### Requirements To run dependency scanning in an offline environment you must have: - A GitLab Runner with the `docker` or `kubernetes` executor. - Local copies of the dependency scanning analyzer images. - Access to the [Package Metadata Database](../../../../topics/offline/quick_start_guide.md#enabling-the-package-metadata-database). Required to have license and advisory data for your dependencies. ### Local copies of analyzer images To use the dependency scanning analyzer: 1. Import the following default dependency scanning analyzer images from `registry.gitlab.com` into your [local Docker container registry](../../../packages/container_registry/_index.md): ```plaintext registry.gitlab.com/security-products/dependency-scanning:v0 ``` The process for importing Docker images into a local offline Docker registry depends on **your network security policy**. Consult your IT staff to find an accepted and approved process by which external resources can be imported or temporarily accessed. These scanners are [periodically updated](../../detect/vulnerability_scanner_maintenance.md) with new definitions, and you may want to download them regularly. In case your offline instance has access to the GitLab registry you can use the [Security-Binaries template](../../offline_deployments/_index.md#using-the-official-gitlab-template) to download the latest dependency scanning analyzer image. 1. Configure GitLab CI/CD to use the local analyzers. Set the value of the CI/CD variable `SECURE_ANALYZERS_PREFIX` to your local Docker registry - in this example, `docker-registry.example.com`. ```yaml include: - template: Jobs/Dependency-Scanning.latest.gitlab-ci.yml variables: SECURE_ANALYZERS_PREFIX: "docker-registry.example.com/analyzers" ``` ## Security policies Use security policies to enforce Dependency Scanning across multiple projects. The appropriate policy type depends on whether your projects have scannable artifacts committed to their repositories. 
### Scan execution policies [Scan execution policies](../../policies/scan_execution_policies.md) are supported for all projects that have scannable artifacts committed to their repositories. These artifacts include lockfiles, dependency graph files, and other files that can be directly analyzed to identify dependencies. For projects with these artifacts, scan execution policies provide the fastest and most straightforward way to enforce Dependency Scanning. ### Pipeline execution policies For projects that don't have scannable artifacts committed to their repositories, you must use [pipeline execution policies](../../policies/pipeline_execution_policies.md). These policies use a custom CI/CD job to generate scannable artifacts before invoking Dependency Scanning. Pipeline execution policies: - Generate lockfiles or dependency graphs as part of your CI/CD pipeline. - Customize the dependency detection process for your specific project requirements. - Implement the language-specific instructions for build tools like Gradle and Maven. #### Example: Pipeline execution policy for a Gradle project For a Gradle project without a scannable artifact committed to the repository, a pipeline execution policy with an artifact generation step is required. This example uses the `nebula` plugin. In the dedicated security policies project create or update the main policy file (for example, `policy.yml`): ```yaml pipeline_execution_policy: - name: Enforce Gradle dependency scanning with SBOM description: Generate dependency artifact and run Dependency Scanning. 
enabled: true pipeline_config_strategy: inject_policy content: include: - project: $SECURITY_POLICIES_PROJECT file: "dependency-scanning.yml" ``` Add `dependency-scanning.yml`: ```yaml stages: - build - test variables: DS_ENFORCE_NEW_ANALYZER: "true" include: - template: Jobs/Dependency-Scanning.latest.gitlab-ci.yml generate nebula lockfile: image: openjdk:11-jdk stage: build script: - | cat << EOF > nebula.gradle initscript { repositories { mavenCentral() } dependencies { classpath 'com.netflix.nebula:gradle-dependency-lock-plugin:12.7.1' } } allprojects { apply plugin: nebula.plugin.dependencylock.DependencyLockPlugin } EOF ./gradlew --init-script nebula.gradle -PdependencyLock.includeTransitives=true -PdependencyLock.lockFile=dependencies.lock generateLock saveLock ./gradlew --init-script nebula.gradle -PdependencyLock.includeTransitives=false -PdependencyLock.lockFile=dependencies.direct.lock generateLock saveLock after_script: - find . -path '*/build/dependencies.lock' -print -delete artifacts: paths: - '**/dependencies.lock' - '**/dependencies.direct.lock' ``` This approach ensures that: 1. A pipeline run in the Gradle project generates the scannable artifacts. 1. Dependency Scanning is enforced and has access to the scannable artifacts. 1. All projects in the policy scope consistently follow the same dependency scanning approach. 1. Configuration changes can be managed centrally and applied across multiple projects. For more details on implementing pipeline execution policies for different build tools, refer to the [language-specific instructions](#language-specific-instructions). ## Troubleshooting When working with dependency scanning, you might encounter the following issues. ### Warning: `grep: command not found` The analyzer image contains minimal dependencies to decrease the image's attack surface. As a result, utilities commonly found in other images, like `grep`, are missing from the image. 
This may result in a warning like `/usr/bin/bash: line 3: grep: command not found` to appear in the job log. This warning does not impact the results of the analyzer and can be ignored.
---
stage: Application Security Testing
group: Composition Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Dependency scanning by using SBOM
---

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
- Status: Beta

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/395692) in GitLab 17.1 and officially released in GitLab 17.3 with a flag named `dependency_scanning_using_sbom_reports`.
- [Enabled on GitLab.com, GitLab Self-Managed, and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/issues/395692) in GitLab 17.5.
- Released [lockfile-based Dependency Scanning](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning/-/blob/main/README.md?ref_type=heads#supported-files) analyzer as an [Experiment](../../../../policy/development_stages_support.md#experiment) in GitLab 17.4.
- Released [Dependency Scanning CI/CD Component](https://gitlab.com/explore/catalog/components/dependency-scanning) version [`0.4.0`](https://gitlab.com/components/dependency-scanning/-/tags/0.4.0) in GitLab 17.5 with support for the [lockfile-based Dependency Scanning](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning/-/blob/main/README.md?ref_type=heads#supported-files) analyzer.
- [Enabled by default with the latest Dependency Scanning CI/CD templates](https://gitlab.com/gitlab-org/gitlab/-/issues/519597) for Cargo, Conda, Cocoapods, and Swift in GitLab 17.9.
- Feature flag `dependency_scanning_using_sbom_reports` removed in GitLab 17.10.

{{< /history >}}

Dependency scanning using CycloneDX SBOM analyzes your application's dependencies for known vulnerabilities.
All dependencies are scanned, [including transitive dependencies](../_index.md).

Dependency scanning is often considered part of Software Composition Analysis (SCA). SCA can contain aspects of inspecting the items your code uses. These items typically include application and system dependencies that are almost always imported from external sources, rather than sourced from items you wrote yourself.

Dependency scanning can run in the development phase of your application's lifecycle. Every time a pipeline produces an SBOM report, security findings are identified and compared between the source and target branches. Findings and their severity are listed in the merge request, enabling you to proactively address the risk to your application, before the code change is committed. Security findings for reported SBOM components are also identified by [Continuous Vulnerability Scanning](../../continuous_vulnerability_scanning/_index.md) when new security advisories are published, independently from CI/CD pipelines.

GitLab offers both dependency scanning and [container scanning](../../container_scanning/_index.md) to ensure coverage for all of these dependency types. To cover as much of your risk area as possible, we encourage you to use all of our security scanners. For a comparison of these features, see [Dependency Scanning compared to Container Scanning](../../comparison_dependency_and_container_scanning.md).

## Getting started

Enable the Dependency Scanning using SBOM feature with one of the following options:

- Use the `latest` Dependency Scanning CI/CD template `Dependency-Scanning.latest.gitlab-ci.yml` to enable a GitLab provided analyzer.
  - The (deprecated) Gemnasium analyzer is used by default.
  - To enable the new Dependency Scanning analyzer, set the CI/CD variable `DS_ENFORCE_NEW_ANALYZER` to `true`.
  - A [supported lock file, dependency graph](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning/#supported-files), or [trigger file](#trigger-files) must exist in the repository to create the `dependency-scanning` job in pipelines.
- Use the [Scan Execution Policies](../../policies/scan_execution_policies.md) with the `latest` template to enable a GitLab provided analyzer.
  - The (deprecated) Gemnasium analyzer is used by default.
  - To enable the new Dependency Scanning analyzer, set the CI/CD variable `DS_ENFORCE_NEW_ANALYZER` to `true`.
  - A [supported lock file, dependency graph](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning/#supported-files), or [trigger file](#trigger-files) must exist in the repository to create the `dependency-scanning` job in pipelines.
- Use the [Dependency Scanning CI/CD component](https://gitlab.com/explore/catalog/components/dependency-scanning) to enable the new Dependency Scanning analyzer.
- Provide your own CycloneDX SBOM document as [a CI/CD artifact report](../../../../ci/yaml/artifacts_reports.md#artifactsreportscyclonedx) from a successful pipeline.

You should use the new Dependency Scanning analyzer. For details, see [Enabling the analyzer](#enabling-the-analyzer). If instead you use the (deprecated) Gemnasium analyzer, refer to the enablement instructions for the [legacy Dependency Scanning feature](../_index.md#getting-started).

### Enabling the analyzer

The Dependency Scanning analyzer produces a CycloneDX SBOM report compatible with GitLab. If your application can't generate such a report, you can use the GitLab analyzer to produce one. Share any feedback on the new Dependency Scanning analyzer in this [feedback issue](https://gitlab.com/gitlab-org/gitlab/-/issues/523458).
Prerequisites:

- A [supported lock file or dependency graph](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning/#supported-files) must exist in the repository or must be passed as an artifact to the `dependency-scanning` job.
- The component's [stage](https://gitlab.com/explore/catalog/components/dependency-scanning) is required in the `.gitlab-ci.yml` file.
- With self-managed runners you need a GitLab Runner with the [`docker`](https://docs.gitlab.com/runner/executors/docker.html) or [`kubernetes`](https://docs.gitlab.com/runner/install/kubernetes.html) executor.
- If you're using SaaS runners on GitLab.com, this is enabled by default.

To enable the analyzer, use one of the following options:

- Use the `latest` Dependency Scanning CI/CD template `Dependency-Scanning.latest.gitlab-ci.yml` and enforce the new Dependency Scanning analyzer by setting the CI/CD variable `DS_ENFORCE_NEW_ANALYZER` to `true`.

  ```yaml
  include:
    - template: Jobs/Dependency-Scanning.latest.gitlab-ci.yml

  variables:
    DS_ENFORCE_NEW_ANALYZER: 'true'
  ```

- Use the [Scan Execution Policies](../../policies/scan_execution_policies.md) with the `latest` template and enforce the new Dependency Scanning analyzer by setting the CI/CD variable `DS_ENFORCE_NEW_ANALYZER` to `true`.
- Use the [Dependency Scanning CI/CD component](https://gitlab.com/explore/catalog/components/dependency-scanning):

  ```yaml
  include:
    - component: $CI_SERVER_FQDN/components/dependency-scanning/main@0
  ```

#### Trigger files

Trigger files create a `dependency-scanning` CI/CD job when using the [latest Dependency Scanning CI template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Dependency-Scanning.latest.gitlab-ci.yml). The analyzer does not scan these files. Your project can be supported if you use a trigger file to [build](#language-specific-instructions) a [supported lock file](https://gitlab.com/gitlab-org/security-products/analyzers/dependency-scanning/#supported-files).
| Language | Files |
| -------- | ----- |
| C#/Visual Basic | `*.csproj`, `*.vbproj` |
| Java | `pom.xml` |
| Java/Kotlin | `build.gradle`, `build.gradle.kts` |
| Python | `requirements.pip`, `Pipfile`, `requires.txt`, `setup.py` |
| Scala | `build.sbt` |

#### Language-specific instructions

If your project doesn't have a supported lock file or dependency graph committed to its repository, you need to provide one. The examples below show how to create a file that is supported by the GitLab analyzer for popular languages and package managers.

##### Go

If your project provides only a `go.mod` file, the Dependency Scanning analyzer can still extract the list of components. However, [dependency path](../../dependency_list/_index.md#dependency-paths) information is not available. Additionally, you might encounter false positives if there are multiple versions of the same module.

To benefit from improved component detection and feature coverage, you should provide a `go.graph` file generated using the [`go mod graph` command](https://go.dev/ref/mod#go-mod-graph) from the Go toolchain.

The following example `.gitlab-ci.yml` demonstrates how to enable the CI/CD component with [dependency path](../../dependency_list/_index.md#dependency-paths) support on a Go project. The dependency graph is output as a job artifact in the `build` stage, before dependency scanning runs.

```yaml
stages:
  - build
  - test

include:
  - component: $CI_SERVER_FQDN/components/dependency-scanning/main@0

go:build:
  stage: build
  image: "golang:latest"
  script:
    - "go mod tidy"
    - "go build ./..."
    - "go mod graph > go.graph"
  artifacts:
    when: on_success
    access: developer
    paths: ["**/go.graph"]
```

##### Gradle

For Gradle projects, use either of the following methods to create a dependency graph:

- Nebula Gradle Dependency Lock Plugin
- Gradle's HtmlDependencyReportTask

###### Dependency Lock Plugin

This method provides information about which dependencies are direct.
To enable the CI/CD component on a Gradle project:

1. Edit the `build.gradle` or `build.gradle.kts` to use the [gradle-dependency-lock-plugin](https://github.com/nebula-plugins/gradle-dependency-lock-plugin/wiki/Usage#example) or use an init script.
1. Configure the `.gitlab-ci.yml` file to generate the `dependencies.lock` and `dependencies.direct.lock` artifacts, and pass them to the `dependency-scanning` job.

The following example demonstrates how to configure the component for a Gradle project.

```yaml
stages:
  - build
  - test

image: gradle:8.0-jdk11

include:
  - component: $CI_SERVER_FQDN/components/dependency-scanning/main@0

generate nebula lockfile:
  # Running in the build stage ensures that the dependency-scanning job
  # receives the scannable artifacts.
  stage: build
  script:
    - |
      cat << EOF > nebula.gradle
      initscript {
        repositories {
          mavenCentral()
        }
        dependencies {
          classpath 'com.netflix.nebula:gradle-dependency-lock-plugin:12.7.1'
        }
      }
      allprojects {
        apply plugin: nebula.plugin.dependencylock.DependencyLockPlugin
      }
      EOF
      ./gradlew --init-script nebula.gradle -PdependencyLock.includeTransitives=true -PdependencyLock.lockFile=dependencies.lock generateLock saveLock
      ./gradlew --init-script nebula.gradle -PdependencyLock.includeTransitives=false -PdependencyLock.lockFile=dependencies.direct.lock generateLock saveLock
  # generateLock saves the lock file in the build/ directory of a project
  # and saveLock copies it into the root of a project. To avoid duplicates
  # and get an accurate location of the dependency, use find to remove the
  # lock files in the build/ directory only.
  after_script:
    - find . -path '*/build/dependencies*.lock' -print -delete
  # Collect all generated artifacts and pass them onto jobs in sequential stages.
  artifacts:
    paths:
      - '**/dependencies*.lock'
```

###### HtmlDependencyReportTask

This method provides information about both transitive and direct dependencies.
The [HtmlDependencyReportTask](https://docs.gradle.org/current/dsl/org.gradle.api.reporting.dependencies.HtmlDependencyReportTask.html) is an alternative way to get the list of dependencies for a Gradle project (tested with `gradle` versions 4 through 8). To use this method with dependency scanning, the artifact from running the `gradle htmlDependencyReport` task needs to be available.

```yaml
stages:
  - build
  - test

# Define the image that contains Java and Gradle
image: gradle:8.0-jdk11

include:
  - component: $CI_SERVER_FQDN/components/dependency-scanning/main@0

build:
  stage: build
  script:
    - gradle --init-script report.gradle htmlDependencyReport
  # The gradle task writes the dependency report as a javascript file under
  # build/reports/project/dependencies. Because the file has a non-standardized
  # name, the after_script finds and renames the file to
  # `gradle-html-dependency-report.js`, copying it to the same directory as
  # `build.gradle`.
  after_script:
    - |
      reports_dir=build/reports/project/dependencies
      while IFS= read -r -d '' src; do
        dest="${src%%/$reports_dir/*}/gradle-html-dependency-report.js"
        cp "$src" "$dest"
      done < <(find . -type f -path "*/${reports_dir}/*.js" -not -path "*/${reports_dir}/js/*" -print0)
  # Pass html report artifact to subsequent dependency scanning stage.
  artifacts:
    paths:
      - "**/gradle-html-dependency-report.js"
```

The command above uses the `report.gradle` file and can be supplied through `--init-script`, or its contents can be added to `build.gradle` directly:

```groovy
allprojects {
  apply plugin: 'project-report'
}
```

{{< alert type="note" >}}

The dependency report may indicate that dependencies for some configurations `FAILED` to be resolved. In this case dependency scanning logs a warning but does not fail the job. If you prefer to have the pipeline fail if resolution failures are reported, add the following extra steps to the `build` example above.
{{< /alert >}}

```shell
while IFS= read -r -d '' file; do
  grep --quiet -E '"resolvable":\s*"FAILED' "$file" && echo "Dependency report has dependencies with FAILED resolution status" && exit 1
done < <(find . -type f -path "*/gradle-html-dependency-report.js" -print0)
```

##### Maven

The following example `.gitlab-ci.yml` demonstrates how to enable the CI/CD component on a Maven project. The dependency graph is output as a job artifact in the `build` stage, before dependency scanning runs.

Requirement: use at least version `3.7.0` of the `maven-dependency-plugin`.

```yaml
stages:
  - build
  - test

image: maven:3.9.9-eclipse-temurin-21

include:
  - component: $CI_SERVER_FQDN/components/dependency-scanning/main@0

build:
  # Running in the build stage ensures that the dependency-scanning job
  # receives the maven.graph.json artifacts.
  stage: build
  script:
    - mvn install
    - mvn org.apache.maven.plugins:maven-dependency-plugin:3.8.1:tree -DoutputType=json -DoutputFile=maven.graph.json
  # Collect all maven.graph.json artifacts and pass them onto jobs
  # in sequential stages.
  artifacts:
    paths:
      - "**/*.jar"
      - "**/maven.graph.json"
```

##### pip

If your project provides a `requirements.txt` lock file generated by the [pip-compile command line tool](https://pip-tools.readthedocs.io/en/latest/cli/pip-compile/), the Dependency Scanning analyzer can extract the list of components and the dependency graph information, which provides support for the [dependency path](../../dependency_list/_index.md#dependency-paths) feature.

Alternatively, your project can provide a `pipdeptree.json` dependency graph export generated by the [`pipdeptree --json` command line utility](https://pypi.org/project/pipdeptree/).

The following example `.gitlab-ci.yml` demonstrates how to enable the CI/CD component with [dependency path](../../dependency_list/_index.md#dependency-paths) support on a pip project. The `build` stage outputs the dependency graph as a job artifact before dependency scanning runs.
```yaml
stages:
  - build
  - test

include:
  - component: $CI_SERVER_FQDN/components/dependency-scanning/main@0

build:
  stage: build
  image: "python:latest"
  script:
    - "pip install -r requirements.txt"
    - "pip install pipdeptree"
    - "pipdeptree --json > pipdeptree.json"
  artifacts:
    when: on_success
    access: developer
    paths: ["**/pipdeptree.json"]
```

Because of a [known issue](https://github.com/tox-dev/pipdeptree/issues/107), `pipdeptree` does not mark [optional dependencies](https://setuptools.pypa.io/en/latest/userguide/dependency_management.html#optional-dependencies) as dependencies of the parent package. As a result, Dependency Scanning marks them as direct dependencies of the project, instead of as transitive dependencies.

##### Pipenv

If your project provides only a `Pipfile.lock` file, the Dependency Scanning analyzer can still extract the list of components. However, [dependency path](../../dependency_list/_index.md#dependency-paths) information is not available.

To benefit from improved feature coverage, you should provide a `pipenv.graph.json` file generated by the [`pipenv graph` command](https://pipenv.pypa.io/en/latest/cli.html#graph).

The following example `.gitlab-ci.yml` demonstrates how to enable the CI/CD component with [dependency path](../../dependency_list/_index.md#dependency-paths) support on a Pipenv project. The `build` stage outputs the dependency graph as a job artifact before dependency scanning runs.

```yaml
stages:
  - build
  - test

include:
  - component: $CI_SERVER_FQDN/components/dependency-scanning/main@0

build:
  stage: build
  image: "python:3.12"
  script:
    - "pip install pipenv"
    - "pipenv install"
    - "pipenv graph --json-tree > pipenv.graph.json"
  artifacts:
    when: on_success
    access: developer
    paths: ["**/pipenv.graph.json"]
```

##### sbt

To enable the CI/CD component on an sbt project:

- Edit the `plugins.sbt` to use the [sbt-dependency-graph plugin](https://github.com/sbt/sbt-dependency-graph/blob/master/README.md#usage-instructions).
The following example `.gitlab-ci.yml` demonstrates how to enable the CI/CD component with [dependency path](../../dependency_list/_index.md#dependency-paths) support in an sbt project. The `build` stage outputs the dependency graph as a job artifact before dependency scanning runs.

```yaml
stages:
  - build
  - test

include:
  - component: $CI_SERVER_FQDN/components/dependency-scanning/main@0

build:
  stage: build
  image: "sbtscala/scala-sbt:eclipse-temurin-17.0.13_11_1.10.7_3.6.3"
  script:
    - "sbt dependencyDot"
  artifacts:
    when: on_success
    access: developer
    paths: ["**/dependencies-compile.dot"]
```

## Understanding the results

The dependency scanning analyzer produces a CycloneDX Software Bill of Materials (SBOM) for each supported lock file or dependency graph export detected.

### CycloneDX Software Bill of Materials

The dependency scanning analyzer outputs a [CycloneDX](https://cyclonedx.org/) Software Bill of Materials (SBOM) for each supported lock file or dependency graph export it detects. The CycloneDX SBOMs are:

- Named `gl-sbom-<package-type>-<package-manager>.cdx.json`.
- Available as job artifacts of the dependency scanning job.
- Uploaded as `cyclonedx` reports.
- Saved in the same directory as the detected lock files or dependency graph exports.

For example, if your project has the following structure:

```plaintext
.
├── ruby-project/
│   └── Gemfile.lock
├── ruby-project-2/
│   └── Gemfile.lock
└── php-project/
    └── composer.lock
```

The following CycloneDX SBOMs are created as job artifacts:

```plaintext
.
├── ruby-project/
│   ├── Gemfile.lock
│   └── gl-sbom-gem-bundler.cdx.json
├── ruby-project-2/
│   ├── Gemfile.lock
│   └── gl-sbom-gem-bundler.cdx.json
└── php-project/
    ├── composer.lock
    └── gl-sbom-packagist-composer.cdx.json
```

### Merging multiple CycloneDX SBOMs

You can use a CI/CD job to merge the multiple CycloneDX SBOMs into a single SBOM.
{{< alert type="note" >}}

GitLab uses [CycloneDX Properties](https://cyclonedx.org/use-cases/#properties--name-value-store) to store implementation-specific details in the metadata of each CycloneDX SBOM, such as the location of dependency graph exports and lock files. If multiple CycloneDX SBOMs are merged together, this information is removed from the resulting merged file.

{{< /alert >}}

For example, the following `.gitlab-ci.yml` extract demonstrates how the CycloneDX SBOM files can be merged, and the resulting file validated.

```yaml
stages:
  - test
  - merge-cyclonedx-sboms

include:
  - component: $CI_SERVER_FQDN/components/dependency-scanning/main@0

merge cyclonedx sboms:
  stage: merge-cyclonedx-sboms
  image:
    name: cyclonedx/cyclonedx-cli:0.27.1
    entrypoint: [""]
  script:
    - find . -name "gl-sbom-*.cdx.json" -exec cyclonedx merge --output-file gl-sbom-all.cdx.json --input-files "{}" +
    # optional: validate the merged sbom
    - cyclonedx validate --input-version v1_6 --input-file gl-sbom-all.cdx.json
  artifacts:
    paths:
      - gl-sbom-all.cdx.json
```

## Optimization

To optimize Dependency Scanning with SBOM according to your requirements you can:

- Exclude files and directories from the scan.
- Define the maximum depth to search for files.

### Exclude files and directories from the scan

To exclude files or directories from being scanned, use `DS_EXCLUDED_PATHS` with a comma-separated list of patterns in your `.gitlab-ci.yml`. This prevents the specified files and directories from being targeted by the scan.

### Define the max depth to look for files

To optimize the analyzer behavior you can set a maximum depth value through the `DS_MAX_DEPTH` environment variable. A value of `-1` scans all directories regardless of depth. The default is `2`.
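For example, both options can be set together in `.gitlab-ci.yml`. The excluded `vendor` path and the depth value below are illustrative choices, not defaults:

```yaml
include:
  - template: Jobs/Dependency-Scanning.latest.gitlab-ci.yml

variables:
  DS_ENFORCE_NEW_ANALYZER: "true"
  # Illustrative values: skip vendored code in addition to the default
  # exclusions, and search three directory levels deep for supported files.
  DS_EXCLUDED_PATHS: "spec, test, tests, tmp, vendor"
  DS_MAX_DEPTH: "3"
```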
## Roll out

After you are confident in the Dependency Scanning with SBOM results for a single project, you can extend its implementation to additional projects:

- Use [enforced scan execution](../../detect/security_configuration.md#create-a-shared-configuration) to apply Dependency Scanning with SBOM settings across groups.
- If you have unique requirements, Dependency Scanning with SBOM can be run in [offline environments](../../offline_deployments/_index.md).

## Supported package types

For the security analysis to be effective, the components listed in your SBOM report must have corresponding entries in the [GitLab Advisory Database](../../gitlab_advisory_database/_index.md). The GitLab SBOM Vulnerability Scanner can report Dependency Scanning vulnerabilities for components with the following [PURL types](https://github.com/package-url/purl-spec/blob/346589846130317464b677bc4eab30bf5040183a/PURL-TYPES.rst):

- `cargo`
- `composer`
- `conan`
- `gem`
- `golang`
- `maven`
- `npm`
- `nuget`
- `pypi`

## Customizing analyzer behavior

How to customize the analyzer varies depending on the enablement solution.

{{< alert type="warning" >}}

Test all customization of GitLab analyzers in a merge request before merging these changes to the default branch. Failure to do so can give unexpected results, including a large number of false positives.

{{< /alert >}}

### Customizing behavior with the CI/CD template

When using the `latest` Dependency Scanning CI/CD template `Dependency-Scanning.latest.gitlab-ci.yml` or [Scan Execution Policies](../../policies/scan_execution_policies.md), use [CI/CD variables](#available-cicd-variables).

#### Available CI/CD variables

The following variables allow configuration of global dependency scanning settings.

| CI/CD variable | Description |
| -------------- | ----------- |
| `DS_EXCLUDED_ANALYZERS` | Specify the analyzers (by name) to exclude from Dependency Scanning. |
| `DS_EXCLUDED_PATHS` | Exclude files and directories from the scan based on the paths. A comma-separated list of patterns. Patterns can be globs (see [`doublestar.Match`](https://pkg.go.dev/github.com/bmatcuk/doublestar/v4@v4.0.2#Match) for supported patterns), or file or folder paths (for example, `doc,spec`). Parent directories also match patterns. This is a pre-filter which is applied before the scan is executed. Default: `"spec, test, tests, tmp"`. |
| `DS_MAX_DEPTH` | Defines how many directory levels deep the analyzer should search for supported files to scan. A value of `-1` scans all directories regardless of depth. Default: `2`. |
| `DS_INCLUDE_DEV_DEPENDENCIES` | When set to `"false"`, development dependencies are not reported. Only projects using Composer, Conda, Gradle, Maven, npm, pnpm, Pipenv, Poetry, or uv are supported. Default: `"true"`. |
| `DS_PIPCOMPILE_REQUIREMENTS_FILE_NAME_PATTERN` | Defines which requirement files to process using glob pattern matching (for example, `requirements*.txt` or `*-requirements.txt`). The pattern should match filenames only, not directory paths. See the [glob pattern documentation](https://github.com/bmatcuk/doublestar/tree/v1?tab=readme-ov-file#patterns) for syntax details. |
| `SECURE_ANALYZERS_PREFIX` | Override the name of the Docker registry providing the official default images (proxy). |
| `DS_FF_LINK_COMPONENTS_TO_GIT_FILES` | Link components in the dependency list to files committed to the repository rather than lockfiles and graph files generated dynamically in a CI/CD pipeline. This ensures all components are linked to a source file in the repository. Default: `"false"`. |

##### Overriding dependency scanning jobs

To override a job definition, declare a new job with the same name as the one to override. Place this new job after the template inclusion and specify any additional keys under it.
For example, this overrides the `dependencies` attribute of the `dependency-scanning` job:

```yaml
include:
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml

dependency-scanning:
  dependencies: ["build"]
```

### Customizing behavior with the CI/CD component

When using the Dependency Scanning CI/CD component, the analyzer can be customized by configuring the [inputs](https://gitlab.com/explore/catalog/components/dependency-scanning).

## How it scans an application

The dependency scanning using SBOM approach relies on two distinct phases:

- First, the dependency detection phase focuses solely on creating a comprehensive inventory of your project's dependencies and their relationships (the dependency graph). This inventory is captured in an SBOM (Software Bill of Materials) document.
- Second, after the CI/CD pipeline completes, the GitLab platform processes your SBOM report and performs a thorough security analysis using the built-in GitLab SBOM Vulnerability Scanner. It is the same scanner that provides [Continuous Vulnerability Scanning](../../continuous_vulnerability_scanning/_index.md).

This separation of concerns and the modularity of this architecture allow for expanded language support, a tighter integration and experience within the GitLab platform, and a shift towards industry-standard report types.

## Dependency detection

Dependency scanning using SBOM requires the detected dependencies to be captured in a CycloneDX SBOM document. However, the modular aspect of this functionality allows you to select how this document is generated:

- Using the Dependency Scanning analyzer provided by GitLab (recommended)
- Using the (deprecated) Gemnasium analyzer provided by GitLab
- Using a custom job with a third-party CycloneDX SBOM generator or a custom tool.
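As a sketch of the custom-job option, the only GitLab-specific requirement is that the job uploads its CycloneDX document with the `cyclonedx` artifact report keyword. The image name, command, and output filename below are placeholders for whichever third-party generator you use, not a real tool:

```yaml
generate custom sbom:
  stage: test
  # Placeholder image: substitute your CycloneDX SBOM generator of choice.
  image: example/sbom-generator:latest
  script:
    # Placeholder command: the tool must emit a CycloneDX 1.4, 1.5, or 1.6 document.
    - sbom-generator --output gl-sbom-report.cdx.json
  artifacts:
    reports:
      cyclonedx: gl-sbom-report.cdx.json
```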
To activate dependency scanning using SBOM, the provided CycloneDX SBOM document must:

- Comply with [the CycloneDX specification](https://github.com/CycloneDX/specification) version `1.4`, `1.5`, or `1.6`. An online validator is available in the [CycloneDX Web Tool](https://cyclonedx.github.io/cyclonedx-web-tool/validate).
- Comply with [the GitLab CycloneDX property taxonomy](../../../../development/sec/cyclonedx_property_taxonomy.md).
- Be uploaded as [a CI/CD artifact report](../../../../ci/yaml/artifacts_reports.md#artifactsreportscyclonedx) from a successful pipeline.

When using GitLab-provided analyzers, these requirements are met.

## Security analysis

After a compatible CycloneDX SBOM document is uploaded, GitLab automatically performs the security analysis with the GitLab SBOM Vulnerability Scanner. Each component is checked against the GitLab Advisory Database, and scan results are processed as follows:

- If the SBOM report is declared by a CI/CD job on the default branch: vulnerabilities are created, and can be seen in the [vulnerability report](../../vulnerability_report/_index.md).
- If the SBOM report is declared by a CI/CD job on a non-default branch: security findings are created, and can be seen in the [security tab of the pipeline view](../../vulnerability_report/pipeline.md) and the MR security widget. This functionality is behind a feature flag and tracked in [Epic 14636](https://gitlab.com/groups/gitlab-org/-/epics/14636).

## Offline support

{{< details >}}

- Tier: Ultimate
- Offering: GitLab Self-Managed

{{< /details >}}

For instances in an environment with limited, restricted, or intermittent access to external resources through the internet, you need to make some adjustments to run dependency scanning jobs successfully. For more information, see [offline environments](../../offline_deployments/_index.md).
### Requirements

To run dependency scanning in an offline environment you must have:

- A GitLab Runner with the `docker` or `kubernetes` executor.
- Local copies of the dependency scanning analyzer images.
- Access to the [Package Metadata Database](../../../../topics/offline/quick_start_guide.md#enabling-the-package-metadata-database). Required to have license and advisory data for your dependencies.

### Local copies of analyzer images

To use the dependency scanning analyzer:

1. Import the following default dependency scanning analyzer image from `registry.gitlab.com` into your [local Docker container registry](../../../packages/container_registry/_index.md):

   ```plaintext
   registry.gitlab.com/security-products/dependency-scanning:v0
   ```

   The process for importing Docker images into a local offline Docker registry depends on **your network security policy**. Consult your IT staff to find an accepted and approved process by which external resources can be imported or temporarily accessed. These scanners are [periodically updated](../../detect/vulnerability_scanner_maintenance.md) with new definitions, and you may want to download them regularly.

   If your offline instance has access to the GitLab registry, you can use the [Secure-Binaries template](../../offline_deployments/_index.md#using-the-official-gitlab-template) to download the latest dependency scanning analyzer image.

1. Configure GitLab CI/CD to use the local analyzers. Set the value of the CI/CD variable `SECURE_ANALYZERS_PREFIX` to your local Docker registry - in this example, `docker-registry.example.com`.

   ```yaml
   include:
     - template: Jobs/Dependency-Scanning.latest.gitlab-ci.yml

   variables:
     SECURE_ANALYZERS_PREFIX: "docker-registry.example.com/analyzers"
   ```

## Security policies

Use security policies to enforce Dependency Scanning across multiple projects. The appropriate policy type depends on whether your projects have scannable artifacts committed to their repositories.
### Scan execution policies

[Scan execution policies](../../policies/scan_execution_policies.md) are supported for all projects that have scannable artifacts committed to their repositories. These artifacts include lockfiles, dependency graph files, and other files that can be directly analyzed to identify dependencies. For projects with these artifacts, scan execution policies provide the fastest and most straightforward way to enforce Dependency Scanning.

### Pipeline execution policies

For projects that don't have scannable artifacts committed to their repositories, you must use [pipeline execution policies](../../policies/pipeline_execution_policies.md). These policies use a custom CI/CD job to generate scannable artifacts before invoking Dependency Scanning. With pipeline execution policies, you can:

- Generate lockfiles or dependency graphs as part of your CI/CD pipeline.
- Customize the dependency detection process for your specific project requirements.
- Implement the language-specific instructions for build tools like Gradle and Maven.

#### Example: Pipeline execution policy for a Gradle project

For a Gradle project without a scannable artifact committed to the repository, a pipeline execution policy with an artifact generation step is required. This example uses the `nebula` plugin.

In the dedicated security policies project, create or update the main policy file (for example, `policy.yml`):

```yaml
pipeline_execution_policy:
  - name: Enforce Gradle dependency scanning with SBOM
    description: Generate dependency artifact and run Dependency Scanning.
    enabled: true
    pipeline_config_strategy: inject_policy
    content:
      include:
        - project: $SECURITY_POLICIES_PROJECT
          file: "dependency-scanning.yml"
```

Add `dependency-scanning.yml`:

```yaml
stages:
  - build
  - test

variables:
  DS_ENFORCE_NEW_ANALYZER: "true"

include:
  - template: Jobs/Dependency-Scanning.latest.gitlab-ci.yml

generate nebula lockfile:
  image: openjdk:11-jdk
  stage: build
  script:
    - |
      cat << EOF > nebula.gradle
      initscript {
        repositories {
          mavenCentral()
        }
        dependencies {
          classpath 'com.netflix.nebula:gradle-dependency-lock-plugin:12.7.1'
        }
      }
      allprojects {
        apply plugin: nebula.plugin.dependencylock.DependencyLockPlugin
      }
      EOF
      ./gradlew --init-script nebula.gradle -PdependencyLock.includeTransitives=true -PdependencyLock.lockFile=dependencies.lock generateLock saveLock
      ./gradlew --init-script nebula.gradle -PdependencyLock.includeTransitives=false -PdependencyLock.lockFile=dependencies.direct.lock generateLock saveLock
  after_script:
    - find . -path '*/build/dependencies.lock' -print -delete
  artifacts:
    paths:
      - '**/dependencies.lock'
      - '**/dependencies.direct.lock'
```

This approach ensures that:

1. A pipeline run in the Gradle project generates the scannable artifacts.
1. Dependency Scanning is enforced and has access to the scannable artifacts.
1. All projects in the policy scope consistently follow the same dependency scanning approach.
1. Configuration changes can be managed centrally and applied across multiple projects.

For more details on implementing pipeline execution policies for different build tools, refer to the [language-specific instructions](#language-specific-instructions).

## Troubleshooting

When working with dependency scanning, you might encounter the following issues.

### Warning: `grep: command not found`

The analyzer image contains minimal dependencies to decrease the image's attack surface. As a result, utilities commonly found in other images, like `grep`, are missing from the image. This may cause a warning like `/usr/bin/bash: line 3: grep: command not found` to appear in the job log. The warning does not affect the results of the analyzer and can be ignored.
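If a custom `before_script` or `script` override genuinely needs `grep`-style matching inside the analyzer image, shell built-ins can often substitute for the missing utility. A hedged sketch using the shell's `case` built-in — the variable and the `release/*` branch-naming pattern are illustrative assumptions, not part of the analyzer's configuration:

```shell
# Pattern matching without grep, using the shell's built-in `case`.
# CI_COMMIT_BRANCH is the standard GitLab CI/CD variable; the release/*
# pattern is an illustrative assumption about your branch naming.
branch="${CI_COMMIT_BRANCH:-main}"

case "$branch" in
  release/*) echo "release branch: $branch" ;;
  *)         echo "not a release branch: $branch" ;;
esac
```

This avoids invoking `grep` entirely, so the warning never appears for this kind of check.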
---
stage: Application Security Testing
group: Static Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Offline environments
description: Offline security scanning and resolving vulnerabilities.
breadcrumbs:
- doc
- user
- application_security
- offline_deployments
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed

{{< /details >}}

{{< alert type="note" >}}

To set up an offline environment, you must receive an [opt-out exemption of cloud licensing](https://about.gitlab.com/pricing/licensing-faq/cloud-licensing/#offline-cloud-licensing) prior to purchase. For more details, contact your GitLab sales representative.

{{< /alert >}}

It's possible to run most of the GitLab security scanners when not connected to the internet. This document describes how to operate Secure categories (that is, scanner types) in an offline environment. These instructions also apply to GitLab Self-Managed instances that are secured, have security policies (for example, firewall policies), or are otherwise restricted from accessing the full internet. GitLab refers to these environments as _offline environments_. Other common names include:

- Air-gapped environments
- Limited connectivity environments
- Local area network (LAN) environments
- Intranet environments

These environments have physical barriers or security policies (for example, firewalls) that prevent or limit internet access. These instructions are designed for physically disconnected networks, but can also be followed in these other use cases.

## Defining offline environments

In an offline environment, the GitLab instance can be one or more servers and services that can communicate on a local network, but with no or very restricted access to the internet. Assume anything in the GitLab instance and supporting infrastructure (for example, a private Maven repository) can be accessed through a local network connection. Assume any files from the internet must come in through physical media (USB drive, hard drive, writeable DVD, and so on).

## Use offline scanners

GitLab scanners usually connect to the internet to download the latest sets of signatures, rules, and patches. A few extra steps are necessary to configure the tools to function properly by using resources available on your local network.

### Container registries and package repositories

At a high level, the security analyzers are delivered as Docker images and may leverage various package repositories. When you run a job on an internet-connected GitLab installation, GitLab checks the GitLab.com-hosted container registry to ensure that you have the latest versions of these Docker images, and possibly connects to package repositories to install necessary dependencies.

In an offline environment, these checks must be disabled so that GitLab.com isn't queried. Because the GitLab.com registry and repositories are not available, you must update each of the scanners to either reference a different, internally-hosted registry or provide access to the individual scanner images.

You must also ensure that your app has access to common package repositories that are not hosted on GitLab.com, such as npm, yarn, or Ruby gems. Packages from these repositories can be obtained by temporarily connecting to a network or by mirroring the packages inside your own offline network.

### Interacting with the vulnerabilities

Once a vulnerability is found, you can interact with it. Read more on how to [address the vulnerabilities](../vulnerabilities/_index.md).

In some cases the reported vulnerabilities provide metadata that can contain external links exposed in the UI. These links might not be accessible within an offline environment.

### Resolving vulnerabilities

The [resolving vulnerabilities](../vulnerabilities/_index.md#resolve-a-vulnerability) feature is available for offline Dependency Scanning and Container Scanning, but may not work depending on your instance's configuration. We can only suggest solutions, which are generally more current versions that have been patched, when we are able to access up-to-date registry services hosting the latest versions of that dependency or image.
### Scanner signature and rule updates

When connected to the internet, some scanners reference public databases for the latest sets of signatures and rules to check against. Without connectivity, this is not possible. Depending on the scanner, you must therefore disable these automatic update checks and either use the databases that they came with and manually update those databases, or provide access to your own copies hosted within your network.

## Specific scanner instructions

Each individual scanner may differ slightly from the steps previously described. You can find more information at each of the pages below:

- [Container scanning offline directions](../container_scanning/_index.md#running-container-scanning-in-an-offline-environment)
- [SAST offline directions](../sast/_index.md#running-sast-in-an-offline-environment)
- [Secret Detection offline directions](../secret_detection/pipeline/configure.md#offline-configuration)
- [DAST offline directions](../dast/browser/configuration/offline_configuration.md)
- [API Fuzzing offline directions](../api_fuzzing/configuration/offline_configuration.md)
- [License Scanning offline directions](../../compliance/license_scanning_of_cyclonedx_files/_index.md#running-in-an-offline-environment)
- [Dependency Scanning offline directions](../dependency_scanning/_index.md#offline-environment)
- [IaC Scanning offline directions](../iac_scanning/_index.md#offline-configuration)

## Loading Docker images onto your offline host

To use many GitLab features, including security scans and [Auto DevOps](../../../topics/autodevops/_index.md), the runner must be able to fetch the relevant Docker images. The process for making these images available without direct access to the public internet involves downloading the images, then packaging and transferring them to the offline host. Here's an example of such a transfer:

1. Download Docker images from the public internet.
1. Package Docker images as tar archives.
1. Transfer images to the offline environment.
1. Load transferred images into the offline Docker registry.

### Using the official GitLab template

GitLab provides a [vendored template](../../../ci/yaml/_index.md#includetemplate) to ease this process. This template should be used in a new, empty project, with a `.gitlab-ci.yml` file containing:

```yaml
include:
  - template: Security/Secure-Binaries.gitlab-ci.yml
```

The pipeline downloads the Docker images needed for the security scanners and saves them as [job artifacts](../../../ci/jobs/job_artifacts.md) or pushes them to the [container registry](../../packages/container_registry/_index.md) of the project where the pipeline is executed. These archives can be transferred to another location and [loaded](https://docs.docker.com/reference/cli/docker/image/load/) in a Docker daemon.

This method requires a runner with access to both `gitlab.com` (including `registry.gitlab.com`) and the local offline instance. This runner must run in [privileged mode](https://docs.gitlab.com/runner/executors/docker.html#use-docker-in-docker-with-privileged-mode) to be able to use the `docker` command inside the jobs. This runner can be installed in a DMZ or on a bastion, and used only for this specific project.

{{< alert type="warning" >}}

This template does not include updates for the container scanning analyzer. See [Container scanning offline directions](../container_scanning/_index.md#running-container-scanning-in-an-offline-environment).

{{< /alert >}}

#### Scheduling the updates

By default, this project's pipeline runs only once, when the `.gitlab-ci.yml` is added to the repository. To update the GitLab security scanners and signatures, it's necessary to run this pipeline regularly. GitLab provides a way to [schedule pipelines](../../../ci/pipelines/schedules.md). For example, you can set this up to download and store the Docker images every week.
#### Using the secure bundle created

The project using the `Secure-Binaries.gitlab-ci.yml` template should now host all the required images and resources needed to run GitLab Security features.

Next, you must tell the offline instance to use these resources instead of the default ones on GitLab.com. To do so, set the CI/CD variable `SECURE_ANALYZERS_PREFIX` to the URL of the project [container registry](../../packages/container_registry/_index.md).

You can set this variable in the projects' `.gitlab-ci.yml`, or in the GitLab UI at the project or group level. See the [GitLab CI/CD variables page](../../../ci/variables/_index.md#define-a-cicd-variable-in-the-ui) for more information.

#### Variables

The following table shows which CI/CD variables you can use with the `Secure-Binaries.gitlab-ci.yml` template:

| CI/CD variable                     | Description                                   | Default value                    |
|------------------------------------|-----------------------------------------------|----------------------------------|
| `SECURE_BINARIES_ANALYZERS`        | Comma-separated list of analyzers to download | `"bandit, brakeman, gosec, ..."` |
| `SECURE_BINARIES_DOWNLOAD_IMAGES`  | Used to disable jobs                          | `"true"`                         |
| `SECURE_BINARIES_PUSH_IMAGES`      | Push files to the project registry            | `"true"`                         |
| `SECURE_BINARIES_SAVE_ARTIFACTS`   | Also save image archives as artifacts         | `"false"`                        |
| `SECURE_BINARIES_ANALYZER_VERSION` | Default analyzer version (Docker tag)         | `"2"`                            |

### Alternate way without the official template

If it's not possible to follow the previous method, the images can be transferred manually instead.

#### Example image packager script

```shell
#!/bin/bash
set -ux

# Specify needed analyzer images as a bash array so the loop iterates per image
analyzers=(${SAST_ANALYZERS:-bandit eslint gosec})
gitlab=registry.gitlab.com/security-products/

for i in "${analyzers[@]}"
do
  tarname="${i}_2.tar"
  docker pull $gitlab$i:2
  docker save $gitlab$i:2 -o ./analyzers/${tarname}
  chmod +r ./analyzers/${tarname}
done
```

#### Example image loader script

This example loads the images from a bastion host to an offline host. In certain configurations, physical media may be needed for such a transfer:

```shell
#!/bin/bash
set -ux

# Specify needed analyzer images as a bash array so the loop iterates per image
analyzers=(${SAST_ANALYZERS:-bandit eslint gosec})
registry=$GITLAB_HOST:4567

for i in "${analyzers[@]}"
do
  tarname="${i}_2.tar"
  scp ./analyzers/${tarname} ${GITLAB_HOST}:~/${tarname}
  ssh $GITLAB_HOST "sudo docker load -i ${tarname}"
  # Escape the command substitution so it runs on the remote host, not locally
  ssh $GITLAB_HOST "sudo docker tag \$(sudo docker images | grep $i | awk '{print \$3}') ${registry}/analyzers/${i}:2"
  ssh $GITLAB_HOST "sudo docker push ${registry}/analyzers/${i}:2"
done
```

### Using GitLab Secure with AutoDevOps in an offline environment

You can use GitLab AutoDevOps for Secure scans in an offline environment. However, you must first do these steps:

1. Load the container images into the local registry. GitLab Secure leverages analyzer container images to do the various scans. These images must be available as part of running AutoDevOps. Before running AutoDevOps, follow the steps in the [official GitLab template](#using-the-official-gitlab-template) to load those container images into the local container registry.

1. Set the CI/CD variable to ensure that AutoDevOps looks in the right place for those images. The AutoDevOps templates leverage the `SECURE_ANALYZERS_PREFIX` variable to identify the location of analyzer images. For more information, see [Using the secure bundle created](#using-the-secure-bundle-created). Ensure that you set this variable to the correct value for where you loaded the analyzer images. You could consider doing this with a project CI/CD variable or by [modifying](../../../topics/autodevops/customize.md#customize-gitlab-ciyml) the `.gitlab-ci.yml` file directly.

Once these steps are complete, GitLab has local copies of the Secure analyzers and is set up to use them instead of an internet-hosted container image. This allows you to run Secure in AutoDevOps in an offline environment.
These steps are specific to GitLab Secure with AutoDevOps. Using other stages with AutoDevOps may require other steps covered in the [Auto DevOps documentation](../../../topics/autodevops/_index.md).
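The `.gitlab-ci.yml` route described in the second step above can be sketched as follows. This is an assumption-laden illustration: the registry path is a placeholder for the container registry of the project that ran the `Secure-Binaries.gitlab-ci.yml` pipeline, and your pipeline may carry other AutoDevOps customizations:

```yaml
include:
  # The standard Auto DevOps pipeline definition.
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  # Placeholder registry path: point the Secure jobs at the locally
  # mirrored analyzer images instead of registry.gitlab.com.
  SECURE_ANALYZERS_PREFIX: "gitlab.example.com:4567/secure-binaries/analyzers"
```

Setting the variable in the project or group UI instead achieves the same effect without committing the value to the repository.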
---
stage: Security Risk Management
group: Security Platform Management
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Security inventory
description: Group-level visibility of assets, scanner coverage, and vulnerabilities.
breadcrumbs:
- doc
- user
- application_security
- security_inventory
---
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
- Status: Beta

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/16484) in GitLab 18.2 with a flag named `security_inventory_dashboard`. Enabled by default. This feature is in [beta](../../../policy/development_stages_support.md).

{{< /history >}}

{{< alert type="flag" >}}

The availability of this feature is controlled by a feature flag. For more information, see the history.

{{< /alert >}}

Use the security inventory to visualize which assets you need to secure and understand the actions you need to take to improve security.

A common phrase in security is, "you can't secure what you can't see." The security inventory provides visibility into the security posture of your organization's top-level groups, helps you identify coverage gaps, and enables you to make efficient, risk-based prioritization decisions.

The security inventory shows:

- Your groups, subgroups, and projects.
- Security scanner coverage for each project, regardless of how the scanner is enabled. Security scanners include:
  - Static application security testing (SAST)
  - Dependency scanning
  - Container scanning
  - Secret detection
  - Dynamic application security testing (DAST)
  - Infrastructure-as-code (IaC) scanning
- The number of vulnerabilities in each group or project, sorted by severity level.

This feature is in beta. Track the development of the security inventory in [epic 16484](https://gitlab.com/groups/gitlab-org/-/epics/16484). Share [your feedback](https://gitlab.com/gitlab-org/gitlab/-/issues/553062) with us as we continue to develop this feature.

The security inventory is enabled by default.

## View the security inventory

Prerequisites:

- You must have at least the Developer role in the group to view the security inventory.

To view the security inventory:

1. On the left sidebar, select **Search or go to** and find your group.
1. Select **Secure > Security inventory**.
1. Complete one of the following actions:
   - To view a group's subgroups, projects, and security assets, select the group.
   - To view a group or project's scanner coverage, search for the group or project.

## Related topics

- [Security Dashboard](../security_dashboard/_index.md)
- [Vulnerability reports](../vulnerability_report/_index.md)
- GraphQL references:
  - [AnalyzerGroupStatusType](../../../api/graphql/reference/_index.md#analyzergroupstatustype) - Counts for each analyzer status in the group and subgroups.
  - [AnalyzerProjectStatusType](../../../api/graphql/reference/_index.md#analyzerprojectstatustype) - Analyzer status (success/fail) for projects.
  - [VulnerabilityNamespaceStatisticType](../../../api/graphql/reference/_index.md#vulnerabilitynamespacestatistictype) - Counts for each vulnerability severity in the group and its subgroups.
  - [VulnerabilityStatisticType](../../../api/graphql/reference/_index.md#vulnerabilitystatistictype) - Counts for each vulnerability severity in the project.

## Troubleshooting

When working with the security inventory, you might encounter the following issues.

### Security inventory menu item missing

Some users do not have the required permissions to access the **Security inventory** menu item. The menu item only displays for groups when the authenticated user has the Developer role or higher.
---
stage: Security Risk Management
group: Security Platform Management
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Security inventory
description: Group-level visibility of assets, scanner coverage, and vulnerabilities.
breadcrumbs:
- doc
- user
- application_security
- security_inventory
---

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
- Status: Beta

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/16484) in GitLab 18.2 with a flag named `security_inventory_dashboard`. Enabled by default. This feature is in [beta](../../../policy/development_stages_support.md).

{{< /history >}}

{{< alert type="flag" >}}

The availability of this feature is controlled by a feature flag. For more information, see the history.

{{< /alert >}}

Use the security inventory to visualize which assets you need to secure and understand the actions you need to take to improve security. A common phrase in security is, "you can't secure what you can't see." The security inventory provides visibility into the security posture of your organization's top-level groups, helps you identify coverage gaps, and enables you to make efficient, risk-based prioritization decisions.

The security inventory shows:

- Your groups, subgroups, and projects.
- Security scanner coverage for each project, regardless of how the scanner is enabled. Security scanners include:
  - Static application security testing (SAST)
  - Dependency scanning
  - Container scanning
  - Secret detection
  - Dynamic application security testing (DAST)
  - Infrastructure-as-code (IaC) scanning
- The number of vulnerabilities in each group or project, sorted by severity level.

This feature is in beta. Track the development of the security inventory in [epic 16484](https://gitlab.com/groups/gitlab-org/-/epics/16484). Share [your feedback](https://gitlab.com/gitlab-org/gitlab/-/issues/553062) with us as we continue to develop this feature.

The security inventory is enabled by default.

## View the security inventory

Prerequisites:

- You must have at least the Developer role in the group to view the security inventory.

To view the security inventory:

1. On the left sidebar, select **Search or go to** and find your group.
1. Select **Secure > Security inventory**.
1. Complete one of the following actions:
   - To view a group's subgroups, projects, and security assets, select the group.
   - To view a group or project's scanner coverage, search for the group or project.

## Related topics

- [Security Dashboard](../security_dashboard/_index.md)
- [Vulnerability reports](../vulnerability_report/_index.md)
- GraphQL references:
  - [AnalyzerGroupStatusType](../../../api/graphql/reference/_index.md#analyzergroupstatustype) - Counts for each analyzer status in the group and subgroups.
  - [AnalyzerProjectStatusType](../../../api/graphql/reference/_index.md#analyzerprojectstatustype) - Analyzer status (success/fail) for projects.
  - [VulnerabilityNamespaceStatisticType](../../../api/graphql/reference/_index.md#vulnerabilitynamespacestatistictype) - Counts for each vulnerability severity in the group and its subgroups.
  - [VulnerabilityStatisticType](../../../api/graphql/reference/_index.md#vulnerabilitystatistictype) - Counts for each vulnerability severity in the project.

## Troubleshooting

When working with the security inventory, you might encounter the following issues:

### Security inventory menu item missing

Some users do not have the required permissions to access the **Security Inventory** menu item. The menu item only displays for groups when the authenticated user has the Developer role or higher.
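The GraphQL types listed under **Related topics** expose the same counts that the inventory displays. The following Python is a minimal sketch of building (without sending) an authenticated query for a group's vulnerability severity counts; the `vulnerabilitySeveritiesCount` selection and its sub-fields are assumptions here — verify the exact schema against the GraphQL reference linked above before use.

```python
import json
import urllib.request

GITLAB_GRAPHQL = "https://gitlab.com/api/graphql"

# Illustrative query: `group(fullPath:)` is standard GitLab GraphQL; the
# `vulnerabilitySeveritiesCount` selection is an assumption -- confirm the
# exact fields in the GraphQL reference for your GitLab version.
QUERY = """
query ($fullPath: ID!) {
  group(fullPath: $fullPath) {
    vulnerabilitySeveritiesCount {
      critical
      high
      medium
      low
      info
      unknown
    }
  }
}
"""


def build_request(full_path: str, token: str) -> urllib.request.Request:
    """Build (but do not send) an authenticated GraphQL POST request."""
    payload = json.dumps({"query": QUERY, "variables": {"fullPath": full_path}})
    return urllib.request.Request(
        GITLAB_GRAPHQL,
        data=payload.encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # personal access token
        },
        method="POST",
    )
```

Sending the request with `urllib.request.urlopen(build_request(...))` would return a JSON body whose severity counts mirror the totals shown at the top of the inventory.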
---
stage: Security Risk Management
group: Security Insights
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Vulnerability report
description: Filtering, grouping, exporting, and manual addition.
breadcrumbs:
- doc
- user
- application_security
- vulnerability_report
---
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- Vulnerability Resolution activity icon [introduced](https://gitlab.com/groups/gitlab-org/-/epics/15036) in GitLab 17.5 with a flag named [`vulnerability_report_vr_badge`](https://gitlab.com/gitlab-org/gitlab/-/issues/486549). Disabled by default.
- [Enabled by default](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/171718) in GitLab 17.6.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/503568) in GitLab 18.0. Feature flag `vulnerability_report_vr_badge` removed.

{{< /history >}}

{{< alert type="flag" >}}

The availability of the Vulnerability Resolution activity icon is controlled by a feature flag. For more information, see the history.

{{< /alert >}}

The vulnerability report provides a consolidated view of security vulnerabilities found in your codebase. Sort vulnerabilities by severity, report type, scanner (for projects only), and other attributes to determine which issues need attention first. Track vulnerabilities through their lifecycle with status indicators and activity icons that show remediation progress.

Access detailed information for each vulnerability, including Common Vulnerability Scoring System (CVSS) scores and file locations when available. Filter and group similar vulnerabilities to address them systematically.

{{< alert type="note" >}}

On GitLab.com, vulnerabilities are archived one year after they were last updated. For more details, see [vulnerability archival](../vulnerability_archival/_index.md).

{{< /alert >}}

<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an overview, see [Vulnerability Management - Advanced Security Testing](https://www.youtube.com/watch?v=alMRIq5UPbw).

## Contents of the vulnerability report

The report contains data from the default branch, showing cumulative results from all successful security scan jobs.
Scan results appear after job completion or when a pipeline is blocked by manual jobs.

For projects and groups, the vulnerability report contains:

- Totals of vulnerabilities per severity level.
- Filters for common vulnerability attributes.
- Details of each vulnerability, presented in a table.

For some vulnerabilities, the details include a link to the relevant file and line number in the default branch. For CVE vulnerabilities, you can also view the KEV status, CVSS and EPSS scores, and reachability information (Beta) in the vulnerability report. For more details on the security scores, see [vulnerability risk assessment data](../vulnerabilities/risk_assessment_data.md).

For projects, the vulnerability report also contains:

- A time stamp that shows when the default branch was last updated, including a link to the latest pipeline. Pipelines that run against non-default branches do not update the time stamp.
- The number of failures that occurred in the most recent pipeline. Select the failure notification to view the **Failed jobs** tab of the pipeline's page.

The **Activity** column contains icons to indicate the activity, if any, taken on the vulnerability in that row:

- Issues {{< icon name="issues" >}}: Links to issues created for the vulnerability. For more information, see [Create a GitLab issue for a vulnerability](../vulnerabilities/_index.md#create-a-gitlab-issue-for-a-vulnerability).
- Merge requests {{< icon name="merge-request" >}}: Links to merge requests created for the vulnerability. For more information, see [Resolve a vulnerability with a merge request](../vulnerabilities/_index.md#resolve-a-vulnerability-with-a-merge-request).
- Checked circle {{< icon name="check-circle-dashed" >}}: The vulnerability has been remediated.
- False positive {{< icon name="false-positive" >}}: The scanner determined this vulnerability to be a false positive.
- Solution {{< icon name="bulb" >}}: Indicates that the vulnerability has a solution available.
- Vulnerability Resolution {{< icon name="tanuki-ai" >}}: Indicates that the vulnerability has an available AI resolution.

To open an issue created for a vulnerability, hover over the **Activity** entry, then select the link. The issue icon ({{< icon name="issues" >}}) indicates the issue's status. If [Jira issue support](../../../integration/jira/configure.md) is enabled, the issue link found in the **Activity** entry links out to the issue in Jira. Unlike GitLab issues, the status of a Jira issue is not shown in the GitLab UI.

![Example project vulnerability report](img/vulnerability_report_v17_0.png)

When vulnerabilities originate from a multi-project pipeline setup, this page displays the vulnerabilities that originate from the selected project.

## View the vulnerability report

View the vulnerability report to list all vulnerabilities in the project or group.

Prerequisites:

- You must have at least the Developer role for the project or group.

To view the vulnerability report:

1. On the left sidebar, select **Search or go to** and find your project or group.
1. Select **Secure > Vulnerability report**.

## Filtering vulnerabilities

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/452492) the **Identifier** filter in GitLab 17.7 [with a flag](../../../administration/feature_flags/_index.md) named `vulnerability_filtering_by_identifier`. Enabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/502930) in GitLab 17.9. Feature flag `vulnerability_filtering_by_identifier` removed.

{{< /history >}}

You can filter vulnerabilities in the vulnerability report to more efficiently triage them.

You can filter by:

<!-- vale gitlab_base.SubstitutionWarning = NO -->

- **Status**: The current status of the vulnerability: needs triage, confirmed, dismissed, or resolved. For more details, see [vulnerability status values](../vulnerabilities/_index.md#vulnerability-status-values).
  Dismissed vulnerabilities can be filtered together or individually by the reason they were dismissed.

- **Severity**: The severity value of the vulnerability: critical, high, medium, low, info, unknown.
- **Report Type**: The type of report that detected the vulnerability, such as SAST or Container fuzzing. For more details, see [report type filter](#report-type-filter).
- **Scanner**: The specific scanner that identified the vulnerability. For more details, see [scanner filter](#scanner-filter).
- **Activity**: Additional properties of the vulnerability, such as whether or not the vulnerability has an issue, merge request, or solution available. For more details, see [activity filter](#activity-filter).
- **Identifier**: The vulnerability's identifier (requires [advanced vulnerability management](#advanced-vulnerability-management). Without advanced vulnerability management, availability is restricted to projects and groups with a maximum of 20,000 vulnerabilities).
- **Project**: Filter vulnerabilities in specific projects (available only for groups). For more details, see [project filter](#project-filter).
- **Reachability**: Filter based on whether the vulnerability is reachable: yes, not found, not available. For more details, see [reachability filter](#reachability-filter).

<!-- vale gitlab_base.SubstitutionWarning = YES -->

### Filter vulnerabilities

{{< history >}}

- Improved filtering [introduced](https://gitlab.com/groups/gitlab-org/-/epics/13339) in GitLab 16.9 [with a flag](../../../administration/feature_flags/_index.md) named `vulnerability_report_advanced_filtering`. Disabled by default.
- [Enabled on GitLab.com, GitLab Self-Managed, and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/issues/437128) in GitLab 17.1.
- [Generally available in 17.2](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/157172). Feature flag `vulnerability_report_advanced_filtering` removed.
{{< /history >}}

Filter the vulnerability report to focus on a subset of vulnerabilities.

To filter the list of vulnerabilities:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Secure > Vulnerability report**.
1. Optional. To remove the default filters, select **Clear** ({{< icon name="clear" >}}).
1. Above the list of vulnerabilities, select the filter bar.
1. In the dropdown list that appears, select an attribute you want to filter by, then select the values from the dropdown list.
1. Select outside the filter field. The vulnerability severity totals and list of matching vulnerabilities are updated.
1. To filter by multiple attributes, repeat the three previous steps. Multiple attributes are joined by a logical AND.

### Report type filter

You can filter vulnerabilities based on the type of report that detected them. By default, the vulnerability report lists vulnerabilities from all report types.

Use the **Manually added** attribute to filter vulnerabilities that were added manually.

### Scanner filter

For projects, you can filter vulnerabilities based on the scanner that detected them. By default, the vulnerability report lists vulnerabilities from all scanners. For details of each of the available scanners, see [Security scanning tools](../detect/_index.md).

### Project filter

The content of the Project filter varies:

- **Security Center**: Only projects you've [added to your personal Security Center](../security_dashboard/_index.md#adding-projects-to-the-security-center).
- **Group**: All projects in the group.
- **Project**: Not applicable.

### Activity filter

{{< history >}}

- Introduced in GitLab 16.7 [with a flag](../../../administration/feature_flags/_index.md) named `activity_filter_has_remediations`. Disabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/429262) in GitLab 16.9. Feature flag `activity_filter_has_remediations` removed.
- Activity filter option **GitLab Duo (AI)** [introduced](https://gitlab.com/groups/gitlab-org/-/epics/15036) in GitLab 17.5 with a flag named [`vulnerability_report_vr_filter`](https://gitlab.com/gitlab-org/gitlab/-/issues/486534). Disabled by default.
- Activity filter option **GitLab Duo (AI)** [enabled by default](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/171718) in GitLab 17.6.
- Activity filter option **GitLab Duo (AI)** [generally available](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/172372) in GitLab 18.0. The `vulnerability_report_vr_filter` flag removed.

{{< /history >}}

The activity filter behaves differently from the other filters. You can select only one value in each category. To remove a filter, from the activity filter dropdown list select the filter you want to remove.

Selection behavior when using the activity filter:

- **Activity**
  - **All activity**: Vulnerabilities with any activity status (same as ignoring this filter). Selecting this deselects all other activity filter options.
- **Detection**
  - **Still detected** (default): Vulnerabilities that are still detected in the latest pipeline scan of the `default` branch.
  - **No longer detected**: Vulnerabilities that are no longer detected in the latest pipeline scan of the `default` branch.
- **Issue**
  - **Has issues**: Vulnerabilities with one or more associated issues.
  - **Does not have issue**: Vulnerabilities without an associated issue.
- **Merge request**
  - **Has merge request**: Vulnerabilities with one or more associated merge requests.
  - **Does not have merge request**: Vulnerabilities without an associated merge request.
- **Solution available**
  - **Has a solution**: Vulnerabilities with an available solution.
  - **Does not have a solution**: Vulnerabilities without an available solution.
- **GitLab Duo (AI)**:
  - **Vulnerability Resolution available**: Vulnerabilities with an available AI resolution.
  - **Vulnerability Resolution unavailable**: Vulnerabilities without an available AI resolution.

The **GitLab Duo (AI)** filter is available when:

- Security Center vulnerability report: Any project in the [Security Center](../security_dashboard/_index.md#adding-projects-to-the-security-center) has its **GitLab Duo** toggle turned on.
- Group vulnerability report: For the group, **GitLab Duo features** is set to **On by default**.
- Project vulnerability report: For the project, the **GitLab Duo** toggle is turned on.

### Reachability filter

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/17251) in GitLab 18.3.

{{< /history >}}

For groups and projects, you can filter vulnerabilities based on the [reachability value](../dependency_scanning/static_reachability.md#understanding-the-results). By default, the vulnerability report lists vulnerabilities with any reachability value.

This filter requires [advanced vulnerability management](#advanced-vulnerability-management).

## Grouping vulnerabilities

{{< history >}}

- Grouping of vulnerabilities per project [introduced](https://gitlab.com/groups/gitlab-org/-/epics/10164) in GitLab 16.4 [with a flag](../../../administration/feature_flags/_index.md) named `vulnerability_report_grouping`. Disabled by default.
- Grouping of vulnerabilities per project [enabled on GitLab Self-Managed and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/134073) in GitLab 16.5.
- Grouping of vulnerabilities per project [generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/422509) in GitLab 16.6. Feature flag `vulnerability_report_grouping` removed.
- Grouping of vulnerabilities per group [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/137778) in GitLab 16.7 with a flag named [`group_level_vulnerability_report_grouping`](https://gitlab.com/gitlab-org/gitlab/-/issues/432778). Disabled by default.
- Grouping of vulnerabilities per group [enabled on GitLab Self-Managed and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/157949) in GitLab 17.2.
- Grouping of vulnerabilities per group [generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/472669) in GitLab 17.3. Feature flag `group_level_vulnerability_report_grouping` removed.
- OWASP top 10 grouping of vulnerabilities per group [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/432618) in GitLab 16.8 [with a flag](../../../administration/feature_flags/_index.md) named `vulnerability_owasp_top_10_group`. Disabled by default.
- OWASP top 10 grouping of vulnerabilities per group [enabled on GitLab Self-Managed and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/issues/437253) in GitLab 17.4.
- OWASP top 10 grouping of vulnerabilities per group [generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/437253) in GitLab 17.4. Feature flag `vulnerability_owasp_top_10_group` removed.
- Non-OWASP category in OWASP top 10 grouping [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/442526) in GitLab 17.1 [with a flag](../../../administration/feature_flags/_index.md) named `owasp_top_10_null_filtering`. Disabled by default.
- Non-OWASP category in OWASP top 10 grouping [enabled on GitLab Self-Managed and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/issues/463783) in GitLab 17.5.
- Non-OWASP category in OWASP top 10 grouping [generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/463783) in GitLab 17.6. Feature flag `owasp_top_10_null_filtering` removed.
- OWASP 2021 top 10 grouping [added](https://gitlab.com/gitlab-org/gitlab/-/issues/466034) on GitLab.com and GitLab Dedicated in GitLab 18.1.

{{< /history >}}

You can group vulnerabilities on the vulnerability report page to more efficiently triage them.
You can group by:

- Status
- Severity
- Report Type
- Scanner
- OWASP top 10 2017
- OWASP top 10 2021 (requires [advanced vulnerability management](#advanced-vulnerability-management))

### Group vulnerabilities

To group vulnerabilities:

1. On the left sidebar, select **Search or go to** and find your project or group.
1. Select **Secure > Vulnerability report**.
1. From the **Group By** dropdown list, select a group.

Vulnerabilities are grouped according to the group you selected. Each group is collapsed, with the total number of vulnerabilities per group displayed beside their name. To see the vulnerabilities in each group, select the group's name.

## View details of a vulnerability

To view more details of a vulnerability, select the vulnerability's **Description**. The [vulnerability's details](../vulnerabilities/_index.md) page is opened.

## Change status of vulnerabilities

{{< history >}}

- Providing a comment and dismissal reason [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/408366) in GitLab 16.0.

{{< /history >}}

As you triage vulnerabilities, you can change their status, including dismissing vulnerabilities. When a vulnerability is dismissed, the audit log includes a note of who dismissed it, when it was dismissed, and the reason it was dismissed. You cannot delete vulnerability records, so a permanent record always remains.

Prerequisites:

- You must have at least the Maintainer role for the project. The `admin_vulnerability` permission was [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/412693) from the Developer role in GitLab 17.0.

To change the status of vulnerabilities:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Secure > Vulnerability report**.
1. To select:
   - One or more vulnerabilities, select the checkbox beside each vulnerability.
   - All vulnerabilities on the page, select the checkbox in the table header.
1. In the **Set status** dropdown list, select the desired status.
1. If the **Dismiss** status is chosen, select the desired reason in the **Set dismissal reason** dropdown list.
1. In the **Add a comment** input, you can provide a comment. For the **Dismiss** status, a comment is required.
1. Select **Change status**.

The status of the selected vulnerabilities is updated and the content of the vulnerability report is refreshed.

![Project vulnerability report](img/project_security_dashboard_status_change_v16_0.png)

## Change or override vulnerability severity

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/16157) in GitLab 17.9 [with a flag](../../../administration/feature_flags/_index.md) named `vulnerability_severity_override`. Disabled by default.
- [Enabled on GitLab.com, GitLab Self-Managed, and GitLab Dedicated](https://gitlab.com/groups/gitlab-org/-/epics/16157) in GitLab 17.10.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/ISSUE_ID) in GitLab 18.1. Feature flag `vulnerability_severity_override` removed.
- [Added](https://gitlab.com/gitlab-org/gitlab/-/issues/537229) a feature flag that administrators can enable to prevent users from changing or overriding the severity level in GitLab 18.1 [with a flag](../../../administration/feature_flags/_index.md) named `hide_vulnerability_severity_override`. Disabled by default.

{{< /history >}}

{{< alert type="flag" >}}

The availability of this feature is controlled by a feature flag. For more information, see the history.

{{< /alert >}}

In certain cases, you may need to adjust the severity of a detected vulnerability to better reflect your organization's priorities. For instance, a scanner might report a lower severity, but you might consider it more critical based on your environment or setup. This feature allows you to override the default severity assigned by the scanner.

Prerequisites:

- You must have at least the Maintainer role for the project or the `admin_vulnerability` permission.
To manually override a vulnerability's severity:

1. On the left sidebar, select **Search or go to** and find your project.
1. Go to **Secure > Vulnerability report**.
1. Select vulnerabilities:
   - To select individual vulnerabilities, select the checkbox beside each vulnerability.
   - To select all vulnerabilities on the page, select the checkbox in the table header.
1. In the **Select action** dropdown list, select **Change severity**.
1. In the **Select severity** dropdown list, select the desired severity level.
1. In the **Add reason** text box, add a brief explanation of why you're changing the severity.
1. Select **Change severity**.

For each selected vulnerability:

- Its severity is updated in both the **Vulnerability details page** and the **Vulnerability report**.
- A badge is added to its severity, indicating that the severity has been overridden.
- Manual severity adjustments are recorded in the vulnerability's **history**.

![Vulnerability Severity Override](img/vulnerability_severity_change_v17_10.png)

## Prevent users from overriding vulnerability severities

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/537229) in GitLab 18.1 [with a flag](../../../administration/feature_flags/_index.md) named `hide_vulnerability_severity_override`. Disabled by default.

{{< /history >}}

{{< alert type="flag" >}}

The availability of this feature is controlled by a feature flag. For more information, see the history.

{{< /alert >}}

In some environments, you might need to prevent users from overriding the severity of vulnerabilities. The `hide_vulnerability_severity_override` feature flag allows administrators to hide the severity override functionality in the vulnerability report. This feature helps organizations maintain standardized vulnerability severity ratings across projects.

When enabled, this feature:

- Hides the **Change severity** option from the action dropdown list in the vulnerability report.
- Prevents users from manually changing severity levels through the UI, ensuring consistent vulnerability scoring based on scanner results.
- Disables all API endpoints related to the modification of vulnerability severities, maintaining consistency across all access methods.

To enable the `hide_vulnerability_severity_override` flag, see [enable and disable GitLab features deployed behind feature flags](../../../administration/feature_flags/_index.md).

## Add vulnerabilities to an existing issue

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/13216) in GitLab 17.9 [with a flag](../../../administration/feature_flags/_index.md) named `enhanced_vulnerability_bulk_actions`. Disabled by default.
- [Enabled on GitLab.com, GitLab Self-Managed, and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/190213) in GitLab 18.0.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/515204) in GitLab 18.1. Feature flag `enhanced_vulnerability_bulk_actions` removed.

{{< /history >}}

You can link one or more vulnerabilities to existing issues in the vulnerability report.

Prerequisites:

- You must have at least the Maintainer role for the project or the `admin_vulnerability` permission in a custom role. The `admin_vulnerability` permission was [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/412693) from the Developer role in GitLab 17.0.

To attach vulnerabilities to an existing issue:

1. On the left sidebar, select **Search or go to** and find your project or group.
1. Go to **Secure > Vulnerability report**.
1. Select vulnerabilities:
   - To select individual vulnerabilities, select the checkbox beside each vulnerability.
   - To select all vulnerabilities on the page, select the checkbox in the table header.
1. In the **Select action** dropdown list, select **Attach to existing issue**.
1. In the **Enter issue URL or <#issue ID>** text box, enter the ID of an issue to autocomplete, or add the URL of the issue.
   You can enter multiple issues to add the vulnerabilities to.

1. Select **Add**.

Each selected vulnerability is linked to all of the specified issues.

![Attach vulnerabilities to an existing issue](img/vulnerability_attach_existing_issue_v18_0.png)

## Add vulnerabilities to a new issue

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/13216) in GitLab 17.9 [with a flag](../../../administration/feature_flags/_index.md) named `new_issue_attachment_from_vulnerability_bulk_action`. Disabled by default.
- [Enabled on GitLab.com, GitLab Self-Managed, and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/190213) in GitLab 18.0.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/516939) in GitLab 18.1. Feature flag `new_issue_attachment_from_vulnerability_bulk_action` removed.

{{< /history >}}

You can link one or more vulnerabilities to a new issue.

Prerequisites:

- You must have at least the Maintainer role for the project or the `admin_vulnerability` permission in a custom role. The `admin_vulnerability` permission was [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/412693) from the Developer role in GitLab 17.0.

To attach vulnerabilities to a new issue:

1. On the left sidebar, select **Search or go to** and find your project or group.
1. Go to **Secure > Vulnerability report**.
1. Select vulnerabilities:
   - To select individual vulnerabilities, select the checkbox beside each vulnerability.
   - To select all vulnerabilities on the page, select the checkbox in the table header.
1. In the **Select action** dropdown list, select **Attach to new issue**.
1. Select **Create issue**.

You are redirected to a new issue, with each selected vulnerability already linked to it.
![Attach vulnerabilities to a new issue](img/vulnerability_attach_new_issue_v18_0.png)

## Sort vulnerabilities by date detected

By default, vulnerabilities are sorted by severity level, with the highest-severity vulnerabilities listed at the top. To sort vulnerabilities by the date each vulnerability was detected, select the **Detected** column header.

## Exporting

{{< history >}}

- Added "Dismissal Reason" as a column in the CSV export [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/434076) in GitLab 16.8.

{{< /history >}}

You can export details of the vulnerabilities listed in the vulnerability report. The export format is CSV (comma-separated values). All vulnerabilities are included because filters do not apply to the export.

Fields included are:

- Status (see the following table for details of how the status value is exported)
- Group name
- Project name
- Report type
- Scanner name
- Vulnerability
- Basic details
- Additional information
- Severity
- [CVE](https://cve.mitre.org/) (Common Vulnerabilities and Exposures)
- [CWE](https://cwe.mitre.org/) (Common Weakness Enumeration)
- Other identifiers
- Detected At
- Location
- Activity: Returns `true` if the vulnerability is resolved on the default branch, and `false` if not.
- Comments
- Full Path
- CVSS Vectors
- [Dismissal Reason](../vulnerabilities/_index.md#vulnerability-dismissal-reasons)
- Vulnerability ID

{{< alert type="note" >}}

Full details are available through our [Job Artifacts API](../../../api/job_artifacts.md#download-a-single-artifact-file-by-reference-name). Use one of the `gl-*-report.json` report filenames in place of `*artifact_path` to obtain, for example, the path of files in which vulnerabilities were detected.

{{< /alert >}}

The Status field's values shown in the vulnerability report are different from those contained in the vulnerability export. Use the following reference table to match them.
| Vulnerability report | Vulnerability export | |:---------------------|:---------------------| | Needs triage | detected | | Dismissed | dismissed | | Resolved | resolved | | Confirmed | confirmed | ### Export details To export details of all vulnerabilities listed in the vulnerability report, select **Export**. When the exported details are available, you'll receive an email. To download the exported details, select the link in the email. {{< alert type="note" >}} Some CSV readers have limitations on the number of rows or size of columns which may make them incompatible with larger exports. The vulnerability export does not account for the limitations of individual programs. {{< /alert >}} ## Manually add a vulnerability {{< history >}} - [Feature flag `new_vulnerability_form`](https://gitlab.com/gitlab-org/gitlab/-/issues/359049) removed in GitLab 15.0. {{< /history >}} Add a vulnerability manually when it is not available in the GitLab vulnerabilities database. You can add a vulnerability only in a project's vulnerability report. To add a vulnerability manually: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Secure > Vulnerability report**. 1. Select **Submit vulnerability**. 1. Complete the fields and submit the form. The newly-created vulnerability's detail page is opened. ## Advanced vulnerability management {{< history >}} - Ingestion of vulnerability data into advanced search [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/536299) in GitLab 18.1 [with a flag](../../../administration/feature_flags/_index.md) named `vulnerability_es_ingestion`. Available in GitLab.com and GitLab Dedicated. Disabled by default. - Filters for OWASP 2021 grouping and identifiers in advanced search [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/537673) in GitLab 18.1 with the feature flag `advanced_vulnerability_management`. Available in GitLab.com and GitLab Dedicated. Disabled by default. 
- Ingestion of vulnerability data into advanced search is [generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/536299) on GitLab.com and GitLab Dedicated in GitLab 18.2. Feature flag `vulnerability_es_ingestion` removed. - Filters for OWASP 2021 grouping and identifiers in advanced search [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/537673) in GitLab 18.2 with the feature flag `advanced_vulnerability_management`. Available in GitLab.com and GitLab Dedicated. Enabled by default. {{< /history >}} {{< alert type="flag" >}} Advanced vulnerability management is controlled by feature flags. For more information, see the history. {{< /alert >}} GitLab primarily uses PostgreSQL for filtering in the vulnerability report. Due to database indexing limitations and performance challenges when applying multiple filters, GitLab uses [advanced search](../../search/advanced_search.md) for specific vulnerability management features. Advanced search powers the following features: 1. Grouping data by OWASP 2021 categories in the vulnerability report for a project or group. 1. Filtering based on a vulnerability's identifier in the vulnerability report for a project or group. 1. Filtering based on [reachability](#reachability-filter) value in the vulnerability report for a project or group. Advanced search is used only for these specific features, including when they are combined with other [filters](#filter-vulnerabilities). Other filters, when used independently, continue to use the standard PostgreSQL filtering. ### Requirements To use the filters in advanced vulnerability management: - You must use GitLab.com or a GitLab Dedicated instance with [advanced search enabled](../../search/advanced_search.md#use-advanced-search). This feature is not supported on GitLab Self-Managed, but support is proposed [issue 525484](https://gitlab.com/gitlab-org/gitlab/-/issues/525484). - You must be in the vulnerability report for a project or group. 
This feature is not supported in the security dashboard, but support is proposed in [issue 537807](https://gitlab.com/gitlab-org/gitlab/-/issues/537807). ## Operational vulnerabilities The **Operational vulnerabilities** tab lists vulnerabilities found by [Operational container scanning](../../clusters/agent/vulnerabilities.md). This tab appears on the project, group, and Security Center vulnerability reports. ![Operational Vulnerability Tab](img/operational_vulnerability_tab_v14_6.png)
---
stage: Security Risk Management
group: Security Insights
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Vulnerability report
description: Filtering, grouping, exporting, and manual addition.
---

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- Vulnerability Resolution activity icon [introduced](https://gitlab.com/groups/gitlab-org/-/epics/15036) in GitLab 17.5 with a flag named [`vulnerability_report_vr_badge`](https://gitlab.com/gitlab-org/gitlab/-/issues/486549). Disabled by default.
- [Enabled by default](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/171718) in GitLab 17.6.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/503568) in GitLab 18.0. Feature flag `vulnerability_report_vr_badge` removed.

{{< /history >}}

{{< alert type="flag" >}}

The availability of the Vulnerability Resolution activity icon is controlled by a feature flag. For more information, see the history.

{{< /alert >}}

The vulnerability report provides a consolidated view of security vulnerabilities found in your codebase. Sort vulnerabilities by severity, report type, scanner (for projects only), and other attributes to determine which issues need attention first. Track vulnerabilities through their lifecycle with status indicators and activity icons that show remediation progress. Access detailed information for each vulnerability, including Common Vulnerability Scoring System (CVSS) scores and file locations when available. Filter and group similar vulnerabilities to address them systematically.

{{< alert type="note" >}}

On GitLab.com, vulnerabilities are archived one year after they were last updated.
For more details, see [vulnerability archival](../vulnerability_archival/_index.md).

{{< /alert >}}

<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an overview, see [Vulnerability Management - Advanced Security Testing](https://www.youtube.com/watch?v=alMRIq5UPbw).

## Contents of the vulnerability report

The report contains data from the default branch, showing cumulative results from all successful security scan jobs. Scan results appear after job completion or when a pipeline is blocked by manual jobs.

For projects and groups, the vulnerability report contains:

- Totals of vulnerabilities per severity level.
- Filters for common vulnerability attributes.
- Details of each vulnerability, presented in a table.

For some vulnerabilities, the details include a link to the relevant file and line number in the default branch. For CVE vulnerabilities, you can also view the KEV status, CVSS and EPSS scores, and reachability information (Beta) in the vulnerability report. For more details on the security scores, see [vulnerability risk assessment data](../vulnerabilities/risk_assessment_data.md).

For projects, the vulnerability report also contains:

- A time stamp showing when the default branch was last updated, including a link to the latest pipeline. Pipelines that run against non-default branches do not update the time stamp.
- The number of failures that occurred in the most recent pipeline. Select the failure notification to view the **Failed jobs** tab of the pipeline's page.

The **Activity** column contains icons to indicate the activity, if any, taken on the vulnerability in that row:

- Issues {{< icon name="issues" >}}: Links to issues created for the vulnerability. For more information, see [Create a GitLab issue for a vulnerability](../vulnerabilities/_index.md#create-a-gitlab-issue-for-a-vulnerability).
- Merge requests {{< icon name="merge-request" >}}: Links to merge requests created for the vulnerability.
For more information, see [Resolve a vulnerability with a merge request](../vulnerabilities/_index.md#resolve-a-vulnerability-with-a-merge-request). - Checked circle {{< icon name="check-circle-dashed" >}}: The vulnerability has been remediated. - False positive {{< icon name="false-positive" >}}: The scanner determined this vulnerability to be a false positive. - Solution {{< icon name="bulb" >}}: Indicates that the vulnerability has a solution available. - Vulnerability Resolution {{< icon name="tanuki-ai" >}}: Indicates that the vulnerability has an available AI resolution. To open an issue created for a vulnerability, hover over the **Activity** entry, then select the link. The issue icon ({{< icon name="issues" >}}) indicates the issue's status. If [Jira issue support](../../../integration/jira/configure.md) is enabled, the issue link found in the **Activity** entry links out to the issue in Jira. Unlike GitLab issues, the status of a Jira issue is not shown in the GitLab UI. ![Example project vulnerability report](img/vulnerability_report_v17_0.png) When vulnerabilities originate from a multi-project pipeline setup, this page displays the vulnerabilities that originate from the selected project. ## View the vulnerability report View the vulnerability report to list all vulnerabilities in the project or group. Prerequisites: - You must have at least the Developer role for the project or group. To view the vulnerability report: 1. On the left sidebar, select **Search or go to** and find your project or group. 1. Select **Secure > Vulnerability report**. ## Filtering vulnerabilities {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/452492) the **Identifier** filter in GitLab 17.7 [with a flag](../../../administration/feature_flags/_index.md) named `vulnerability_filtering_by_identifier`. Enabled by default. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/502930) in GitLab 17.9. 
Feature flag `vulnerability_filtering_by_identifier` removed.

{{< /history >}}

You can filter vulnerabilities in the vulnerability report to more efficiently triage them.

You can filter by:

<!-- vale gitlab_base.SubstitutionWarning = NO -->

- **Status**: The current status of the vulnerability: needs triage, confirmed, dismissed, or resolved. For more details, see [vulnerability status values](../vulnerabilities/_index.md#vulnerability-status-values). Dismissed vulnerabilities can be filtered together or individually by the reason they were dismissed.
- **Severity**: The severity value of the vulnerability: critical, high, medium, low, info, unknown.
- **Report Type**: The type of report that detected the vulnerability, such as SAST or Container fuzzing. For more details, see [report type filter](#report-type-filter).
- **Scanner**: The specific scanner that identified the vulnerability. For more details, see [scanner filter](#scanner-filter).
- **Activity**: Additional properties of the vulnerability, such as whether the vulnerability has an issue, merge request, or solution available. For more details, see [activity filter](#activity-filter).
- **Identifier**: The vulnerability's identifier (requires [advanced vulnerability management](#advanced-vulnerability-management); without it, availability is restricted to projects and groups with a maximum of 20,000 vulnerabilities).
- **Project**: Filter vulnerabilities in specific projects (available only for groups). For more details, see [project filter](#project-filter).
- **Reachability**: Filter based on whether the vulnerability is reachable: yes, not found, not available. For more details, see [reachability filter](#reachability-filter).
<!-- vale gitlab_base.SubstitutionWarning = YES --> ### Filter vulnerabilities {{< history >}} - Improved filtering [introduced](https://gitlab.com/groups/gitlab-org/-/epics/13339) in GitLab 16.9 [with a flag](../../../administration/feature_flags/_index.md) named `vulnerability_report_advanced_filtering`. Disabled by default. - [Enabled on GitLab.com, GitLab Self-Managed, and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/issues/437128) in GitLab 17.1. - [Generally available in 17.2](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/157172). Feature flag `vulnerability_report_advanced_filtering` removed. {{< /history >}} Filter the vulnerability report to focus on a subset of vulnerabilities. To filter the list of vulnerabilities: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Secure > Vulnerability report**. 1. Optional. To remove the default filters, select **Clear** ({{< icon name="clear" >}}). 1. Above the list of vulnerabilities, select the filter bar. 1. In the dropdown list that appears, select an attribute you want to filter by, then select the values from the dropdown list. 1. Select outside the filter field. The vulnerability severity totals and list of matching vulnerabilities are updated. 1. To filter by multiple attributes, repeat the three previous steps. Multiple attributes are joined by a logical AND. ### Report type filter You can filter vulnerabilities based on the type of report that detected them. By default, the vulnerability report lists vulnerabilities from all report types. Use the **Manually added** attribute to filter vulnerabilities that were added manually. ### Scanner filter For projects, you can filter vulnerabilities based on the scanner that detected them. By default, the vulnerability report lists vulnerabilities from all scanners. For details of each of the available scanners, see [Security scanning tools](../detect/_index.md). 
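The filters described above can also be applied programmatically through the GraphQL API. The following is a minimal sketch, not an official client: the query shape follows the public GraphQL schema (`Project.vulnerabilities` with `severity` and `reportType` arguments), but you should verify field names against your instance's GraphQL reference, and the URL, token, and project path below are placeholders.

```python
"""Sketch: fetch filtered vulnerabilities through the GraphQL API.

Assumptions: the `Project.vulnerabilities` field accepts `severity` and
`reportType` list arguments as in the public GraphQL schema; verify
against your GitLab version. GITLAB_URL, TOKEN, and the project path
are illustrative placeholders.
"""
import json
import urllib.request

QUERY = """
query($fullPath: ID!, $severity: [VulnerabilitySeverity!], $reportType: [VulnerabilityReportType!]) {
  project(fullPath: $fullPath) {
    vulnerabilities(severity: $severity, reportType: $reportType) {
      nodes { title severity state reportType }
    }
  }
}
"""


def build_request_body(full_path, severities, report_types):
    """Assemble the GraphQL request payload (pure function, easy to test)."""
    return json.dumps({
        "query": QUERY,
        "variables": {
            "fullPath": full_path,
            "severity": severities,
            "reportType": report_types,
        },
    }).encode()


def fetch(gitlab_url, token, full_path, severities, report_types):
    """POST the query to /api/graphql and return the decoded response."""
    req = urllib.request.Request(
        f"{gitlab_url}/api/graphql",
        data=build_request_body(full_path, severities, report_types),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Example payload for critical and high SAST findings (placeholders).
    body = build_request_body("my-group/my-project", ["CRITICAL", "HIGH"], ["SAST"])
    print(json.loads(body)["variables"])
```

This mirrors the UI behavior of joining multiple filter attributes with a logical AND: each argument narrows the result set.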
### Project filter The content of the Project filter varies: - **Security Center**: Only projects you've [added to your personal Security Center](../security_dashboard/_index.md#adding-projects-to-the-security-center). - **Group**: All projects in the group. - **Project**: Not applicable. ### Activity filter {{< history >}} - Introduced in GitLab 16.7 [with a flag](../../../administration/feature_flags/_index.md) named `activity_filter_has_remediations`. Disabled by default. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/429262) in GitLab 16.9. Feature flag `activity_filter_has_remediations` removed. - Activity filter option **GitLab Duo (AI)** [introduced](https://gitlab.com/groups/gitlab-org/-/epics/15036) in GitLab 17.5 with a flag named [`vulnerability_report_vr_filter`](https://gitlab.com/gitlab-org/gitlab/-/issues/486534). Disabled by default. - Activity filter option **GitLab Duo (AI)** [enabled by default](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/171718) in GitLab 17.6. - Activity filter option **GitLab Duo (AI)** [generally available](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/172372) in GitLab 18.0. The `vulnerability_report_vr_filter` flag removed. {{< /history >}} The activity filter behaves differently from the other filters. You can select only one value in each category. To remove a filter, from the activity filter dropdown list select the filter you want to remove. Selection behavior when using the activity filter: - **Activity** - **All activity**: Vulnerabilities with any activity status (same as ignoring this filter). Selecting this deselects all other activity filter options. - **Detection** - **Still detected** (default): Vulnerabilities that are still detected in the latest pipeline scan of the `default` branch. - **No longer detected**: Vulnerabilities that are no longer detected in the latest pipeline scan of the `default` branch. 
- **Issue** - **Has issues**: Vulnerabilities with one or more associated issues. - **Does not have issue**: Vulnerabilities without an associated issue. - **Merge request** - **Has merge request**: Vulnerabilities with one or more associated merge requests. - **Does not have merge request**: Vulnerabilities without an associated merge request. - **Solution available** - **Has a solution**: Vulnerabilities with an available solution. - **Does not have a solution**: Vulnerabilities without an available solution. - **GitLab Duo (AI)**: - **Vulnerability Resolution available**: Vulnerabilities with an available AI resolution. - **Vulnerability Resolution unavailable**: Vulnerabilities without an available AI resolution. The **GitLab Duo (AI)** filter is available when: - Security Center vulnerability report: Any project in the [Security Center](../security_dashboard/_index.md#adding-projects-to-the-security-center) has its **GitLab Duo** toggle turned on. - Group vulnerability report: For the group, **GitLab Duo features** is set to **On by default**. - Project vulnerability report: For the project, the **GitLab Duo** toggle is turned on. ### Reachability filter {{< history >}} - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/17251) in GitLab 18.3. {{< /history >}} For groups and projects, you can filter vulnerabilities based on the [reachability value](../dependency_scanning/static_reachability.md#understanding-the-results). By default, the vulnerability report lists vulnerabilities with any reachability value. This filter requires [advanced vulnerability management](#advanced-vulnerability-management). ## Grouping vulnerabilities {{< history >}} - Grouping of vulnerabilities per project [introduced](https://gitlab.com/groups/gitlab-org/-/epics/10164) in GitLab 16.4 [with a flag](../../../administration/feature_flags/_index.md) named `vulnerability_report_grouping`. Disabled by default. 
- Grouping of vulnerabilities per project [enabled on GitLab Self-Managed and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/134073) in GitLab 16.5. - Grouping of vulnerabilities per project [generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/422509) in GitLab 16.6. Feature flag `vulnerability_report_grouping` removed. - Grouping of vulnerabilities per group [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/137778) in GitLab 16.7 with a flag named [`group_level_vulnerability_report_grouping`](https://gitlab.com/gitlab-org/gitlab/-/issues/432778). Disabled by default. - Grouping of vulnerabilities per group [enabled on GitLab Self-Managed and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/157949) in GitLab 17.2. - Grouping of vulnerabilities per group [generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/472669) in GitLab 17.3. Feature flag `group_level_vulnerability_report_grouping` removed. - OWASP top 10 grouping of vulnerabilities per group [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/432618) in GitLab 16.8 [with a flag](../../../administration/feature_flags/_index.md) named `vulnerability_owasp_top_10_group`. Disabled by default. - OWASP top 10 grouping of vulnerabilities per group [enabled on GitLab Self-Managed and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/issues/437253) in GitLab 17.4. - OWASP top 10 grouping of vulnerabilities per group [generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/437253) in GitLab 17.4. Feature flag `vulnerability_owasp_top_10_group` removed. - Non-OWASP category in OWASP top 10 grouping [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/442526) in GitLab 17.1 [with a flag](../../../administration/feature_flags/_index.md) named `owasp_top_10_null_filtering`. Disabled by default. 
- Non-OWASP category in OWASP top 10 grouping [enabled on GitLab Self-Managed, and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/issues/463783) in GitLab 17.5. - Non-OWASP category in OWASP top 10 grouping [generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/463783) in GitLab 17.6. Feature flag `owasp_top_10_null_filtering` removed. - OWASP 2021 top 10 grouping [added](https://gitlab.com/gitlab-org/gitlab/-/issues/466034) on GitLab.com and GitLab Dedicated in GitLab 18.1. {{< /history >}} You can group vulnerabilities on the vulnerability report page to more efficiently triage them. You can group by: - Status - Severity - Report Type - Scanner - OWASP top 10 2017 - OWASP top 10 2021 (requires [advanced vulnerability management](#advanced-vulnerability-management)) ### Group vulnerabilities To group vulnerabilities: 1. On the left sidebar, select **Search or go to** and find your project or group. 1. Select **Secure > Vulnerability report**. 1. From the **Group By** dropdown list, select a group. Vulnerabilities are grouped according to the group you selected. Each group is collapsed, with the total number of vulnerabilities per group displayed beside their name. To see the vulnerabilities in each group, select the group's name. ## View details of a vulnerability To view more details of a vulnerability, select the vulnerability's **Description**. The [vulnerability's details](../vulnerabilities/_index.md) page is opened. ## Change status of vulnerabilities {{< history >}} - Providing a comment and dismissal reason [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/408366) in GitLab 16.0. {{< /history >}} As you triage vulnerabilities you can change their status, including dismissing vulnerabilities. When a vulnerability is dismissed, the audit log includes a note of who dismissed it, when it was dismissed, and the reason it was dismissed. You cannot delete vulnerability records, so a permanent record always remains. 
Prerequisites: - You must have at least the Maintainer role for the project. The `admin_vulnerability` permission was [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/412693) from the Developer role in GitLab 17.0. To change the status of vulnerabilities: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Secure > Vulnerability report**. 1. To select: - One or more vulnerabilities, select the checkbox beside each vulnerability. - All vulnerabilities on the page, select the checkbox in the table header. 1. In the **Set status** dropdown list, select the desired status. 1. If the **Dismiss** status is chosen, select the desired reason in the **Set dismissal reason** dropdown list. 1. In the **Add a comment** input, you can provide a comment. For the **Dismiss** status, a comment is required. 1. Select **Change status**. The status of the selected vulnerabilities is updated and the content of the vulnerability report is refreshed. ![Project vulnerability report](img/project_security_dashboard_status_change_v16_0.png) ## Change or override vulnerability severity {{< history >}} - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/16157) in GitLab 17.9 [with a flag](../../../administration/feature_flags/_index.md) named `vulnerability_severity_override`. Disabled by default. - [Enabled on GitLab.com, GitLab Self-Managed, and GitLab Dedicated](https://gitlab.com/groups/gitlab-org/-/epics/16157) in GitLab 17.10. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/ISSUE_ID) in GitLab 18.1. Feature flag `vulnerability_severity_override` removed. - [Added](https://gitlab.com/gitlab-org/gitlab/-/issues/537229) a feature flag that administrators can enable to prevent users from changing or overriding the severity level in GitLab 18.1 [with a flag](../../../administration/feature_flags/_index.md) named `hide_vulnerability_severity_override`. Disabled by default. 
{{< /history >}} {{< alert type="flag" >}} The availability of this feature is controlled by a feature flag. For more information, see the history. {{< /alert >}} In certain cases, you may need to adjust the severity of a detected vulnerability to better reflect your organization's priorities. For instance, a scanner might report a lower severity, but you might consider it more critical based on your environment or setup. This feature allows you to override the default severity assigned by the scanner. Prerequisites: - You must have at least the Maintainer role for the project or the `admin_vulnerability` permission. To manually override a vulnerability's severity: 1. On the left sidebar, select **Search or go to** and find your project. 1. Go to **Secure > Vulnerability report**. 1. Select vulnerabilities: - To select individual vulnerabilities, select the checkbox beside each vulnerability. - To select all vulnerabilities on the page, select the checkbox in the table header. 1. In the **Select action** dropdown list, select **Change severity**. 1. In the **Select severity** dropdown list, select the desired severity level. 1. In the **Add reason** text box, add a brief explanation of why you're changing the severity. 1. Select **Change severity**. For each selected vulnerability: - Its severity is updated in both the **Vulnerability details page** and the **Vulnerability report**. - A badge is added to its severity, indicating that the severity has been overridden. - Manual severity adjustments are recorded in the vulnerability's **history**. ![Vulnerability Severity Override](img/vulnerability_severity_change_v17_10.png) ## Prevent users from overriding vulnerability severities {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/537229) in GitLab 18.1 [with a flag](../../../administration/feature_flags/_index.md) named `hide_vulnerability_severity_override`. Disabled by default. 
{{< /history >}} {{< alert type="flag" >}} The availability of this feature is controlled by a feature flag. For more information, see the history. {{< /alert >}} In some environments, you might need to prevent users from overriding the severity of vulnerabilities. The `hide_vulnerability_severity_override` feature flag allows administrators to hide the severity override functionality in the vulnerability report. This feature helps organizations maintain standardized vulnerability severity ratings across projects. When enabled, this feature: - Hides the **Change severity** option from the action dropdown list in the vulnerability report. - Prevents users from manually changing severity levels through the UI, ensuring consistent vulnerability scoring based on scanner results. - Disables all API endpoints related to the modification of vulnerability severities, maintaining consistency across all access methods. To enable the `hide_vulnerability_severity_override` flag, see [enable and disable GitLab features deployed behind feature flags](../../../administration/feature_flags/_index.md). ## Add vulnerabilities to an existing issue {{< history >}} - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/13216) in GitLab 17.9 [with a flag](../../../administration/feature_flags/_index.md) named `enhanced_vulnerability_bulk_actions`. Disabled by default. - [Enabled on GitLab.com, GitLab Self-Managed, and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/190213) in GitLab 18.0. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/515204) in GitLab 18.1. Feature flag `enhanced_vulnerability_bulk_actions` removed. {{< /history >}} You can link one or more vulnerabilities to existing issues in the vulnerability report. Prerequisites: - You must have at least the Maintainer role for the project or the `admin_vulnerability` permission in a custom role. 
The `admin_vulnerability` permission was [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/412693) from the Developer role in GitLab 17.0. To attach vulnerabilities to an existing issue: 1. On the left sidebar, select **Search or go to** and find your project or group. 1. Go to **Secure > Vulnerability report**. 1. Select vulnerabilities: - To select individual vulnerabilities, select the checkbox beside each vulnerability. - To select all vulnerabilities on the page, select the checkbox in the table header. 1. In the **Select action** dropdown list, select **Attach to existing issue**. 1. In the **Enter issue URL or <#issue ID>** text box, enter the ID of an issue to autocomplete, or add the URL of the issue. You can enter multiple issues to add the vulnerabilities to. 1. Select **Add**. Each selected vulnerability will be linked to all of the specified issues. ![Attach vulnerabilities to an existing issue](img/vulnerability_attach_existing_issue_v18_0.png) ## Add vulnerabilities to a new issue {{< history >}} - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/13216) in GitLab 17.9 [with a flag](../../../administration/feature_flags/_index.md) named `new_issue_attachment_from_vulnerability_bulk_action`. Disabled by default. - [Enabled on GitLab.com, GitLab Self-Managed, and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/190213) in GitLab 18.0. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/516939) in GitLab 18.1. Feature flag `new_issue_attachment_from_vulnerability_bulk_action` removed. {{< /history >}} You can link one or more vulnerabilities to a new issue. Prerequisites: - You must have at least the Maintainer role for the project or the `admin_vulnerability` permission in a custom role. The `admin_vulnerability` permission was [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/412693) from the Developer role in GitLab 17.0. To attach vulnerabilities to a new issue: 1. 
On the left sidebar, select **Search or go to** and find your project or group.
1. Go to **Secure > Vulnerability report**.
1. Select vulnerabilities:
   - To select individual vulnerabilities, select the checkbox beside each vulnerability.
   - To select all vulnerabilities on the page, select the checkbox in the table header.
1. In the **Select action** dropdown list, select **Attach to new issue**.
1. Select **Create issue**.

You are redirected to a new issue. Each selected vulnerability is already linked to it.

![Attach vulnerabilities to a new issue](img/vulnerability_attach_new_issue_v18_0.png)

## Sort vulnerabilities by date detected

By default, vulnerabilities are sorted by severity level, with the highest-severity vulnerabilities listed at the top. To sort vulnerabilities by the date each vulnerability was detected, select the **Detected** column header.

## Exporting

{{< history >}}

- "Dismissal Reason" column in the CSV export [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/434076) in GitLab 16.8.

{{< /history >}}

You can export details of the vulnerabilities listed in the vulnerability report. The export format is CSV (comma-separated values). All vulnerabilities are included because filters do not apply to the export.

Fields included are:

- Status (See the following table for details of how the status value is exported.)
- Group name
- Project name
- Report type
- Scanner name
- Vulnerability
- Basic details
- Additional information
- Severity
- [CVE](https://cve.mitre.org/) (Common Vulnerabilities and Exposures)
- [CWE](https://cwe.mitre.org/) (Common Weakness Enumeration)
- Other identifiers
- Detected At
- Location
- Activity: Returns `true` if the vulnerability is resolved on the default branch, and `false` if not.
- Comments
- Full Path
- CVSS Vectors
- [Dismissal Reason](../vulnerabilities/_index.md#vulnerability-dismissal-reasons)
- Vulnerability ID

{{< alert type="note" >}}

Full details are available through our [Job Artifacts API](../../../api/job_artifacts.md#download-a-single-artifact-file-by-reference-name). Use one of the `gl-*-report.json` report filenames in place of `*artifact_path` to obtain, for example, the path of files in which vulnerabilities were detected.

{{< /alert >}}

The Status field's values shown in the vulnerability report are different from those contained in the vulnerability export. Use the following reference table to match them.

| Vulnerability report | Vulnerability export |
|:---------------------|:---------------------|
| Needs triage         | detected             |
| Dismissed            | dismissed            |
| Resolved             | resolved             |
| Confirmed            | confirmed            |

### Export details

To export details of all vulnerabilities listed in the vulnerability report, select **Export**. When the exported details are available, you'll receive an email. To download the exported details, select the link in the email.

{{< alert type="note" >}}

Some CSV readers have limitations on the number of rows or size of columns which may make them incompatible with larger exports. The vulnerability export does not account for the limitations of individual programs.

{{< /alert >}}

## Manually add a vulnerability

{{< history >}}

- [Feature flag `new_vulnerability_form`](https://gitlab.com/gitlab-org/gitlab/-/issues/359049) removed in GitLab 15.0.

{{< /history >}}

Add a vulnerability manually when it is not available in the GitLab vulnerabilities database. You can add a vulnerability only in a project's vulnerability report.

To add a vulnerability manually:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Secure > Vulnerability report**.
1. Select **Submit vulnerability**.
1. Complete the fields and submit the form.

The newly created vulnerability's detail page is opened.
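The same manual-addition workflow has a programmatic counterpart in the `vulnerabilityCreate` GraphQL mutation. The sketch below is an illustration only: input field names follow the public GraphQL reference but may differ by GitLab version, so verify them against your instance, and every concrete value (project ID, scanner details, identifier) is a placeholder.

```python
"""Sketch: create a vulnerability record via the GraphQL API.

Assumption: the `vulnerabilityCreate` mutation and its input fields
(`project`, `name`, `description`, `severity`, `state`, `scanner`,
`identifiers`) match the public GraphQL reference for your GitLab
version. All concrete values below are illustrative.
"""
import json
import urllib.request

MUTATION = """
mutation($input: VulnerabilityCreateInput!) {
  vulnerabilityCreate(input: $input) {
    vulnerability { id }
    errors
  }
}
"""


def build_payload(project_gid, name, description, severity="HIGH"):
    """Assemble the request body (pure function, easy to test)."""
    return {
        "query": MUTATION,
        "variables": {
            "input": {
                "project": project_gid,
                "name": name,
                "description": description,
                "severity": severity,
                "state": "DETECTED",
                # Scanner and identifier details are required for manually
                # created records; these values are placeholders.
                "scanner": {
                    "id": "my-custom-scanner",
                    "name": "Manual entry",
                    "url": "https://example.com",
                    "vendor": {"name": "Example vendor"},
                    "version": "1.0",
                },
                "identifiers": [
                    {"name": "EXAMPLE-0001", "url": "https://example.com/advisory"}
                ],
            },
        },
    }


def create(gitlab_url, token, payload):
    """POST the mutation to /api/graphql and return the decoded response."""
    req = urllib.request.Request(
        f"{gitlab_url}/api/graphql",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

As in the UI, the record lands in the project's vulnerability report with the status you supply (here `DETECTED`, which the report displays as "Needs triage").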
## Advanced vulnerability management

{{< history >}}

- Ingestion of vulnerability data into advanced search [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/536299) in GitLab 18.1 [with a flag](../../../administration/feature_flags/_index.md) named `vulnerability_es_ingestion`. Available in GitLab.com and GitLab Dedicated. Disabled by default.
- Filters for OWASP 2021 grouping and identifiers in advanced search [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/537673) in GitLab 18.1 with the feature flag `advanced_vulnerability_management`. Available in GitLab.com and GitLab Dedicated. Disabled by default.
- Ingestion of vulnerability data into advanced search is [generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/536299) on GitLab.com and GitLab Dedicated in GitLab 18.2. Feature flag `vulnerability_es_ingestion` removed.
- Filters for OWASP 2021 grouping and identifiers in advanced search [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/537673) in GitLab 18.2 with the feature flag `advanced_vulnerability_management`. Available in GitLab.com and GitLab Dedicated. Enabled by default.

{{< /history >}}

{{< alert type="flag" >}}

Advanced vulnerability management is controlled by feature flags. For more information, see the
history.

{{< /alert >}}

GitLab primarily uses PostgreSQL for filtering in the vulnerability report. Due to database
indexing limitations and performance challenges when applying multiple filters, GitLab uses
[advanced search](../../search/advanced_search.md) for specific vulnerability management features.

Advanced search powers the following features:

1. Grouping data by OWASP 2021 categories in the vulnerability report for a project or group.
1. Filtering based on a vulnerability's identifier in the vulnerability report for a project or group.
1. Filtering based on [reachability](#reachability-filter) value in the vulnerability report for a project or group.
Advanced search is used only for these specific features, including when they are combined with
other [filters](#filter-vulnerabilities). Other filters, when used independently, continue to use
the standard PostgreSQL filtering.

### Requirements

To use the filters in advanced vulnerability management:

- You must use GitLab.com or a GitLab Dedicated instance with
  [advanced search enabled](../../search/advanced_search.md#use-advanced-search). This feature is
  not supported on GitLab Self-Managed, but support is proposed in
  [issue 525484](https://gitlab.com/gitlab-org/gitlab/-/issues/525484).
- You must be in the vulnerability report for a project or group. This feature is not supported in
  the security dashboard, but support is proposed in
  [issue 537807](https://gitlab.com/gitlab-org/gitlab/-/issues/537807).

## Operational vulnerabilities

The **Operational vulnerabilities** tab lists vulnerabilities found by
[Operational container scanning](../../clusters/agent/vulnerabilities.md). This tab appears on the
project, group, and Security Center vulnerability reports.

![Operational Vulnerability Tab](img/operational_vulnerability_tab_v14_6.png)
https://docs.gitlab.com/user/application_security/pipeline
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/pipeline.md
2025-08-13
doc/user/application_security/vulnerability_report
[ "doc", "user", "application_security", "vulnerability_report" ]
pipeline.md
null
null
null
null
null
<!-- markdownlint-disable -->

This document was moved to [another location](../detect/security_scanning_results.md).

<!-- This redirect file can be deleted after <2025-09-11>. -->
<!-- Redirects that point to other docs in the same project expire in three months. -->
<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
<!-- Before deletion, see: https://docs.gitlab.com/development/documentation/redirects -->
https://docs.gitlab.com/user/application_security/gitlab_advanced_sast
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/gitlab_advanced_sast.md
2025-08-13
doc/user/application_security/sast
[ "doc", "user", "application_security", "sast" ]
gitlab_advanced_sast.md
Application Security Testing
Static Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
GitLab Advanced SAST
GitLab Advanced SAST uses cross-file, cross-function taint analysis to detect complex vulnerabilities with high accuracy.
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- Introduced in GitLab 17.1 as an [experiment](../../../policy/development_stages_support.md) for Python.
- Support for Go and Java added in 17.2.
- [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/461859) from experiment to beta in GitLab 17.2.
- Support for JavaScript, TypeScript, and C# added in 17.3.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/474094) in GitLab 17.3.
- Support for Java Server Pages (JSP) added in GitLab 17.4.
- Support for PHP [added](https://gitlab.com/groups/gitlab-org/-/epics/14273) in GitLab 18.1.

{{< /history >}}

GitLab Advanced SAST is a Static Application Security Testing (SAST) analyzer designed to discover
vulnerabilities by performing cross-function and cross-file taint analysis.

GitLab Advanced SAST is an opt-in feature. When it is enabled, the GitLab Advanced SAST analyzer
scans all the files of the supported languages, using the GitLab Advanced SAST predefined ruleset.
The Semgrep analyzer will not scan these files.

All vulnerabilities identified by the GitLab Advanced SAST analyzer will be reported, including
vulnerabilities previously reported by the Semgrep-based analyzer. An automated
[transition process](#transitioning-from-semgrep-to-gitlab-advanced-sast) de-duplicates findings
when GitLab Advanced SAST locates the same type of vulnerability in the same location as the
Semgrep-based analyzer.

<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an overview of GitLab Advanced SAST and how it works, see
[GitLab Advanced SAST: Accelerating Vulnerability Resolution](https://youtu.be/xDa1MHOcyn8).

For a product tour, see the [GitLab Advanced SAST product tour](https://gitlab.navattic.com/advanced-sast).
## Feature comparison

| Feature | SAST | Advanced SAST |
|---------|------|---------------|
| Depth of Analysis | Limited ability to detect complex vulnerabilities; analysis is limited to a single file, and (with limited exceptions) a single function. | Detects complex vulnerabilities using cross-file, cross-function taint analysis. |
| Accuracy | More likely to create false-positive results due to limited context. | Creates fewer false-positive results by using cross-file, cross-function taint analysis to focus on truly exploitable vulnerabilities. |
| Remediation Guidance | Vulnerability findings are identified by line number. | Detailed [code flow view](#vulnerability-code-flow) shows how the vulnerability flows through the program, allowing for faster remediation. |
| Works with GitLab Duo Vulnerability Explanation and Vulnerability Resolution | Yes. | Yes. |
| Language coverage | [More expansive](_index.md#supported-languages-and-frameworks). | [More limited](#supported-languages). |

## When vulnerabilities are reported

GitLab Advanced SAST uses cross-file, cross-function scanning with taint analysis to trace the flow
of user input into the program. By following the paths user inputs take, the analyzer identifies
potential points where untrusted data can influence the execution of your application in unsafe
ways, ensuring that injection vulnerabilities, such as SQL injection and cross-site scripting
(XSS), are detected even when they span multiple functions and files.

To minimize noise, GitLab Advanced SAST only reports taint-based vulnerabilities when there is a
verifiable flow that brings untrusted user input from a source to a sensitive sink.
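To make the source-to-sink idea concrete, here is a deliberately vulnerable sketch in Python (one
of the supported languages). The function names are hypothetical, not part of any GitLab API; the
flow they form — an HTTP request parameter passed through a helper into a concatenated SQL string —
is the kind of cross-function path that taint analysis traces.

```python
# Hypothetical cross-function tainted flow: an HTTP parameter (source)
# travels through a helper into a SQL string (sink). A taint analyzer
# following this path would report a SQL injection (CWE-89).

def get_request_param(params: dict, name: str) -> str:
    # Source: the value originates from an HTTP request, so it is untrusted.
    return params[name]

def build_user_query(username: str) -> str:
    # Sink: untrusted input is concatenated directly into SQL.
    return "SELECT * FROM users WHERE name = '" + username + "'"

def handle_request(params: dict) -> str:
    # The tainted value crosses a function boundary on its way to the sink.
    tainted = get_request_param(params, "username")
    return build_user_query(tainted)

if __name__ == "__main__":
    # The attacker-controlled value reaches the query unchanged.
    print(handle_request({"username": "alice' OR '1'='1"}))
```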
Other products may report vulnerabilities with less validation.

GitLab Advanced SAST is tuned to emphasize input that crosses trust boundaries, like values that
are sourced from HTTP requests. The set of untrusted input sources does not include command-line
arguments, environment variables, or other inputs that are typically provided by the user operating
the program.

For details of which types of vulnerabilities GitLab Advanced SAST detects, see
[GitLab Advanced SAST CWE coverage](advanced_sast_coverage.md).

## Transitioning from Semgrep to GitLab Advanced SAST

When you migrate from Semgrep to GitLab Advanced SAST, an automated transition process deduplicates
vulnerabilities. This process links previously detected Semgrep vulnerabilities with corresponding
GitLab Advanced SAST findings, replacing them when a match is found.

### How vulnerability transition works

After enabling Advanced SAST scanning in the **default branch** (see
[Enable GitLab Advanced SAST scanning](#enable-gitlab-advanced-sast-scanning)), when a scan runs
and detects vulnerabilities, it checks whether any of them should replace existing Semgrep
vulnerabilities based on the following conditions.

#### Conditions for deduplication

1. **Matching Identifier**:
   - At least one of the GitLab Advanced SAST vulnerability's identifiers (excluding CWE and OWASP)
     must match the **primary identifier** of an existing Semgrep vulnerability.
   - The primary identifier is the first identifier in the vulnerability's identifiers array in the
     [SAST report](_index.md#download-a-sast-report).
   - For example, if a GitLab Advanced SAST vulnerability has identifiers including `bandit.B506`
     and a Semgrep vulnerability's primary identifier is also `bandit.B506`, this condition is met.
1. **Matching Location**:
   - The vulnerabilities must be associated with the **same location** in the code.
This is determined using one of the following fields in a vulnerability in the
[SAST report](_index.md#download-a-sast-report):

- Tracking field (if present)
- Location field (if the Tracking field is absent)

### Changes to the vulnerability

When the conditions are met, the existing Semgrep vulnerability is converted into a GitLab Advanced
SAST vulnerability. This updated vulnerability appears in the
[Vulnerability Report](../vulnerability_report/_index.md) with the following changes:

- The scanner type updates from Semgrep to GitLab Advanced SAST.
- Any additional identifiers present in the GitLab Advanced SAST vulnerability are added to the
  existing vulnerability.
- All other details of the vulnerability remain unchanged.

### Handling duplicated vulnerabilities

In some cases, Semgrep vulnerabilities may still appear as duplicates if the
[deduplication conditions](#conditions-for-deduplication) are not met. To resolve this in the
[Vulnerability Report](../vulnerability_report/_index.md):

1. [Filter vulnerabilities](../vulnerability_report/_index.md#filtering-vulnerabilities) by
   Advanced SAST scanner and
   [export the results in CSV format](../vulnerability_report/_index.md#export-details).
1. [Filter vulnerabilities](../vulnerability_report/_index.md#filtering-vulnerabilities) by Semgrep
   scanner. These are likely the vulnerabilities that were not deduplicated.
1. For each Semgrep vulnerability, check if it has a corresponding match in the exported Advanced
   SAST results.
1. If a duplicate exists, resolve the Semgrep vulnerability appropriately.
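The two deduplication conditions above can be sketched in Python as follows. The finding
dictionaries and field names here are simplified assumptions for illustration, not the actual SAST
report schema, which is richer.

```python
# Sketch of the two deduplication conditions: identifier match (excluding
# CWE and OWASP) against the Semgrep finding's primary identifier, plus an
# exact location match. Field names are illustrative only.

def is_duplicate(advanced_sast_finding: dict, semgrep_finding: dict) -> bool:
    # Condition 1: an Advanced SAST identifier (excluding CWE and OWASP)
    # matches the Semgrep finding's primary (first) identifier.
    advanced_ids = [
        i for i in advanced_sast_finding["identifiers"]
        if not i.lower().startswith(("cwe", "owasp"))
    ]
    primary_semgrep_id = semgrep_finding["identifiers"][0]
    if primary_semgrep_id not in advanced_ids:
        return False

    # Condition 2: both findings point at the same location in the code.
    return advanced_sast_finding["location"] == semgrep_finding["location"]

advanced = {
    "identifiers": ["bandit.B506", "CWE-502"],
    "location": {"file": "app/loader.py", "start_line": 42},
}
semgrep = {
    "identifiers": ["bandit.B506"],
    "location": {"file": "app/loader.py", "start_line": 42},
}
print(is_duplicate(advanced, semgrep))  # True: same primary identifier and location
```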
## Supported languages

GitLab Advanced SAST supports the following languages with cross-function and cross-file taint
analysis:

- C#
- Go
- Java, including Java Server Pages (JSP)
- JavaScript, TypeScript
- PHP
- Python
- Ruby

### PHP known issues

When analyzing PHP code, GitLab Advanced SAST has the following limitations:

- **Dynamic file inclusion**: Dynamic file inclusion statements (`include`, `include_once`,
  `require`, `require_once`) using variables for file paths are not supported in this release. Only
  static file inclusion paths are supported for cross-file analysis. See
  [issue 527341](https://gitlab.com/gitlab-org/gitlab/-/issues/527341).
- **Case sensitivity**: PHP's case-insensitive nature for function names, class names, and method
  names is not fully supported in cross-file analysis. See
  [issue 526528](https://gitlab.com/gitlab-org/gitlab/-/issues/526528).

## Configuration

Enable the GitLab Advanced SAST analyzer to discover vulnerabilities in your application by
performing cross-function and cross-file taint analysis. You can then adjust its behavior by using
CI/CD variables.

### Available CI/CD variables

GitLab Advanced SAST can be configured using the following CI/CD variables.

| CI/CD variable                 | Default | Description |
|--------------------------------|---------|-------------|
| `GITLAB_ADVANCED_SAST_ENABLED` | `false` | Set to `true` to enable GitLab Advanced SAST scanning, or `false` to disable. |
| `FF_GLAS_ENABLE_PHP_SUPPORT`   | `true`  | Set to `true` to analyze PHP files, or `false` to disable. |

### Requirements

Like other GitLab SAST analyzers, the GitLab Advanced SAST analyzer requires a runner and a CI/CD
pipeline; see [SAST requirements](_index.md#getting-started) for details.

On GitLab Self-Managed, you must also use a GitLab version that supports GitLab Advanced SAST:

- You should use GitLab 17.4 or later if possible.
GitLab 17.4 includes a new code-flow view, vulnerability deduplication, and further updates to the
SAST CI/CD template.

- The [SAST CI/CD templates](_index.md#stable-vs-latest-sast-templates) were updated to include
  GitLab Advanced SAST in the following releases:
  - The stable template includes GitLab Advanced SAST in GitLab 17.3 or later.
  - The latest template includes GitLab Advanced SAST in GitLab 17.2 or later. Don't mix
    [latest and stable templates](../detect/security_configuration.md#template-editions) in a
    single project.
- At a minimum, GitLab Advanced SAST requires version 17.1 or later.

### Enable GitLab Advanced SAST scanning

GitLab Advanced SAST is included in the standard GitLab SAST CI/CD template, but isn't yet enabled
by default. To enable it, set the CI/CD variable `GITLAB_ADVANCED_SAST_ENABLED` to `true`. You can
set this variable in different ways depending on how you manage your CI/CD configuration.

#### Edit the CI/CD pipeline definition manually

If you've already enabled GitLab SAST scanning in your project, add a CI/CD variable to enable
GitLab Advanced SAST.

This minimal YAML file includes the
[stable SAST template](_index.md#stable-vs-latest-sast-templates) and enables GitLab Advanced SAST:

```yaml
include:
  - template: Jobs/SAST.gitlab-ci.yml

variables:
  GITLAB_ADVANCED_SAST_ENABLED: 'true'
```

#### Enforce it in a Scan Execution Policy

To enable GitLab Advanced SAST in a
[Scan Execution Policy](../policies/scan_execution_policies.md), update your policy's scan action
to set the CI/CD variable `GITLAB_ADVANCED_SAST_ENABLED` to `true`. You can set this variable by:

- Selecting it from the menu in the [policy editor](../policies/scan_execution_policies.md#scan-execution-policy-editor).
- Adding it to the [`variables` object](../policies/scan_execution_policies.md#scan-action-type) in
  the scan action.

#### By using the pipeline editor

To enable GitLab Advanced SAST by using the pipeline editor:

1. In your project, select **Build > Pipeline editor**.
1. If no `.gitlab-ci.yml` file exists, select **Configure pipeline**, then delete the example
   content.
1. Update the CI/CD configuration to:
   - Include one of the GitLab-managed
     [SAST CI/CD templates](_index.md#stable-vs-latest-sast-templates) if it is not
     [already included](_index.md#configure-sast-in-your-cicd-yaml).
     - In GitLab 17.3 or later, you should use the stable template, `Jobs/SAST.gitlab-ci.yml`.
     - In GitLab 17.2, GitLab Advanced SAST is only available in the latest template,
       `Jobs/SAST.latest.gitlab-ci.yml`. Don't mix
       [latest and stable templates](../detect/security_configuration.md#template-editions) in a
       single project.
     - In GitLab 17.1, you must manually copy the contents of the GitLab Advanced SAST job into
       your CI/CD pipeline definition.
   - Set the CI/CD variable `GITLAB_ADVANCED_SAST_ENABLED` to `true`. See the
     [minimal YAML example](#edit-the-cicd-pipeline-definition-manually).
1. Select the **Validate** tab, then select **Validate pipeline**.

   The message **Simulation completed successfully** confirms the file is valid.
1. Select the **Edit** tab.
1. Complete the fields. Do not use the default branch for the **Branch** field.
1. Select the **Start a new merge request with these changes** checkbox, then select
   **Commit changes**.
1. Complete the fields according to your standard workflow, then select **Create merge request**.
1. Review and edit the merge request according to your standard workflow, then select **Merge**.

Pipelines now include a GitLab Advanced SAST job.

### Disable GitLab Advanced SAST scanning

Advanced SAST scanning is not enabled by default, but it may be enabled at the group level or in
another way that affects multiple projects. To explicitly disable Advanced SAST scanning in a
project, set the CI/CD variable `GITLAB_ADVANCED_SAST_ENABLED` to `false`.
You can set this variable anywhere you can configure CI/CD variables, including the same ways you
can [enable Advanced SAST scanning](#enable-gitlab-advanced-sast-scanning).

## Vulnerability code flow

{{< history >}}

- Introduced in GitLab 17.3 [with several flags](../../../administration/feature_flags/_index.md). Enabled by default.
- Enabled on GitLab Self-Managed and GitLab Dedicated in GitLab 17.7.
- Generally available in GitLab 17.7. All feature flags removed.

{{< /history >}}

For specific types of vulnerabilities, GitLab Advanced SAST provides code flow information. A
vulnerability's code flow is the path the data takes from the user input (source) to the vulnerable
line of code (sink), through all assignments, manipulation, and sanitization. This information
helps you understand and evaluate the vulnerability's context, impact, and risk.

Code flow information is available for vulnerabilities that are detected by tracing input from a
source to a sink, including:

- SQL injection
- Command injection
- Cross-site scripting (XSS)
- Path traversal

The code flow information is shown in the **Code flow** tab and includes:

- The steps from source to sink.
- The relevant files, including code snippets.

![A code flow of a Python application across two files](img/code_flow_view_v17_7.png)

## Customize GitLab Advanced SAST

You can disable GitLab Advanced SAST rules or edit their metadata, just as you can other analyzers.
For details, see
[Customize rulesets](customize_rulesets.md#disable-predefined-gitlab-advanced-sast-rules).

## Request source code of LGPL-licensed components in GitLab Advanced SAST

To request information about the source code of LGPL-licensed components in GitLab Advanced SAST,
[contact GitLab Support](https://about.gitlab.com/support/). To ensure a quick response, include
the GitLab Advanced SAST analyzer version in your request.
Because this feature is only available at the Ultimate tier, you must be associated with an
organization with that level of support entitlement.

## Feedback

Feel free to add your feedback in the dedicated
[issue 466322](https://gitlab.com/gitlab-org/gitlab/-/issues/466322).

## Troubleshooting

When working with GitLab Advanced SAST, you might encounter the following issues.

### Slow scans or timeouts with Advanced SAST

Because [Advanced SAST](gitlab_advanced_sast.md) scans your program in detail, scans can sometimes
take a long time to complete, especially for large repositories. If you're experiencing performance
issues, consider following the recommendations here.

#### Reduce scan time by excluding files

Because each file is analyzed against all applicable rules, you can reduce the number of files
scanned to decrease scan time. To do this, use the
[SAST_EXCLUDED_PATHS](_index.md#vulnerability-filters) variable to exclude folders that do not need
to be scanned. Effective exclusions vary, but might include:

- Database migrations
- Unit tests
- Dependency directories, such as `node_modules/`
- Build directories

#### Optimize scans with multi-core scanning

Multi-core scanning is enabled by default in the Advanced SAST analyzer (v1.1.10 and later). You
can increase the runner size to make more resources available for scanning. For self-hosted
runners, you may need to customize the `--multi-core` flag in the
[security scanner configuration](_index.md#security-scanner-configuration).
#### When to seek support

If you've followed these optimization steps and your Advanced SAST scan is still running longer
than expected, reach out to GitLab Support for further assistance with the following information:

- [GitLab Advanced SAST analyzer version](#identify-the-gitlab-advanced-sast-analyzer-version)
- Programming language used in your repository
- [Debug logs](../troubleshooting_application_security.md#debug-level-logging)
- [Performance debugging artifact](#generate-a-performance-debugging-artifact)

##### Identify the GitLab Advanced SAST analyzer version

To identify the GitLab Advanced SAST analyzer version:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Build > Jobs**.
1. Locate the `gitlab-advanced-sast` job.
1. In the output of the job, search for the string `GitLab GitLab Advanced SAST analyzer`. You
   should find the version at the end of the line containing that string. For example:

   ```plaintext
   [INFO] [GitLab Advanced SAST] [2025-01-24T15:51:03Z] ▶ GitLab GitLab Advanced SAST analyzer v1.1.1
   ```

   In this example, the version is `1.1.1`.

##### Generate a performance debugging artifact

To generate the `trace.ctf` artifact, add the following to your `.gitlab-ci.yml`. Set
`RUNNER_SCRIPT_TIMEOUT` to at least 10 minutes shorter than `timeout` to ensure the artifact has
time to upload.

```yaml
include:
  - template: Jobs/SAST.gitlab-ci.yml

variables:
  GITLAB_ADVANCED_SAST_ENABLED: 'true'
  MEMTRACE: 'trace.ctf'
  DISABLE_MULTI_CORE: true  # Disable multi core when collecting memtrace

gitlab-advanced-sast:
  artifacts:
    paths:
      - '**/trace.ctf'  # Collects all trace.ctf files generated by this job
    expire_in: 1 week   # Sets retention for artifacts
    when: always        # Ensures artifact export even if the job fails
  variables:
    RUNNER_SCRIPT_TIMEOUT: 50m
  timeout: 1h
```
--- stage: Application Security Testing group: Static Analysis info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments description: GitLab Advanced SAST uses cross-file, cross-function taint analysis to detect complex vulnerabilities with high accuracy. title: GitLab Advanced SAST breadcrumbs: - doc - user - application_security - sast --- {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - Introduced in GitLab 17.1 as an [experiment](../../../policy/development_stages_support.md) for Python. - Support for Go and Java added in 17.2. - [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/461859) from experiment to beta in GitLab 17.2. - Support for JavaScript, TypeScript, and C# added in 17.3. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/474094) in GitLab 17.3. - Support for Java Server Pages (JSP) added in GitLab 17.4. - Support for PHP [added](https://gitlab.com/groups/gitlab-org/-/epics/14273) in GitLab 18.1. {{< /history >}} GitLab Advanced SAST is a Static Application Security Testing (SAST) analyzer designed to discover vulnerabilities by performing cross-function and cross-file taint analysis. GitLab Advanced SAST is an opt-in feature. When it is enabled, the GitLab Advanced SAST analyzer scans all the files of the supported languages, using the GitLab Advanced SAST predefined ruleset. The Semgrep analyzer will not scan these files. All vulnerabilities identified by the GitLab Advanced SAST analyzer will be reported, including vulnerabilities previously reported by the Semgrep-based analyzer. An automated [transition process](#transitioning-from-semgrep-to-gitlab-advanced-sast) de-duplicates findings when GitLab Advanced SAST locates the same type of vulnerability in the same location as the Semgrep-based analyzer. 
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For an overview of GitLab Advanced SAST and how it works, see [GitLab Advanced SAST: Accelerating Vulnerability Resolution](https://youtu.be/xDa1MHOcyn8). For a product tour, see the [GitLab Advanced SAST product tour](https://gitlab.navattic.com/advanced-sast). ## Feature comparison | Feature | SAST | Advanced SAST | |------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------| | Depth of Analysis | Limited ability to detect complex vulnerabilities; analysis is limited to a single file, and (with limited exceptions) a single function. | Detects complex vulnerabilities using cross-file, cross-function taint analysis. | | Accuracy | More likely to create false-positive results due to limited context. | Creates fewer false-positive results by using cross-file, cross-function taint analysis to focus on truly exploitable vulnerabilities. | | Remediation Guidance | Vulnerability findings are identified by line number. | Detailed [code flow view](#vulnerability-code-flow) shows how the vulnerability flows through the program, allowing for faster remediation. | | Works with GitLab Duo Vulnerability Explanation and Vulnerability Resolution | Yes. | Yes. | | Language coverage | [More expansive](_index.md#supported-languages-and-frameworks). | [More limited](#supported-languages). | ## When vulnerabilities are reported GitLab Advanced SAST uses cross-file, cross-function scanning with taint analysis to trace the flow of user input into the program. 
By following the paths user inputs take, the analyzer identifies potential points where untrusted data can influence the execution of your application in unsafe ways, ensuring that injection vulnerabilities, such as SQL injection and cross-site scripting (XSS), are detected even when they span multiple functions and files. To minimize noise, GitLab Advanced SAST only reports taint-based vulnerabilities when there is a verifiable flow that brings untrusted user input source to a sensitive sink. Other products may report vulnerabilities with less validation. GitLab Advanced SAST is tuned to emphasize input that crosses trust boundaries, like values that are sourced from HTTP requests. The set of untrusted input sources does not include command-line arguments, environment variables, or other inputs that are typically provided by the user operating the program. For details of which types of vulnerabilities GitLab Advanced SAST detects, see [GitLab Advanced SAST CWE coverage](advanced_sast_coverage.md). ## Transitioning from Semgrep to GitLab Advanced SAST When you migrate from Semgrep to GitLab Advanced SAST, an automated transition process deduplicates vulnerabilities. This process links previously detected Semgrep vulnerabilities with corresponding GitLab Advanced SAST findings, replacing them when a match is found. ### How vulnerability transition works After enabling Advanced SAST scanning in the **default branch** (see [Enable GitLab Advanced SAST scanning](#enable-gitlab-advanced-sast-scanning)), when a scan runs and detects vulnerabilities, it checks whether any of them should replace existing Semgrep vulnerabilities based on the following conditions. #### Conditions for deduplication 1. **Matching Identifier**: - At least one of the GitLab Advanced SAST vulnerability's identifiers (excluding CWE and OWASP) must match the **primary identifier** of an existing Semgrep vulnerability. 
- The primary identifier is the first identifier in the vulnerability's identifiers array in the [SAST report](_index.md#download-a-sast-report). - For example, if a GitLab Advanced SAST vulnerability has identifiers including `bandit.B506` and a Semgrep vulnerability's primary identifier is also `bandit.B506`, this condition is met. 1. **Matching Location**: - The vulnerabilities must be associated with the **same location** in the code. This is determined using one of the following fields in a vulnerability in the [SAST report](_index.md#download-a-sast-report): - Tracking field (if present) - Location field (if the Tracking field is absent) ### Changes to the vulnerability When the conditions are met, the existing Semgrep vulnerability is converted into a GitLab Advanced SAST vulnerability. This updated vulnerability appears in the [Vulnerability Report](../vulnerability_report/_index.md) with the following changes: - The scanner type updates from Semgrep to GitLab Advanced SAST. - Any additional identifiers present in the GitLab Advanced SAST vulnerability are added to the existing vulnerability. - All other details of the vulnerability remain unchanged. ### Handling duplicated vulnerabilities In some cases, Semgrep vulnerabilities may still appear as duplicates if the [deduplication conditions](#conditions-for-deduplication) are not met. To resolve this in the [Vulnerability Report](../vulnerability_report/_index.md): 1. [Filter vulnerabilities](../vulnerability_report/_index.md#filtering-vulnerabilities) by Advanced SAST scanner and [export the results in CSV format](../vulnerability_report/_index.md#export-details). 1. [Filter vulnerabilities](../vulnerability_report/_index.md#filtering-vulnerabilities) by Semgrep scanner. These are likely the vulnerabilities that were not deduplicated. 1. For each Semgrep vulnerability, check if it has a corresponding match in the exported Advanced SAST results. 1. 
If a duplicate exists, resolve the Semgrep vulnerability appropriately. ## Supported languages GitLab Advanced SAST supports the following languages with cross-function and cross-file taint analysis: - C# - Go - Java, including Java Server Pages (JSP) - JavaScript, TypeScript - PHP - Python - Ruby ### PHP known issues When analyzing PHP code, GitLab Advanced SAST has the following limitations: - **Dynamic file inclusion**: Dynamic file inclusion statements (`include`, `include_once`, `require`, `require_once`) using variables for file paths are not supported in this release. Only static file inclusion paths are supported for cross-file analysis. See [issue 527341](https://gitlab.com/gitlab-org/gitlab/-/issues/527341). - **Case sensitivity**: PHP's case-insensitive nature for function names, class names, and method names is not fully supported in cross-file analysis. See [issue 526528](https://gitlab.com/gitlab-org/gitlab/-/issues/526528). ## Configuration Enable the GitLab Advanced SAST analyzer to discover vulnerabilities in your application by performing cross-function and cross-file taint analysis. You can then adjust its behavior by using CI/CD variables. ### Available CI/CD variables GitLab Advanced SAST can be configured using the following CI/CD variables. | CI/CD variable | Default | Description | |--------------------------------|---------|-------------------------------------------------------------------------------| | `GITLAB_ADVANCED_SAST_ENABLED` | `false` | Set to `true` to enable GitLab Advanced SAST scanning, or `false` to disable. | | `FF_GLAS_ENABLE_PHP_SUPPORT` | `true` | Set to `true` to analyze PHP files, or false to disable. | ### Requirements Like other GitLab SAST analyzers, the GitLab Advanced SAST analyzer requires a runner and a CI/CD pipeline; see [SAST requirements](_index.md#getting-started) for details. 
On GitLab Self-Managed, you must also use a GitLab version that supports GitLab Advanced SAST: - You should use GitLab 17.4 or later if possible. GitLab 17.4 includes a new code-flow view, vulnerability deduplication, and further updates to the SAST CI/CD template. - The [SAST CI/CD templates](_index.md#stable-vs-latest-sast-templates) were updated to include GitLab Advanced SAST in the following releases: - The stable template includes GitLab Advanced SAST in GitLab 17.3 or later. - The latest template includes GitLab Advanced SAST in GitLab 17.2 or later. Don't mix [latest and stable templates](../detect/security_configuration.md#template-editions) in a single project. - At a minimum, GitLab Advanced SAST requires version 17.1 or later. ### Enable GitLab Advanced SAST scanning GitLab Advanced SAST is included in the standard GitLab SAST CI/CD template, but isn't yet enabled by default. To enable it, set the CI/CD variable `GITLAB_ADVANCED_SAST_ENABLED` to `true`. You can set this variable in different ways depending on how you manage your CI/CD configuration. #### Edit the CI/CD pipeline definition manually If you've already enabled GitLab SAST scanning in your project, add a CI/CD variable to enable GitLab Advanced SAST. This minimal YAML file includes the [stable SAST template](_index.md#stable-vs-latest-sast-templates) and enables GitLab Advanced SAST: ```yaml include: - template: Jobs/SAST.gitlab-ci.yml variables: GITLAB_ADVANCED_SAST_ENABLED: 'true' ``` #### Enforce it in a Scan Execution Policy To enable GitLab Advanced SAST in a [Scan Execution Policy](../policies/scan_execution_policies.md), update your policy's scan action to set the CI/CD variable `GITLAB_ADVANCED_SAST_ENABLED` to `true`. You can set this variable by: - Selecting it from the menu in the [policy editor](../policies/scan_execution_policies.md#scan-execution-policy-editor). - Adding it to the [`variables` object](../policies/scan_execution_policies.md#scan-action-type) in the scan action. 
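As a sketch, a scan execution policy that enforces Advanced SAST might look like the following. The policy name, description, and branch pattern are illustrative; only the `GITLAB_ADVANCED_SAST_ENABLED` variable in the scan action is required:

```yaml
scan_execution_policy:
  - name: Enforce GitLab Advanced SAST   # illustrative policy name
    description: Run SAST with GitLab Advanced SAST enabled on every pipeline
    enabled: true
    rules:
      - type: pipeline
        branches:
          - '*'                          # illustrative: apply to all branches
    actions:
      - scan: sast
        variables:
          GITLAB_ADVANCED_SAST_ENABLED: 'true'
```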
#### By using the pipeline editor To enable GitLab Advanced SAST by using the pipeline editor: 1. In your project, select **Build > Pipeline editor**. 1. If no `.gitlab-ci.yml` file exists, select **Configure pipeline**, then delete the example content. 1. Update the CI/CD configuration to: - Include one of the GitLab-managed [SAST CI/CD templates](_index.md#stable-vs-latest-sast-templates) if it is not [already included](_index.md#configure-sast-in-your-cicd-yaml). - In GitLab 17.3 or later, you should use the stable template, `Jobs/SAST.gitlab-ci.yml`. - In GitLab 17.2, GitLab Advanced SAST is only available in the latest template, `Jobs/SAST.latest.gitlab-ci.yml`. Don't mix [latest and stable templates](../detect/security_configuration.md#template-editions) in a single project. - In GitLab 17.1, you must manually copy the contents of the GitLab Advanced SAST job into your CI/CD pipeline definition. - Set the CI/CD variable `GITLAB_ADVANCED_SAST_ENABLED` to `true`. See the [minimal YAML example](#edit-the-cicd-pipeline-definition-manually). 1. Select the **Validate** tab, then select **Validate pipeline**. The message **Simulation completed successfully** confirms the file is valid. 1. Select the **Edit** tab. 1. Complete the fields. Do not use the default branch for the **Branch** field. 1. Select the **Start a new merge request with these changes** checkbox, then select **Commit changes**. 1. Complete the fields according to your standard workflow, then select **Create merge request**. 1. Review and edit the merge request according to your standard workflow, then select **Merge**. Pipelines now include a GitLab Advanced SAST job. ### Disable GitLab Advanced SAST scanning Advanced SAST scanning is not enabled by default, but it may be enabled at the group level or in another way that affects multiple projects. To explicitly disable Advanced SAST scanning in a project, set the CI/CD variable `GITLAB_ADVANCED_SAST_ENABLED` to `false`. 
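For example, a project that includes the SAST template but opts out of Advanced SAST could use a minimal configuration like this:

```yaml
include:
  - template: Jobs/SAST.gitlab-ci.yml

variables:
  # Opt this project out, even if the variable is set to 'true' at a higher level
  GITLAB_ADVANCED_SAST_ENABLED: 'false'
```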
You can set this variable anywhere you can configure CI/CD variables, including the same ways you can [enable Advanced SAST scanning](#enable-gitlab-advanced-sast-scanning).

## Vulnerability code flow

{{< history >}}

- Introduced in GitLab 17.3 [with several flags](../../../administration/feature_flags/_index.md). Enabled by default.
- Enabled on GitLab Self-Managed and GitLab Dedicated in GitLab 17.7.
- Generally available in GitLab 17.7. All feature flags removed.

{{< /history >}}

For specific types of vulnerabilities, GitLab Advanced SAST provides code flow information. A vulnerability's code flow is the path the data takes from the user input (source) to the vulnerable line of code (sink), through all assignments, manipulation, and sanitization. This information helps you understand and evaluate the vulnerability's context, impact, and risk.

Code flow information is available for vulnerabilities that are detected by tracing input from a source to a sink, including:

- SQL injection
- Command injection
- Cross-site scripting (XSS)
- Path traversal

The code flow information is shown in the **Code flow** tab and includes:

- The steps from source to sink.
- The relevant files, including code snippets.

![A code flow of a Python application across two files](img/code_flow_view_v17_7.png)

## Customize GitLab Advanced SAST

You can disable GitLab Advanced SAST rules or edit their metadata, just as you can other analyzers. For details, see [Customize rulesets](customize_rulesets.md#disable-predefined-gitlab-advanced-sast-rules).

## Request source code of LGPL-licensed components in GitLab Advanced SAST

To request information about the source code of LGPL-licensed components in GitLab Advanced SAST, [contact GitLab Support](https://about.gitlab.com/support/). To ensure a quick response, include the GitLab Advanced SAST analyzer version in your request.
Because this feature is only available at the Ultimate tier, you must be associated with an organization with that level of support entitlement.

## Feedback

Feel free to add your feedback in the dedicated [issue 466322](https://gitlab.com/gitlab-org/gitlab/-/issues/466322).

## Troubleshooting

When working with GitLab Advanced SAST, you might encounter the following issues.

### Slow scans or timeouts with Advanced SAST

Because [Advanced SAST](gitlab_advanced_sast.md) scans your program in detail, scans can sometimes take a long time to complete, especially for large repositories. If you're experiencing performance issues, consider following the recommendations here.

#### Reduce scan time by excluding files

Because each file is analyzed against all applicable rules, you can reduce the number of files scanned to decrease scan time. To do this, use the [SAST_EXCLUDED_PATHS](_index.md#vulnerability-filters) variable to exclude folders that do not need to be scanned. Effective exclusions vary, but might include:

- Database migrations
- Unit tests
- Dependency directories, such as `node_modules/`
- Build directories

#### Optimize scans with multi-core scanning

Multi-core scanning is enabled by default in the Advanced SAST analyzer (version v1.1.10 and later). You can increase the runner size to make more resources available for scanning. For self-hosted runners, you may need to customize the `--multi-core` flag in the [security scanner configuration](_index.md#security-scanner-configuration).
#### When to seek support If you've followed these optimization steps and your Advanced SAST scan is still running longer than expected, reach out to GitLab Support for further assistance with the following information: - [GitLab Advanced SAST analyzer version](#identify-the-gitlab-advanced-sast-analyzer-version) - Programming language used in your repository - [Debug logs](../troubleshooting_application_security.md#debug-level-logging) - [Performance debugging artifact](#generate-a-performance-debugging-artifact) ##### Identify the GitLab Advanced SAST analyzer version To identify the GitLab Advanced SAST analyzer version: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Build > Jobs**. 1. Locate the `gitlab-advanced-sast` job. 1. In the output of the job, search for the string `GitLab GitLab Advanced SAST analyzer`. You should find the version at the end of line with that string. For example: ```plaintext [INFO] [GitLab Advanced SAST] [2025-01-24T15:51:03Z] ▶ GitLab GitLab Advanced SAST analyzer v1.1.1 ``` In this example, the version is `1.1.1`. ##### Generate a performance debugging artifact To generate the `trace.ctf` artifact, add the following to your `.gitlab-ci.yml`. Set `RUNNER_SCRIPT_TIMEOUT` to at least 10 minutes shorter than `timeout` to ensure the artifact has time to upload. ```yaml include: - template: Jobs/SAST.gitlab-ci.yml variables: GITLAB_ADVANCED_SAST_ENABLED: 'true' MEMTRACE: 'trace.ctf' DISABLE_MULTI_CORE: true # Disable multi core when collecting memtrace gitlab-advanced-sast: artifacts: paths: - '**/trace.ctf' # Collects all trace.ctf files generated by this job expire_in: 1 week # Sets retention for artifacts when: always # Ensures artifact export even if the job fails variables: RUNNER_SCRIPT_TIMEOUT: 50m timeout: 1h ```
# Static Application Security Testing (SAST)

Scanning, configuration, analyzers, vulnerabilities, reporting, customization, and integration.
<style> table.sast-table tr:nth-child(even) { background-color: transparent; } table.sast-table td { border-left: 1px solid #dbdbdb; border-right: 1px solid #dbdbdb; border-bottom: 1px solid #dbdbdb; } table.sast-table tr td:first-child { border-left: 0; } table.sast-table tr td:last-child { border-right: 0; } table.sast-table ul { font-size: 1em; list-style-type: none; padding-left: 0px; margin-bottom: 0px; } table.no-vertical-table-lines td { border-left: none; border-right: none; border-bottom: 1px solid #f0f0f0; } table.no-vertical-table-lines tr { border-top: none; } </style> {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Static Application Security Testing (SAST) discovers vulnerabilities in your source code before they reach production. Integrated directly into your CI/CD pipeline, SAST identifies security issues during development when they're easiest and most cost-effective to fix. Security vulnerabilities found late in development create costly delays and potential breaches. SAST scans happen automatically with each commit, giving you immediate feedback without disrupting your workflow. ## Features The following table lists the GitLab tiers in which each feature is available. 
| Feature | In Free & Premium | In Ultimate | |:-----------------------------------------------------------------------------------------|:-------------------------------------|:------------| | Basic scanning with [open-source analyzers](#supported-languages-and-frameworks) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | Downloadable [SAST JSON report](#download-a-sast-report) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | Cross-file, cross-function scanning with [GitLab Advanced SAST](gitlab_advanced_sast.md) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | New findings in [merge request widget](#merge-request-widget) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | New findings in [merge request changes view](#merge-request-changes-view) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [Vulnerability Management](../vulnerabilities/_index.md) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [UI-based scanner configuration](#configure-sast-by-using-the-ui) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [Ruleset customization](customize_rulesets.md) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [Advanced Vulnerability Tracking](#advanced-vulnerability-tracking) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | ## Getting started If you are new to SAST, the following steps show how to enable SAST for your project. Prerequisites: - Linux-based GitLab Runner with the [`docker`](https://docs.gitlab.com/runner/executors/docker.html) or [`kubernetes`](https://docs.gitlab.com/runner/install/kubernetes.html) executor. If you're using hosted runners for GitLab.com, this is enabled by default. - Windows Runners are not supported. 
- CPU architectures other than amd64 are not supported. - GitLab CI/CD configuration (`.gitlab-ci.yml`) must include the `test` stage, which is included by default. If you redefine the stages in the `.gitlab-ci.yml` file, the `test` stage is required. To enable SAST: 1. On the left sidebar, select **Search or go to** and find your project. 1. If your project does not already have one, create a `.gitlab-ci.yml` file in the root directory. 1. At the top of the `.gitlab-ci.yml` file, add one of the following lines: Using a template: ```yaml include: - template: Jobs/SAST.gitlab-ci.yml ``` Or using a CI component: ```yaml include: - component: gitlab.com/components/sast/sast@main ``` At this point, SAST is enabled in your pipeline. If supported source code is present, the appropriate analyzers and default rules automatically scan for vulnerabilities when a pipeline runs. The corresponding jobs will appear under the `test` stage in your pipeline. {{< alert type="note" >}} You can see a working example in [SAST example project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/semgrep/sast-getting-started). {{< /alert >}} After completing these steps, you can: - Learn more about how to [understand the results](#understanding-the-results). - Review [optimization tips](#optimization). - Plan a [rollout to more projects](#roll-out). For details on other configuration methods, see [Configuration](#configuration). ## Understanding the results You can review vulnerabilities in a pipeline: 1. On the left sidebar, select **Search or go to** and find your project. 1. On the left sidebar, select **Build > Pipelines**. 1. Select the pipeline. 1. Select the **Security** tab. 1. Either download results, or select a vulnerability to view its details (Ultimate only), including: - Description: Explains the cause of the vulnerability, its potential impact, and recommended remediation steps. 
- Status: Indicates whether the vulnerability has been triaged or resolved. - Severity: Categorized into six levels based on impact. [Learn more about severity levels](../vulnerabilities/severities.md). - Location: Shows the filename and line number where the issue was found. Selecting the file path opens the corresponding line in the code view. - Scanner: Identifies which analyzer detected the vulnerability. - Identifiers: A list of references used to classify the vulnerability, such as CWE identifiers and the IDs of the rules that detected it. SAST vulnerabilities are named according to the primary Common Weakness Enumeration (CWE) identifier for the discovered vulnerability. Read the description of each vulnerability finding to learn more about the specific issue that the scanner has detected. For more information on SAST coverage, see [SAST rules](rules.md). In Ultimate, you can also download the security scan results: - In the pipeline's **Security** tab, select **Download results**. For more details, see [Pipeline security report](../detect/security_scanning_results.md). {{< alert type="note" >}} Findings are generated on feature branches. When they are merged into the default branch, they become vulnerabilities. This distinction is important when evaluating your security posture. {{< /alert >}} Additional ways to see SAST results: - [Merge request widget](#merge-request-widget): Shows newly introduced or resolved findings. - [Merge request changes view](#merge-request-changes-view): Shows inline annotations for changed lines. - [Vulnerability report](../vulnerability_report/_index.md): Shows confirmed vulnerabilities on the default branch. A pipeline consists of multiple jobs, including SAST and DAST scanning. If any job fails to finish for any reason, the security dashboard does not show SAST scanner output. For example, if the SAST job finishes but the DAST job fails, the security dashboard does not show SAST results. 
On failure, the analyzer outputs an exit code. ### Merge request widget {{< details >}} - Tier: Ultimate {{< /details >}} SAST results display in the merge request widget area if a report from the target branch is available for comparison. The merge request widget shows: - new SAST findings that are introduced by the MR. - existing findings that are resolved by the MR. The results are compared using [Advanced Vulnerability Tracking](#advanced-vulnerability-tracking) whenever it is available. ![Security Merge request widget](img/sast_mr_widget_v16_7.png) ### Merge request changes view {{< details >}} - Tier: Ultimate {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/10959) in GitLab 16.6 with a [flag](../../../administration/feature_flags/_index.md) named `sast_reports_in_inline_diff`. Disabled by default. - Enabled by default in GitLab 16.8. - [Feature flag removed](https://gitlab.com/gitlab-org/gitlab/-/issues/410191) in GitLab 16.9. {{< /history >}} SAST results display in the merge request **Changes** view. Lines containing SAST issues are marked by a symbol beside the gutter. Select the symbol to see the list of issues, then select an issue to see its details. ![SAST Inline Indicator](img/sast_inline_indicator_v16_7.png) ## Optimization To optimize SAST according to your requirements you can: - Disable a rule. - Exclude files or paths from being scanned. ### Disable a rule To disable a rule, for example because it generates too many false positives: 1. On the left sidebar, select **Search or go to** and find your project. 1. Create a `.gitlab/sast-ruleset.toml` file at the root of your project if one does not already exist. 1. In the vulnerability's details, locate the ID of the rule that triggered the finding. 1. Use the rule ID to disable the rule. 
For example, to disable `gosec.G107-1`, add the following in `.gitlab/sast-ruleset.toml`: ```toml [semgrep] [[semgrep.ruleset]] disable = true [semgrep.ruleset.identifier] type = "semgrep_id" value = "gosec.G107-1" ``` For more details on customizing rulesets, see [Customize rulesets](customize_rulesets.md). ### Exclude files or paths from being scanned To exclude files or paths from being scanned, for example test or temporary code, set the `SAST_EXCLUDED_PATHS` variable. For example, to skip `rule-template-injection.go`, add the following to your `.gitlab-ci.yml`: ```yaml variables: SAST_EXCLUDED_PATHS: "rule-template-injection.go" ``` For more information about configuration options, see [Available CI/CD variables](#available-cicd-variables). ## Roll out After you are confident in the SAST results for a single project, you can extend its implementation to additional projects: - Use [enforced scan execution](../detect/security_configuration.md#create-a-shared-configuration) to apply SAST settings across groups. - Share and reuse a central ruleset by [specifying a remote configuration file](customize_rulesets.md#specify-a-remote-configuration-file). - If you have unique requirements, SAST can be run in [offline environments](#running-sast-in-an-offline-environment) or under [SELinux](#running-sast-in-selinux) constraints. ## Supported languages and frameworks GitLab SAST supports scanning the following languages and frameworks. The available scanning options depend on the GitLab tier: - In Ultimate, [GitLab Advanced SAST](gitlab_advanced_sast.md) provides more accurate results. You should use it for the languages it supports. - In all tiers, you can use GitLab-provided analyzers, based on open-source scanners, to scan your code. For more information about our plans for language support in SAST, see the [category direction page](https://about.gitlab.com/direction/application_security_testing/static-analysis/sast/#language-support). 
| Language | Supported by [GitLab Advanced SAST](gitlab_advanced_sast.md) (Ultimate only) | Supported by another [analyzer](analyzers.md) (all tiers) | |------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------| | Apex (Salesforce) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes: [PMD-Apex](https://gitlab.com/gitlab-org/security-products/analyzers/pmd-apex) | | C | {{< icon name="dotted-circle" >}} No, tracked in [epic 14271](https://gitlab.com/groups/gitlab-org/-/epics/14271) | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | C++ | {{< icon name="dotted-circle" >}} No, tracked in [epic 14271](https://gitlab.com/groups/gitlab-org/-/epics/14271) | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | C# | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | Elixir (Phoenix) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes: [Sobelow](https://gitlab.com/gitlab-org/security-products/analyzers/sobelow) | | Go | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | Groovy | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes: [SpotBugs](https://gitlab.com/gitlab-org/security-products/analyzers/spotbugs) with the find-sec-bugs plugin<sup><b><a 
href="#spotbugs-footnote">1</a></b></sup> | | Java | {{< icon name="check-circle" >}} Yes, including Java Server Pages (JSP) | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) (including Android) | | JavaScript, including Node.js and React | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | Kotlin | {{< icon name="dotted-circle" >}} No, tracked in [epic 15173](https://gitlab.com/groups/gitlab-org/-/epics/15173) | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) (including Android) | | Objective-C (iOS) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | PHP | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | Python | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | Ruby, including Ruby on Rails | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | Scala | {{< icon name="dotted-circle" >}} No, tracked in [epic 15174](https://gitlab.com/groups/gitlab-org/-/epics/15174) | {{< icon 
name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | Swift (iOS) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | TypeScript | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | YAML<sup><b><a href="#yaml-footnote">2</a></b></sup> | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | Java Properties | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | **Footnotes**: 1. <a id="spotbugs-footnote"></a>The SpotBugs-based analyzer supports [Gradle](https://gradle.org/), [Maven](https://maven.apache.org/), and [SBT](https://www.scala-sbt.org/). It can also be used with variants like the [Gradle wrapper](https://docs.gradle.org/current/userguide/gradle_wrapper.html), [Grails](https://grails.org/), and the [Maven wrapper](https://github.com/takari/maven-wrapper). However, SpotBugs has [limitations](https://gitlab.com/gitlab-org/gitlab/-/issues/350801) when used against [Ant](https://ant.apache.org/)-based projects. You should use the GitLab Advanced SAST or Semgrep-based analyzer for Ant-based Java or Scala projects. 1. 
<a id="yaml-footnote"></a>`YAML` support is restricted to the following file patterns: - `application*.yml` - `application*.yaml` - `bootstrap*.yml` - `bootstrap*.yaml` The SAST CI/CD template also includes an analyzer job that can scan Kubernetes manifests and Helm charts; this job is off by default. See [Enabling Kubesec analyzer](#enabling-kubesec-analyzer) or consider [IaC Scanning](../iac_scanning/_index.md), which supports additional platforms, instead. To learn more about SAST analyzers that are no longer supported, see [Analyzers that have reached End of Support](analyzers.md#analyzers-that-have-reached-end-of-support). ## Advanced vulnerability tracking {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Source code is volatile; as developers make changes, source code may move within files or between files. Security analyzers may have already reported vulnerabilities that are being tracked in the [vulnerability report](../vulnerability_report/_index.md). These vulnerabilities are linked to specific problematic code fragments so that they can be found and fixed. If the code fragments are not tracked reliably as they move, vulnerability management is harder because the same vulnerability could be reported again. GitLab SAST uses an advanced vulnerability tracking algorithm to more accurately identify when the same vulnerability has moved within a file due to refactoring or unrelated changes. 
Advanced vulnerability tracking is available in a subset of the [supported languages](#supported-languages-and-frameworks) and [analyzers](analyzers.md):

- C, in the Semgrep-based analyzer only
- C++, in the Semgrep-based analyzer only
- C#, in the GitLab Advanced SAST and Semgrep-based analyzers
- Go, in the GitLab Advanced SAST and Semgrep-based analyzers
- Java, in the GitLab Advanced SAST and Semgrep-based analyzers
- JavaScript, in the GitLab Advanced SAST and Semgrep-based analyzers
- PHP, in the Semgrep-based analyzer only
- Python, in the GitLab Advanced SAST and Semgrep-based analyzers
- Ruby, in the Semgrep-based analyzer only

Support for more languages and analyzers is tracked in [this epic](https://gitlab.com/groups/gitlab-org/-/epics/5144).

For more information, see the confidential project `https://gitlab.com/gitlab-org/security-products/post-analyzers/tracking-calculator`. The content of this project is available only to GitLab team members.

## Automatic vulnerability resolution

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/368284) in GitLab 15.9 [with a project-level flag](../../../administration/feature_flags/_index.md) named `sec_mark_dropped_findings_as_resolved`.
- Enabled by default in GitLab 15.10. On GitLab.com, [contact Support](https://about.gitlab.com/support/) if you need to disable the flag for your project.
- [Feature flag removed](https://gitlab.com/gitlab-org/gitlab/-/issues/375128) in GitLab 16.2.

{{< /history >}}

To help you focus on the vulnerabilities that are still relevant, GitLab SAST automatically [resolves](../vulnerabilities/_index.md#vulnerability-status-values) vulnerabilities when:

- You [disable a predefined rule](customize_rulesets.md#disable-predefined-rules).
- We remove a rule from the default ruleset.

Automatic resolution is available only for findings from the [Semgrep-based analyzer](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep).
The Vulnerability Management system leaves a comment on automatically-resolved vulnerabilities so you still have a historical record of the vulnerability. If you re-enable the rule later, the findings are reopened for triage. ## Supported distributions The default scanner images are built on a base Alpine image for size and maintainability. ### FIPS-enabled images GitLab offers an image version, based on the [Red Hat UBI](https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image) base image, that uses a FIPS 140-validated cryptographic module. To use the FIPS-enabled image, you can either: - Set the `SAST_IMAGE_SUFFIX` to `-fips`. - Add the `-fips` extension to the default image name. For example: ```yaml variables: SAST_IMAGE_SUFFIX: '-fips' include: - template: Jobs/SAST.gitlab-ci.yml ``` A FIPS-compliant image is only available for the GitLab Advanced SAST and Semgrep-based analyzer. {{< alert type="warning" >}} To use SAST in a FIPS-compliant manner, you must [exclude other analyzers from running](analyzers.md#customize-analyzers). If you use a FIPS-enabled image to run Advanced SAST or Semgrep in [a runner with non-root user](https://docs.gitlab.com/runner/install/kubernetes_helm_chart_configuration.html#run-with-non-root-user), you must update the `run_as_user` attribute under `runners.kubernetes.pod_security_context` to use the ID of `gitlab` user [created by the image](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep/-/blob/a5d822401014f400b24450c92df93467d5bbc6fd/Dockerfile.fips#L58), which is `1000`. {{< /alert >}} ## Download a SAST report Each SAST analyzer outputs a JSON report as a job artifact. The file contains details of all detected vulnerabilities. You can [download](../../../ci/jobs/job_artifacts.md#download-job-artifacts) the file for processing outside GitLab. 
For more information, see: - [SAST report file schema](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/sast-report-format.json) - [Example SAST report file](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep/-/blob/main/qa/expect/js/default/gl-sast-report.json) ## Configuration SAST scanning runs in your CI/CD pipeline. When you add the GitLab-managed CI/CD template to your pipeline, the right [SAST analyzers](analyzers.md) automatically scan your code and save results as [SAST report artifacts](../../../ci/yaml/artifacts_reports.md#artifactsreportssast). To configure SAST for a project you can: - Use [Auto SAST](../../../topics/autodevops/stages.md#auto-sast), provided by [Auto DevOps](../../../topics/autodevops/_index.md). - [Configure SAST in your CI/CD YAML](#configure-sast-in-your-cicd-yaml). - [Configure SAST by using the UI](#configure-sast-by-using-the-ui). You can enable SAST across many projects by [enforcing scan execution](../detect/security_configuration.md#create-a-shared-configuration). To configure Advanced SAST (available in GitLab Ultimate only), follow these [instructions](gitlab_advanced_sast.md#configuration). You can [change configuration variables](_index.md#available-cicd-variables) or [customize detection rules](customize_rulesets.md) if needed, but GitLab SAST is designed to be used in its default configuration. ### Configure SAST in your CI/CD YAML To enable SAST, you [include](../../../ci/yaml/_index.md#includetemplate) the [`SAST.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml). The template is provided as a part of your GitLab installation. Copy and paste the following to the bottom of the `.gitlab-ci.yml` file. If an `include` line already exists, add only the `template` line below it. 
```yaml
include:
  - template: Jobs/SAST.gitlab-ci.yml
```

The included template creates SAST jobs in your CI/CD pipeline and scans your project's source code for possible vulnerabilities. The results are saved as a [SAST report artifact](../../../ci/yaml/artifacts_reports.md#artifactsreportssast) that you can later download and analyze. When downloading, you always receive the most recent SAST artifact available.

### Stable vs latest SAST templates

SAST provides two templates for incorporating security testing into your CI/CD pipelines:

- [`SAST.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml) (recommended)

  The stable template offers a reliable and consistent SAST experience. Most users and projects that require stability and predictable behavior in their CI/CD pipelines should use the stable template.

- [`SAST.latest.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/SAST.latest.gitlab-ci.yml)

  This template is for those who want to access and test cutting-edge features. It is not considered stable and may include breaking changes that are planned for the next major release. This template allows you to try new features and updates before they become part of the stable release, making it ideal for those comfortable with potential instability and eager to provide feedback on new functionality.

### Configure SAST by using the UI

You can enable and configure SAST by using the UI, either with the default settings or with customizations. The method you can use depends on your GitLab license tier.

#### Configure SAST with customizations

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Removed](https://gitlab.com/gitlab-org/gitlab/-/issues/410013) individual SAST analyzers configuration options from the UI in GitLab 16.2.
{{< /history >}}

{{< alert type="note" >}}

The configuration tool works best with no existing `.gitlab-ci.yml` file, or with a minimal configuration file. If you have a complex GitLab configuration file, it may not be parsed successfully and an error may occur.

{{< /alert >}}

To enable and configure SAST with customizations:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Secure > Security configuration**.
1. If the latest pipeline for the default branch of the project has completed and produced valid `SAST` artifacts, select **Configure SAST**, otherwise select **Enable SAST** in the Static Application Security Testing (SAST) row.
1. Enter the custom SAST values.

   Custom values are stored in the `.gitlab-ci.yml` file. For CI/CD variables not in the SAST Configuration page, their values are inherited from the GitLab SAST template.

1. Select **Create Merge Request**.
1. Review and merge the merge request.

Pipelines now include a SAST job.

#### Configure SAST with default settings only

{{< alert type="note" >}}

The configuration tool works best with no existing `.gitlab-ci.yml` file, or with a minimal configuration file. If you have a complex GitLab configuration file, it may not be parsed successfully and an error may occur.

{{< /alert >}}

To enable and configure SAST with default settings:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Secure > Security configuration**.
1. In the SAST section, select **Configure with a merge request**.
1. Review and merge the merge request to enable SAST.

Pipelines now include a SAST job.

### Overriding SAST jobs

To override a job definition (for example, to change properties like `variables`, `dependencies`, or [`rules`](../../../ci/yaml/_index.md#rules)), declare a job with the same name as the SAST job to override. Place this new job after the template inclusion and specify any additional keys under it.
For example, this enables `FAIL_NEVER` for the `spotbugs` analyzer:

```yaml
include:
  - template: Jobs/SAST.gitlab-ci.yml

spotbugs-sast:
  variables:
    FAIL_NEVER: 1
```

### Pinning to minor image version

The GitLab-managed CI/CD template specifies a major version and automatically pulls the latest analyzer release within that major version.

In some cases, you may need to use a specific version. For example, you might need to avoid a regression in a later release.

To override the automatic update behavior, set the `SAST_ANALYZER_IMAGE_TAG` CI/CD variable in your CI/CD configuration file after you include the [`SAST.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml).

Only set this variable within a specific job. If you set it [at the top level](../../../ci/variables/_index.md#define-a-cicd-variable-in-the-gitlab-ciyml-file), the version you set is used for other SAST analyzers.

You can set the tag to:

- A major version, like `3`. Your pipelines use any minor or patch updates that are released within this major version.
- A minor version, like `3.7`. Your pipelines use any patch updates that are released within this minor version.
- A patch version, like `3.7.0`. Your pipelines don't receive any updates.

This example uses a specific minor version of the `semgrep` analyzer and a specific patch version of the `brakeman` analyzer:

```yaml
include:
  - template: Jobs/SAST.gitlab-ci.yml

semgrep-sast:
  variables:
    SAST_ANALYZER_IMAGE_TAG: "3.7"

brakeman-sast:
  variables:
    SAST_ANALYZER_IMAGE_TAG: "3.1.1"
```

### Using CI/CD variables to pass credentials for private repositories

Some analyzers require downloading the project's dependencies to perform the analysis. In turn, such dependencies may live in private Git repositories and thus require credentials like username and password to download them.
Depending on the analyzer, such credentials can be provided to it via [custom CI/CD variables](#custom-cicd-variables).

#### Using a CI/CD variable to pass username and password to a private Maven repository

If your private Maven repository requires login credentials, you can use the `MAVEN_CLI_OPTS` CI/CD variable.

For more information, see [how to use private Maven repositories](../dependency_scanning/_index.md#authenticate-with-a-private-maven-repository).

### Enabling Kubesec analyzer

You need to set `SCAN_KUBERNETES_MANIFESTS` to `"true"` to enable the Kubesec analyzer. In `.gitlab-ci.yml`, define:

```yaml
include:
  - template: Jobs/SAST.gitlab-ci.yml

variables:
  SCAN_KUBERNETES_MANIFESTS: "true"
```

### Scan other languages with the Semgrep-based analyzer

You can customize the [Semgrep-based analyzer](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) to scan languages that aren't [supported](#supported-languages-and-frameworks) with a GitLab-managed ruleset. However, because GitLab does not provide rulesets for these other languages, you must provide a [custom ruleset](customize_rulesets.md#build-a-custom-configuration) to cover them. You must also modify the `rules` of the `semgrep-sast` CI/CD job so that the job runs when the relevant files are modified.

#### Scan a Rust application

For example, to scan a Rust application, you must:

1. Provide a custom ruleset for Rust. Create a file named `sast-ruleset.toml` in a `.gitlab/` directory at the root of your repository. The following example uses the Semgrep registry's default ruleset for Rust:

   ```toml
   [semgrep]
     description = "Rust ruleset for Semgrep"
     targetdir = "/sgrules"
     timeout = 60

     [[semgrep.passthrough]]
       type = "url"
       value = "https://semgrep.dev/c/p/rust"
       target = "rust.yml"
   ```

   Read more on [customizing rulesets](customize_rulesets.md#build-a-custom-configuration).

1. Override the `semgrep-sast` job to add a rule that detects Rust (`.rs`) files.
   Define the following in the `.gitlab-ci.yml` file:

   ```yaml
   include:
     - template: Jobs/SAST.gitlab-ci.yml

   semgrep-sast:
     rules:
       - if: $CI_COMMIT_BRANCH
         exists:
           - '**/*.rs'
           # include any other file extensions you need to scan from the semgrep-sast template: Jobs/SAST.gitlab-ci.yml
   ```

### JDK21 support for SpotBugs analyzer

Version `6` of the SpotBugs analyzer adds support for JDK21 and removes JDK11. The default version remains at `5` as discussed in [issue 517169](https://gitlab.com/gitlab-org/gitlab/-/issues/517169).

To use version `6`, manually pin the version by following the instructions in [Pinning to minor image version](#pinning-to-minor-image-version).

```yaml
spotbugs-sast:
  variables:
    SAST_ANALYZER_IMAGE_TAG: "6"
```

### Using pre-compilation with SpotBugs analyzer

The SpotBugs-based analyzer scans compiled bytecode for `Groovy` projects. By default, it automatically attempts to fetch dependencies and compile your code so it can be scanned.

Automatic compilation can fail if:

- your project requires custom build configurations
- you use language versions that aren't built into the analyzer

To resolve these issues, you should skip the analyzer's compilation step and directly provide artifacts from an earlier stage in your pipeline instead. This strategy is called _pre-compilation_.

#### Sharing pre-compiled artifacts

1. Use a compilation job (typically named `build`) to compile your project and store the compiled output as a `job artifact` using [`artifacts: paths`](../../../ci/yaml/_index.md#artifactspaths).
   - For `Maven` projects, the output folder is usually the `target` directory
   - For `Gradle` projects, it's typically the `build` directory
   - If your project uses a custom output location, set the artifacts path accordingly
1. Disable automatic compilation by setting the `COMPILE: "false"` CI/CD variable in the `spotbugs-sast` job.
1. Ensure the `spotbugs-sast` job depends on the compilation job by setting the `dependencies` keyword.
This allows the `spotbugs-sast` job to download and use the artifacts created in the compilation job.

The following example pre-compiles a Gradle project and provides the compiled bytecode to the analyzer:

```yaml
stages:
  - build
  - test

include:
  - template: Jobs/SAST.gitlab-ci.yml

build:
  image: gradle:7.6-jdk8
  stage: build
  script:
    - gradle build
  artifacts:
    paths:
      - build/

spotbugs-sast:
  dependencies:
    - build
  variables:
    COMPILE: "false"
    SECURE_LOG_LEVEL: debug
```

#### Specifying dependencies (Maven only)

If your project requires external dependencies to be recognized by the analyzer and you're using Maven, you can specify the location of the local repository by using the `MAVEN_REPO_PATH` variable.

Specifying dependencies is only supported for Maven-based projects. Other build tools (for example, Gradle) do not have an equivalent mechanism for specifying dependencies. In that case, ensure that your compiled artifacts include all necessary dependencies.

The following example pre-compiles a Maven project and provides the compiled bytecode along with the dependencies to the analyzer:

```yaml
stages:
  - build
  - test

include:
  - template: Jobs/SAST.gitlab-ci.yml

build:
  image: maven:3.6-jdk-8-slim
  stage: build
  script:
    - mvn package -Dmaven.repo.local=./.m2/repository
  artifacts:
    paths:
      - .m2/
      - target/

spotbugs-sast:
  dependencies:
    - build
  variables:
    MAVEN_REPO_PATH: $CI_PROJECT_DIR/.m2/repository
    COMPILE: "false"
    SECURE_LOG_LEVEL: debug
```

### Running jobs in merge request pipelines

See [Use security scanning tools with merge request pipelines](../detect/security_configuration.md#use-security-scanning-tools-with-merge-request-pipelines).

### Available CI/CD variables

SAST can be configured using the [`variables`](../../../ci/yaml/_index.md#variables) parameter in `.gitlab-ci.yml`.

{{< alert type="warning" >}}

All customization of GitLab security scanning tools should be tested in a merge request before merging these changes to the default branch.
Failure to do so can give unexpected results, including a large number of false positives.

{{< /alert >}}

The following example includes the SAST template to override the `SEARCH_MAX_DEPTH` variable to `10` in all jobs. The template is [evaluated before](../../../ci/yaml/_index.md#include) the pipeline configuration, so the last mention of the variable takes precedence.

```yaml
include:
  - template: Jobs/SAST.gitlab-ci.yml

variables:
  SEARCH_MAX_DEPTH: 10
```

#### Custom Certificate Authority

To trust a custom Certificate Authority, set the `ADDITIONAL_CA_CERT_BUNDLE` variable to the bundle of CA certs that you want to trust in the SAST environment. The `ADDITIONAL_CA_CERT_BUNDLE` value should contain the [text representation of the X.509 PEM public-key certificate](https://www.rfc-editor.org/rfc/rfc7468#section-5.1). For example, to configure this value in the `.gitlab-ci.yml` file, use the following:

```yaml
variables:
  ADDITIONAL_CA_CERT_BUNDLE: |
    -----BEGIN CERTIFICATE-----
    MIIGqTCCBJGgAwIBAgIQI7AVxxVwg2kch4d56XNdDjANBgkqhkiG9w0BAQsFADCB
    ...
    jWgmPqF3vUbZE0EyScetPJquRFRKIesyJuBFMAs=
    -----END CERTIFICATE-----
```

The `ADDITIONAL_CA_CERT_BUNDLE` value can also be configured as a [custom variable in the UI](../../../ci/variables/_index.md#for-a-project), either as a `file`, which requires the path to the certificate, or as a variable, which requires the text representation of the certificate.

#### Docker images

The following are Docker image-related CI/CD variables.

| CI/CD variable | Description |
|---------------------------|-------------|
| `SECURE_ANALYZERS_PREFIX` | Override the name of the Docker registry providing the default images (proxy). Read more about [customizing analyzers](analyzers.md). |
| `SAST_EXCLUDED_ANALYZERS` | Names of default images that should never run. Read more about [customizing analyzers](analyzers.md). |
| `SAST_ANALYZER_IMAGE_TAG` | Override the default version of analyzer image.
Read more about [pinning the analyzer image version](#pinning-to-minor-image-version). |
| `SAST_IMAGE_SUFFIX` | Suffix added to the image name. If set to `-fips`, `FIPS-enabled` images are used for scan. See [FIPS-enabled images](#fips-enabled-images) for more details. |

#### Vulnerability filters

<table class="sast-table">
  <thead>
    <tr>
      <th>CI/CD variable</th>
      <th>Description</th>
      <th>Default Value</th>
      <th>Analyzer</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td rowspan="3">
        <code>SAST_EXCLUDED_PATHS</code>
      </td>
      <td rowspan="3">
        Comma-separated list of paths for excluding vulnerabilities. The exact handling of this variable depends on which analyzer is used.<sup><b><a href="#sast-excluded-paths-description">1</a></b></sup>
      </td>
      <td rowspan="3">
        <code>
          <a href="https://gitlab.com/gitlab-org/gitlab/blob/v17.3.0-ee/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml#L13">spec, test, tests, tmp</a>
        </code>
      </td>
      <td>
        <a href="https://gitlab.com/gitlab-org/security-products/analyzers/semgrep">Semgrep</a><sup><b><a href="#sast-excluded-paths-semgrep">2</a></b>,</sup><sup><b><a href="#sast-excluded-paths-all-other-sast-analyzers">3</a></b></sup>
      </td>
    </tr>
    <tr>
      <td>
        <a href="gitlab_advanced_sast.md">GitLab Advanced SAST</a><sup><b><a href="#sast-excluded-paths-semgrep">2</a></b>,</sup><sup><b><a href="#sast-excluded-paths-all-other-sast-analyzers">3</a></b></sup>
      </td>
    </tr>
    <tr>
      <td>
        All other SAST analyzers<sup><b><a href="#sast-excluded-paths-all-other-sast-analyzers">3</a></b></sup>
      </td>
    </tr>
    <tr>
      <td>
        <!-- markdownlint-disable MD044 -->
        <code>SAST_SPOTBUGS_EXCLUDED_BUILD_PATHS</code>
        <!-- markdownlint-enable MD044 -->
      </td>
      <td>
        Comma-separated list of paths for excluding directories from being built and scanned.
      </td>
      <td>None</td>
      <td>
        <a href="https://gitlab.com/gitlab-org/security-products/analyzers/spotbugs">SpotBugs</a><sup><b><a href="#sast-spotbugs-excluded-build-paths-description">4</a></b></sup>
      </td>
    </tr>
    <tr>
      <td rowspan="3">
        <code>SEARCH_MAX_DEPTH</code>
      </td>
      <td rowspan="3">
        The number of directory levels the analyzer will descend into when searching for matching files to scan.<sup><b><a href="#search-max-depth-description">5</a></b></sup>
      </td>
      <td rowspan="2">
        <code>
          <a href="https://gitlab.com/gitlab-org/gitlab/-/blob/v17.3.0-ee/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml#L54">20</a>
        </code>
      </td>
      <td>
        <a href="https://gitlab.com/gitlab-org/security-products/analyzers/semgrep">Semgrep</a>
      </td>
    </tr>
    <tr>
      <td>
        <a href="gitlab_advanced_sast.md">GitLab Advanced SAST</a>
      </td>
    </tr>
    <tr>
      <td>
        <code>
          <a href="https://gitlab.com/gitlab-org/gitlab/blob/v17.3.0-ee/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml#L26">4</a>
        </code>
      </td>
      <td>
        All other SAST analyzers
      </td>
    </tr>
  </tbody>
</table>

**Footnotes**:

1. <a id="sast-excluded-paths-description"></a>You might need to exclude temporary directories used by your build tool as these can generate false positives. To exclude paths, copy and paste the default excluded paths, then **add** your own paths to be excluded. If you don't specify the default excluded paths, the defaults are overridden and only the paths you specify are excluded from SAST scans.
1. <a id="sast-excluded-paths-semgrep"></a>For these analyzers, `SAST_EXCLUDED_PATHS` is implemented as a **pre-filter**, which is applied before the scan is executed. The analyzer skips any files or directories whose path matches one of the comma-separated patterns.
   For example, if `SAST_EXCLUDED_PATHS` is set to `*.py,tests`:

   - `*.py` ignores the following:
     - `foo.py`
     - `src/foo.py`
     - `foo.py/bar.sh`
   - `tests` ignores:
     - `tests/foo.py`
     - `a/b/tests/c/foo.py`

   Each pattern is a glob-style pattern that uses the same syntax as [gitignore](https://git-scm.com/docs/gitignore#_pattern_format).

1. <a id="sast-excluded-paths-all-other-sast-analyzers"></a>For these analyzers, `SAST_EXCLUDED_PATHS` is implemented as a **post-filter**, which is applied after the scan is executed. Patterns can be globs (see [`doublestar.Match`](https://pkg.go.dev/github.com/bmatcuk/doublestar/v4@v4.0.2#Match) for supported patterns), or file or folder paths (for example, `doc,spec`). Parent directories also match patterns.

   The post-filter implementation of `SAST_EXCLUDED_PATHS` is available for all SAST analyzers. Some SAST analyzers, such as those with [superscript `2`](#sast-excluded-paths-semgrep), implement `SAST_EXCLUDED_PATHS` as both a pre-filter and post-filter. A pre-filter is more efficient because it reduces the number of files to be scanned.

   For analyzers that support `SAST_EXCLUDED_PATHS` as both a pre-filter and post-filter, the pre-filter is applied first, then the post-filter is applied to any vulnerabilities that remain.

1. <a id="sast-spotbugs-excluded-build-paths-description"></a>For this variable, path patterns can be globs (see [`doublestar.Match`](https://pkg.go.dev/github.com/bmatcuk/doublestar/v4@v4.0.2#Match) for supported patterns). Directories are excluded from the build process if the path pattern matches a supported build file:

   - `build.sbt`
   - `grailsw`
   - `gradlew`
   - `build.gradle`
   - `mvnw`
   - `pom.xml`
   - `build.xml`

   For example, to exclude building and scanning a `maven` project containing a build file with the path `project/subdir/pom.xml`, pass a glob pattern that explicitly matches the build file, such as `project/*/*.xml` or `**/*.xml`, or an exact match such as `project/subdir/pom.xml`.
   Passing a parent directory for the pattern, such as `project` or `project/subdir`, does not exclude the directory from being built, because in this case, the build file is not explicitly matched by the pattern.

1. <a id="search-max-depth-description"></a>The [SAST CI/CD template](https://gitlab.com/gitlab-org/gitlab/blob/v17.4.1-ee/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml) searches the repository to detect the programming languages used, and selects the matching analyzers. Then, each analyzer searches the codebase to find the specific files or directories it should scan. Set the value of `SEARCH_MAX_DEPTH` to specify how many directory levels the analyzer's search phase should span.

#### Analyzer settings

Some analyzers can be customized with CI/CD variables.

| CI/CD variable | Analyzer | Default | Description |
|-------------------------------------|----------------------|-------------------------------------------------|-------------|
| `GITLAB_ADVANCED_SAST_ENABLED` | GitLab Advanced SAST | `false` | Set to `true` to enable [GitLab Advanced SAST](gitlab_advanced_sast.md) scanning (available in GitLab Ultimate only). |
| `SCAN_KUBERNETES_MANIFESTS` | Kubesec | `"false"` | Set to `"true"` to scan Kubernetes manifests. |
| `KUBESEC_HELM_CHARTS_PATH` | Kubesec | | Optional path to Helm charts that `helm` uses to generate a Kubernetes manifest that `kubesec` scans. If dependencies are defined, `helm dependency build` should be run in a `before_script` to fetch the necessary dependencies. |
| `KUBESEC_HELM_OPTIONS` | Kubesec | | Additional arguments for the `helm` executable. |
| `COMPILE` | SpotBugs | `true` | Set to `false` to disable project compilation and dependency fetching. |
| `ANT_HOME` | SpotBugs | | The `ANT_HOME` variable. |
| `ANT_PATH` | SpotBugs | `ant` | Path to the `ant` executable. |
| `GRADLE_PATH` | SpotBugs | `gradle` | Path to the `gradle` executable.
|
| `JAVA_OPTS` | SpotBugs | `-XX:MaxRAMPercentage=80` | Additional arguments for the `java` executable. |
| `JAVA_PATH` | SpotBugs | `java` | Path to the `java` executable. |
| `SAST_JAVA_VERSION` | SpotBugs | `17` | Java version used. Supported versions are `17` and `11`. |
| `MAVEN_CLI_OPTS` | SpotBugs | `--batch-mode -DskipTests=true` | Additional arguments for the `mvn` or `mvnw` executable. |
| `MAVEN_PATH` | SpotBugs | `mvn` | Path to the `mvn` executable. |
| `MAVEN_REPO_PATH` | SpotBugs | `$HOME/.m2/repository` | Path to the Maven local repository (shortcut for the `maven.repo.local` property). |
| `SBT_PATH` | SpotBugs | `sbt` | Path to the `sbt` executable. |
| `FAIL_NEVER` | SpotBugs | `false` | Set to `true` or `1` to ignore compilation failure. |
| `SAST_SEMGREP_METRICS` | Semgrep | `true` | Set to `false` to disable sending anonymized scan metrics to [r2c](https://semgrep.dev). |
| `SAST_SCANNER_ALLOWED_CLI_OPTS` | Semgrep | `--max-target-bytes=1000000 --timeout=5` | CLI options (arguments with value, or flags) that are passed to the underlying security scanner when running scan operation. Only a limited set of [options](#security-scanner-configuration) are accepted. Separate a CLI option and its value using either a blank space or equals (`=`) character. For example: `name1 value1` or `name1=value1`. Multiple options must be separated by blank spaces. For example: `name1 value1 name2 value2`. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/368565) in GitLab 15.3. |
| `SAST_RULESET_GIT_REFERENCE` | All | | Defines a path to a custom ruleset configuration. If a project has a `.gitlab/sast-ruleset.toml` file committed, that local configuration takes precedence and the file from `SAST_RULESET_GIT_REFERENCE` isn't used. This variable is available for the Ultimate tier only. |
| `SECURE_ENABLE_LOCAL_CONFIGURATION` | All | `false` | Enables the option to use custom ruleset configuration.
If `SECURE_ENABLE_LOCAL_CONFIGURATION` is set to `false`, the project's custom ruleset configuration file at `.gitlab/sast-ruleset.toml` is ignored and the file from `SAST_RULESET_GIT_REFERENCE` or the default configuration takes precedence. |

#### Security scanner configuration

SAST analyzers internally use OSS security scanners to perform the analysis. We set the recommended configuration for each security scanner so that you don't need to worry about tuning them. However, there can be some rare cases where our default scanner configuration does not suit your requirements.

To allow some customization of scanner behavior, you can add a limited set of flags to the underlying scanner. Specify the flags in the `SAST_SCANNER_ALLOWED_CLI_OPTS` CI/CD variable. These flags are added to the scanner's CLI options.

<table class="sast-table">
  <thead>
    <tr>
      <th>Analyzer</th>
      <th>CLI option</th>
      <th>Description</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td rowspan="2">
        GitLab Advanced SAST
      </td>
      <td>
        <code>--include-propagator-files</code>
      </td>
      <td>
        WARNING: This flag may cause significant performance degradation.
        <br>
        This option enables the scanning of intermediary files that connect source and sink files without containing either sources or sinks themselves. While useful for comprehensive analysis in smaller repositories, enabling this feature for large repositories will substantially impact performance.
      </td>
    </tr>
    <tr>
      <td>
        <code>--multi-core</code>
      </td>
      <td>
        Multi-core scanning is enabled by default, automatically detecting and utilizing available CPU cores based on container information. On self-hosted runners, the maximum number of cores is capped at 4. You can override the automatic core detection by explicitly setting <code>--multi-core</code> to a specific value. Multi-core execution requires proportionally more memory than single-core execution. To disable multi-core scanning, set the environment variable <code>DISABLE_MULTI_CORE</code>.
        Exceeding available cores or memory resources may lead to resource contention and suboptimal performance.
      </td>
    </tr>
    <tr>
      <td rowspan="3">
        <a href="https://gitlab.com/gitlab-org/security-products/analyzers/semgrep">Semgrep</a>
      </td>
      <td>
        <code>--max-memory</code>
      </td>
      <td>
        Sets the maximum system memory in MB to use when running a rule on a single file.
      </td>
    </tr>
    <tr>
      <td>
        <code>--max-target-bytes</code>
      </td>
      <td>
        <p>
          Maximum size for a file to be scanned. Any input program larger than this is ignored. Set to <code>0</code> or a negative value to disable this filter. Bytes can be specified with or without a unit of measurement, for example: <code>12.5kb</code>, <code>1.5MB</code>, or <code>123</code>. Defaults to <code>1000000</code> bytes.
        </p>
        <p>
          <b>Note:</b> You should keep this flag set to the default value. Avoid raising it to scan minified JavaScript, <code>DLLs</code>, <code>JARs</code>, or other binary files: scanning minified JavaScript is unlikely to work well, and binary files are not scanned at all.
        </p>
      </td>
    </tr>
    <tr>
      <td>
        <code>--timeout</code>
      </td>
      <td>
        Maximum time in seconds to spend running a rule on a single file. Set to <code>0</code> to have no time limit. Timeout value must be an integer, for example: <code>10</code> or <code>15</code>. Defaults to <code>5</code>.
      </td>
    </tr>
    <tr>
      <td>
        <a href="https://gitlab.com/gitlab-org/security-products/analyzers/spotbugs">SpotBugs</a>
      </td>
      <td>
        <code>-effort</code>
      </td>
      <td>
        Sets the analysis effort level. Valid values, in increasing order of precision and ability to detect more vulnerabilities, are <code>min</code>, <code>less</code>, <code>more</code>, and <code>max</code>. The default value is <code>max</code>, which may require more memory and time to complete the scan, depending on the project's size. If you face memory or performance issues, you can reduce the analysis effort level to a lower value. For example: <code>-effort less</code>.
      </td>
    </tr>
  </tbody>
</table>

#### Custom CI/CD variables

In addition to the aforementioned SAST configuration CI/CD variables, all [custom variables](../../../ci/variables/_index.md#define-a-cicd-variable-in-the-ui) are propagated to the underlying SAST analyzer images if [the SAST vendored template](#configuration) is used.

### Exclude code from analysis

You can mark individual lines, or blocks, of code to be excluded from being analyzed for vulnerabilities. You should manage all vulnerabilities through Vulnerability Management, or adjust the scanned file paths using `SAST_EXCLUDED_PATHS`, before using this method of finding-by-finding comment annotation.

When using the Semgrep-based analyzer, the following options are also available:

- Ignore a line of code - add `// nosemgrep:` comment to the end of the line (the prefix is according to the development language).

  Java example:

  ```java
  vuln_func(); // nosemgrep
  ```

  Python example:

  ```python
  vuln_func(); # nosemgrep
  ```

- Ignore a line of code for a specific rule - add `// nosemgrep: RULE_ID` comment at the end of the line (the prefix is according to the development language).
- Ignore a file or directory - create a `.semgrepignore` file in your repository's root directory or your project's working directory and add patterns for files and folders there. The GitLab Semgrep analyzer automatically merges your custom `.semgrepignore` file with [GitLab built-in ignore patterns](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep/-/blob/abcea7419961320f9718a2f24fe438cc1a7f8e08/semgrepignore).

{{< alert type="note" >}}

The Semgrep analyzer does not respect `.gitignore` files. Files listed in `.gitignore` are analyzed unless explicitly excluded by using `.semgrepignore` or `SAST_EXCLUDED_PATHS`.

{{< /alert >}}

For more details see [Semgrep documentation](https://semgrep.dev/docs/ignoring-files-folders-code).
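As a sketch, a minimal `.semgrepignore` might look like the following. The paths are hypothetical placeholders for your project, and the patterns assume the gitignore-style syntax described in the Semgrep documentation linked above:

```plaintext
# Vendored third-party code (placeholder path)
vendor/
# Generated or minified files that produce noisy findings (placeholder patterns)
dist/
*.min.js
```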
## Running SAST in an offline environment

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed

{{< /details >}}

For instances in an environment with limited, restricted, or intermittent access to external resources through the internet, some adjustments are required for the SAST job to run successfully. For more information, see [Offline environments](../offline_deployments/_index.md).

### Requirements for offline SAST

To use SAST in an offline environment, you need:

- GitLab Runner with the [`docker`](https://docs.gitlab.com/runner/executors/docker.html) or [`kubernetes`](https://docs.gitlab.com/runner/install/kubernetes.html) executor. See [prerequisites](#getting-started) for details.
- A Docker container registry with locally available copies of SAST [analyzer](https://gitlab.com/gitlab-org/security-products/analyzers) images.
- Configure certificate checking of packages (optional).

GitLab Runner has a [default `pull_policy` of `always`](https://docs.gitlab.com/runner/executors/docker.html#using-the-always-pull-policy), meaning the runner tries to pull Docker images from the GitLab container registry even if a local copy is available. The GitLab Runner [`pull_policy` can be set to `if-not-present`](https://docs.gitlab.com/runner/executors/docker.html#using-the-if-not-present-pull-policy) in an offline environment if you prefer using only locally available Docker images. However, we recommend keeping the pull policy setting to `always` if not in an offline environment, as this enables the use of updated scanners in your CI/CD pipelines.
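For a Docker executor, the pull policy is set in the runner's `config.toml`. The following is a minimal sketch: the runner `name` is a placeholder, and the rest of your existing runner configuration stays unchanged:

```toml
[[runners]]
  name = "offline-runner"  # placeholder; keep your existing runner name
  executor = "docker"
  [runners.docker]
    # Use locally available images instead of always pulling
    pull_policy = "if-not-present"
```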
### Make GitLab SAST analyzer images available inside your Docker registry

For SAST with all [supported languages and frameworks](#supported-languages-and-frameworks), import the following default SAST analyzer images from `registry.gitlab.com` into your [local Docker container registry](../../packages/container_registry/_index.md):

```plaintext
registry.gitlab.com/security-products/gitlab-advanced-sast:1
registry.gitlab.com/security-products/kubesec:5
registry.gitlab.com/security-products/pmd-apex:5
registry.gitlab.com/security-products/semgrep:5
registry.gitlab.com/security-products/sobelow:5
registry.gitlab.com/security-products/spotbugs:5
```

The process for importing Docker images into a local offline Docker registry depends on **your network security policy**. Consult your IT staff to find an accepted and approved process by which external resources can be imported or temporarily accessed. These scanners are [periodically updated](../detect/vulnerability_scanner_maintenance.md) with new definitions, and you may be able to make occasional updates on your own.

For details on saving and transporting Docker images as a file, see the Docker documentation on [`docker save`](https://docs.docker.com/reference/cli/docker/image/save/), [`docker load`](https://docs.docker.com/reference/cli/docker/image/load/), [`docker export`](https://docs.docker.com/reference/cli/docker/container/export/), and [`docker import`](https://docs.docker.com/reference/cli/docker/image/import/).

#### If support for Custom Certificate Authorities is needed

Support for custom certificate authorities was introduced in the following versions.
| Analyzer | Version |
|------------|---------|
| `kubesec` | [v2.1.0](https://gitlab.com/gitlab-org/security-products/analyzers/kubesec/-/releases/v2.1.0) |
| `pmd-apex` | [v2.1.0](https://gitlab.com/gitlab-org/security-products/analyzers/pmd-apex/-/releases/v2.1.0) |
| `semgrep` | [v0.0.1](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep/-/releases/v0.0.1) |
| `sobelow` | [v2.2.0](https://gitlab.com/gitlab-org/security-products/analyzers/sobelow/-/releases/v2.2.0) |
| `spotbugs` | [v2.7.1](https://gitlab.com/gitlab-org/security-products/analyzers/spotbugs/-/releases/v2.7.1) |

### Set SAST CI/CD variables to use local SAST analyzers

Add the following configuration to your `.gitlab-ci.yml` file. You must replace `SECURE_ANALYZERS_PREFIX` to refer to your local Docker container registry:

```yaml
include:
  - template: Jobs/SAST.gitlab-ci.yml

variables:
  SECURE_ANALYZERS_PREFIX: "localhost:5000/analyzers"
```

The SAST job should now use local copies of the SAST analyzers to scan your code and generate security reports without requiring internet access.

### Configure certificate checking of packages

If a SAST job invokes a package manager, you must configure its certificate verification. In an offline environment, certificate verification with an external source is not possible. Either use a self-signed certificate or disable certificate verification. Refer to the package manager's documentation for instructions.

## Running SAST in SELinux

By default, SAST analyzers are supported in GitLab instances hosted on SELinux. Adding a `before_script` in an [overridden SAST job](#overriding-sast-jobs) may not work, as runners hosted on SELinux have restricted permissions.
--- stage: Application Security Testing group: Static Analysis info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Static Application Security Testing (SAST) description: Scanning, configuration, analyzers, vulnerabilities, reporting, customization, and integration. breadcrumbs: - doc - user - application_security - sast --- <style> table.sast-table tr:nth-child(even) { background-color: transparent; } table.sast-table td { border-left: 1px solid #dbdbdb; border-right: 1px solid #dbdbdb; border-bottom: 1px solid #dbdbdb; } table.sast-table tr td:first-child { border-left: 0; } table.sast-table tr td:last-child { border-right: 0; } table.sast-table ul { font-size: 1em; list-style-type: none; padding-left: 0px; margin-bottom: 0px; } table.no-vertical-table-lines td { border-left: none; border-right: none; border-bottom: 1px solid #f0f0f0; } table.no-vertical-table-lines tr { border-top: none; } </style> {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Static Application Security Testing (SAST) discovers vulnerabilities in your source code before they reach production. Integrated directly into your CI/CD pipeline, SAST identifies security issues during development when they're easiest and most cost-effective to fix. Security vulnerabilities found late in development create costly delays and potential breaches. SAST scans happen automatically with each commit, giving you immediate feedback without disrupting your workflow. ## Features The following table lists the GitLab tiers in which each feature is available. 
| Feature | In Free & Premium | In Ultimate | |:-----------------------------------------------------------------------------------------|:-------------------------------------|:------------| | Basic scanning with [open-source analyzers](#supported-languages-and-frameworks) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | Downloadable [SAST JSON report](#download-a-sast-report) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | Cross-file, cross-function scanning with [GitLab Advanced SAST](gitlab_advanced_sast.md) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | New findings in [merge request widget](#merge-request-widget) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | New findings in [merge request changes view](#merge-request-changes-view) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [Vulnerability Management](../vulnerabilities/_index.md) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [UI-based scanner configuration](#configure-sast-by-using-the-ui) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [Ruleset customization](customize_rulesets.md) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [Advanced Vulnerability Tracking](#advanced-vulnerability-tracking) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | ## Getting started If you are new to SAST, the following steps show how to enable SAST for your project. Prerequisites: - Linux-based GitLab Runner with the [`docker`](https://docs.gitlab.com/runner/executors/docker.html) or [`kubernetes`](https://docs.gitlab.com/runner/install/kubernetes.html) executor. If you're using hosted runners for GitLab.com, this is enabled by default. - Windows Runners are not supported. 
- CPU architectures other than amd64 are not supported. - GitLab CI/CD configuration (`.gitlab-ci.yml`) must include the `test` stage, which is included by default. If you redefine the stages in the `.gitlab-ci.yml` file, the `test` stage is required. To enable SAST: 1. On the left sidebar, select **Search or go to** and find your project. 1. If your project does not already have one, create a `.gitlab-ci.yml` file in the root directory. 1. At the top of the `.gitlab-ci.yml` file, add one of the following lines: Using a template: ```yaml include: - template: Jobs/SAST.gitlab-ci.yml ``` Or using a CI component: ```yaml include: - component: gitlab.com/components/sast/sast@main ``` At this point, SAST is enabled in your pipeline. If supported source code is present, the appropriate analyzers and default rules automatically scan for vulnerabilities when a pipeline runs. The corresponding jobs will appear under the `test` stage in your pipeline. {{< alert type="note" >}} You can see a working example in [SAST example project](https://gitlab.com/gitlab-org/security-products/demos/analyzer-configurations/semgrep/sast-getting-started). {{< /alert >}} After completing these steps, you can: - Learn more about how to [understand the results](#understanding-the-results). - Review [optimization tips](#optimization). - Plan a [rollout to more projects](#roll-out). For details on other configuration methods, see [Configuration](#configuration). ## Understanding the results You can review vulnerabilities in a pipeline: 1. On the left sidebar, select **Search or go to** and find your project. 1. On the left sidebar, select **Build > Pipelines**. 1. Select the pipeline. 1. Select the **Security** tab. 1. Either download results, or select a vulnerability to view its details (Ultimate only), including: - Description: Explains the cause of the vulnerability, its potential impact, and recommended remediation steps. 
- Status: Indicates whether the vulnerability has been triaged or resolved. - Severity: Categorized into six levels based on impact. [Learn more about severity levels](../vulnerabilities/severities.md). - Location: Shows the filename and line number where the issue was found. Selecting the file path opens the corresponding line in the code view. - Scanner: Identifies which analyzer detected the vulnerability. - Identifiers: A list of references used to classify the vulnerability, such as CWE identifiers and the IDs of the rules that detected it. SAST vulnerabilities are named according to the primary Common Weakness Enumeration (CWE) identifier for the discovered vulnerability. Read the description of each vulnerability finding to learn more about the specific issue that the scanner has detected. For more information on SAST coverage, see [SAST rules](rules.md). In Ultimate, you can also download the security scan results: - In the pipeline's **Security** tab, select **Download results**. For more details, see [Pipeline security report](../detect/security_scanning_results.md). {{< alert type="note" >}} Findings are generated on feature branches. When they are merged into the default branch, they become vulnerabilities. This distinction is important when evaluating your security posture. {{< /alert >}} Additional ways to see SAST results: - [Merge request widget](#merge-request-widget): Shows newly introduced or resolved findings. - [Merge request changes view](#merge-request-changes-view): Shows inline annotations for changed lines. - [Vulnerability report](../vulnerability_report/_index.md): Shows confirmed vulnerabilities on the default branch. A pipeline consists of multiple jobs, including SAST and DAST scanning. If any job fails to finish for any reason, the security dashboard does not show SAST scanner output. For example, if the SAST job finishes but the DAST job fails, the security dashboard does not show SAST results. 
On failure, the analyzer exits with a non-zero exit code.

### Merge request widget

{{< details >}}

- Tier: Ultimate

{{< /details >}}

SAST results display in the merge request widget area if a report from the target branch is available for comparison.

The merge request widget shows:

- New SAST findings that are introduced by the MR.
- Existing findings that are resolved by the MR.

The results are compared using [Advanced Vulnerability Tracking](#advanced-vulnerability-tracking) whenever it is available.

![Security Merge request widget](img/sast_mr_widget_v16_7.png)

### Merge request changes view

{{< details >}}

- Tier: Ultimate

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/10959) in GitLab 16.6 with a [flag](../../../administration/feature_flags/_index.md) named `sast_reports_in_inline_diff`. Disabled by default.
- Enabled by default in GitLab 16.8.
- [Feature flag removed](https://gitlab.com/gitlab-org/gitlab/-/issues/410191) in GitLab 16.9.

{{< /history >}}

SAST results display in the merge request **Changes** view. Lines containing SAST issues are marked by a symbol beside the gutter. Select the symbol to see the list of issues, then select an issue to see its details.

![SAST Inline Indicator](img/sast_inline_indicator_v16_7.png)

## Optimization

To optimize SAST according to your requirements, you can:

- Disable a rule.
- Exclude files or paths from being scanned.

### Disable a rule

To disable a rule, for example because it generates too many false positives:

1. On the left sidebar, select **Search or go to** and find your project.
1. Create a `.gitlab/sast-ruleset.toml` file at the root of your project if one does not already exist.
1. In the vulnerability's details, locate the ID of the rule that triggered the finding.
1. Use the rule ID to disable the rule.
For example, to disable `gosec.G107-1`, add the following in `.gitlab/sast-ruleset.toml`: ```toml [semgrep] [[semgrep.ruleset]] disable = true [semgrep.ruleset.identifier] type = "semgrep_id" value = "gosec.G107-1" ``` For more details on customizing rulesets, see [Customize rulesets](customize_rulesets.md). ### Exclude files or paths from being scanned To exclude files or paths from being scanned, for example test or temporary code, set the `SAST_EXCLUDED_PATHS` variable. For example, to skip `rule-template-injection.go`, add the following to your `.gitlab-ci.yml`: ```yaml variables: SAST_EXCLUDED_PATHS: "rule-template-injection.go" ``` For more information about configuration options, see [Available CI/CD variables](#available-cicd-variables). ## Roll out After you are confident in the SAST results for a single project, you can extend its implementation to additional projects: - Use [enforced scan execution](../detect/security_configuration.md#create-a-shared-configuration) to apply SAST settings across groups. - Share and reuse a central ruleset by [specifying a remote configuration file](customize_rulesets.md#specify-a-remote-configuration-file). - If you have unique requirements, SAST can be run in [offline environments](#running-sast-in-an-offline-environment) or under [SELinux](#running-sast-in-selinux) constraints. ## Supported languages and frameworks GitLab SAST supports scanning the following languages and frameworks. The available scanning options depend on the GitLab tier: - In Ultimate, [GitLab Advanced SAST](gitlab_advanced_sast.md) provides more accurate results. You should use it for the languages it supports. - In all tiers, you can use GitLab-provided analyzers, based on open-source scanners, to scan your code. For more information about our plans for language support in SAST, see the [category direction page](https://about.gitlab.com/direction/application_security_testing/static-analysis/sast/#language-support). 
| Language | Supported by [GitLab Advanced SAST](gitlab_advanced_sast.md) (Ultimate only) | Supported by another [analyzer](analyzers.md) (all tiers) | |------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------| | Apex (Salesforce) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes: [PMD-Apex](https://gitlab.com/gitlab-org/security-products/analyzers/pmd-apex) | | C | {{< icon name="dotted-circle" >}} No, tracked in [epic 14271](https://gitlab.com/groups/gitlab-org/-/epics/14271) | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | C++ | {{< icon name="dotted-circle" >}} No, tracked in [epic 14271](https://gitlab.com/groups/gitlab-org/-/epics/14271) | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | C# | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | Elixir (Phoenix) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes: [Sobelow](https://gitlab.com/gitlab-org/security-products/analyzers/sobelow) | | Go | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | Groovy | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes: [SpotBugs](https://gitlab.com/gitlab-org/security-products/analyzers/spotbugs) with the find-sec-bugs plugin<sup><b><a 
href="#spotbugs-footnote">1</a></b></sup> | | Java | {{< icon name="check-circle" >}} Yes, including Java Server Pages (JSP) | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) (including Android) | | JavaScript, including Node.js and React | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | Kotlin | {{< icon name="dotted-circle" >}} No, tracked in [epic 15173](https://gitlab.com/groups/gitlab-org/-/epics/15173) | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) (including Android) | | Objective-C (iOS) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | PHP | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | Python | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | Ruby, including Ruby on Rails | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | Scala | {{< icon name="dotted-circle" >}} No, tracked in [epic 15174](https://gitlab.com/groups/gitlab-org/-/epics/15174) | {{< icon 
name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | Swift (iOS) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | TypeScript | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | YAML<sup><b><a href="#yaml-footnote">2</a></b></sup> | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | | Java Properties | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes: [Semgrep](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with [GitLab-managed rules](rules.md#semgrep-based-analyzer) | **Footnotes**: 1. <a id="spotbugs-footnote"></a>The SpotBugs-based analyzer supports [Gradle](https://gradle.org/), [Maven](https://maven.apache.org/), and [SBT](https://www.scala-sbt.org/). It can also be used with variants like the [Gradle wrapper](https://docs.gradle.org/current/userguide/gradle_wrapper.html), [Grails](https://grails.org/), and the [Maven wrapper](https://github.com/takari/maven-wrapper). However, SpotBugs has [limitations](https://gitlab.com/gitlab-org/gitlab/-/issues/350801) when used against [Ant](https://ant.apache.org/)-based projects. You should use the GitLab Advanced SAST or Semgrep-based analyzer for Ant-based Java or Scala projects. 1. 
<a id="yaml-footnote"></a>`YAML` support is restricted to the following file patterns: - `application*.yml` - `application*.yaml` - `bootstrap*.yml` - `bootstrap*.yaml` The SAST CI/CD template also includes an analyzer job that can scan Kubernetes manifests and Helm charts; this job is off by default. See [Enabling Kubesec analyzer](#enabling-kubesec-analyzer) or consider [IaC Scanning](../iac_scanning/_index.md), which supports additional platforms, instead. To learn more about SAST analyzers that are no longer supported, see [Analyzers that have reached End of Support](analyzers.md#analyzers-that-have-reached-end-of-support). ## Advanced vulnerability tracking {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Source code is volatile; as developers make changes, source code may move within files or between files. Security analyzers may have already reported vulnerabilities that are being tracked in the [vulnerability report](../vulnerability_report/_index.md). These vulnerabilities are linked to specific problematic code fragments so that they can be found and fixed. If the code fragments are not tracked reliably as they move, vulnerability management is harder because the same vulnerability could be reported again. GitLab SAST uses an advanced vulnerability tracking algorithm to more accurately identify when the same vulnerability has moved within a file due to refactoring or unrelated changes. 
Advanced vulnerability tracking is available in a subset of the [supported languages](#supported-languages-and-frameworks) and [analyzers](analyzers.md):

- C, in the Semgrep-based analyzer only
- C++, in the Semgrep-based analyzer only
- C#, in the GitLab Advanced SAST and Semgrep-based analyzers
- Go, in the GitLab Advanced SAST and Semgrep-based analyzers
- Java, in the GitLab Advanced SAST and Semgrep-based analyzers
- JavaScript, in the GitLab Advanced SAST and Semgrep-based analyzers
- PHP, in the Semgrep-based analyzer only
- Python, in the GitLab Advanced SAST and Semgrep-based analyzers
- Ruby, in the Semgrep-based analyzer only

Support for more languages and analyzers is tracked in [this epic](https://gitlab.com/groups/gitlab-org/-/epics/5144).

For more information, see the confidential project `https://gitlab.com/gitlab-org/security-products/post-analyzers/tracking-calculator`. The content of this project is available only to GitLab team members.

## Automatic vulnerability resolution

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/368284) in GitLab 15.9 [with a project-level flag](../../../administration/feature_flags/_index.md) named `sec_mark_dropped_findings_as_resolved`.
- Enabled by default in GitLab 15.10. On GitLab.com, [contact Support](https://about.gitlab.com/support/) if you need to disable the flag for your project.
- [Feature flag removed](https://gitlab.com/gitlab-org/gitlab/-/issues/375128) in GitLab 16.2.

{{< /history >}}

To help you focus on the vulnerabilities that are still relevant, GitLab SAST automatically [resolves](../vulnerabilities/_index.md#vulnerability-status-values) vulnerabilities when:

- You [disable a predefined rule](customize_rulesets.md#disable-predefined-rules).
- We remove a rule from the default ruleset.

Automatic resolution is available only for findings from the [Semgrep-based analyzer](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep).
The Vulnerability Management system leaves a comment on automatically-resolved vulnerabilities so you still have a historical record of the vulnerability. If you re-enable the rule later, the findings are reopened for triage. ## Supported distributions The default scanner images are built on a base Alpine image for size and maintainability. ### FIPS-enabled images GitLab offers an image version, based on the [Red Hat UBI](https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image) base image, that uses a FIPS 140-validated cryptographic module. To use the FIPS-enabled image, you can either: - Set the `SAST_IMAGE_SUFFIX` to `-fips`. - Add the `-fips` extension to the default image name. For example: ```yaml variables: SAST_IMAGE_SUFFIX: '-fips' include: - template: Jobs/SAST.gitlab-ci.yml ``` A FIPS-compliant image is only available for the GitLab Advanced SAST and Semgrep-based analyzer. {{< alert type="warning" >}} To use SAST in a FIPS-compliant manner, you must [exclude other analyzers from running](analyzers.md#customize-analyzers). If you use a FIPS-enabled image to run Advanced SAST or Semgrep in [a runner with non-root user](https://docs.gitlab.com/runner/install/kubernetes_helm_chart_configuration.html#run-with-non-root-user), you must update the `run_as_user` attribute under `runners.kubernetes.pod_security_context` to use the ID of `gitlab` user [created by the image](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep/-/blob/a5d822401014f400b24450c92df93467d5bbc6fd/Dockerfile.fips#L58), which is `1000`. {{< /alert >}} ## Download a SAST report Each SAST analyzer outputs a JSON report as a job artifact. The file contains details of all detected vulnerabilities. You can [download](../../../ci/jobs/job_artifacts.md#download-job-artifacts) the file for processing outside GitLab. 
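After downloading a report, you can post-process it with a short script. The following is a minimal sketch: the field names follow the SAST report schema, but the sample report embedded here is synthetic, not output from a real scan.

```python
import json

# A minimal, synthetic gl-sast-report.json payload. Real reports are
# downloaded as job artifacts; the field names follow the report schema.
report_json = """
{
  "version": "15.0.0",
  "vulnerabilities": [
    {
      "name": "Improper neutralization of input during web page generation",
      "severity": "High",
      "location": {"file": "app/views/show.html.erb", "start_line": 12},
      "identifiers": [{"type": "cwe", "name": "CWE-79", "value": "79"}]
    },
    {
      "name": "Use of hard-coded credentials",
      "severity": "Critical",
      "location": {"file": "config/secrets.rb", "start_line": 3},
      "identifiers": [{"type": "cwe", "name": "CWE-798", "value": "798"}]
    }
  ]
}
"""

def summarize_by_severity(report: dict) -> dict:
    """Count the report's findings by severity level."""
    counts: dict = {}
    for finding in report.get("vulnerabilities", []):
        severity = finding.get("severity", "Unknown")
        counts[severity] = counts.get(severity, 0) + 1
    return counts

report = json.loads(report_json)
print(summarize_by_severity(report))  # {'High': 1, 'Critical': 1}
```

A summary like this can be useful for quick triage or for gating logic in external tooling; for the authoritative field definitions, use the schema below.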
For more information, see: - [SAST report file schema](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/sast-report-format.json) - [Example SAST report file](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep/-/blob/main/qa/expect/js/default/gl-sast-report.json) ## Configuration SAST scanning runs in your CI/CD pipeline. When you add the GitLab-managed CI/CD template to your pipeline, the right [SAST analyzers](analyzers.md) automatically scan your code and save results as [SAST report artifacts](../../../ci/yaml/artifacts_reports.md#artifactsreportssast). To configure SAST for a project you can: - Use [Auto SAST](../../../topics/autodevops/stages.md#auto-sast), provided by [Auto DevOps](../../../topics/autodevops/_index.md). - [Configure SAST in your CI/CD YAML](#configure-sast-in-your-cicd-yaml). - [Configure SAST by using the UI](#configure-sast-by-using-the-ui). You can enable SAST across many projects by [enforcing scan execution](../detect/security_configuration.md#create-a-shared-configuration). To configure Advanced SAST (available in GitLab Ultimate only), follow these [instructions](gitlab_advanced_sast.md#configuration). You can [change configuration variables](_index.md#available-cicd-variables) or [customize detection rules](customize_rulesets.md) if needed, but GitLab SAST is designed to be used in its default configuration. ### Configure SAST in your CI/CD YAML To enable SAST, you [include](../../../ci/yaml/_index.md#includetemplate) the [`SAST.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml). The template is provided as a part of your GitLab installation. Copy and paste the following to the bottom of the `.gitlab-ci.yml` file. If an `include` line already exists, add only the `template` line below it. 
```yaml include: - template: Jobs/SAST.gitlab-ci.yml ``` The included template creates SAST jobs in your CI/CD pipeline and scans your project's source code for possible vulnerabilities. The results are saved as a [SAST report artifact](../../../ci/yaml/artifacts_reports.md#artifactsreportssast) that you can later download and analyze. When downloading, you always receive the most recent SAST artifact available. ### Stable vs latest SAST templates SAST provides two templates for incorporating security testing into your CI/CD pipelines: - [`SAST.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml) (recommended) The stable template offers a reliable and consistent SAST experience. You should use the stable template for most users and projects that require stability and predictable behavior in their CI/CD pipelines. - [`SAST.latest.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/SAST.latest.gitlab-ci.yml) This template is for those who want to access and test cutting-edge features. It is not considered stable and may include breaking changes that are planned for the next major release. This template allows you to try new features and updates before they become part of the stable release, making it ideal for those comfortable with potential instability and eager to provide feedback on new functionality. ### Configure SAST by using the UI You can enable and configure SAST by using the UI, either with the default settings or with customizations. The method you can use depends on your GitLab license tier. #### Configure SAST with customizations {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Removed](https://gitlab.com/gitlab-org/gitlab/-/issues/410013) individual SAST analyzers configuration options from the UI in GitLab 16.2. 
{{< /history >}} {{< alert type="note" >}} The configuration tool works best with no existing `.gitlab-ci.yml` file, or with a minimal configuration file. If you have a complex GitLab configuration file it may not be parsed successfully, and an error may occur. {{< /alert >}} To enable and configure SAST with customizations: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Secure > Security configuration**. 1. If the latest pipeline for the default branch of the project has completed and produced valid `SAST` artifacts, select **Configure SAST**, otherwise select **Enable SAST** in the Static Application Security Testing (SAST) row. 1. Enter the custom SAST values. Custom values are stored in the `.gitlab-ci.yml` file. For CI/CD variables not in the SAST Configuration page, their values are inherited from the GitLab SAST template. 1. Select **Create Merge Request**. 1. Review and merge the merge request. Pipelines now include a SAST job. #### Configure SAST with default settings only {{< alert type="note" >}} The configuration tool works best with no existing `.gitlab-ci.yml` file, or with a minimal configuration file. If you have a complex GitLab configuration file it may not be parsed successfully, and an error may occur. {{< /alert >}} To enable and configure SAST with default settings: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Secure > Security configuration**. 1. In the SAST section, select **Configure with a merge request**. 1. Review and merge the merge request to enable SAST. Pipelines now include a SAST job. ### Overriding SAST jobs To override a job definition, (for example, change properties like `variables`, `dependencies`, or [`rules`](../../../ci/yaml/_index.md#rules)), declare a job with the same name as the SAST job to override. Place this new job after the template inclusion and specify any additional keys under it. 
For example, this enables `FAIL_NEVER` for the `spotbugs` analyzer: ```yaml include: - template: Jobs/SAST.gitlab-ci.yml spotbugs-sast: variables: FAIL_NEVER: 1 ``` ### Pinning to minor image version The GitLab-managed CI/CD template specifies a major version and automatically pulls the latest analyzer release within that major version. In some cases, you may need to use a specific version. For example, you might need to avoid a regression in a later release. To override the automatic update behavior, set the `SAST_ANALYZER_IMAGE_TAG` CI/CD variable in your CI/CD configuration file after you include the [`SAST.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml). Only set this variable within a specific job. If you set it [at the top level](../../../ci/variables/_index.md#define-a-cicd-variable-in-the-gitlab-ciyml-file), the version you set is used for other SAST analyzers. You can set the tag to: - A major version, like `3`. Your pipelines use any minor or patch updates that are released within this major version. - A minor version, like `3.7`. Your pipelines use any patch updates that are released within this minor version. - A patch version, like `3.7.0`. Your pipelines don't receive any updates. This example uses a specific minor version of the `semgrep` analyzer and a specific patch version of the `brakeman` analyzer: ```yaml include: - template: Jobs/SAST.gitlab-ci.yml semgrep-sast: variables: SAST_ANALYZER_IMAGE_TAG: "3.7" brakeman-sast: variables: SAST_ANALYZER_IMAGE_TAG: "3.1.1" ``` ### Using CI/CD variables to pass credentials for private repositories Some analyzers require downloading the project's dependencies to perform the analysis. In turn, such dependencies may live in private Git repositories and thus require credentials like username and password to download them. 
Depending on the analyzer, such credentials can be provided to it via [custom CI/CD variables](#custom-cicd-variables). #### Using a CI/CD variable to pass username and password to a private Maven repository If your private Maven repository requires login credentials, you can use the `MAVEN_CLI_OPTS` CI/CD variable. For more information, see [how to use private Maven repositories](../dependency_scanning/_index.md#authenticate-with-a-private-maven-repository). ### Enabling Kubesec analyzer You need to set `SCAN_KUBERNETES_MANIFESTS` to `"true"` to enable the Kubesec analyzer. In `.gitlab-ci.yml`, define: ```yaml include: - template: Jobs/SAST.gitlab-ci.yml variables: SCAN_KUBERNETES_MANIFESTS: "true" ``` ### Scan other languages with the Semgrep-based analyzer You can customize the [Semgrep-based analyzer](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) to scan languages that aren't [supported](#supported-languages-and-frameworks) with a GitLab-managed ruleset. However, because GitLab does not provide rulesets for these other languages, you must provide a [custom ruleset](customize_rulesets.md#build-a-custom-configuration) to cover them. You must also modify the `rules` of the `semgrep-sast` CI/CD job so that the job runs when the relevant files are modified. #### Scan a Rust application For example, to scan a Rust application, you must: 1. Provide a custom ruleset for Rust. Create a file named `sast-ruleset.toml` in a `.gitlab/` directory at the root of your repository. The following example uses the Semgrep registry's default ruleset for Rust: ```toml [semgrep] description = "Rust ruleset for Semgrep" targetdir = "/sgrules" timeout = 60 [[semgrep.passthrough]] type = "url" value = "https://semgrep.dev/c/p/rust" target = "rust.yml" ``` Read more on [customizing rulesets](customize_rulesets.md#build-a-custom-configuration). 1. Override the `semgrep-sast` job to add a rule that detects Rust (`.rs`) files. 
Define the following in the `.gitlab-ci.yml` file: ```yaml include: - template: Jobs/SAST.gitlab-ci.yml semgrep-sast: rules: - if: $CI_COMMIT_BRANCH exists: - '**/*.rs' # include any other file extensions you need to scan from the semgrep-sast template: Jobs/SAST.gitlab-ci.yml ``` ### JDK21 support for SpotBugs analyzer Version `6` of the SpotBugs analyzer adds support for JDK21 and removes JDK11. The default version remains at `5` as discussed in [issue 517169](https://gitlab.com/gitlab-org/gitlab/-/issues/517169). To use version `6`, manually pin the version by following the instructions [Pinning to minor image version](#pinning-to-minor-image-version). ```yaml spotbugs-sast: variables: SAST_ANALYZER_IMAGE_TAG: "6" ``` ### Using pre-compilation with SpotBugs analyzer The SpotBugs-based analyzer scans compiled bytecode for `Groovy` projects. By default, it automatically attempts to fetch dependencies and compile your code so it can be scanned. Automatic compilation can fail if: - your project requires custom build configurations - you use language versions that aren't built into the analyzer To resolve these issues, you should skip the analyzer's compilation step and directly provide artifacts from an earlier stage in your pipeline instead. This strategy is called _pre-compilation_. #### Sharing pre-compiled artifacts 1. Use a compilation job (typically named `build`) to compile your project and store the compiled output as a `job artifact` using [`artifacts: paths`](../../../ci/yaml/_index.md#artifactspaths). - For `Maven` projects, the output folder is usually the `target` directory - For `Gradle` projects, it's typically the `build` directory - If your project uses a custom output location, set the artifacts path accordingly 1. Disable automatic compilation by setting the `COMPILE: "false"` CI/CD variable in the `spotbugs-sast` job. 1. Ensure the `spotbugs-sast` job depends on the compilation job by setting the `dependencies` keyword. 
This allows the `spotbugs-sast` job to download and use the artifacts created in the compilation job. The following example pre-compiles a Gradle project and provides the compiled bytecode to the analyzer: ```yaml stages: - build - test include: - template: Jobs/SAST.gitlab-ci.yml build: image: gradle:7.6-jdk8 stage: build script: - gradle build artifacts: paths: - build/ spotbugs-sast: dependencies: - build variables: COMPILE: "false" SECURE_LOG_LEVEL: debug ``` #### Specifying dependencies (Maven only) If your project requires external dependencies to be recognized by the analyzer and you're using Maven, you can specify the location of the local repository by using the `MAVEN_REPO_PATH` variable. Specifying dependencies is only supported for Maven-based projects. Other build tools (for example, Gradle) do not have an equivalent mechanism for specifying dependencies. In that case, ensure that your compiled artifacts include all necessary dependencies. The following example pre-compiles a Maven project and provides the compiled bytecode along with the dependencies to the analyzer: ```yaml stages: - build - test include: - template: Jobs/SAST.gitlab-ci.yml build: image: maven:3.6-jdk-8-slim stage: build script: - mvn package -Dmaven.repo.local=./.m2/repository artifacts: paths: - .m2/ - target/ spotbugs-sast: dependencies: - build variables: MAVEN_REPO_PATH: $CI_PROJECT_DIR/.m2/repository COMPILE: "false" SECURE_LOG_LEVEL: debug ``` ### Running jobs in merge request pipelines See [Use security scanning tools with merge request pipelines](../detect/security_configuration.md#use-security-scanning-tools-with-merge-request-pipelines). ### Available CI/CD variables SAST can be configured using the [`variables`](../../../ci/yaml/_index.md#variables) parameter in `.gitlab-ci.yml`. {{< alert type="warning" >}} All customization of GitLab security scanning tools should be tested in a merge request before merging these changes to the default branch. 
Failure to do so can give unexpected results, including a large number of false positives. {{< /alert >}} The following example includes the SAST template to override the `SEARCH_MAX_DEPTH` variable to `10` in all jobs. The template is [evaluated before](../../../ci/yaml/_index.md#include) the pipeline configuration, so the last mention of the variable takes precedence. ```yaml include: - template: Jobs/SAST.gitlab-ci.yml variables: SEARCH_MAX_DEPTH: 10 ``` #### Custom Certificate Authority To trust a custom Certificate Authority, set the `ADDITIONAL_CA_CERT_BUNDLE` variable to the bundle of CA certs that you want to trust in the SAST environment. The `ADDITIONAL_CA_CERT_BUNDLE` value should contain the [text representation of the X.509 PEM public-key certificate](https://www.rfc-editor.org/rfc/rfc7468#section-5.1). For example, to configure this value in the `.gitlab-ci.yml` file, use the following: ```yaml variables: ADDITIONAL_CA_CERT_BUNDLE: | -----BEGIN CERTIFICATE----- MIIGqTCCBJGgAwIBAgIQI7AVxxVwg2kch4d56XNdDjANBgkqhkiG9w0BAQsFADCB ... jWgmPqF3vUbZE0EyScetPJquRFRKIesyJuBFMAs= -----END CERTIFICATE----- ``` The `ADDITIONAL_CA_CERT_BUNDLE` value can also be configured as a [custom variable in the UI](../../../ci/variables/_index.md#for-a-project), either as a `file`, which requires the path to the certificate, or as a variable, which requires the text representation of the certificate. #### Docker images The following are Docker image-related CI/CD variables. | CI/CD variable | Description | |---------------------------|-------------| | `SECURE_ANALYZERS_PREFIX` | Override the name of the Docker registry providing the default images (proxy). Read more about [customizing analyzers](analyzers.md). | | `SAST_EXCLUDED_ANALYZERS` | Names of default images that should never run. Read more about [customizing analyzers](analyzers.md). | | `SAST_ANALYZER_IMAGE_TAG` | Override the default version of analyzer image. 
Read more about [pinning the analyzer image version](#pinning-to-minor-image-version). | | `SAST_IMAGE_SUFFIX` | Suffix added to the image name. If set to `-fips`, `FIPS-enabled` images are used for scan. See [FIPS-enabled images](#fips-enabled-images) for more details. | #### Vulnerability filters <table class="sast-table"> <thead> <tr> <th>CI/CD variable</th> <th>Description</th> <th>Default Value</th> <th>Analyzer</th> </tr> </thead> <tbody> <tr> <td rowspan="3"> <code>SAST_EXCLUDED_PATHS</code> </td> <td rowspan="3"> Comma-separated list of paths for excluding vulnerabilities. The exact handling of this variable depends on which analyzer is used.<sup><b><a href="#sast-excluded-paths-description">1</a></b></sup> </td> <td rowspan="3"> <code> <a href="https://gitlab.com/gitlab-org/gitlab/blob/v17.3.0-ee/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml#L13">spec, test, tests, tmp</a> </code> </td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/semgrep">Semgrep</a><sup><b><a href="#sast-excluded-paths-semgrep">2</a></b>,</sup><sup><b><a href="#sast-excluded-paths-all-other-sast-analyzers">3</a></b></sup> </td> </tr> <tr> <td> <a href="gitlab_advanced_sast.md">GitLab Advanced SAST</a><sup><b><a href="#sast-excluded-paths-semgrep">2</a></b>,</sup><sup><b><a href="#sast-excluded-paths-all-other-sast-analyzers">3</a></b></sup> </td> </tr> <tr> <td> All other SAST analyzers<sup><b><a href="#sast-excluded-paths-all-other-sast-analyzers">3</a></b></sup> </td> </tr> <tr> <td> <!-- markdownlint-disable MD044 --> <code>SAST_SPOTBUGS_EXCLUDED_BUILD_PATHS</code> <!-- markdownlint-enable MD044 --> </td> <td> Comma-separated list of paths for excluding directories from being built and scanned. 
</td> <td>None</td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/spotbugs">SpotBugs</a><sup><b><a href="#sast-spotbugs-excluded-build-paths-description">4</a></b></sup> </td> </tr> <tr> <td rowspan="3"> <code>SEARCH_MAX_DEPTH</code> </td> <td rowspan="3"> The number of directory levels the analyzer will descend into when searching for matching files to scan.<sup><b><a href="#search-max-depth-description">5</a></b></sup> </td> <td rowspan="2"> <code> <a href="https://gitlab.com/gitlab-org/gitlab/-/blob/v17.3.0-ee/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml#L54">20</a> </code> </td> <td> <a href="https://gitlab.com/gitlab-org/security-products/analyzers/semgrep">Semgrep</a> </td> </tr> <tr> <td> <a href="gitlab_advanced_sast.md">GitLab Advanced SAST</a> </td> </tr> <tr> <td> <code> <a href="https://gitlab.com/gitlab-org/gitlab/blob/v17.3.0-ee/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml#L26">4</a> </code> </td> <td> All other SAST analyzers </td> </tr> </tbody> </table> **Footnotes**: 1. <a id="sast-excluded-paths-description"></a>You might need to exclude temporary directories used by your build tool as these can generate false positives. To exclude paths, copy and paste the default excluded paths, then **add** your own paths to be excluded. If you don't specify the default excluded paths, the defaults are overridden and only the paths you specify are excluded from SAST scans. 1. <a id="sast-excluded-paths-semgrep"></a>For these analyzers, `SAST_EXCLUDED_PATHS` is implemented as a **pre-filter**, which is applied before the scan is executed. The analyzer skips any files or directories whose path matches one of the comma-separated patterns. 
   For example, if `SAST_EXCLUDED_PATHS` is set to `*.py,tests`:

   - `*.py` ignores the following:
     - `foo.py`
     - `src/foo.py`
     - `foo.py/bar.sh`
   - `tests` ignores:
     - `tests/foo.py`
     - `a/b/tests/c/foo.py`

   Each pattern is a glob-style pattern that uses the same syntax as [gitignore](https://git-scm.com/docs/gitignore#_pattern_format).
1. <a id="sast-excluded-paths-all-other-sast-analyzers"></a>For these analyzers, `SAST_EXCLUDED_PATHS` is implemented as a **post-filter**, which is applied after the scan is executed. Patterns can be globs (see [`doublestar.Match`](https://pkg.go.dev/github.com/bmatcuk/doublestar/v4@v4.0.2#Match) for supported patterns), or file or folder paths (for example, `doc,spec`). Parent directories also match patterns.

   The post-filter implementation of `SAST_EXCLUDED_PATHS` is available for all SAST analyzers. Some SAST analyzers such as those with [superscript `2`](#sast-excluded-paths-semgrep) implement `SAST_EXCLUDED_PATHS` as both a pre-filter and post-filter. A pre-filter is more efficient because it reduces the number of files to be scanned.

   For analyzers that support `SAST_EXCLUDED_PATHS` as both a pre-filter and post-filter, the pre-filter is applied first, then the post-filter is applied to any vulnerabilities that remain.
1. <a id="sast-spotbugs-excluded-build-paths-description"></a> For this variable, path patterns can be globs (see [`doublestar.Match`](https://pkg.go.dev/github.com/bmatcuk/doublestar/v4@v4.0.2#Match) for supported patterns). Directories are excluded from the build process if the path pattern matches a supported build file:

   - `build.sbt`
   - `grailsw`
   - `gradlew`
   - `build.gradle`
   - `mvnw`
   - `pom.xml`
   - `build.xml`

   For example, to exclude building and scanning a `maven` project containing a build file with the path `project/subdir/pom.xml`, pass a glob pattern that explicitly matches the build file, such as `project/*/*.xml` or `**/*.xml`, or an exact match such as `project/subdir/pom.xml`.
   Passing a parent directory for the pattern, such as `project` or `project/subdir`, does not exclude the directory from being built, because in this case, the build file is not explicitly matched by the pattern.
1. <a id="search-max-depth-description"></a>The [SAST CI/CD template](https://gitlab.com/gitlab-org/gitlab/blob/v17.4.1-ee/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml) searches the repository to detect the programming languages used, and selects the matching analyzers. Then, each analyzer searches the codebase to find the specific files or directories it should scan. Set the value of `SEARCH_MAX_DEPTH` to specify how many directory levels the analyzer's search phase should span.

#### Analyzer settings

Some analyzers can be customized with CI/CD variables.

| CI/CD variable                      | Analyzer             | Default                                         | Description |
|-------------------------------------|----------------------|-------------------------------------------------|-------------|
| `GITLAB_ADVANCED_SAST_ENABLED`      | GitLab Advanced SAST | `false`                                         | Set to `true` to enable [GitLab Advanced SAST](gitlab_advanced_sast.md) scanning (available in GitLab Ultimate only). |
| `SCAN_KUBERNETES_MANIFESTS`         | Kubesec              | `"false"`                                       | Set to `"true"` to scan Kubernetes manifests. |
| `KUBESEC_HELM_CHARTS_PATH`          | Kubesec              |                                                 | Optional path to Helm charts that `helm` uses to generate a Kubernetes manifest that `kubesec` scans. If dependencies are defined, `helm dependency build` should be run in a `before_script` to fetch the necessary dependencies. |
| `KUBESEC_HELM_OPTIONS`              | Kubesec              |                                                 | Additional arguments for the `helm` executable. |
| `COMPILE`                           | SpotBugs             | `true`                                          | Set to `false` to disable project compilation and dependency fetching. |
| `ANT_HOME`                          | SpotBugs             |                                                 | The `ANT_HOME` variable. |
| `ANT_PATH`                          | SpotBugs             | `ant`                                           | Path to the `ant` executable. |
| `GRADLE_PATH`                       | SpotBugs             | `gradle`                                        | Path to the `gradle` executable.
| | `JAVA_OPTS` | SpotBugs | `-XX:MaxRAMPercentage=80` | Additional arguments for the `java` executable. | | `JAVA_PATH` | SpotBugs | `java` | Path to the `java` executable. | | `SAST_JAVA_VERSION` | SpotBugs | `17` | Java version used. Supported versions are `17` and `11`. | | `MAVEN_CLI_OPTS` | SpotBugs | `--batch-mode -DskipTests=true` | Additional arguments for the `mvn` or `mvnw` executable. | | `MAVEN_PATH` | SpotBugs | `mvn` | Path to the `mvn` executable. | | `MAVEN_REPO_PATH` | SpotBugs | `$HOME/.m2/repository` | Path to the Maven local repository (shortcut for the `maven.repo.local` property). | | `SBT_PATH` | SpotBugs | `sbt` | Path to the `sbt` executable. | | `FAIL_NEVER` | SpotBugs | `false` | Set to `true` or `1` to ignore compilation failure. | | `SAST_SEMGREP_METRICS` | Semgrep | `true` | Set to `false` to disable sending anonymized scan metrics to [r2c](https://semgrep.dev). | | `SAST_SCANNER_ALLOWED_CLI_OPTS` | Semgrep | `--max-target-bytes=1000000 --timeout=5` | CLI options (arguments with value, or flags) that are passed to the underlying security scanner when running scan operation. Only a limited set of [options](#security-scanner-configuration) are accepted. Separate a CLI option and its value using either a blank space or equals (`=`) character. For example: `name1 value1` or `name1=value1`. Multiple options must be separated by blank spaces. For example: `name1 value1 name2 value2`. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/368565) in GitLab 15.3. | | `SAST_RULESET_GIT_REFERENCE` | All | | Defines a path to a custom ruleset configuration. If a project has a `.gitlab/sast-ruleset.toml` file committed, that local configuration takes precedence and the file from `SAST_RULESET_GIT_REFERENCE` isn't used. This variable is available for the Ultimate tier only. | | `SECURE_ENABLE_LOCAL_CONFIGURATION` | All | `false` | Enables the option to use custom ruleset configuration. 
If `SECURE_ENABLE_LOCAL_CONFIGURATION` is set to `false`, the project's custom ruleset configuration file at `.gitlab/sast-ruleset.toml` is ignored and the file from `SAST_RULESET_GIT_REFERENCE` or the default configuration takes precedence. |

#### Security scanner configuration

SAST analyzers internally use OSS security scanners to perform the analysis. We set the recommended configuration for the security scanners so that you do not need to worry about tuning them. However, there can be some rare cases where our default scanner configuration does not suit your requirements.

To allow some customization of scanner behavior, you can add a limited set of flags to the underlying scanner. Specify the flags in the `SAST_SCANNER_ALLOWED_CLI_OPTS` CI/CD variable. These flags are added to the scanner's CLI options.

<table class="sast-table">
  <thead>
    <tr>
      <th>Analyzer</th>
      <th>CLI option</th>
      <th>Description</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td rowspan="2">
        GitLab Advanced SAST
      </td>
      <td>
        <code>--include-propagator-files</code>
      </td>
      <td>
        WARNING: This flag may cause significant performance degradation.
        <br>
        This option enables the scanning of intermediary files that connect source and sink files without containing either sources or sinks themselves. While useful for comprehensive analysis in smaller repositories, enabling this feature for large repositories will substantially impact performance.
      </td>
    </tr>
    <tr>
      <td>
        <code>--multi-core</code>
      </td>
      <td>
        Multi-core scanning is enabled by default, automatically detecting and utilizing available CPU cores based on container information. On self-hosted runners, the maximum number of cores is capped at 4. You can override the automatic core detection by explicitly setting <code>--multi-core</code> to a specific value. Multi-core execution requires proportionally more memory than single-core execution. To disable multi-core scanning, set the environment variable <code>DISABLE_MULTI_CORE</code>.
        Exceeding available cores or memory resources may lead to resource contention and suboptimal performance.
      </td>
    </tr>
    <tr>
      <td rowspan="3">
        <a href="https://gitlab.com/gitlab-org/security-products/analyzers/semgrep">Semgrep</a>
      </td>
      <td>
        <code>--max-memory</code>
      </td>
      <td>
        Sets the maximum system memory in MB to use when running a rule on a single file.
      </td>
    </tr>
    <tr>
      <td>
        <code>--max-target-bytes</code>
      </td>
      <td>
        <p>
          Maximum size for a file to be scanned. Any input program larger than this is ignored. Set to <code>0</code> or a negative value to disable this filter. Bytes can be specified with or without a unit of measurement, for example: <code>12.5kb</code>, <code>1.5MB</code>, or <code>123</code>. Defaults to <code>1000000</code> bytes.
        </p>
        <p>
          <b>Note:</b> You should keep this flag set to the default value. Also, avoid changing this flag to scan minified JavaScript (which is unlikely to work well), <code>DLLs</code>, <code>JARs</code>, or other binary files, because binary files are not scanned.
        </p>
      </td>
    </tr>
    <tr>
      <td>
        <code>--timeout</code>
      </td>
      <td>
        Maximum time in seconds to spend running a rule on a single file. Set to <code>0</code> to have no time limit. Timeout value must be an integer, for example: <code>10</code> or <code>15</code>. Defaults to <code>5</code>.
      </td>
    </tr>
    <tr>
      <td>
        <a href="https://gitlab.com/gitlab-org/security-products/analyzers/spotbugs">SpotBugs</a>
      </td>
      <td>
        <code>-effort</code>
      </td>
      <td>
        Sets the analysis effort level. Valid values are, in increasing order of precision and ability to detect more vulnerabilities: <code>min</code>, <code>less</code>, <code>more</code>, and <code>max</code>. Default value is set to <code>max</code>, which may require more memory and time to complete the scan, depending on the project's size. If you face memory or performance issues, you can reduce the analysis effort level to a lower value. For example: <code>-effort less</code>.
</td> </tr> </tbody> </table> #### Custom CI/CD variables In addition to the aforementioned SAST configuration CI/CD variables, all [custom variables](../../../ci/variables/_index.md#define-a-cicd-variable-in-the-ui) are propagated to the underlying SAST analyzer images if [the SAST vendored template](#configuration) is used. ### Exclude code from analysis You can mark individual lines, or blocks, of code to be excluded from being analyzed for vulnerabilities. You should manage all vulnerabilities through Vulnerability Management, or adjust the scanned file paths using `SAST_EXCLUDED_PATHS` before using this method of finding-by-finding comment annotation. When using the Semgrep-based analyzer, the following options are also available: - Ignore a line of code - add `// nosemgrep:` comment to the end of the line (the prefix is according to the development language). Java example: ```java vuln_func(); // nosemgrep ``` Python example: ```python vuln_func(); # nosemgrep ``` - Ignore a line of code for specific rule - add `// nosemgrep: RULE_ID` comment at the end of the line (the prefix is according to the development language). - Ignore a file or directory - create a `.semgrepignore` file in your repository's root directory or your project's working directory and add patterns for files and folders there. GitLab Semgrep analyzer automatically merges your custom `.semgrepignore` file with [GitLab built-in ignore patterns](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep/-/blob/abcea7419961320f9718a2f24fe438cc1a7f8e08/semgrepignore). {{< alert type="note" >}} The Semgrep analyzer does not respect `.gitignore` files. Files listed in `.gitignore` are analyzed unless explicitly excluded by using `.semgrepignore` or `SAST_EXCLUDED_PATHS`. {{< /alert >}} For more details see [Semgrep documentation](https://semgrep.dev/docs/ignoring-files-folders-code). 
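To make the rule-specific form concrete, here is a hedged sketch in Python; the rule ID `insecure-hash-md5` is illustrative rather than a real GitLab ruleset ID, so substitute the rule ID reported in your scan results:

```python
import hashlib


def fingerprint(data: bytes) -> str:
    # MD5 is used here only as a non-security cache fingerprint, so the
    # (illustrative) rule that flags weak hashes is suppressed on this line
    # only. All other Semgrep rules still apply to this line.
    return hashlib.md5(data).hexdigest()  # nosemgrep: insecure-hash-md5
```

Because the suppression names a specific rule ID, other findings on the same line are still reported, which keeps the exclusion as narrow as possible.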
## Running SAST in an offline environment {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab Self-Managed {{< /details >}} For instances in an environment with limited, restricted, or intermittent access to external resources through the internet, some adjustments are required for the SAST job to run successfully. For more information, see [Offline environments](../offline_deployments/_index.md). ### Requirements for offline SAST To use SAST in an offline environment, you need: - GitLab Runner with the [`docker`](https://docs.gitlab.com/runner/executors/docker.html) or [`kubernetes`](https://docs.gitlab.com/runner/install/kubernetes.html) executor. See [prerequisites](#getting-started) for details. - A Docker container registry with locally available copies of SAST [analyzer](https://gitlab.com/gitlab-org/security-products/analyzers) images. - Configure certificate checking of packages (optional). GitLab Runner has a [default `pull_policy` of `always`](https://docs.gitlab.com/runner/executors/docker.html#using-the-always-pull-policy), meaning the runner tries to pull Docker images from the GitLab container registry even if a local copy is available. The GitLab Runner [`pull_policy` can be set to `if-not-present`](https://docs.gitlab.com/runner/executors/docker.html#using-the-if-not-present-pull-policy) in an offline environment if you prefer using only locally available Docker images. However, we recommend keeping the pull policy setting to `always` if not in an offline environment, as this enables the use of updated scanners in your CI/CD pipelines. 
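As a sketch of the `if-not-present` option, the pull policy is set in the runner's `config.toml` under the executor section; the runner name and default image below are illustrative values, not requirements:

```toml
concurrent = 1

[[runners]]
  name = "offline-runner"      # illustrative runner name
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"    # illustrative default image
    # Use locally mirrored analyzer images instead of always pulling
    # from the upstream registry.
    pull_policy = "if-not-present"
```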
### Make GitLab SAST analyzer images available inside your Docker registry For SAST with all [supported languages and frameworks](#supported-languages-and-frameworks), import the following default SAST analyzer images from `registry.gitlab.com` into your [local Docker container registry](../../packages/container_registry/_index.md): ```plaintext registry.gitlab.com/security-products/gitlab-advanced-sast:1 registry.gitlab.com/security-products/kubesec:5 registry.gitlab.com/security-products/pmd-apex:5 registry.gitlab.com/security-products/semgrep:5 registry.gitlab.com/security-products/sobelow:5 registry.gitlab.com/security-products/spotbugs:5 ``` The process for importing Docker images into a local offline Docker registry depends on **your network security policy**. Consult your IT staff to find an accepted and approved process by which external resources can be imported or temporarily accessed. These scanners are [periodically updated](../detect/vulnerability_scanner_maintenance.md) with new definitions, and you may be able to make occasional updates on your own. For details on saving and transporting Docker images as a file, see the Docker documentation on [`docker save`](https://docs.docker.com/reference/cli/docker/image/save/), [`docker load`](https://docs.docker.com/reference/cli/docker/image/load/), [`docker export`](https://docs.docker.com/reference/cli/docker/container/export/), and [`docker import`](https://docs.docker.com/reference/cli/docker/image/import/). #### If support for Custom Certificate Authorities are needed Support for custom certificate authorities was introduced in the following versions. 
| Analyzer   | Version |
|------------|---------|
| `kubesec`  | [v2.1.0](https://gitlab.com/gitlab-org/security-products/analyzers/kubesec/-/releases/v2.1.0) |
| `pmd-apex` | [v2.1.0](https://gitlab.com/gitlab-org/security-products/analyzers/pmd-apex/-/releases/v2.1.0) |
| `semgrep`  | [v0.0.1](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep/-/releases/v0.0.1) |
| `sobelow`  | [v2.2.0](https://gitlab.com/gitlab-org/security-products/analyzers/sobelow/-/releases/v2.2.0) |
| `spotbugs` | [v2.7.1](https://gitlab.com/gitlab-org/security-products/analyzers/spotbugs/-/releases/v2.7.1) |

### Set SAST CI/CD variables to use local SAST analyzers

Add the following configuration to your `.gitlab-ci.yml` file. You must replace `SECURE_ANALYZERS_PREFIX` to refer to your local Docker container registry:

```yaml
include:
  - template: Jobs/SAST.gitlab-ci.yml

variables:
  SECURE_ANALYZERS_PREFIX: "localhost:5000/analyzers"
```

The SAST job should now use local copies of the SAST analyzers to scan your code and generate security reports without requiring internet access.

### Configure certificate checking of packages

If a SAST job invokes a package manager, you must configure its certificate verification. In an offline environment, certificate verification with an external source is not possible. Either use a self-signed certificate or disable certificate verification. Refer to the package manager's documentation for instructions.

## Running SAST in SELinux

By default, SAST analyzers are supported in GitLab instances hosted on SELinux. Adding a `before_script` in an [overridden SAST job](#overriding-sast-jobs) may not work because runners hosted on SELinux have restricted permissions.
---
title: Evaluate GitLab SAST
description: Learn how to evaluate GitLab SAST by selecting a test codebase, configuring scans, interpreting results, and comparing features with other security tools.
stage: Application Security Testing
group: Static Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
url: https://docs.gitlab.com/user/application_security/evaluation_guide
date_extracted: 2025-08-13
---
{{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} You might choose to evaluate GitLab SAST before using it in your organization. Consider the following guidance as you plan and conduct your evaluation. ## Important concepts GitLab SAST is designed to help teams collaboratively improve the security of the code they write. The steps you take to scan your code and view the results are centered around the source code repository being scanned. ### Scanning process GitLab SAST automatically selects the right scanning technology to use depending on which programming languages are found in your project. For all languages except Groovy, GitLab SAST scans your source code directly without requiring a compilation or build step. This makes it easier to enable scanning across a variety of projects. For details, see [Supported languages and frameworks](_index.md#supported-languages-and-frameworks). ### When vulnerabilities are reported GitLab SAST [analyzers](analyzers.md) and their [rules](rules.md) are designed to minimize noise for development and security teams. For details on when the GitLab Advanced SAST analyzer reports vulnerabilities, see [When vulnerabilities are reported](gitlab_advanced_sast.md#when-vulnerabilities-are-reported). ### Other platform features SAST is integrated with other security and compliance features in GitLab Ultimate. If you're comparing GitLab SAST to another product, you may find that some of its features are included in a related GitLab feature area instead of SAST: - [IaC scanning](../iac_scanning/_index.md) scans your Infrastructure as Code (IaC) definitions for security problems. - [Secret detection](../secret_detection/_index.md) finds leaked secrets in your code. - [Security policies](../policies/_index.md) allow you to force scans to run or require that vulnerabilities are fixed. 
---
stage: Application Security Testing
group: Static Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Learn how to evaluate GitLab SAST by selecting a test codebase, configuring scans, interpreting results, and comparing features with other security tools.
title: Evaluate GitLab SAST
---

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

You might choose to evaluate GitLab SAST before using it in your organization. Consider the following guidance as you plan and conduct your evaluation.

## Important concepts

GitLab SAST is designed to help teams collaboratively improve the security of the code they write. The steps you take to scan your code and view the results are centered around the source code repository being scanned.

### Scanning process

GitLab SAST automatically selects the right scanning technology to use depending on which programming languages are found in your project. For all languages except Groovy, GitLab SAST scans your source code directly without requiring a compilation or build step. This makes it easier to enable scanning across a variety of projects.

For details, see [Supported languages and frameworks](_index.md#supported-languages-and-frameworks).

### When vulnerabilities are reported

GitLab SAST [analyzers](analyzers.md) and their [rules](rules.md) are designed to minimize noise for development and security teams. For details on when the GitLab Advanced SAST analyzer reports vulnerabilities, see [When vulnerabilities are reported](gitlab_advanced_sast.md#when-vulnerabilities-are-reported).

### Other platform features

SAST is integrated with other security and compliance features in GitLab Ultimate.
If you're comparing GitLab SAST to another product, you may find that some of its features are included in a related GitLab feature area instead of SAST:

- [IaC scanning](../iac_scanning/_index.md) scans your Infrastructure as Code (IaC) definitions for security problems.
- [Secret detection](../secret_detection/_index.md) finds leaked secrets in your code.
- [Security policies](../policies/_index.md) allow you to force scans to run or require that vulnerabilities are fixed.
- [Vulnerability management and reporting](../vulnerability_report/_index.md) manages the vulnerabilities that exist in the codebase and integrates with issue trackers.
- GitLab Duo [vulnerability explanation](../vulnerabilities/_index.md#vulnerability-explanation) and [vulnerability resolution](../vulnerabilities/_index.md#vulnerability-resolution) help you remediate vulnerabilities quickly by using AI.

## Choose a test codebase

When choosing a codebase to test SAST, you should:

- Test in a repository where you can safely modify the CI/CD configuration without getting in the way of normal development activities. SAST scans run in your CI/CD pipeline, so you'll need to make a small edit to the CI/CD configuration to [enable SAST](_index.md#configuration).
- You can make a fork or copy of an existing repository for testing. This way, you can set up your testing environment without any chance of interrupting normal development.
- Use a codebase that matches your organization's typical technology stack.
- Use a language that [GitLab Advanced SAST supports](gitlab_advanced_sast.md#supported-languages). GitLab Advanced SAST produces more accurate results than other [analyzers](analyzers.md).

Your test project must have GitLab Ultimate. Only Ultimate includes [features](_index.md#features) like:

- Proprietary cross-file, cross-function scanning with GitLab Advanced SAST.
- The merge request widget, pipeline security report, and default-branch vulnerability report that make scan results visible and actionable.

### Benchmarks and example projects

If you choose to use a benchmark or an intentionally vulnerable application for testing, remember that these applications:

- Focus on specific vulnerability types. The benchmark's focus may be different from the vulnerability types your organization prioritizes for discovery and remediation.
- Use specific technologies in specific ways that may differ from how your organization builds software.
- Report results in ways that may implicitly emphasize certain criteria over others. For example, you may prioritize precision (fewer false-positive results) while the benchmark only scores based on recall (fewer false-negative results).

[Epic 15296](https://gitlab.com/groups/gitlab-org/-/epics/15296) tracks work to recommend specific projects for testing.

### AI-generated test code

You should not use AI tools to create vulnerable code for testing SAST. AI models often return code that is not truly exploitable. For example:

- AI tools often write small functions that take a parameter and use it in a sensitive context (called a "sink"), without actually receiving any user input. This can be a safe design if the function is only called with program-controlled values, like constants. The code is not vulnerable unless user input is allowed to flow to these sinks without first being sanitized or validated.
- AI tools may comment out part of the vulnerability to prevent you from accidentally running the code.

Reporting vulnerabilities in these unrealistic examples would cause false-positive results in real-world code. GitLab SAST is not designed to report vulnerabilities in these cases.

## Conduct the test

After you choose a codebase to test with, you're ready to conduct the test. You can follow these steps:

1. [Enable SAST](_index.md#configuration) by creating a merge request (MR) that adds SAST to the CI/CD configuration.
   - Be sure to set the CI/CD variable to [enable GitLab Advanced SAST](gitlab_advanced_sast.md#enable-gitlab-advanced-sast-scanning) for more accurate results.
1. Merge the MR to the repository's default branch.
1. Open the [vulnerability report](../vulnerability_report/_index.md) to see the vulnerabilities found on the default branch.
   - If you're using GitLab Advanced SAST, you can use the [Scanner filter](../vulnerability_report/_index.md#scanner-filter) to show results only from that scanner.
1. Review vulnerability results.
   - Check the [code flow view](../vulnerabilities/_index.md#vulnerability-code-flow) for GitLab Advanced SAST vulnerabilities that involve tainted user input, like SQL injection or path traversal.
   - If you have GitLab Duo Enterprise, [explain](../vulnerabilities/_index.md#vulnerability-explanation) or [resolve](../vulnerabilities/_index.md#vulnerability-resolution) a vulnerability.
1. To see how scanning works as new code is developed, create a new merge request that changes application code and adds a new vulnerability or weakness.
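The enablement step above usually amounts to a small CI/CD change. A minimal sketch of a `.gitlab-ci.yml` that includes the SAST template and opts in to GitLab Advanced SAST (adjust to fit your existing pipeline configuration):

```yaml
# .gitlab-ci.yml — enable SAST and opt in to GitLab Advanced SAST
include:
  - template: Jobs/SAST.gitlab-ci.yml

variables:
  GITLAB_ADVANCED_SAST_ENABLED: 'true'
```

Committing this change on a branch and opening a merge request runs the scan in that MR's pipeline.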
https://docs.gitlab.com/user/application_security/troubleshooting
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/troubleshooting.md
2025-08-13
doc/user/application_security/sast
troubleshooting.md
Application Security Testing
Static Analysis
Troubleshooting SAST
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

The following troubleshooting scenarios have been collected from customer support cases. If you experience a problem not addressed here, or the information here does not fix your problem, see the [GitLab Support](https://about.gitlab.com/support/) page for ways to get help.

## Debug-level logging

Debug-level logging can help when troubleshooting. For details, see [debug-level logging](../troubleshooting_application_security.md#debug-level-logging).

## Changes in the CI/CD template

The [GitLab-managed SAST CI/CD template](_index.md#configure-sast-in-your-cicd-yaml) controls which [analyzer](analyzers.md) jobs run and how they're configured. While using the template, you might experience a job failure or other pipeline error. For example, you might:

- See an error message like `'<your job>' needs 'spotbugs-sast' job, but 'spotbugs-sast' is not in any previous stage` when you view an affected pipeline.
- Experience another type of unexpected issue with your CI/CD pipeline configuration.

If you're experiencing a job failure or seeing a SAST-related `yaml invalid` pipeline status, you can temporarily revert to an older version of the template so your pipelines keep working while you investigate the issue.

To use an older version of the template, change the existing `include` statement in your CI/CD YAML file to refer to a specific template version, such as `v15.3.3-ee`:

```yaml
include:
  remote: 'https://gitlab.com/gitlab-org/gitlab/-/raw/v15.3.3-ee/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml'
```

If your GitLab instance has limited network connectivity, you can also download the file and host it elsewhere.

You should only use this solution temporarily, returning to [the standard template](_index.md#configure-sast-in-your-cicd-yaml) as soon as possible.
## Errors in a specific analyzer job

GitLab SAST [analyzers](analyzers.md) are released as container images. If you're seeing a new error that doesn't appear to be related to [the GitLab-managed SAST CI/CD template](_index.md#configure-sast-in-your-cicd-yaml) or changes in your own project, you can try [pinning the affected analyzer to a specific older version](_index.md#pinning-to-minor-image-version).

Each [analyzer project](analyzers.md) has a `CHANGELOG.md` file listing the changes made in each available version.

## Job log messages

The SAST job's log may include error messages that help pinpoint the root cause. Below are some of the error messages and recommended actions.

### Executable format

```plaintext
exec /bin/sh: exec format error
```

GitLab SAST analyzers [only support](_index.md#getting-started) running on the `amd64` CPU architecture. This message in the job log indicates that the job is being run on a different architecture, such as `arm`.

### Docker error

```plaintext
Error response from daemon: error processing tar file: docker-tar: relocation error
```

This error occurs when the Docker version that runs the SAST job is `19.03.0`. Consider updating to Docker `19.03.1` or greater. Older versions are not affected.

For more details, see [issue 13830](https://gitlab.com/gitlab-org/gitlab/-/issues/13830#note_211354992) - "Current SAST container fails".

### No matching files

```plaintext
gl-sast-report.json: no matching files
```

For information on this, see the [general Application Security troubleshooting section](../../../ci/jobs/job_artifacts_troubleshooting.md#error-message-no-files-to-upload).

### Configuration only

```plaintext
sast is used for configuration only, and its script should not be executed
```

For information on this, see the [GitLab Secure troubleshooting section](../troubleshooting_application_security.md#error-job-is-used-for-configuration-only-and-its-script-should-not-be-executed).
## Error: `An error occurred while creating the merge request`

When attempting to enable SAST on a project by using the UI, the operation can fail with the warning:

```plaintext
An error occurred while creating the merge request.
```

This issue can occur because something prevents the branch being created for the merge request. When configuring SAST by using the UI, a branch with a numeric suffix is created, for example `set-sast-config-1`. Features such as a [push rule that validates branch names](../../project/repository/push_rules.md#validate-branch-names) may block the creation of the branch because of the naming format.

To resolve this issue, edit the push rule so that it allows the branch naming format required by SAST.

## SAST jobs run unexpectedly

The [SAST CI template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Security/SAST.gitlab-ci.yml) uses the `rules:exists` parameter. For performance reasons, a maximum of 10,000 matches are made against the given glob pattern. If the number of matches exceeds the maximum, the `rules:exists` parameter returns `true`. Depending on the number of files in your repository, a SAST job might be triggered even if the scanner doesn't support your project.

For more details about this limitation, see the [`rules:exists` documentation](../../../ci/yaml/_index.md#rulesexists).

## SpotBugs errors

Below are details of the most common SpotBugs errors that occur, and recommended actions.

### UTF-8 unmappable character errors

These errors occur when UTF-8 encoding isn't enabled on a SpotBugs build and there are UTF-8 characters in the source code. To fix this error, enable UTF-8 for your project's build tool.
For Gradle builds, add the following to your `build.gradle` file:

```groovy
compileJava.options.encoding = 'UTF-8'
tasks.withType(JavaCompile) {
    options.encoding = 'UTF-8'
}
```

For Maven builds, add the following to your `pom.xml` file:

```xml
<properties>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
```

### Project couldn't be built

If your `spotbugs-sast` job is failing at the build step with the message "Project couldn't be built", it's most likely because:

- Your project is asking SpotBugs to build with a tool that isn't part of its default tools. For a list of the SpotBugs default tools, see [SpotBugs' asdf dependencies](https://gitlab.com/gitlab-org/security-products/analyzers/spotbugs/-/blob/master/config/.gl-tool-versions).
- Your build needs custom configurations or additional dependencies that the analyzer's automatic build process can't accommodate.

The SpotBugs-based analyzer is only used for scanning Groovy code, but it may trigger in other cases, such as [when all SAST jobs run unexpectedly](#sast-jobs-run-unexpectedly).

The solution depends on whether you need to scan Groovy code:

- If you don't have any Groovy code, or don't need to scan it, you should [disable the SpotBugs analyzer](analyzers.md#disable-specific-default-analyzers).
- If you do need to scan Groovy code, you should use [pre-compilation](_index.md#using-pre-compilation-with-spotbugs-analyzer). Pre-compilation avoids these failures by scanning an artifact you've already built in your pipeline, rather than trying to compile it in the `spotbugs-sast` job.

### Java out of memory error

When a `spotbugs-sast` job is running you might get an error that states `java.lang.OutOfMemoryError`. This issue occurs when Java has run out of memory while scanning. To try to resolve this issue you can:

- Choose a lower [level of effort](_index.md#security-scanner-configuration).
- Set the CI/CD variable `JAVA_OPTS` to replace the default `-XX:MaxRAMPercentage=80` (for example: `-XX:MaxRAMPercentage=90`).
- [Tag a larger runner](../../../ci/runners/hosted_runners/linux.md#machine-types-available-for-linux---x86-64) in your `spotbugs-sast` job.

#### Related topics

- [Overhauling memory tuning in OpenJDK containers updates](https://developers.redhat.com/articles/2023/03/07/overhauling-memory-tuning-openjdk-containers-updates)
- [OpenJDK Configuration & Tuning](https://wiki.openjdk.org/display/zgc/Main#Main-Configuration&Tuning)
- [Garbage First Garbage Collector Tuning](https://www.oracle.com/technical-resources/articles/java/g1gc.html)

### Exception analyzing

If your job log contains a message of the form "Exception analyzing ... using detector ..." followed by a Java stack trace, this is **not** a failure of the SAST pipeline. SpotBugs has determined that the exception is [recoverable](https://github.com/spotbugs/spotbugs/blob/5ebd4439f6f8f2c11246b79f58c44324718d39d8/spotbugs/src/main/java/edu/umd/cs/findbugs/FindBugs2.java#L1200), logged it, and resumed analysis.

The first "..." part of the message is the class being analyzed. If it's not part of your project, you can likely ignore the message and the stack trace that follows. If, on the other hand, the class being analyzed is part of your project, consider creating an issue with the SpotBugs project on [GitHub](https://github.com/spotbugs/spotbugs/issues).

## Flawfinder encoding error

This occurs when Flawfinder encounters an invalid UTF-8 character. To fix this, apply [their documented advice](https://github.com/david-a-wheeler/flawfinder#character-encoding-errors) to your entire repository, or only per job using the [`before_script`](../../../ci/yaml/_index.md#before_script) feature.
You can configure the `before_script` section in each `.gitlab-ci.yml` file, or use a [pipeline execution policy](../policies/pipeline_execution_policies.md) to install the encoder and run the converter command. For example, you can add a `before_script` section to the `flawfinder-sast` job generated from the security scanner template to convert all files with a `.cpp` extension.

### Example pipeline execution policy YAML

```yaml
---
pipeline_execution_policy:
  - name: SAST
    description: 'Run SAST on C++ application'
    enabled: true
    pipeline_config_strategy: inject_ci
    content:
      include:
        - project: my-group/compliance-project
          file: flawfinder.yml
          ref: main
```

`flawfinder.yml`:

```yaml
include:
  - template: Jobs/SAST.gitlab-ci.yml

flawfinder-sast:
  before_script:
    - pip install cvt2utf
    - cvt2utf convert "$PWD" -i cpp
```

## Semgrep slowness, unexpected results, or other errors

If Semgrep is slow, reports too many false positives or false negatives, crashes, fails, or is otherwise broken, see the Semgrep docs for [troubleshooting GitLab SAST](https://semgrep.dev/docs/troubleshooting/semgrep-app#troubleshooting-gitlab-sast).
https://docs.gitlab.com/user/application_security/customize_rulesets
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/customize_rulesets.md
2025-08-13
doc/user/application_security/sast
[ "doc", "user", "application_security", "sast" ]
customize_rulesets.md
Application Security Testing
Static Analysis
Customize rulesets
Customize SAST analyzer rules in GitLab by disabling, overriding, or replacing predefined rules.
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Enabled](https://gitlab.com/gitlab-org/security-products/analyzers/ruleset/-/merge_requests/18) support for specifying ambiguous passthrough refs in GitLab 16.2.

{{< /history >}}

You can customize the behavior of our SAST analyzers by [defining a ruleset configuration file](#create-the-configuration-file) in the repository being scanned. There are two kinds of customization:

- Modifying the behavior of **predefined rules**. This includes:
  - [Disabling predefined rules](#disable-predefined-rules). Available for all analyzers.
  - [Overriding metadata of predefined rules](#override-metadata-of-predefined-rules). Available for all analyzers.
- Replacing predefined rules by [building a custom configuration](#build-a-custom-configuration) using **passthroughs**. Available only for the [Semgrep-based analyzer](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep).

GitLab Advanced SAST supports only modifying the behavior of **predefined rules**, not replacing predefined rules.

## Disable predefined rules

You can disable predefined rules for any SAST analyzer. When you disable a rule:

- All SAST analyzers that support custom rulesets still scan for the vulnerability. The results are removed as a processing step after the scan completes, and they don't appear in the [`gl-sast-report.json` artifact](_index.md#download-a-sast-report). GitLab Advanced SAST differs by excluding disabled rules from the initial scan.
- Findings for the disabled rule no longer appear in the [pipeline security tab](../detect/security_scanning_results.md).
- Existing findings for the disabled rule on the default branch are marked as [`No longer detected`](../vulnerability_report/_index.md#activity-filter) in the [vulnerability report](../vulnerability_report/_index.md).
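For instance, a minimal `.gitlab/sast-ruleset.toml` sketch that disables a single Semgrep rule might look like the following. The identifier value shown is illustrative; use the `type` and `value` from the finding you want to suppress:

```toml
[semgrep]
  [[semgrep.ruleset]]
    disable = true

    [semgrep.ruleset.identifier]
      type  = "semgrep_id"
      value = "gosec.G107-1"
```

After this file is merged to the default branch, findings for the disabled rule are filtered from subsequent scans.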
The Semgrep-based analyzer handles disabled rules differently: - If you disable a rule in the Semgrep-based analyzer, existing vulnerability findings for that rule are [automatically resolved](_index.md#automatic-vulnerability-resolution) after you merge the `sast-ruleset.toml` file to the default branch. See the [Schema](#schema) and [Examples](#examples) sections for information on how to configure this behavior. ## Override metadata of predefined rules You can override certain attributes of predefined rules for any SAST analyzer. This can be useful when adapting SAST to your existing workflow or tools. For example, you might want to override the severity of a vulnerability based on organizational policy, or choose a different message to display in the vulnerability report. See the [Schema](#schema) and [Examples](#examples) sections for information on how to configure this behavior. ## Build a custom configuration You can replace the [GitLab-maintained ruleset](rules.md) for the [Semgrep-based analyzer](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) with your own rules. You provide your customizations via passthroughs, which are composed into a passthrough chain at runtime and evaluated to produce a complete configuration. The underlying scanner is then executed against this new configuration. There are multiple passthrough types that let you provide configuration in different ways, such as using a file committed to your repository or inline in the ruleset configuration file. You can also choose how subsequent passthroughs in the chain are handled; they can overwrite or append to previous configuration. See the [Schema](#schema) and [Examples](#examples) sections for information on how to configure this behavior. ## Create the configuration file To create the ruleset configuration file: 1. Create a `.gitlab` directory at the root of your project, if one doesn't already exist. 1. 
Create a file named `sast-ruleset.toml` in the `.gitlab` directory.

## Specify a remote configuration file

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/393452) in 16.1.

{{< /history >}}

You can set a [CI/CD variable](../../../ci/variables/_index.md) to use a ruleset configuration file that's stored outside of the current repository. This can help you apply the same rules across multiple projects.

The `SAST_RULESET_GIT_REFERENCE` variable uses a format similar to [Git URLs](https://git-scm.com/docs/git-clone#_git_urls) for specifying a project URI, optional authentication, and optional Git SHA. The variable uses the following format:

```plaintext
[<AUTH_USER>[:<AUTH_PASSWORD>]@]<PROJECT_PATH>[@<GIT_SHA>]
```

{{< alert type="note" >}}

If a project has a `.gitlab/sast-ruleset.toml` file committed, that local configuration takes precedence and the file from `SAST_RULESET_GIT_REFERENCE` isn't used.

{{< /alert >}}

The following example [enables SAST](_index.md#configure-sast-in-your-cicd-yaml) and uses a shared ruleset customization file. In this example, the file is committed on the default branch of `example-ruleset-project` at the path `.gitlab/sast-ruleset.toml`.

```yaml
include:
  - template: Jobs/SAST.gitlab-ci.yml

variables:
  SAST_RULESET_GIT_REFERENCE: "gitlab.com/example-group/example-ruleset-project"
```

See [specify a private remote configuration example](#specify-a-private-remote-configuration) for advanced usage.

### Troubleshooting remote configuration files

If a remote configuration file doesn't seem to apply your customizations correctly, the possible causes are:

1. Your repository has a local `.gitlab/sast-ruleset.toml` file.
   - By default, a local file is used if it's present, even if a remote configuration is set as a variable.
   - You can set the [`SECURE_ENABLE_LOCAL_CONFIGURATION` CI/CD variable](../../../ci/variables/_index.md) to `false` to ignore the local configuration file.
1. There is a problem with authentication.
   - To check whether this is the cause of the problem, try referencing a configuration file from a repository location that doesn't require authentication.

## Schema

### The top-level section

The top-level section contains one or more configuration sections, defined as [TOML tables](https://toml.io/en/v1.0.0#table).

| Setting | Description |
| ------- | ----------- |
| `[$analyzer]` | Declares a configuration section for an analyzer. The name follows the names defined in the list of [SAST analyzers](analyzers.md#official-analyzers). |

Configuration example:

```toml
[semgrep]
...
```

Avoid combining sections that modify predefined rules with sections that build a custom ruleset, because a custom ruleset replaces the predefined rules completely, so the modifications have no effect.

### The `[$analyzer]` configuration section

The `[$analyzer]` section lets you customize the behavior of an analyzer. Valid properties differ based on the kind of configuration you're making.

| Setting | Applies to | Description |
| ------- | ---------- | ----------- |
| `[[$analyzer.ruleset]]` | Predefined rules | Defines modifications to an existing rule. |
| `interpolate` | All | If set to `true`, you can use `$VAR` in the configuration to evaluate environment variables. Use this feature with caution, so you don't leak secrets or tokens. (Default: `false`) |
| `description` | Passthroughs | Description of the custom ruleset. |
| `targetdir` | Passthroughs | The directory where the final configuration should be persisted. If empty, a directory with a random name is created. The directory can contain up to 100 MB of files. If the SAST job runs with non-root user privileges, ensure that the active user has read and write permissions for this directory. |
| `validate` | Passthroughs | If set to `true`, the content of each passthrough is validated. The validation works for `yaml`, `xml`, `json` and `toml` content.
The proper validator is identified based on the extension used in the `target` parameter of the `[[$analyzer.passthrough]]` section. (Default: `false`) | | `timeout` | Passthroughs | The maximum time to spend to evaluate the passthrough chain, before timing out. The timeout cannot exceed 300 seconds. (Default: 60) | #### `interpolate` {{< alert type="warning" >}} To reduce the risk of leaking secrets, use this feature with caution. {{< /alert >}} The example below shows a configuration that uses the `$GITURL` environment variable to access a private repository. The variable contains a username and token (for example `https://user:token@url`), so they're not explicitly stored in the configuration file. ```toml [semgrep] description = "My private Semgrep ruleset" interpolate = true [[semgrep.passthrough]] type = "git" value = "$GITURL" ref = "main" ``` ### The `[[$analyzer.ruleset]]` section The `[[$analyzer.ruleset]]` section targets and modifies a single predefined rule. You can define one to many of these sections per analyzer. | Setting | Description | | --------| ----------- | | `disable` | Whether the rule should be disabled. (Default: `false`) | | `[$analyzer.ruleset.identifier]` | Selects the predefined rule to be modified. | | `[$analyzer.ruleset.override]` | Defines the overrides for the rule. | Configuration example: ```toml [semgrep] [[semgrep.ruleset]] disable = true ... ``` ### The `[$analyzer.ruleset.identifier]` section The `[$analyzer.ruleset.identifier]` section defines the identifiers of the predefined rule that you wish to modify. | Setting | Description | | --------| ----------- | | `type` | The type of identifier used by the predefined rule. | | `value` | The value of the identifier used by the predefined rule. | You can look up the correct values for `type` and `value` by viewing the [`gl-sast-report.json`](_index.md#download-a-sast-report) produced by the analyzer. You can download this file as a job artifact from the analyzer's CI job. 
For example, the snippet below shows a finding from a `semgrep` rule with three identifiers. The `type` and `value` keys in the JSON object correspond to the values you should provide in this section.

```json
...
  "vulnerabilities": [
    {
      "id": "7331a4b7093875f6eb9f6eb1755b30cc792e9fb3a08c9ce673fb0d2207d7c9c9",
      "category": "sast",
      "message": "Key Exchange without Entity Authentication",
      "description": "Audit the use of ssh.InsecureIgnoreHostKey\n",
      ...
      "identifiers": [
        {
          "type": "semgrep_id",
          "name": "gosec.G106-1",
          "value": "gosec.G106-1"
        },
        {
          "type": "cwe",
          "name": "CWE-322",
          "value": "322",
          "url": "https://cwe.mitre.org/data/definitions/322.html"
        },
        {
          "type": "gosec_rule_id",
          "name": "Gosec Rule ID G106",
          "value": "G106"
        }
      ]
    }
    ...
  ]
...
```

Configuration example:

```toml
[semgrep]
  [[semgrep.ruleset]]
    [semgrep.ruleset.identifier]
      type = "semgrep_id"
      value = "gosec.G106-1"
...
```

### The `[$analyzer.ruleset.override]` section

The `[$analyzer.ruleset.override]` section allows you to override attributes of a predefined rule.

| Setting | Description |
| ------- | ----------- |
| `description` | A detailed description of the issue. |
| `message` | (Deprecated) A description of the issue. |
| `name` | The name of the rule. |
| `severity` | The severity of the rule. Valid options are `Critical`, `High`, `Medium`, `Low`, `Unknown`, and `Info`. |

{{< alert type="note" >}}

While `message` is populated by the analyzers, it has been [deprecated](https://gitlab.com/gitlab-org/security-products/analyzers/report/-/blob/1d86d5f2e61dc38c775fb0490ee27a45eee4b8b3/vulnerability.go#L22) in favor of `name` and `description`.

{{< /alert >}}

Configuration example:

```toml
[semgrep]
  [[semgrep.ruleset]]
    [semgrep.ruleset.override]
      severity = "Critical"
      name = "Command injection"
...
``` ### The `[[$analyzer.passthrough]]` section {{< alert type="note" >}} Passthrough configurations are available for the [Semgrep-based analyzer](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) only. {{< /alert >}} The `[[$analyzer.passthrough]]` section allows you to build a custom configuration for an analyzer. You can define up to 20 of these sections per analyzer. Passthroughs are composed into a _passthrough chain_ that evaluates into a complete configuration that replaces the predefined rules of the analyzer. Passthroughs are evaluated in order. Passthroughs listed later in the chain have a higher precedence and can overwrite or append to data yielded by previous passthroughs (depending on the `mode`). This is useful for cases where you need to use or modify an existing configuration. The size of the configuration generated by a single passthrough is limited to 10 MB. | Setting | Applies to | Description | | ------- | ---------- | ----------- | | `type` | All | One of `file`, `raw`, `git` or `url`. | | `target` | All | The target file to contain the data written by the passthrough evaluation. If empty, a random filename is used. | | `mode` | All | If `overwrite`, the `target` file is overwritten. If `append`, new content is appended to the `target` file. The `git` type only supports `overwrite`. (Default: `overwrite`) | | `ref` | `type = "git"` | Contains the name of the branch, tag, or the SHA to pull | | `subdir` | `type = "git"` | Used to select a subdirectory of the Git repository as the configuration source. | | `value` | All | For the `file`, `url`, and `git` types, defines the location of the file or Git repository. For the `raw` type, contains the inline configuration. | | `validator` | All | Used to explicitly invoke validators (`xml`, `yaml`, `json`, `toml`) on the target file after the evaluation of a passthrough. 
|

#### Passthrough types

| Type | Description |
| ------ | ----------- |
| `file` | Use a file that is present in the Git repository. |
| `raw` | Provide the configuration inline. |
| `git` | Pull the configuration from a remote Git repository. |
| `url` | Fetch the configuration using HTTP. |

{{< alert type="warning" >}}

When using the `raw` passthrough with a YAML snippet, it's recommended to format all indentation in the `sast-ruleset.toml` file as spaces. The YAML specification mandates spaces over tabs, and the analyzer fails to parse your custom ruleset unless the indentation is represented accordingly.

{{< /alert >}}

## Examples

### Disable predefined GitLab Advanced SAST rules

You can disable GitLab Advanced SAST rules or edit their metadata. The following example disables rules based on different criteria:

- A CWE identifier, which identifies an entire class of vulnerabilities.
- A GitLab Advanced SAST rule ID, which identifies a specific detection strategy used in GitLab Advanced SAST.
- An associated Semgrep rule ID, which is included in GitLab Advanced SAST findings for compatibility. This additional metadata allows findings to be automatically transitioned when both analyzers create similar findings in the same location.

These identifiers are shown in the [vulnerability details](../vulnerabilities/_index.md) of each vulnerability. You can also see each identifier and its associated `type` in the [downloadable SAST report artifact](_index.md#download-a-sast-report).
```toml [gitlab-advanced-sast] [[gitlab-advanced-sast.ruleset]] disable = true [gitlab-advanced-sast.ruleset.identifier] type = "cwe" value = "89" [[gitlab-advanced-sast.ruleset]] disable = true [gitlab-advanced-sast.ruleset.identifier] type = "gitlab-advanced-sast_id" value = "java-spring-csrf-unrestricted-requestmapping-atomic" [[gitlab-advanced-sast.ruleset]] disable = true [gitlab-advanced-sast.ruleset.identifier] type = "semgrep_id" value = "java_cookie_rule-CookieHTTPOnly" ``` ### Disable predefined rules of other SAST analyzers With the following custom ruleset configuration, the following rules are omitted from the report: - `semgrep` rules with a `semgrep_id` of `gosec.G106-1` or a `cwe` of `322`. - `sobelow` rules with a `sobelow_rule_id` of `sql_injection`. - `flawfinder` rules with a `flawfinder_func_name` of `memcpy`. ```toml [semgrep] [[semgrep.ruleset]] disable = true [semgrep.ruleset.identifier] type = "semgrep_id" value = "gosec.G106-1" [[semgrep.ruleset]] disable = true [semgrep.ruleset.identifier] type = "cwe" value = "322" [sobelow] [[sobelow.ruleset]] disable = true [sobelow.ruleset.identifier] type = "sobelow_rule_id" value = "sql_injection" [flawfinder] [[flawfinder.ruleset]] disable = true [flawfinder.ruleset.identifier] type = "flawfinder_func_name" value = "memcpy" ``` ### Override predefined rule metadata With the following custom ruleset configuration, vulnerabilities found with `semgrep` with a type `CWE` and a value `322` have their severity overridden to `Critical`. ```toml [semgrep] [[semgrep.ruleset]] [semgrep.ruleset.identifier] type = "cwe" value = "322" [semgrep.ruleset.override] severity = "Critical" ``` ### Build a custom configuration using a file passthrough for `semgrep` With the following custom ruleset configuration, the predefined ruleset of the `semgrep` analyzer is replaced with a custom ruleset contained in a file called `my-semgrep-rules.yaml` in the repository being scanned. 
```yaml # my-semgrep-rules.yml --- rules: - id: my-custom-rule pattern: print("Hello World") message: | Unauthorized use of Hello World. severity: ERROR languages: - python ``` ```toml [semgrep] description = "My custom ruleset for Semgrep" [[semgrep.passthrough]] type = "file" value = "my-semgrep-rules.yml" ``` ### Build a custom configuration using a passthrough chain for `semgrep` With the following custom ruleset configuration, the predefined ruleset of the `semgrep` analyzer is replaced with a custom ruleset produced by evaluating a chain of four passthroughs. Each passthrough produces a file that's written to the `/sgrules` directory within the container. A `timeout` of 60 seconds is set in case any Git remotes are unresponsive. Different passthrough types are demonstrated in this example: - Two `git` passthroughs, the first pulling `develop` branch from the `myrules` Git repository, and the second pulling revision `97f7686` from the `sast-rules` repository, and considering only files in the `go` subdirectory. - The `sast-rules` entry has a higher precedence because it appears later in the configuration. - If there's a filename collision between the two checkouts, files from the `sast-rules` repository overwrite files from the `myrules` repository. - A `raw` passthrough, which writes its `value` to `/sgrules/insecure.yml`. - A `url` passthrough, which fetches a configuration hosted at a URL and writes it to `/sgrules/gosec.yml`. Afterwards, Semgrep is invoked with the final configuration located under `/sgrules`. 
```toml
[semgrep]
  description = "My custom ruleset for Semgrep"
  targetdir = "/sgrules"
  timeout = 60

  [[semgrep.passthrough]]
    type = "git"
    value = "https://gitlab.com/user/myrules.git"
    ref = "develop"

  [[semgrep.passthrough]]
    type = "git"
    value = "https://gitlab.com/gitlab-org/secure/gsoc-sast-vulnerability-rules/playground/sast-rules.git"
    ref = "97f7686db058e2141c0806a477c1e04835c4f395"
    subdir = "go"

  [[semgrep.passthrough]]
    type = "raw"
    target = "insecure.yml"
    value = """
rules:
- id: "insecure"
  patterns:
  - pattern: "func insecure() {...}"
  message: |
    Insecure function insecure detected
  metadata:
    cwe: "CWE-200: Exposure of Sensitive Information to an Unauthorized Actor"
  severity: "ERROR"
  languages:
  - "go"
"""

  [[semgrep.passthrough]]
    type = "url"
    value = "https://semgrep.dev/c/p/gosec"
    target = "gosec.yml"
```

### Configure the mode for passthroughs in a chain

You can choose how to handle filename conflicts that occur between passthroughs in a chain. The default behavior is to overwrite existing files with the same name, but you can choose `mode = "append"` instead to append the content of later files onto earlier ones.

You can use the `append` mode for the `file`, `url`, and `raw` passthrough types only.

With the following custom ruleset configuration, two `raw` passthroughs are used to iteratively assemble the `/sgrules/my-rules.yml` file, which is then provided to Semgrep as the ruleset. Each passthrough appends a single rule to the ruleset. The first passthrough is responsible for initializing the top-level `rules` object, according to the [Semgrep rule syntax](https://semgrep.dev/docs/writing-rules/rule-syntax).

```toml
[semgrep]
  description = "My custom ruleset for Semgrep"
  targetdir = "/sgrules"
  validate = true

  [[semgrep.passthrough]]
    type = "raw"
    target = "my-rules.yml"
    value = """
rules:
- id: "insecure"
  patterns:
  - pattern: "func insecure() {...}"
  message: |
    Insecure function 'insecure' detected
  metadata:
    cwe: "..."
  severity: "ERROR"
  languages:
  - "go"
"""

  [[semgrep.passthrough]]
    type = "raw"
    mode = "append"
    target = "my-rules.yml"
    value = """
- id: "secret"
  patterns:
  - pattern-either:
    - pattern: '$MASK = "..."'
    - metavariable-regex:
        metavariable: "$MASK"
        regex: "(password|pass|passwd|pwd|secret|token)"
  message: |
    Use of hard-coded password
  metadata:
    cwe: "..."
  severity: "ERROR"
  languages:
  - "go"
"""
```

```yaml
# /sgrules/my-rules.yml
rules:
- id: "insecure"
  patterns:
  - pattern: "func insecure() {...}"
  message: |
    Insecure function 'insecure' detected
  metadata:
    cwe: "..."
  severity: "ERROR"
  languages:
  - "go"
- id: "secret"
  patterns:
  - pattern-either:
    - pattern: '$MASK = "..."'
    - metavariable-regex:
        metavariable: "$MASK"
        regex: "(password|pass|passwd|pwd|secret|token)"
  message: |
    Use of hard-coded password
  metadata:
    cwe: "..."
  severity: "ERROR"
  languages:
  - "go"
```

### Specify a private remote configuration

The following example [enables SAST](_index.md#configure-sast-in-your-cicd-yaml) and uses a shared ruleset customization file. The file is:

- Downloaded from a private project that requires authentication, by using a [group access token](../../group/settings/group_access_tokens.md) securely stored within a CI/CD variable.
- Checked out at a specific Git commit SHA instead of the default branch.

See [group access tokens](../../group/settings/group_access_tokens.md#bot-users-for-groups) for how to find the username associated with a group token.

```yaml
include:
  - template: Jobs/SAST.gitlab-ci.yml

variables:
  SAST_RULESET_GIT_REFERENCE: "group_2504721_bot_7c9311ffb83f2850e794d478ccee36f5:$PERSONAL_ACCESS_TOKEN@gitlab.com/example-group/example-ruleset-project@c8ea7e3ff126987fb4819cc35f2310755511c2ab"
```

### Demo projects

There are [demonstration projects](https://gitlab.com/gitlab-org/security-products/demos/SAST-analyzer-configurations) that illustrate some of these configuration options.
Many of these projects illustrate how to use remote rulesets to override or disable rules, and are grouped by the analyzer they apply to. There are also video demonstrations that walk through setting up remote rulesets:

- [IaC analyzer with a remote ruleset](https://youtu.be/VzJFyaKpA-8)
```toml [semgrep] description = "My custom ruleset for Semgrep" targetdir = "/sgrules" timeout = 60 [[semgrep.passthrough]] type = "git" value = "https://gitlab.com/user/myrules.git" ref = "develop" [[semgrep.passthrough]] type = "git" value = "https://gitlab.com/gitlab-org/secure/gsoc-sast-vulnerability-rules/playground/sast-rules.git" ref = "97f7686db058e2141c0806a477c1e04835c4f395" subdir = "go" [[semgrep.passthrough]] type = "raw" target = "insecure.yml" value = """ rules: - id: "insecure" patterns: - pattern: "func insecure() {...}" message: | Insecure function insecure detected metadata: cwe: "CWE-200: Exposure of Sensitive Information to an Unauthorized Actor" severity: "ERROR" languages: - "go" """ [[semgrep.passthrough]] type = "url" value = "https://semgrep.dev/c/p/gosec" target = "gosec.yml" ``` ### Configure the mode for passthroughs in a chain You can choose how to handle filename conflicts that occur between passthroughs in a chain. The default behavior is to overwrite existing files with the same name, but you can choose `mode = append` instead to append the content of later files onto earlier ones. You can use the `append` mode for the `file`, `url`, and `raw` passthrough types only. With the following custom ruleset configuration, two `raw` passthroughs are used to iteratively assemble the `/sgrules/my-rules.yml` file, which is then provided to Semgrep as the ruleset. Each passthrough appends a single rule to the ruleset. The first passthrough is responsible for initialising the top-level `rules` object, according to the [Semgrep rule syntax](https://semgrep.dev/docs/writing-rules/rule-syntax). ```toml [semgrep] description = "My custom ruleset for Semgrep" targetdir = "/sgrules" validate = true [[semgrep.passthrough]] type = "raw" target = "my-rules.yml" value = """ rules: - id: "insecure" patterns: - pattern: "func insecure() {...}" message: | Insecure function 'insecure' detected metadata: cwe: "..." 
severity: "ERROR" languages: - "go" """ [[semgrep.passthrough]] type = "raw" mode = "append" target = "my-rules.yml" value = """ - id: "secret" patterns: - pattern-either: - pattern: '$MASK = "..."' - metavariable-regex: metavariable: "$MASK" regex: "(password|pass|passwd|pwd|secret|token)" message: | Use of hard-coded password metadata: cwe: "..." severity: "ERROR" languages: - "go" """ ``` ```yaml # /sgrules/my-rules.yml rules: - id: "insecure" patterns: - pattern: "func insecure() {...}" message: | Insecure function 'insecure' detected metadata: cwe: "..." severity: "ERROR" languages: - "go" - id: "secret" patterns: - pattern-either: - pattern: '$MASK = "..."' - metavariable-regex: metavariable: "$MASK" regex: "(password|pass|passwd|pwd|secret|token)" message: | Use of hard-coded password metadata: cwe: "..." severity: "ERROR" languages: - "go" ``` ### Specify a private remote configuration The following example [enables SAST](_index.md#configure-sast-in-your-cicd-yaml) and uses a shared ruleset customization file. The file is: - Downloaded from a private project that requires authentication, by using a [Group Access Token](../../group/settings/group_access_tokens.md) securely stored within a CI variable. - Checked out at a specific Git commit SHA instead of the default branch. See [group access tokens](../../group/settings/group_access_tokens.md#bot-users-for-groups) for how to find the username associated with a group token. ```yaml include: - template: Jobs/SAST.gitlab-ci.yml variables: SAST_RULESET_GIT_REFERENCE: "group_2504721_bot_7c9311ffb83f2850e794d478ccee36f5:$PERSONAL_ACCESS_TOKEN@gitlab.com/example-group/example-ruleset-project@c8ea7e3ff126987fb4819cc35f2310755511c2ab" ``` ### Demo Projects There are [demonstration projects](https://gitlab.com/gitlab-org/security-products/demos/SAST-analyzer-configurations) that illustrate some of these configuration options. 
Many of these projects show how to use remote rulesets to override or disable rules, and they are grouped by the analyzer they apply to. There are also video demonstrations that walk through setting up remote rulesets: - [IaC analyzer with a remote ruleset](https://youtu.be/VzJFyaKpA-8)
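The `validator` setting from the passthrough table is not demonstrated in the examples above. As a hedged sketch (the rule content below is illustrative, not an official GitLab rule), a `raw` passthrough can explicitly validate its target file after evaluation:

```toml
[semgrep]
  description = "Ruleset with explicit YAML validation"
  targetdir = "/sgrules"

  [[semgrep.passthrough]]
    type      = "raw"
    target    = "rules.yml"
    validator = "yaml"
    value     = """
rules:
  - id: "example-rule"
    pattern: "eval(...)"
    message: |
      Avoid eval
    severity: "ERROR"
    languages:
      - "python"
"""
```

If the evaluated file is not valid YAML (for example, because tabs were used for indentation), the validation step is intended to surface the error early rather than letting the scan run with a broken ruleset.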
# SAST analyzers
{{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Moved](https://gitlab.com/groups/gitlab-org/-/epics/2098) from GitLab Ultimate to GitLab Free in 13.3. {{< /history >}} Static Application Security Testing (SAST) uses analyzers to detect vulnerabilities in source code. Each analyzer is a wrapper around a [scanner](../terminology/_index.md#scanner), a third-party code analysis tool. The analyzers are published as Docker images that SAST uses to launch dedicated containers for each analysis. We recommend a minimum of 4 GB RAM to ensure consistent performance of the analyzers. SAST default images are maintained by GitLab, but you can also integrate your own custom image. For each scanner, an analyzer: - Exposes its detection logic. - Handles its execution. - Converts its output to a [standard format](../terminology/_index.md#secure-report-format). ## Official analyzers SAST supports the following official analyzers: - [`gitlab-advanced-sast`](gitlab_advanced_sast.md), providing cross-file and cross-function taint analysis and improved detection accuracy. Ultimate only. - [`kubesec`](https://gitlab.com/gitlab-org/security-products/analyzers/kubesec), based on Kubesec. Off by default; see [Enabling KubeSec analyzer](_index.md#enabling-kubesec-analyzer). - [`pmd-apex`](https://gitlab.com/gitlab-org/security-products/analyzers/pmd-apex), based on PMD with rules for the Apex language. - [`semgrep`](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep), based on the Semgrep OSS engine [with GitLab-managed rules](rules.md#semgrep-based-analyzer). - [`sobelow`](https://gitlab.com/gitlab-org/security-products/analyzers/sobelow), based on Sobelow. - [`spotbugs`](https://gitlab.com/gitlab-org/security-products/analyzers/spotbugs), based on SpotBugs with the Find Sec Bugs plugin (Ant, Gradle and wrapper, Grails, Maven and wrapper, SBT). 
### Supported versions Official analyzers are released as container images, separate from the GitLab platform. Each analyzer version is compatible with a limited set of GitLab versions. When an analyzer version will no longer be supported in a future GitLab version, this change is announced in advance. For example, see the [announcement for GitLab 17.0](../../../update/deprecations.md#secure-analyzers-major-version-update). The supported major version for each official analyzer is reflected in its job definition in the [SAST CI/CD template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml). To see the analyzer version supported in a previous GitLab version, select a historical version of the SAST template file, such as [v16.11.0-ee](https://gitlab.com/gitlab-org/gitlab/-/blob/v16.11.0-ee/lib/gitlab/ci/templates/Jobs/SAST.gitlab-ci.yml?ref_type=tags) for GitLab 16.11.0. ## Analyzers that have reached End of Support The following GitLab analyzers have reached [End of Support](../../../update/terminology.md#end-of-support) status and do not receive updates. They were replaced by the Semgrep-based analyzer [with GitLab-managed rules](rules.md#semgrep-based-analyzer). After you upgrade to GitLab 17.3.1 or later, a one-time data migration [automatically resolves](_index.md#automatic-vulnerability-resolution) findings from the analyzers that reached End of Support. This includes all of the analyzers listed below except for SpotBugs, because SpotBugs still scans Groovy code. The migration only resolves vulnerabilities that you haven't confirmed or dismissed, and it doesn't affect vulnerabilities that were [automatically translated to Semgrep-based scanning](#transition-to-semgrep-based-scanning). For details, see [issue 444926](https://gitlab.com/gitlab-org/gitlab/-/issues/444926). 
| Analyzer | Languages scanned | End Of Support GitLab version | |------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------| | [Bandit](https://gitlab.com/gitlab-org/security-products/analyzers/bandit) | Python | [15.4](../../../update/deprecations.md#sast-analyzer-consolidation-and-cicd-template-changes) | | [Brakeman](https://gitlab.com/gitlab-org/security-products/analyzers/brakeman) | Ruby, including Ruby on Rails | [17.0](../../../update/deprecations.md#sast-analyzer-coverage-changing-in-gitlab-170) | | [ESLint](https://gitlab.com/gitlab-org/security-products/analyzers/eslint) with React and Security plugins | JavaScript and TypeScript, including React | [15.4](../../../update/deprecations.md#sast-analyzer-consolidation-and-cicd-template-changes) | | [Flawfinder](https://gitlab.com/gitlab-org/security-products/analyzers/flawfinder) | C, C++ | [17.0](../../../update/deprecations.md#sast-analyzer-coverage-changing-in-gitlab-170) | | [gosec](https://gitlab.com/gitlab-org/security-products/analyzers/gosec) | Go | [15.4](../../../update/deprecations.md#sast-analyzer-consolidation-and-cicd-template-changes) | | [MobSF](https://gitlab.com/gitlab-org/security-products/analyzers/mobsf) | Java and Kotlin, for Android applications only; Objective-C, for iOS applications only | [17.0](../../../update/deprecations.md#sast-analyzer-coverage-changing-in-gitlab-170) | | [NodeJsScan](https://gitlab.com/gitlab-org/security-products/analyzers/nodejs-scan) | JavaScript (Node.js only) | [17.0](../../../update/deprecations.md#sast-analyzer-coverage-changing-in-gitlab-170) | | [phpcs-security-audit](https://gitlab.com/gitlab-org/security-products/analyzers/phpcs-security-audit) | PHP | 
[17.0](../../../update/deprecations.md#sast-analyzer-coverage-changing-in-gitlab-170) | | [Security Code Scan](https://gitlab.com/gitlab-org/security-products/analyzers/security-code-scan) | .NET (including C#, Visual Basic) | [16.0](../../../update/deprecations.md#sast-analyzer-coverage-changing-in-gitlab-160) | | [SpotBugs](https://gitlab.com/gitlab-org/security-products/analyzers/spotbugs) | Java only<sup>1</sup> | [15.4](../../../update/deprecations.md#sast-analyzer-consolidation-and-cicd-template-changes) | | [SpotBugs](https://gitlab.com/gitlab-org/security-products/analyzers/spotbugs) | Kotlin and Scala only<sup>1</sup> | [17.0](../../../update/deprecations.md#sast-analyzer-coverage-changing-in-gitlab-170) | Footnotes: 1. SpotBugs remains a [supported analyzer](_index.md#supported-languages-and-frameworks) for Groovy. It only activates when Groovy code is detected. ## SAST analyzer features For an analyzer to be considered generally available, it is expected to minimally support the following features: - [Customizable configuration](_index.md#available-cicd-variables) - [Customizable rulesets](customize_rulesets.md) - [Scan projects](_index.md#supported-languages-and-frameworks) - Multi-project support - [Offline support](_index.md#running-sast-in-an-offline-environment) - [Output results in JSON report format](_index.md#download-a-sast-report) - [SELinux support](_index.md#running-sast-in-selinux) ## Post analyzers {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Post analyzers enrich the report output by an analyzer. A post analyzer doesn't modify report content directly. Instead, it enhances the results with additional properties, including: - CWEs. - Location tracking fields. 
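The enrichment performed by a post analyzer can be pictured with a small sketch. The dictionary shape below is simplified and hypothetical (real findings follow the Secure report format and carry more fields); it only illustrates the principle of appending identifiers without altering the original finding:

```python
import copy

# Hypothetical, simplified finding as a raw analyzer might report it.
finding = {
    "name": "SQL Injection",
    "identifiers": [
        {"type": "semgrep_id", "name": "gosec.G201-1", "value": "gosec.G201-1"},
    ],
}

def enrich_with_cwe(vuln, cwe_number):
    """Sketch of post-analyzer enrichment: append a CWE identifier
    while leaving the original finding untouched."""
    enriched = copy.deepcopy(vuln)
    enriched["identifiers"].append({
        "type": "cwe",
        "name": f"CWE-{cwe_number}",
        "value": str(cwe_number),
    })
    return enriched

enriched = enrich_with_cwe(finding, 89)
print([i["type"] for i in enriched["identifiers"]])  # ['semgrep_id', 'cwe']
print(len(finding["identifiers"]))  # 1, the original is unmodified
```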
## Transition to Semgrep-based scanning In addition to the [GitLab Advanced SAST analyzer](gitlab_advanced_sast.md), GitLab also provides a [Semgrep-based analyzer](https://gitlab.com/gitlab-org/security-products/analyzers/semgrep) that covers [multiple languages](_index.md#supported-languages-and-frameworks). GitLab maintains the analyzer and writes [detection rules](rules.md) for it. These rules replace language-specific analyzers that were used in previous releases. ### Vulnerability translation The Vulnerability Management system automatically moves vulnerabilities from the old analyzer to a new Semgrep-based finding when possible. For translation to the GitLab Advanced SAST analyzer, refer to the [GitLab Advanced SAST documentation](gitlab_advanced_sast.md). When this happens, the system combines the vulnerabilities from each analyzer into a single record. But, vulnerabilities may not match up if: - The new Semgrep-based rule detects the vulnerability in a different location, or in a different way, than the old analyzer did. - You previously [disabled SAST analyzers](#disable-specific-default-analyzers). This can interfere with automatic translation by preventing necessary identifiers from being recorded for each vulnerability. If a vulnerability doesn't match: - The original vulnerability is marked as "no longer detected" in the vulnerability report. - A new vulnerability is then created based on the Semgrep-based finding. ## Customize analyzers Use [CI/CD variables](_index.md#available-cicd-variables) in your `.gitlab-ci.yml` file to customize the behavior of your analyzers. ### Use a custom Docker mirror You can use a custom Docker registry, instead of the GitLab registry, to host the analyzers' images. Prerequisites: - The custom Docker registry must provide images for all the official analyzers. {{< alert type="note" >}} This variable affects all Secure analyzers, not just the analyzers for SAST. 
{{< /alert >}} To have GitLab download the analyzers' images from a custom Docker registry, define the prefix with the `SECURE_ANALYZERS_PREFIX` CI/CD variable. For example, the following instructs SAST to pull `my-docker-registry/gitlab-images/semgrep` instead of `registry.gitlab.com/security-products/semgrep`: ```yaml include: - template: Jobs/SAST.gitlab-ci.yml variables: SECURE_ANALYZERS_PREFIX: my-docker-registry/gitlab-images ``` ### Disable all default analyzers You can disable all default SAST analyzers, leaving only [custom analyzers](#custom-analyzers) enabled. To disable all default analyzers, set the CI/CD variable `SAST_DISABLED` to `"true"` in your `.gitlab-ci.yml` file. Example: ```yaml include: - template: Jobs/SAST.gitlab-ci.yml variables: SAST_DISABLED: "true" ``` ### Disable specific default analyzers Analyzers are run automatically according to the source code languages detected. However, you can disable select analyzers. To disable select analyzers, set the CI/CD variable `SAST_EXCLUDED_ANALYZERS` to a comma-delimited string listing the analyzers that you want to prevent running. For example, to disable the `spotbugs` analyzer: ```yaml include: - template: Jobs/SAST.gitlab-ci.yml variables: SAST_EXCLUDED_ANALYZERS: "spotbugs" ``` ### Custom analyzers You can provide your own analyzers by defining jobs in your CI/CD configuration. For consistency with the default analyzers, you should add the suffix `-sast` to your custom SAST jobs. #### Example custom analyzer This example shows how to add a scanning job that's based on the Docker image `my-docker-registry/analyzers/csharp`. It runs the script `/analyzer run` and outputs a SAST report `gl-sast-report.json`. Define the following in your `.gitlab-ci.yml` file: ```yaml csharp-sast: image: name: "my-docker-registry/analyzers/csharp" script: - /analyzer run artifacts: reports: sast: gl-sast-report.json ```
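The `gl-sast-report.json` file produced by the custom job must follow the [standard report format](../terminology/_index.md#secure-report-format). As a heavily simplified, hypothetical sketch (the real schema requires additional fields, and all values below, including the file path and CWE, are placeholders; consult the published SAST report schema), a script inside the analyzer image might emit something like:

```python
import json

# Heavily simplified sketch: a real report must conform to the published
# SAST report JSON schema and requires additional fields. All values
# below are illustrative placeholders.
report = {
    "version": "15.0.7",  # placeholder schema version
    "vulnerabilities": [
        {
            "name": "Hard-coded credential",
            "description": "A password literal was found in source code.",
            "severity": "High",
            "location": {"file": "src/config.cs", "start_line": 12},
            "identifiers": [
                {"type": "cwe", "name": "CWE-798", "value": "798"},
            ],
        }
    ],
}

# The CI job declares this file as its `sast` report artifact.
with open("gl-sast-report.json", "w") as f:
    json.dump(report, f, indent=2)
```

GitLab then ingests the artifact the same way it ingests reports from the official analyzers.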
# SAST rules
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

GitLab SAST uses a set of [analyzers](analyzers.md) to scan code for potential vulnerabilities. It automatically chooses which analyzers to run based on which programming languages are found in the repository. Each analyzer processes the code, then uses rules to find possible weaknesses in source code. The analyzer's rules determine what types of weaknesses it reports.

## Scope of rules

GitLab SAST focuses on security weaknesses and vulnerabilities. It does not aim to find general bugs or assess overall code quality or maintainability.

GitLab manages the detection ruleset with a focus on identifying actionable security weaknesses and vulnerabilities. The ruleset is designed to provide broad coverage against the most impactful vulnerabilities while minimizing false positives (reported vulnerabilities where no vulnerability exists).

GitLab SAST is designed to be used in its default configuration, but you can [configure detection rules](#configure-rules-in-your-projects) if needed.

## Source of rules

### GitLab Advanced SAST

{{< details >}}

- Tier: Ultimate

{{< /details >}}

GitLab creates, maintains, and supports the rules for [GitLab Advanced SAST](gitlab_advanced_sast.md). Its rules are custom-built to leverage the GitLab Advanced SAST scanning engine's cross-file, cross-function analysis capabilities. The GitLab Advanced SAST ruleset is not open source, and is not the same ruleset as any other analyzer.

For details of which types of vulnerabilities GitLab Advanced SAST detects, see [When vulnerabilities are reported](gitlab_advanced_sast.md#when-vulnerabilities-are-reported).

### Semgrep-based analyzer

GitLab creates, maintains, and supports the rules that are used in the Semgrep-based GitLab SAST analyzer. This analyzer scans [many languages](_index.md#supported-languages-and-frameworks) in a single CI/CD pipeline job. It combines:

- the Semgrep open-source engine.
- a GitLab-managed detection ruleset, which is managed in [the GitLab-managed open source `sast-rules` project](https://gitlab.com/gitlab-org/security-products/sast-rules).
- GitLab proprietary technology for [vulnerability tracking](_index.md#advanced-vulnerability-tracking).

### Other analyzers

GitLab SAST uses other analyzers to scan the remaining [supported languages](_index.md#supported-languages-and-frameworks). The rules for these scans are defined in the upstream projects for each scanner.

## How rule updates are released

GitLab updates rules regularly based on customer feedback and internal research. Rules are released as part of the container image for each analyzer. You automatically receive updated analyzers and rules unless you [manually pin analyzers to a specific version](_index.md#pinning-to-minor-image-version). Analyzers and their rules are updated [at least monthly](../detect/vulnerability_scanner_maintenance.md) if relevant updates are available.

### Rule update policies

Updates to SAST rules are not [breaking changes](../../../update/terminology.md#breaking-change). This means that rules may be added, removed, or updated without prior notice. However, to make rule changes more convenient and understandable, GitLab:

- Documents [rule changes](#important-rule-changes) that are planned or completed.
- [Automatically resolves](_index.md#automatic-vulnerability-resolution) findings from rules after they are removed, for Semgrep-based analyzers.
- Enables you to [change the status of vulnerabilities with activity "no longer detected" in bulk](../vulnerability_report/_index.md#change-status-of-vulnerabilities).
- Evaluates proposed rule changes for the impact they will have on existing vulnerability records.

## Configure rules in your projects

You should use the default SAST rules unless you have a specific reason to make a change. The default ruleset is designed to be relevant to most projects. However, you can [customize which rules are used](#apply-local-rule-preferences) or [control how rule changes are rolled out](#coordinate-rule-rollouts) if needed.

### Apply local rule preferences

You may want to customize the rules used in SAST scans because:

- Your organization has assigned priorities to specific vulnerability classes, such as choosing to address Cross-Site Scripting (XSS) or SQL Injection before other classes of vulnerabilities.
- You believe that a specific rule produces false positive results or isn't relevant in the context of your codebase.

To change which rules are used to scan your projects, adjust their severity, or apply other preferences, see [Customize rulesets](customize_rulesets.md). If your customization would benefit other users, consider [reporting a problem to GitLab](#report-a-problem-with-a-gitlab-sast-rule).

### Coordinate rule rollouts

To control the rollout of rule changes, you can [pin SAST analyzers to a specific version](_index.md#pinning-to-minor-image-version). If you want to make these changes at the same time across multiple projects, consider setting the variables in:

- [Group-level CI/CD variables](../../../ci/variables/_index.md#for-a-group).
- Custom CI/CD variables in a [Scan Execution Policy](../policies/scan_execution_policies.md).

## Report a problem with a GitLab SAST rule

<!-- This title is intended to match common search queries users might make. -->

GitLab welcomes contributions to the rulesets used in SAST. Contributions might address:

- False positive results, where the reported potential vulnerability is incorrect.
- False negative results, where SAST did not report a potential vulnerability that truly exists.
- The name, severity rating, description, guidance, or other explanatory content for a rule.

If you believe a detection rule could be improved for all users, consider:

- Submitting a merge request to [the `sast-rules` repository](https://gitlab.com/gitlab-org/security-products/sast-rules). See the [contribution instructions](https://gitlab.com/gitlab-org/security-products/sast-rules#contributing) for details.
- Filing an issue in [the `gitlab-org/gitlab` issue tracker](https://gitlab.com/gitlab-org/gitlab/-/issues/). Post a comment that says `@gitlab-bot label ~"group::static analysis" ~"Category:SAST"` so your issue lands in the correct triage workflow.

## Important rule changes

GitLab updates SAST rules [regularly](#how-rule-updates-are-released). This section highlights the most important changes. More details are available in release announcements and in the CHANGELOG links provided.

### Rule changes in the Semgrep-based analyzer

Key changes to the GitLab-managed ruleset for Semgrep-based scanning include:

- Beginning in GitLab 16.3, the GitLab Static Analysis and Vulnerability Research teams are working to remove rules that tend to produce too many false positive results or not enough actionable true positive results. Existing findings from these removed rules are [automatically resolved](_index.md#automatic-vulnerability-resolution); they no longer appear in the [Security Dashboard](../security_dashboard/_index.md#project-security-dashboard) or in the default view of the [vulnerability report](../vulnerability_report/_index.md). This work is tracked in [epic 10907](https://gitlab.com/groups/gitlab-org/-/epics/10907).
- In GitLab 16.0 through 16.2, the GitLab Vulnerability Research team updated the guidance that's included in each result.
- In GitLab 15.10, the `detect-object-injection` rule was [removed by default](https://gitlab.com/gitlab-org/gitlab/-/issues/373920) and its findings were [automatically resolved](_index.md#automatic-vulnerability-resolution).

For more details, see the [CHANGELOG for `sast-rules`](https://gitlab.com/gitlab-org/security-products/sast-rules/-/blob/main/CHANGELOG.md).

### Rule changes in other analyzers

See the CHANGELOG file for each [analyzer](analyzers.md) for details of the changes, including new or updated rules, included in each version.
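The local rule preferences described under "Apply local rule preferences" are defined in a `.gitlab/sast-ruleset.toml` file in the project being scanned. The following is an illustrative sketch only: the rule identifier shown is an example, and the authoritative schema is documented in [Customize rulesets](customize_rulesets.md). Disabling a single Semgrep-based rule might look like this:

```toml
# .gitlab/sast-ruleset.toml — sketch of a local rule preference.
# The identifier value below is only an example; use the ID of the
# rule you actually want to disable.
[semgrep]
  [[semgrep.ruleset]]
    disable = true

    [semgrep.ruleset.identifier]
      type  = "semgrep_id"
      value = "eslint.detect-object-injection"
```

A change like this applies only to the project that contains the file. If the preference would help other users, consider contributing it upstream instead.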
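The rollout coordination described under "Coordinate rule rollouts" relies on pinning analyzer image versions through CI/CD variables. A minimal `.gitlab-ci.yml` sketch follows; it assumes the `SAST_ANALYZER_IMAGE_TAG` variable and the `Jobs/SAST.gitlab-ci.yml` template, and the tag value is a placeholder you should replace with a version you have validated:

```yaml
# Sketch: pin the Semgrep-based SAST analyzer to a specific version.
include:
  - template: Jobs/SAST.gitlab-ci.yml

semgrep-sast:
  variables:
    # Placeholder tag — replace with the analyzer version you validated.
    # Set this at the job level, not the pipeline level.
    SAST_ANALYZER_IMAGE_TAG: "5"
```

To roll the same pin out across many projects at once, set the variable in group-level CI/CD variables or in a Scan Execution Policy, as described above.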
---
stage: Application Security Testing
group: Static Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: SAST rules
breadcrumbs:
- doc
- user
- application_security
- sast
---
https://docs.gitlab.com/user/application_security/advanced_sast_coverage
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/advanced_sast_coverage.md
2025-08-13
doc/user/application_security/sast
[ "doc", "user", "application_security", "sast" ]
advanced_sast_coverage.md
Application Security Testing
Static Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
GitLab Advanced SAST CWE coverage
null
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

[GitLab Advanced SAST](gitlab_advanced_sast.md) finds many types of potential security vulnerabilities in code written in [supported languages](gitlab_advanced_sast.md#supported-languages). GitLab assigns a matching [Common Weakness Enumeration (CWE)](https://cwe.mitre.org) identifier to each potential vulnerability.

CWE identifiers are an industry-standard way to identify security weaknesses, but it's important to know:

- CWEs are arranged in a tree structure. For example, [CWE-22: Path Traversal](https://cwe.mitre.org/data/definitions/22.html) is a parent of [CWE-23: Relative Path Traversal](https://cwe.mitre.org/data/definitions/23.html). A scanner that specifically detects relative path traversal weaknesses (CWE-23) by definition also detects a portion of the more general path traversal category (CWE-22).
- For clarity, this table identifies the exact CWE identifiers that are assigned to GitLab Advanced SAST rules. It doesn't report parent identifiers.

To learn more about the rules used in GitLab Advanced SAST, see [SAST rules](rules.md#gitlab-advanced-sast).

## CWE coverage by language

GitLab Advanced SAST finds the following types of weaknesses in each programming language:

<!-- Table contents are automatically produced by a job in https://gitlab.com/gitlab-org/security-products/oxeye/product/oxeye-rulez.
-->

| CWE | CWE Description | C# | Go | Java | JavaScript, TypeScript | PHP | Python | Ruby |
|:----|:----------------|:---|:---|:-----|:-----------------------|:----|:-------|:-----|
| [CWE-15](https://cwe.mitre.org/data/definitions/15.html) | External Control of System or Configuration Setting | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-22](https://cwe.mitre.org/data/definitions/22.html) | Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal') | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [CWE-23](https://cwe.mitre.org/data/definitions/23.html) | Relative Path Traversal | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No |
| [CWE-73](https://cwe.mitre.org/data/definitions/73.html) | External Control of File Name or Path | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [CWE-76](https://cwe.mitre.org/data/definitions/76.html) | Improper Neutralization of Equivalent Special Elements | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [CWE-77](https://cwe.mitre.org/data/definitions/77.html) | Improper Neutralization of Special Elements used in a Command ('Command Injection') | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-78](https://cwe.mitre.org/data/definitions/78.html) | Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection') | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [CWE-79](https://cwe.mitre.org/data/definitions/79.html) | Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [CWE-80](https://cwe.mitre.org/data/definitions/80.html) | Improper Neutralization of Script-Related HTML Tags in a Web Page (Basic XSS) | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-88](https://cwe.mitre.org/data/definitions/88.html) | Improper Neutralization of Argument Delimiters in a Command ('Argument Injection') | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-89](https://cwe.mitre.org/data/definitions/89.html) | Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection') | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [CWE-90](https://cwe.mitre.org/data/definitions/90.html) | Improper Neutralization of Special Elements used in an LDAP Query ('LDAP Injection') | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No |
| [CWE-91](https://cwe.mitre.org/data/definitions/91.html) | XML Injection (aka Blind XPath Injection) | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-94](https://cwe.mitre.org/data/definitions/94.html) | Improper Control of Generation of Code ('Code Injection') | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [CWE-95](https://cwe.mitre.org/data/definitions/95.html) | Improper Neutralization of Directives in Dynamically Evaluated Code ('Eval Injection') | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [CWE-113](https://cwe.mitre.org/data/definitions/113.html) | Improper Neutralization of CRLF Sequences in HTTP Headers ('HTTP Request/Response Splitting') | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No |
| [CWE-116](https://cwe.mitre.org/data/definitions/116.html) | Improper Encoding or Escaping of Output | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No |
| [CWE-117](https://cwe.mitre.org/data/definitions/117.html) | Improper Output Neutralization for Logs | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-118](https://cwe.mitre.org/data/definitions/118.html) | Incorrect Access of Indexable Resource ('Range Error') | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-125](https://cwe.mitre.org/data/definitions/125.html) | Out-of-bounds Read | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-155](https://cwe.mitre.org/data/definitions/155.html) | Improper Neutralization of Wildcards or Matching Symbols | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No |
| [CWE-180](https://cwe.mitre.org/data/definitions/180.html) | Incorrect Behavior Order: Validate Before Canonicalize | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-182](https://cwe.mitre.org/data/definitions/182.html) | Collapse of Data into Unsafe Value | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-185](https://cwe.mitre.org/data/definitions/185.html) | Incorrect Regular Expression | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [CWE-190](https://cwe.mitre.org/data/definitions/190.html) | Integer Overflow or Wraparound | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-191](https://cwe.mitre.org/data/definitions/191.html) | Integer Underflow (Wrap or Wraparound) | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-208](https://cwe.mitre.org/data/definitions/208.html) | Observable Timing Discrepancy | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-209](https://cwe.mitre.org/data/definitions/209.html) | Generation of Error Message Containing Sensitive Information | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [CWE-242](https://cwe.mitre.org/data/definitions/242.html) | Use of Inherently Dangerous Function | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-256](https://cwe.mitre.org/data/definitions/256.html) | Plaintext Storage of a Password | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-272](https://cwe.mitre.org/data/definitions/272.html) | Least Privilege Violation | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-276](https://cwe.mitre.org/data/definitions/276.html) | Incorrect Default Permissions | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [CWE-295](https://cwe.mitre.org/data/definitions/295.html) | Improper Certificate Validation | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [CWE-297](https://cwe.mitre.org/data/definitions/297.html) | Improper Validation of Certificate with Host Mismatch | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-306](https://cwe.mitre.org/data/definitions/306.html) | Missing Authentication for Critical Function | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-311](https://cwe.mitre.org/data/definitions/311.html) | Missing Encryption of Sensitive Data | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [CWE-319](https://cwe.mitre.org/data/definitions/319.html) | Cleartext Transmission of Sensitive Information | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No |
| [CWE-322](https://cwe.mitre.org/data/definitions/322.html) | Key Exchange without Entity Authentication | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No |
| [CWE-323](https://cwe.mitre.org/data/definitions/323.html) | Reusing a Nonce, Key Pair in Encryption | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-326](https://cwe.mitre.org/data/definitions/326.html) | Inadequate Encryption Strength | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [CWE-327](https://cwe.mitre.org/data/definitions/327.html) | Use of a Broken or Risky Cryptographic Algorithm | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No |
| [CWE-328](https://cwe.mitre.org/data/definitions/328.html) | Use of Weak Hash | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [CWE-338](https://cwe.mitre.org/data/definitions/338.html) | Use of Cryptographically Weak Pseudo-Random Number Generator (PRNG) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No |
| [CWE-346](https://cwe.mitre.org/data/definitions/346.html) | Origin Validation Error | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-347](https://cwe.mitre.org/data/definitions/347.html) | Improper Verification of Cryptographic Signature | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No |
| [CWE-348](https://cwe.mitre.org/data/definitions/348.html) | Use of Less Trusted Source | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-352](https://cwe.mitre.org/data/definitions/352.html) | Cross-Site Request Forgery (CSRF) | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [CWE-358](https://cwe.mitre.org/data/definitions/358.html) | Improperly Implemented Security Check for Standard | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-369](https://cwe.mitre.org/data/definitions/369.html) | Divide By Zero | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [CWE-377](https://cwe.mitre.org/data/definitions/377.html) | Insecure Temporary File | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No |
| [CWE-409](https://cwe.mitre.org/data/definitions/409.html) | Improper Handling of
Highly Compressed Data (Data Amplification) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-470](https://cwe.mitre.org/data/definitions/470.html) | Use of Externally-Controlled Input to Select Classes or Code ('Unsafe Reflection') | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-489](https://cwe.mitre.org/data/definitions/489.html) | Active Debug Code | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-497](https://cwe.mitre.org/data/definitions/497.html) | Exposure of Sensitive System Information to an Unauthorized Control Sphere | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-501](https://cwe.mitre.org/data/definitions/501.html) | Trust Boundary Violation | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-502](https://cwe.mitre.org/data/definitions/502.html) | Deserialization of Untrusted Data | {{< icon name="check-circle" 
>}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | [CWE-521](https://cwe.mitre.org/data/definitions/521.html) | Weak Password Requirements | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-522](https://cwe.mitre.org/data/definitions/522.html) | Insufficiently Protected Credentials | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-552](https://cwe.mitre.org/data/definitions/552.html) | Files or Directories Accessible to External Parties | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-554](https://cwe.mitre.org/data/definitions/554.html) | ASP.NET Misconfiguration: Not Using Input Validation Framework | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-598](https://cwe.mitre.org/data/definitions/598.html) | Use of GET Request Method With Sensitive Query Strings | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon 
name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-599](https://cwe.mitre.org/data/definitions/599.html) | Missing Validation of OpenSSL Certificate | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-601](https://cwe.mitre.org/data/definitions/601.html) | URL Redirection to Untrusted Site ('Open Redirect') | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | [CWE-606](https://cwe.mitre.org/data/definitions/606.html) | Unchecked Input for Loop Condition | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-611](https://cwe.mitre.org/data/definitions/611.html) | Improper Restriction of XML External Entity Reference | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-613](https://cwe.mitre.org/data/definitions/613.html) | Insufficient Session Expiration | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon 
name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-614](https://cwe.mitre.org/data/definitions/614.html) | Sensitive Cookie in HTTPS Session Without 'Secure' Attribute | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-639](https://cwe.mitre.org/data/definitions/639.html) | Authorization Bypass Through User-Controlled Key | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [CWE-643](https://cwe.mitre.org/data/definitions/643.html) | Improper Neutralization of Data within XPath Expressions ('XPath Injection') | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-704](https://cwe.mitre.org/data/definitions/704.html) | Incorrect Type Conversion or Cast | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-732](https://cwe.mitre.org/data/definitions/732.html) | Incorrect Permission Assignment for Critical Resource | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< 
icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-749](https://cwe.mitre.org/data/definitions/749.html) | Exposed Dangerous Method or Function | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [CWE-754](https://cwe.mitre.org/data/definitions/754.html) | Improper Check for Unusual or Exceptional Conditions | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [CWE-757](https://cwe.mitre.org/data/definitions/757.html) | Selection of Less-Secure Algorithm During Negotiation ('Algorithm Downgrade') | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-770](https://cwe.mitre.org/data/definitions/770.html) | Allocation of Resources Without Limits or Throttling | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-776](https://cwe.mitre.org/data/definitions/776.html) | Improper Restriction of Recursive Entity References in DTDs ('XML Entity Expansion') | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< 
icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-780](https://cwe.mitre.org/data/definitions/780.html) | Use of RSA Algorithm without OAEP | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-787](https://cwe.mitre.org/data/definitions/787.html) | Out-of-bounds Write | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-798](https://cwe.mitre.org/data/definitions/798.html) | Use of Hard-coded Credentials | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-913](https://cwe.mitre.org/data/definitions/913.html) | Improper Control of Dynamically-Managed Code Resources | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-915](https://cwe.mitre.org/data/definitions/915.html) | Improperly Controlled Modification of Dynamically-Determined Object Attributes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | 
[CWE-917](https://cwe.mitre.org/data/definitions/917.html) | Improper Neutralization of Special Elements used in an Expression Language Statement ('Expression Language Injection') | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-918](https://cwe.mitre.org/data/definitions/918.html) | Server-Side Request Forgery (SSRF) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | [CWE-942](https://cwe.mitre.org/data/definitions/942.html) | Permissive Cross-domain Policy with Untrusted Domains | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-943](https://cwe.mitre.org/data/definitions/943.html) | Improper Neutralization of Special Elements in Data Query Logic | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-1004](https://cwe.mitre.org/data/definitions/1004.html) | Sensitive Cookie Without 'HttpOnly' Flag | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} 
Yes | | [CWE-1021](https://cwe.mitre.org/data/definitions/1021.html) | Improper Restriction of Rendered UI Layers or Frames | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-1104](https://cwe.mitre.org/data/definitions/1104.html) | Use of Unmaintained Third Party Components | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-1204](https://cwe.mitre.org/data/definitions/1204.html) | Generation of Weak Initialization Vector (IV) | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-1275](https://cwe.mitre.org/data/definitions/1275.html) | Sensitive Cookie with Improper SameSite Attribute | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-1321](https://cwe.mitre.org/data/definitions/1321.html) | Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution') | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | 
[CWE-1327](https://cwe.mitre.org/data/definitions/1327.html) | Binding to an Unrestricted IP Address | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-1333](https://cwe.mitre.org/data/definitions/1333.html) | Inefficient Regular Expression Complexity | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | [CWE-1336](https://cwe.mitre.org/data/definitions/1336.html) | Improper Neutralization of Special Elements Used in a Template Engine | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-1390](https://cwe.mitre.org/data/definitions/1390.html) | Weak Authentication | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< alert type="note" >}} Did this page answer the question you had? If not, comment on [epic 15343](https://gitlab.com/groups/gitlab-org/-/epics/15343) to share your use case. {{< /alert >}}
---
stage: Application Security Testing
group: Static Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: GitLab Advanced SAST CWE coverage
breadcrumbs:
- doc
- user
- application_security
- sast
---

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

[GitLab Advanced SAST](gitlab_advanced_sast.md) finds many types of potential security vulnerabilities in code written in [supported languages](gitlab_advanced_sast.md#supported-languages). GitLab assigns a matching [Common Weakness Enumeration (CWE)](https://cwe.mitre.org) identifier to each potential vulnerability.

CWE identifiers are an industry-standard way to identify security weaknesses, but it's important to know:

- CWEs are arranged in a tree structure. For example, [CWE-22: Path Traversal](https://cwe.mitre.org/data/definitions/22.html) is a parent of [CWE-23: Relative Path Traversal](https://cwe.mitre.org/data/definitions/23.html). A scanner that specifically detects relative path traversal weaknesses (CWE-23) by definition also detects a portion of the more general path traversal category (CWE-22).
- For clarity, this table identifies the exact CWE identifiers that are assigned to GitLab Advanced SAST rules. It doesn't report parent identifiers.

To learn more about the rules used in GitLab Advanced SAST, see [SAST rules](rules.md#gitlab-advanced-sast).

## CWE coverage by language

GitLab Advanced SAST finds the following types of weaknesses in each programming language:

<!-- Table contents are automatically produced by a job in https://gitlab.com/gitlab-org/security-products/oxeye/product/oxeye-rulez.
-->

| CWE | CWE Description | C# | Go | Java | JavaScript, TypeScript | PHP | Python | Ruby |
|:----|:----------------|:---|:---|:-----|:-----------------------|:----|:-------|:-----|
| [CWE-15](https://cwe.mitre.org/data/definitions/15.html) | External Control of System or Configuration Setting | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-22](https://cwe.mitre.org/data/definitions/22.html) | Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal') | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [CWE-23](https://cwe.mitre.org/data/definitions/23.html) | Relative Path Traversal | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No |
| [CWE-73](https://cwe.mitre.org/data/definitions/73.html) | External Control of File Name or Path | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [CWE-76](https://cwe.mitre.org/data/definitions/76.html) | Improper Neutralization of Equivalent Special Elements | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [CWE-77](https://cwe.mitre.org/data/definitions/77.html) | Improper Neutralization of Special Elements used in a Command ('Command Injection') | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-78](https://cwe.mitre.org/data/definitions/78.html) | Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection') | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [CWE-79](https://cwe.mitre.org/data/definitions/79.html) | Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [CWE-80](https://cwe.mitre.org/data/definitions/80.html) | Improper Neutralization of Script-Related HTML Tags in a Web Page (Basic XSS) | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-88](https://cwe.mitre.org/data/definitions/88.html) | Improper Neutralization of Argument Delimiters in a Command ('Argument Injection') | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-89](https://cwe.mitre.org/data/definitions/89.html) | Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection') | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [CWE-90](https://cwe.mitre.org/data/definitions/90.html) | Improper Neutralization of Special Elements used in an LDAP Query ('LDAP Injection') | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No |
| [CWE-91](https://cwe.mitre.org/data/definitions/91.html) | XML Injection (aka Blind XPath Injection) | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-94](https://cwe.mitre.org/data/definitions/94.html) | Improper Control of Generation of Code ('Code Injection') | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [CWE-95](https://cwe.mitre.org/data/definitions/95.html) | Improper Neutralization of Directives in Dynamically Evaluated Code ('Eval Injection') | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [CWE-113](https://cwe.mitre.org/data/definitions/113.html) | Improper Neutralization of CRLF Sequences in HTTP Headers ('HTTP Request/Response Splitting') | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No |
| [CWE-116](https://cwe.mitre.org/data/definitions/116.html) | Improper Encoding or Escaping of Output | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No |
| [CWE-117](https://cwe.mitre.org/data/definitions/117.html) | Improper Output Neutralization for Logs | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-118](https://cwe.mitre.org/data/definitions/118.html) | Incorrect Access of Indexable Resource ('Range Error') | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-125](https://cwe.mitre.org/data/definitions/125.html) | Out-of-bounds Read | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-155](https://cwe.mitre.org/data/definitions/155.html) | Improper Neutralization of Wildcards or Matching Symbols | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No |
| [CWE-180](https://cwe.mitre.org/data/definitions/180.html) | Incorrect Behavior Order: Validate Before Canonicalize | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-182](https://cwe.mitre.org/data/definitions/182.html) | Collapse of Data into Unsafe Value | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-185](https://cwe.mitre.org/data/definitions/185.html) | Incorrect Regular Expression | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [CWE-190](https://cwe.mitre.org/data/definitions/190.html) | Integer Overflow or Wraparound | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-191](https://cwe.mitre.org/data/definitions/191.html) | Integer Underflow (Wrap or Wraparound) | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-208](https://cwe.mitre.org/data/definitions/208.html) | Observable Timing Discrepancy | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No |
| [CWE-209](https://cwe.mitre.org/data/definitions/209.html) | Generation of Error Message Containing Sensitive Information | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [CWE-242](https://cwe.mitre.org/data/definitions/242.html) | Use of Inherently Dangerous Function | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}}
No | | [CWE-256](https://cwe.mitre.org/data/definitions/256.html) | Plaintext Storage of a Password | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-272](https://cwe.mitre.org/data/definitions/272.html) | Least Privilege Violation | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-276](https://cwe.mitre.org/data/definitions/276.html) | Incorrect Default Permissions | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [CWE-295](https://cwe.mitre.org/data/definitions/295.html) | Improper Certificate Validation | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | [CWE-297](https://cwe.mitre.org/data/definitions/297.html) | Improper Validation of Certificate with Host Mismatch | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-306](https://cwe.mitre.org/data/definitions/306.html) | Missing Authentication for Critical Function | {{< icon 
name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-311](https://cwe.mitre.org/data/definitions/311.html) | Missing Encryption of Sensitive Data | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [CWE-319](https://cwe.mitre.org/data/definitions/319.html) | Cleartext Transmission of Sensitive Information | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-322](https://cwe.mitre.org/data/definitions/322.html) | Key Exchange without Entity Authentication | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-323](https://cwe.mitre.org/data/definitions/323.html) | Reusing a Nonce, Key Pair in Encryption | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-326](https://cwe.mitre.org/data/definitions/326.html) | Inadequate Encryption Strength | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | 
{{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | [CWE-327](https://cwe.mitre.org/data/definitions/327.html) | Use of a Broken or Risky Cryptographic Algorithm | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-328](https://cwe.mitre.org/data/definitions/328.html) | Use of Weak Hash | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [CWE-338](https://cwe.mitre.org/data/definitions/338.html) | Use of Cryptographically Weak Pseudo-Random Number Generator (PRNG) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-346](https://cwe.mitre.org/data/definitions/346.html) | Origin Validation Error | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-347](https://cwe.mitre.org/data/definitions/347.html) | Improper Verification of Cryptographic Signature | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon 
name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-348](https://cwe.mitre.org/data/definitions/348.html) | Use of Less Trusted Source | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-352](https://cwe.mitre.org/data/definitions/352.html) | Cross-Site Request Forgery (CSRF) | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | [CWE-358](https://cwe.mitre.org/data/definitions/358.html) | Improperly Implemented Security Check for Standard | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-369](https://cwe.mitre.org/data/definitions/369.html) | Divide By Zero | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [CWE-377](https://cwe.mitre.org/data/definitions/377.html) | Insecure Temporary File | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-409](https://cwe.mitre.org/data/definitions/409.html) | Improper Handling of 
Highly Compressed Data (Data Amplification) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-470](https://cwe.mitre.org/data/definitions/470.html) | Use of Externally-Controlled Input to Select Classes or Code ('Unsafe Reflection') | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-489](https://cwe.mitre.org/data/definitions/489.html) | Active Debug Code | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-497](https://cwe.mitre.org/data/definitions/497.html) | Exposure of Sensitive System Information to an Unauthorized Control Sphere | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-501](https://cwe.mitre.org/data/definitions/501.html) | Trust Boundary Violation | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-502](https://cwe.mitre.org/data/definitions/502.html) | Deserialization of Untrusted Data | {{< icon name="check-circle" 
>}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | [CWE-521](https://cwe.mitre.org/data/definitions/521.html) | Weak Password Requirements | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-522](https://cwe.mitre.org/data/definitions/522.html) | Insufficiently Protected Credentials | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-552](https://cwe.mitre.org/data/definitions/552.html) | Files or Directories Accessible to External Parties | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-554](https://cwe.mitre.org/data/definitions/554.html) | ASP.NET Misconfiguration: Not Using Input Validation Framework | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-598](https://cwe.mitre.org/data/definitions/598.html) | Use of GET Request Method With Sensitive Query Strings | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon 
name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-599](https://cwe.mitre.org/data/definitions/599.html) | Missing Validation of OpenSSL Certificate | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-601](https://cwe.mitre.org/data/definitions/601.html) | URL Redirection to Untrusted Site ('Open Redirect') | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | [CWE-606](https://cwe.mitre.org/data/definitions/606.html) | Unchecked Input for Loop Condition | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-611](https://cwe.mitre.org/data/definitions/611.html) | Improper Restriction of XML External Entity Reference | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-613](https://cwe.mitre.org/data/definitions/613.html) | Insufficient Session Expiration | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon 
name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-614](https://cwe.mitre.org/data/definitions/614.html) | Sensitive Cookie in HTTPS Session Without 'Secure' Attribute | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-639](https://cwe.mitre.org/data/definitions/639.html) | Authorization Bypass Through User-Controlled Key | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [CWE-643](https://cwe.mitre.org/data/definitions/643.html) | Improper Neutralization of Data within XPath Expressions ('XPath Injection') | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-704](https://cwe.mitre.org/data/definitions/704.html) | Incorrect Type Conversion or Cast | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-732](https://cwe.mitre.org/data/definitions/732.html) | Incorrect Permission Assignment for Critical Resource | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< 
icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-749](https://cwe.mitre.org/data/definitions/749.html) | Exposed Dangerous Method or Function | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [CWE-754](https://cwe.mitre.org/data/definitions/754.html) | Improper Check for Unusual or Exceptional Conditions | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [CWE-757](https://cwe.mitre.org/data/definitions/757.html) | Selection of Less-Secure Algorithm During Negotiation ('Algorithm Downgrade') | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-770](https://cwe.mitre.org/data/definitions/770.html) | Allocation of Resources Without Limits or Throttling | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-776](https://cwe.mitre.org/data/definitions/776.html) | Improper Restriction of Recursive Entity References in DTDs ('XML Entity Expansion') | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< 
icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-780](https://cwe.mitre.org/data/definitions/780.html) | Use of RSA Algorithm without OAEP | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-787](https://cwe.mitre.org/data/definitions/787.html) | Out-of-bounds Write | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-798](https://cwe.mitre.org/data/definitions/798.html) | Use of Hard-coded Credentials | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-913](https://cwe.mitre.org/data/definitions/913.html) | Improper Control of Dynamically-Managed Code Resources | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-915](https://cwe.mitre.org/data/definitions/915.html) | Improperly Controlled Modification of Dynamically-Determined Object Attributes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | 
[CWE-917](https://cwe.mitre.org/data/definitions/917.html) | Improper Neutralization of Special Elements used in an Expression Language Statement ('Expression Language Injection') | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-918](https://cwe.mitre.org/data/definitions/918.html) | Server-Side Request Forgery (SSRF) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | [CWE-942](https://cwe.mitre.org/data/definitions/942.html) | Permissive Cross-domain Policy with Untrusted Domains | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-943](https://cwe.mitre.org/data/definitions/943.html) | Improper Neutralization of Special Elements in Data Query Logic | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-1004](https://cwe.mitre.org/data/definitions/1004.html) | Sensitive Cookie Without 'HttpOnly' Flag | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} 
Yes | | [CWE-1021](https://cwe.mitre.org/data/definitions/1021.html) | Improper Restriction of Rendered UI Layers or Frames | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-1104](https://cwe.mitre.org/data/definitions/1104.html) | Use of Unmaintained Third Party Components | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-1204](https://cwe.mitre.org/data/definitions/1204.html) | Generation of Weak Initialization Vector (IV) | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-1275](https://cwe.mitre.org/data/definitions/1275.html) | Sensitive Cookie with Improper SameSite Attribute | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-1321](https://cwe.mitre.org/data/definitions/1321.html) | Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution') | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | 
[CWE-1327](https://cwe.mitre.org/data/definitions/1327.html) | Binding to an Unrestricted IP Address | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | | [CWE-1333](https://cwe.mitre.org/data/definitions/1333.html) | Inefficient Regular Expression Complexity | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | [CWE-1336](https://cwe.mitre.org/data/definitions/1336.html) | Improper Neutralization of Special Elements Used in a Template Engine | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | | [CWE-1390](https://cwe.mitre.org/data/definitions/1390.html) | Weak Authentication | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="dotted-circle" >}} No | {{< alert type="note" >}} Did this page answer the question you had? If not, comment on [epic 15343](https://gitlab.com/groups/gitlab-org/-/epics/15343) to share your use case. {{< /alert >}}
---
stage: Application Security Testing
group: Composition Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Container Scanning
description: Image vulnerability scanning, configuration, customization, and reporting.
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Changed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/86092) the major analyzer version from `4` to `5` in GitLab 15.0.
- [Moved](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/86783) from GitLab Ultimate to GitLab Free in 15.0.
- Container Scanning variables that reference Docker [renamed](https://gitlab.com/gitlab-org/gitlab/-/issues/357264) in GitLab 15.4.
- Container Scanning template [moved](https://gitlab.com/gitlab-org/gitlab/-/issues/381665) from `Security/Container-Scanning.gitlab-ci.yml` to `Jobs/Container-Scanning.gitlab-ci.yml` in GitLab 15.6.

{{< /history >}}

Security vulnerabilities in container images create risk throughout your application lifecycle. Container Scanning detects these risks early, before they reach production environments. When vulnerabilities appear in your base images or operating system packages, Container Scanning identifies them and provides a remediation path for those that it can.

- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For an overview, see [Container Scanning - Advanced Security Testing](https://www.youtube.com/watch?v=C0jn2eN5MAs).
- <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For a video walkthrough, see [How to set up Container Scanning using GitLab](https://youtu.be/h__mcXpil_4?si=w_BVG68qnkL9x4l1).
- For an introductory tutorial, see [Scan a Docker container for vulnerabilities](../../../tutorials/container_scanning/_index.md).

Container Scanning is often considered part of Software Composition Analysis (SCA). SCA involves inspecting the items your code uses. These items typically include application and system dependencies that are almost always imported from external sources, rather than items you wrote yourself.
GitLab offers both Container Scanning and [Dependency Scanning](../dependency_scanning/_index.md) to ensure coverage for all these dependency types. To cover as much of your risk area as possible, we encourage you to use all the security scanners. For a comparison of these features, see [Dependency Scanning compared to Container Scanning](../comparison_dependency_and_container_scanning.md).

GitLab integrates with the [Trivy](https://github.com/aquasecurity/trivy) security scanner to perform vulnerability static analysis in containers.

{{< alert type="warning" >}}

The Grype analyzer is no longer maintained, except for limited fixes as explained in our [statement of support](https://about.gitlab.com/support/statement-of-support/#version-support). The current major version of the Grype analyzer image will continue to be updated with the latest advisory database and operating system packages until GitLab 19.0, at which point the analyzer will stop working.

{{< /alert >}}

## Features

| Features | In Free and Premium | In Ultimate |
|----------|---------------------|-------------|
| Customize settings ([variables](#available-cicd-variables), [overriding](#overriding-the-container-scanning-template), [offline environment support](#running-container-scanning-in-an-offline-environment), and so on) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [View JSON report](#reports-json-format) as a CI job artifact | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| Generate a [CycloneDX SBOM JSON report](#cyclonedx-software-bill-of-materials) as a CI job artifact | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| Ability to enable container scanning through a merge request in the GitLab UI | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [UBI image support](#fips-enabled-images) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| Support for Trivy | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [End-of-life operating system detection](#end-of-life-operating-system-detection) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| Inclusion of GitLab Advisory Database | Limited to the time-delayed content from the GitLab [advisories-community](https://gitlab.com/gitlab-org/advisories-community/) project | Yes - all the latest content from [Gemnasium DB](https://gitlab.com/gitlab-org/security-products/gemnasium-db) |
| Presentation of report data in the merge request and the Security tab of the CI pipeline job | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Solutions for vulnerabilities (auto-remediation)](#solutions-for-vulnerabilities-auto-remediation) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| Support for the [vulnerability allow list](#vulnerability-allowlisting) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Access to the Dependency List page](../dependency_list/_index.md) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |

## Getting started

Enable the Container Scanning analyzer in your CI/CD pipeline. When a pipeline runs, the images your application depends on are scanned for vulnerabilities. You can customize Container Scanning by using CI/CD variables.

Prerequisites:

- The `test` stage is required in the `.gitlab-ci.yml` file.
- With self-managed runners, you need a GitLab Runner with the `docker` or `kubernetes` executor on Linux/amd64. If you're using the instance runners on GitLab.com, this is enabled by default.
- An image matching the [supported distributions](#supported-distributions).
- [Build and push](../../packages/container_registry/build_and_push_images.md#use-gitlab-cicd) the Docker image to your project's container registry.
- If you're using a third-party container registry, you might need to provide authentication credentials through the `CS_REGISTRY_USER` and `CS_REGISTRY_PASSWORD` [configuration variables](#available-cicd-variables). For more details on how to use these variables, see [authenticate to a remote registry](#authenticate-to-a-remote-registry).

See [user and project-specific requirements](#prerequisites) for more details.

To enable the analyzer, either:

- Enable Auto DevOps, which includes container scanning.
- Use a preconfigured merge request.
- Create a [scan execution policy](../policies/scan_execution_policies.md) that enforces container scanning.
- Edit the `.gitlab-ci.yml` file manually.

### Use a preconfigured merge request

This method automatically prepares a merge request that includes the container scanning template in the `.gitlab-ci.yml` file. You then merge the merge request to enable container scanning.

{{< alert type="note" >}}

This method works best with no existing `.gitlab-ci.yml` file, or with a minimal configuration file. If you have a complex GitLab configuration file, it might not be parsed successfully and an error might occur. In that case, use the [manual](#edit-the-gitlab-ciyml-file-manually) method instead.

{{< /alert >}}

To enable Container Scanning:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Secure > Security configuration**.
1. In the **Container Scanning** row, select **Configure with a merge request**.
1. Select **Create merge request**.
1. Review the merge request, then select **Merge**.

Pipelines now include a Container Scanning job.

### Edit the `.gitlab-ci.yml` file manually

This method requires you to manually edit the existing `.gitlab-ci.yml` file. Use this method if your GitLab CI/CD configuration file is complex or you need to use non-default options.

To enable Container Scanning:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Build > Pipeline editor**.
1. If no `.gitlab-ci.yml` file exists, select **Configure pipeline**, then delete the example content.
1. Copy and paste the following to the bottom of the `.gitlab-ci.yml` file. If an `include` line already exists, add only the `template` line below it.

   ```yaml
   include:
     - template: Jobs/Container-Scanning.gitlab-ci.yml
   ```

1. Select the **Validate** tab, then select **Validate pipeline**.

   The message **Simulation completed successfully** confirms the file is valid.
1. Select the **Edit** tab.
1. Complete the fields. Do not use the default branch for the **Branch** field.
1. Select the **Start a new merge request with these changes** checkbox, then select **Commit changes**.
1. Complete the fields according to your standard workflow, then select **Create merge request**.
1. Review and edit the merge request according to your standard workflow, wait until the pipeline passes, then select **Merge**.

Pipelines now include a Container Scanning job.

## Understanding the results

You can review vulnerabilities in a pipeline:

1. On the left sidebar, select **Search or go to** and find your project.
1. On the left sidebar, select **Build > Pipelines**.
1. Select the pipeline.
1. Select the **Security** tab.
1. Select a vulnerability to view its details, including:
   - Description: Explains the cause of the vulnerability, its potential impact, and recommended remediation steps.
   - Status: Indicates whether the vulnerability has been triaged or resolved.
   - Severity: Categorized into six levels based on impact. [Learn more about severity levels](../vulnerabilities/severities.md).
   - CVSS score: Provides a numeric value that maps to severity.
   - EPSS: Shows the likelihood of a vulnerability being exploited in the wild.
   - Has Known Exploit (KEV): Indicates that a given vulnerability has been exploited.
   - Project: Highlights the project where the vulnerability was identified.
   - Report type: Explains the output type.
   - Scanner: Identifies which analyzer detected the vulnerability.
   - Image: Provides the image attributed to the vulnerability.
   - Namespace: Identifies the workspace attributed to the vulnerability.
   - Links: Evidence of the vulnerability being cataloged in various advisory databases.
   - Identifiers: A list of references used to classify the vulnerability, such as CVE identifiers.

For more details, see [Pipeline security report](../vulnerability_report/pipeline.md).

Additional ways to see Container Scanning results:

- [Vulnerability report](../vulnerability_report/_index.md): Shows confirmed vulnerabilities on the default branch.
- [Container scanning report artifact](../../../ci/yaml/artifacts_reports.md#artifactsreportscontainer_scanning)

## Roll out

After you are confident in the Container Scanning results for a single project, you can extend its implementation to additional projects:

- Use [enforced scan execution](../detect/security_configuration.md#create-a-shared-configuration) to apply Container Scanning settings across groups.
- If you have unique requirements, Container Scanning can be run in [offline environments](#running-container-scanning-in-an-offline-environment).
## Supported distributions

The following Linux distributions are supported:

- Alma Linux
- Alpine Linux
- Amazon Linux
- CentOS
- CBL-Mariner
- Debian
- Distroless
- Oracle Linux
- Photon OS
- Red Hat (RHEL)
- Rocky Linux
- SUSE
- Ubuntu

### FIPS-enabled images

GitLab also offers [FIPS-enabled Red Hat UBI](https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image) versions of the container scanning images. You can therefore replace standard images with FIPS-enabled images. To configure the images, set `CS_IMAGE_SUFFIX` to `-fips`, or modify the `CS_ANALYZER_IMAGE` variable to the standard tag plus the `-fips` extension.

{{< alert type="note" >}}

The `-fips` flag is automatically added to `CS_ANALYZER_IMAGE` when FIPS mode is enabled in the GitLab instance.

{{< /alert >}}

Container scanning of images in authenticated registries is not supported when FIPS mode is enabled. When `CI_GITLAB_FIPS_MODE` is `"true"` and `CS_REGISTRY_USER` or `CS_REGISTRY_PASSWORD` is set, the analyzer exits with an error and does not perform the scan.

## Configuration

### Customizing analyzer behavior

To customize Container Scanning, use [CI/CD variables](#available-cicd-variables).

#### Enable verbose output

Enable verbose output when you need to see in detail what the container scanning job does, for example when troubleshooting.

In the following example, the Container Scanning template is included and verbose output is enabled.

```yaml
include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

variables:
  SECURE_LOG_LEVEL: 'debug'
```

#### Scan an image in a remote registry

To scan images located in a registry other than the project's, use the following `.gitlab-ci.yml`:

```yaml
include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

container_scanning:
  variables:
    CS_IMAGE: example.com/user/image:tag
```

##### Authenticate to a remote registry

Scanning an image in a private registry requires authentication. Provide the username in the `CS_REGISTRY_USER` variable, and the password in the `CS_REGISTRY_PASSWORD` configuration variable.

For example, to scan an image from AWS Elastic Container Registry:

```yaml
container_scanning:
  before_script:
    - ruby -r open-uri -e "IO.copy_stream(URI.open('https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip'), 'awscliv2.zip')"
    - unzip awscliv2.zip
    - sudo ./aws/install
    - aws --version
    - export AWS_ECR_PASSWORD=$(aws ecr get-login-password --region region)

include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

variables:
  CS_IMAGE: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<image>:<tag>
  CS_REGISTRY_USER: AWS
  CS_REGISTRY_PASSWORD: "$AWS_ECR_PASSWORD"
  AWS_DEFAULT_REGION: <region>
```

Authenticating to a remote registry is not supported when FIPS mode is enabled.

#### Report language-specific findings

The `CS_DISABLE_LANGUAGE_VULNERABILITY_SCAN` CI/CD variable controls whether the scan reports findings related to programming languages. For more information about the supported languages, see [Language-specific Packages](https://aquasecurity.github.io/trivy/latest/docs/coverage/language/#supported-languages) in the Trivy documentation.

By default, the report only includes packages managed by the Operating System (OS) package manager (for example, `yum`, `apt`, `apk`, `tdnf`). To report security findings in non-OS packages, set `CS_DISABLE_LANGUAGE_VULNERABILITY_SCAN` to `"false"`:

```yaml
include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

container_scanning:
  variables:
    CS_DISABLE_LANGUAGE_VULNERABILITY_SCAN: "false"
```

When you enable this feature, you may see [duplicate findings](../terminology/_index.md#duplicate-finding) in the [vulnerability report](../vulnerability_report/_index.md) if [Dependency Scanning](../dependency_scanning/_index.md) is enabled for your project. This happens because GitLab can't automatically deduplicate findings across different types of scanning tools.
To understand which types of dependencies are likely to be duplicated, see [Dependency Scanning compared to Container Scanning](../comparison_dependency_and_container_scanning.md).

#### Running jobs in merge request pipelines

See [Use security scanning tools with merge request pipelines](../detect/security_configuration.md#use-security-scanning-tools-with-merge-request-pipelines).

#### Available CI/CD variables

To customize Container Scanning, use CI/CD variables. The following table lists CI/CD variables specific to Container Scanning. You can also use any of the [predefined CI/CD variables](../../../ci/variables/predefined_variables.md).

{{< alert type="warning" >}}

Test customization of GitLab analyzers in a merge request before merging these changes to the default branch. Failure to do so can give unexpected results, including a large number of false positives.

{{< /alert >}}

| CI/CD variable | Default | Description |
|----------------|---------|-------------|
| `ADDITIONAL_CA_CERT_BUNDLE` | `""` | Bundle of CA certs that you want to trust. See [Using a custom SSL CA certificate authority](#using-a-custom-ssl-ca-certificate-authority) for more details. |
| `CI_APPLICATION_REPOSITORY` | `$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG` | Docker repository URL for the image to be scanned. |
| `CI_APPLICATION_TAG` | `$CI_COMMIT_SHA` | Docker repository tag for the image to be scanned. |
| `CS_ANALYZER_IMAGE` | `registry.gitlab.com/security-products/container-scanning:8` | Docker image of the analyzer. Do not use the `:latest` tag with analyzer images provided by GitLab. |
| `CS_DEFAULT_BRANCH_IMAGE` | `""` | The name of the `CS_IMAGE` on the default branch. See [Setting the default branch image](#setting-the-default-branch-image) for more details. |
| `CS_DISABLE_DEPENDENCY_LIST` | `"false"` | {{< icon name="warning" >}} **[Removed](https://gitlab.com/gitlab-org/gitlab/-/issues/439782)** in GitLab 17.0. |
| `CS_DISABLE_LANGUAGE_VULNERABILITY_SCAN` | `"true"` | Disable scanning for language-specific packages installed in the scanned image. |
| `CS_DOCKER_INSECURE` | `"false"` | Allow access to secure Docker registries using HTTPS without validating the certificates. |
| `CS_DOCKERFILE_PATH` | `Dockerfile` | The path to the `Dockerfile` to use for generating remediations. By default, the scanner looks for a file named `Dockerfile` in the root directory of the project. You should configure this variable only if your `Dockerfile` is in a non-standard location, such as a subdirectory. See [Solutions for vulnerabilities](#solutions-for-vulnerabilities-auto-remediation) for more details. |
| `CS_INCLUDE_LICENSES` | `""` | If set, this variable includes licenses for each component. It applies only to CycloneDX reports; the licenses are provided by [Trivy](https://trivy.dev/v0.60/docs/scanner/license/). |
| `CS_IGNORE_STATUSES` | `""` | Force the analyzer to ignore findings with specified statuses in a comma-delimited list. The following values are allowed: `unknown,not_affected,affected,fixed,under_investigation,will_not_fix,fix_deferred,end_of_life`. <sup>1</sup> |
| `CS_IGNORE_UNFIXED` | `"false"` | Ignore findings that are not fixed. Ignored findings are not included in the report. |
| `CS_IMAGE` | `$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG` | The Docker image to be scanned. If set, this variable overrides the `$CI_APPLICATION_REPOSITORY` and `$CI_APPLICATION_TAG` variables. |
| `CS_IMAGE_SUFFIX` | `""` | Suffix added to `CS_ANALYZER_IMAGE`. If set to `-fips`, the FIPS-enabled image is used for the scan. See [FIPS-enabled images](#fips-enabled-images) for more details. |
| `CS_QUIET` | `""` | If set, this variable disables output of the [vulnerabilities table](#container-scanning-job-log-format) in the job log. [Introduced](https://gitlab.com/gitlab-org/security-products/analyzers/container-scanning/-/merge_requests/50) in GitLab 15.1. |
| `CS_REGISTRY_INSECURE` | `"false"` | Allow access to insecure registries (HTTP only). Should only be set to `true` when testing the image locally. Works with all scanners, but the registry must listen on port `80/tcp` for Trivy to work. |
| `CS_REGISTRY_PASSWORD` | `$CI_REGISTRY_PASSWORD` | Password for accessing a Docker registry requiring authentication. The default is only set if `$CS_IMAGE` resides at [`$CI_REGISTRY`](../../../ci/variables/predefined_variables.md). Not supported when FIPS mode is enabled. |
| `CS_REGISTRY_USER` | `$CI_REGISTRY_USER` | Username for accessing a Docker registry requiring authentication. The default is only set if `$CS_IMAGE` resides at [`$CI_REGISTRY`](../../../ci/variables/predefined_variables.md). Not supported when FIPS mode is enabled. |
| `CS_REPORT_OS_EOL` | `"false"` | Enable end-of-life (EOL) operating system detection. |
| `CS_REPORT_OS_EOL_SEVERITY` | `"Medium"` | Severity level assigned to EOL OS findings when `CS_REPORT_OS_EOL` is enabled. EOL findings are always reported regardless of `CS_SEVERITY_THRESHOLD`. Supported levels are `UNKNOWN`, `LOW`, `MEDIUM`, `HIGH`, and `CRITICAL`. |
| `CS_SEVERITY_THRESHOLD` | `UNKNOWN` | Severity level threshold. The scanner outputs vulnerabilities with severity level higher than or equal to this threshold. Supported levels are `UNKNOWN`, `LOW`, `MEDIUM`, `HIGH`, and `CRITICAL`. |
| `CS_TRIVY_JAVA_DB` | `"registry.gitlab.com/gitlab-org/security-products/dependencies/trivy-java-db"` | Specify an alternate location for the [trivy-java-db](https://github.com/aquasecurity/trivy-java-db) vulnerability database. |
| `CS_TRIVY_DETECTION_PRIORITY` | `"precise"` | Scan using the defined Trivy [detection priority](https://trivy.dev/latest/docs/scanner/vulnerability/#detection-priority). The following values are allowed: `precise` or `comprehensive`. |
| `SECURE_LOG_LEVEL` | `info` | Set the minimum logging level. Messages of this logging level or higher are output. From highest to lowest severity, the logging levels are: `fatal`, `error`, `warn`, `info`, `debug`. |
| `TRIVY_TIMEOUT` | `5m0s` | Set the timeout for the scan. |
| `TRIVY_PLATFORM` | `linux/amd64` | Set the platform in the format `os/arch` if the image is multi-platform capable. |

**Footnotes**:

1. Fix status information is highly dependent on accurate fix availability data from the software vendor and container image operating system package metadata. It is also subject to interpretation by individual container scanners. In cases where a container scanner misreports the availability of a fixed package for a vulnerability, using `CS_IGNORE_STATUSES` can lead to false positive or false negative filtering of findings when this setting is enabled.

### Overriding the container scanning template

If you want to override the job definition (for example, to change properties like `variables`), you must declare and override a job after the template inclusion, and then specify any additional keys.

This example sets `GIT_STRATEGY` to `fetch`:

```yaml
include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

container_scanning:
  variables:
    GIT_STRATEGY: fetch
```

### Setting the default branch image

By default, container scanning assumes that the image naming convention stores any branch-specific identifiers in the image tag rather than the image name. When the image name differs between the default branch and a non-default branch, previously-detected vulnerabilities show up as newly detected in merge requests.
When the same image has different names on the default branch and a non-default branch, you can use the `CS_DEFAULT_BRANCH_IMAGE` variable to indicate what that image's name is on the default branch. GitLab then correctly determines if a vulnerability already exists when running scans on non-default branches.

As an example, suppose the following:

- Non-default branches publish images with the naming convention `$CI_REGISTRY_IMAGE/$CI_COMMIT_BRANCH:$CI_COMMIT_SHA`.
- The default branch publishes images with the naming convention `$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA`.

In this example, you can use the following CI/CD configuration to ensure that vulnerabilities aren't duplicated:

```yaml
include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

container_scanning:
  variables:
    CS_DEFAULT_BRANCH_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  before_script:
    - export CS_IMAGE="$CI_REGISTRY_IMAGE/$CI_COMMIT_BRANCH:$CI_COMMIT_SHA"
    - |
      if [ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]; then
        export CS_IMAGE="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
      fi
```

`CS_DEFAULT_BRANCH_IMAGE` should remain the same for a given `CS_IMAGE`. If it changes, then a duplicate set of vulnerabilities are created, which must be manually dismissed.

When using [Auto DevOps](../../../topics/autodevops/_index.md), `CS_DEFAULT_BRANCH_IMAGE` is automatically set to `$CI_REGISTRY_IMAGE/$CI_DEFAULT_BRANCH:$CI_APPLICATION_TAG`.

### Using a custom SSL CA certificate authority

You can use the `ADDITIONAL_CA_CERT_BUNDLE` CI/CD variable to configure a custom SSL CA certificate authority, which is used to verify the peer when fetching Docker images from a registry which uses HTTPS. The `ADDITIONAL_CA_CERT_BUNDLE` value should contain the [text representation of the X.509 PEM public-key certificate](https://www.rfc-editor.org/rfc/rfc7468#section-5.1).
For example, to configure this value in the `.gitlab-ci.yml` file, use the following:

```yaml
container_scanning:
  variables:
    ADDITIONAL_CA_CERT_BUNDLE: |
      -----BEGIN CERTIFICATE-----
      MIIGqTCCBJGgAwIBAgIQI7AVxxVwg2kch4d56XNdDjANBgkqhkiG9w0BAQsFADCB
      ...
      jWgmPqF3vUbZE0EyScetPJquRFRKIesyJuBFMAs=
      -----END CERTIFICATE-----
```

The `ADDITIONAL_CA_CERT_BUNDLE` value can also be configured as a [custom variable in the UI](../../../ci/variables/_index.md#for-a-project), either as a `file`, which requires the path to the certificate, or as a variable, which requires the text representation of the certificate.

### Scanning a multi-arch image

You can use the `TRIVY_PLATFORM` CI/CD variable to configure the container scan to run against a specific operating system and architecture.

For example, to configure this value in the `.gitlab-ci.yml` file, use the following:

```yaml
container_scanning:
  # Use an arm64 SaaS runner to scan this natively
  tags: ["saas-linux-small-arm64"]
  variables:
    TRIVY_PLATFORM: "linux/arm64"
```

### Vulnerability allowlisting

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

To allowlist specific vulnerabilities, follow these steps:

1. Set `GIT_STRATEGY: fetch` in your `.gitlab-ci.yml` file by following the instructions in [overriding the container scanning template](#overriding-the-container-scanning-template).
1. Define the allowlisted vulnerabilities in a YAML file named `vulnerability-allowlist.yml`. This must use the format described in [`vulnerability-allowlist.yml` data format](#vulnerability-allowlistyml-data-format).
1. Add the `vulnerability-allowlist.yml` file to the root folder of your project's Git repository.

#### `vulnerability-allowlist.yml` data format

The `vulnerability-allowlist.yml` file is a YAML file that specifies a list of CVE IDs of vulnerabilities that are **allowed** to exist, because they're false positives, or they're not applicable.
If a matching entry is found in the `vulnerability-allowlist.yml` file, the following happens:

- The vulnerability **is not included** when the analyzer generates the `gl-container-scanning-report.json` file.
- The Security tab of the pipeline **does not show** the vulnerability. It is not included in the JSON file, which is the source of truth for the Security tab.

Example `vulnerability-allowlist.yml` file:

```yaml
generalallowlist:
  CVE-2019-8696:
  CVE-2014-8166: cups
  CVE-2017-18248:
images:
  registry.gitlab.com/gitlab-org/security-products/dast/webgoat-8.0@sha256:
    CVE-2018-4180:
  your.private.registry:5000/centos:
    CVE-2015-1419: libxml2
    CVE-2015-1447:
```

This example excludes from `gl-container-scanning-report.json`:

1. All vulnerabilities with CVE IDs: `CVE-2019-8696`, `CVE-2014-8166`, `CVE-2017-18248`.
1. All vulnerabilities found in the `registry.gitlab.com/gitlab-org/security-products/dast/webgoat-8.0@sha256` container image with CVE ID `CVE-2018-4180`.
1. All vulnerabilities found in the `your.private.registry:5000/centos` container with CVE IDs `CVE-2015-1419`, `CVE-2015-1447`.

##### File format

- The `generalallowlist` block allows you to specify CVE IDs globally. All vulnerabilities with matching CVE IDs are excluded from the scan report.
- The `images` block allows you to specify CVE IDs for each container image independently. All vulnerabilities from the given image with matching CVE IDs are excluded from the scan report. The image name is retrieved from one of the environment variables used to specify the Docker image to be scanned, such as `$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG` or `CS_IMAGE`. The image provided in this block **must** match this value and **must not** include the tag value. For example, if you specify the image to be scanned using `CS_IMAGE=alpine:3.7`, then you would use `alpine` in the `images` block, but you cannot use `alpine:3.7`.

You can specify a container image in multiple ways:

- As the image name only (such as `centos`).
- As the full image name with registry hostname (such as `your.private.registry:5000/centos`).
- As the full image name with registry hostname and sha256 label (such as `registry.gitlab.com/gitlab-org/security-products/dast/webgoat-8.0@sha256`).

{{< alert type="note" >}}

The string after the CVE ID (`cups` and `libxml2` in the previous example) is an optional comment. It has **no impact** on the handling of vulnerabilities. You can include comments to describe the vulnerability.

{{< /alert >}}

##### Container scanning job log format

You can verify the results of your scan and the correctness of your `vulnerability-allowlist.yml` file by looking at the logs that are produced by the container scanning analyzer in the `container_scanning` job details.

The log contains a list of found vulnerabilities as a table, for example:

```plaintext
+------------+-----------------------+--------------+-----------------+------------------------------------------------------------------------+
| STATUS     | CVE SEVERITY          | PACKAGE NAME | PACKAGE VERSION | CVE DESCRIPTION                                                        |
+------------+-----------------------+--------------+-----------------+------------------------------------------------------------------------+
| Approved   | High CVE-2019-3462    | apt          | 1.4.8           | Incorrect sanitation of the 302 redirect field in HTTP transport metho |
|            |                       |              |                 | d of apt versions 1.4.8 and earlier can lead to content injection by a |
|            |                       |              |                 | MITM attacker, potentially leading to remote code execution on the ta  |
|            |                       |              |                 | rget machine.                                                          |
+------------+-----------------------+--------------+-----------------+------------------------------------------------------------------------+
| Unapproved | Medium CVE-2020-27350 | apt          | 1.4.8           | APT had several integer overflows and underflows while parsing .deb pa |
|            |                       |              |                 | ckages, aka GHSL-2020-168 GHSL-2020-169, in files apt-pkg/contrib/extr |
|            |                       |              |                 | acttar.cc, apt-pkg/deb/debfile.cc, and apt-pkg/contrib/arfile.cc. This |
|            |                       |              |                 | issue affects: apt 1.2.32ubuntu0 versions prior to 1.2.32ubuntu0.2; 1  |
|            |                       |              |                 | .6.12ubuntu0 versions prior to 1.6.12ubuntu0.2; 2.0.2ubuntu0 versions  |
|            |                       |              |                 | prior to 2.0.2ubuntu0.2; 2.1.10ubuntu0 versions prior to 2.1.10ubuntu0 |
|            |                       |              |                 | .1;                                                                    |
+------------+-----------------------+--------------+-----------------+------------------------------------------------------------------------+
| Unapproved | Medium CVE-2020-3810  | apt          | 1.4.8           | Missing input validation in the ar/tar implementations of APT before v |
|            |                       |              |                 | ersion 2.1.2 could result in denial of service when processing special |
|            |                       |              |                 | ly crafted deb files.                                                  |
+------------+-----------------------+--------------+-----------------+------------------------------------------------------------------------+
```

Vulnerabilities in the log are marked as `Approved` when the corresponding CVE ID is added to the `vulnerability-allowlist.yml` file.

### Running container scanning in an offline environment

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed

{{< /details >}}

For instances in an environment with limited, restricted, or intermittent access to external resources through the internet, some adjustments are required for the container scanning job to successfully run. For more information, see [Offline environments](../offline_deployments/_index.md).

#### Requirements for offline container scanning

To use container scanning in an offline environment, you need:

- GitLab Runner with the [`docker` or `kubernetes` executor](#getting-started).
- To configure a local Docker container registry with copies of the container scanning images.
You can find these images in their respective registries:

| GitLab analyzer | Container registry |
|-----------------|--------------------|
| [Container-Scanning](https://gitlab.com/gitlab-org/security-products/analyzers/container-scanning) | [Container-Scanning container registry](https://gitlab.com/security-products/container-scanning/container_registry/) |

GitLab Runner has a [default `pull policy` of `always`](https://docs.gitlab.com/runner/executors/docker.html#using-the-always-pull-policy), meaning the runner tries to pull Docker images from the GitLab container registry even if a local copy is available. The GitLab Runner [`pull_policy` can be set to `if-not-present`](https://docs.gitlab.com/runner/executors/docker.html#using-the-if-not-present-pull-policy) in an offline environment if you prefer using only locally available Docker images. However, we recommend keeping the pull policy set to `always` if not in an offline environment, as this enables the use of updated scanners in your CI/CD pipelines.

##### Support for Custom Certificate Authorities

Support for custom certificate authorities for Trivy was introduced in version [4.0.0](https://gitlab.com/gitlab-org/security-products/analyzers/container-scanning/-/releases/4.0.0).

#### Make GitLab container scanning analyzer images available inside your Docker registry

For container scanning, import the following images from `registry.gitlab.com` into your [local Docker container registry](../../packages/container_registry/_index.md):

```plaintext
registry.gitlab.com/security-products/container-scanning:8
registry.gitlab.com/security-products/container-scanning/trivy:8
```

The process for importing Docker images into a local offline Docker registry depends on **your network security policy**. Consult your IT staff to find an accepted and approved process by which you can import or temporarily access external resources.
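As one illustration only — assuming your policy permits exporting images through an intermediary host with internet access, and assuming a hypothetical local registry hostname `registry.example.com` — the transfer could look like this:

```shell
# On a host with internet access: pull the analyzer image and save it to a tar archive.
docker pull registry.gitlab.com/security-products/container-scanning:8
docker save -o container-scanning.tar registry.gitlab.com/security-products/container-scanning:8

# Move the archive into the offline network by an approved method, then on a host
# that can reach your local registry (hostname is an example):
docker load -i container-scanning.tar
docker tag registry.gitlab.com/security-products/container-scanning:8 registry.example.com/namespace/container-scanning:8
docker push registry.example.com/namespace/container-scanning:8
```

Repeat the same steps for the Trivy image listed above. This is a sketch, not a sanctioned procedure; follow whatever import process your organization has approved.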
These scanners are [periodically updated](../detect/vulnerability_scanner_maintenance.md), and you may be able to make occasional updates on your own. For more information, see [the specific steps on how to update an image with a pipeline](#automating-container-scanning-vulnerability-database-updates-with-a-pipeline).

For details on saving and transporting Docker images as a file, see the Docker documentation on [`docker save`](https://docs.docker.com/reference/cli/docker/image/save/), [`docker load`](https://docs.docker.com/reference/cli/docker/image/load/), [`docker export`](https://docs.docker.com/reference/cli/docker/container/export/), and [`docker import`](https://docs.docker.com/reference/cli/docker/image/import/).

#### Set container scanning CI/CD variables to use local container scanner analyzers

{{< alert type="note" >}}

The methods described here apply to `container_scanning` jobs that are defined in your `.gitlab-ci.yml` file. These methods do not work for the Container Scanning for Registry feature, which is managed by a bot and does not use the `.gitlab-ci.yml` file. To configure automatic Container Scanning for Registry in an offline environment, [define the `CS_ANALYZER_IMAGE` variable in the GitLab UI](#use-with-offline-or-air-gapped-environments) instead.

{{< /alert >}}

1. [Override the container scanning template](#overriding-the-container-scanning-template) in your `.gitlab-ci.yml` file to refer to the Docker images hosted on your local Docker container registry:

   ```yaml
   include:
     - template: Jobs/Container-Scanning.gitlab-ci.yml

   container_scanning:
     image: $CI_REGISTRY/namespace/container-scanning
   ```

1. If your local Docker container registry is running securely over `HTTPS`, but you're using a self-signed certificate, then you must set `CS_DOCKER_INSECURE: "true"` in the `container_scanning` section of your `.gitlab-ci.yml`.
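Combining the settings above, a minimal sketch of an offline `.gitlab-ci.yml` might look like this (the `namespace/container-scanning` registry path is an assumption for illustration; substitute your own mirror location):

```yaml
include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

container_scanning:
  # Hypothetical local mirror of the analyzer image.
  image: $CI_REGISTRY/namespace/container-scanning
  variables:
    # Only needed when the local registry uses a self-signed HTTPS certificate.
    CS_DOCKER_INSECURE: "true"
```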
#### Automating container scanning vulnerability database updates with a pipeline We recommend that you set up a [scheduled pipeline](../../../ci/pipelines/schedules.md) to fetch the latest vulnerabilities database on a preset schedule. Automating this with a pipeline means you do not have to do it manually each time. You can use the following `.gitlab-ci.yml` example as a template. ```yaml variables: SOURCE_IMAGE: registry.gitlab.com/security-products/container-scanning:8 TARGET_IMAGE: $CI_REGISTRY/namespace/container-scanning image: docker:latest update-scanner-image: services: - docker:dind script: - docker pull $SOURCE_IMAGE - docker tag $SOURCE_IMAGE $TARGET_IMAGE - echo "$CI_REGISTRY_PASSWORD" | docker login $CI_REGISTRY --username $CI_REGISTRY_USER --password-stdin - docker push $TARGET_IMAGE ``` The previous template works for a GitLab Docker registry running on a local installation. However, if you're using a non-GitLab Docker registry, you must change the `$CI_REGISTRY` value and the `docker login` credentials to match your local registry's details. #### Scan images in external private registries To scan an image in an external private registry, you must configure access credentials so the container scanning analyzer can authenticate itself before attempting to access the image to scan. If you use the GitLab [Container Registry](../../packages/container_registry/_index.md), the `CS_REGISTRY_USER` and `CS_REGISTRY_PASSWORD` [configuration variables](#available-cicd-variables) are set automatically and you can skip this configuration. 
This example shows the configuration needed to scan images in a private [Google Container Registry](https://cloud.google.com/artifact-registry):

```yaml
include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

container_scanning:
  variables:
    CS_REGISTRY_USER: _json_key
    CS_REGISTRY_PASSWORD: "$GCP_CREDENTIALS"
    CS_IMAGE: "gcr.io/path-to-your-registry/image:tag"
```

Before you commit this configuration, [add a CI/CD variable](../../../ci/variables/_index.md#for-a-project) for `GCP_CREDENTIALS` containing the JSON key, as described in the [Google Cloud Platform Container Registry documentation](https://cloud.google.com/container-registry/docs/advanced-authentication#json-key). Also:

- The value of the variable may not fit the masking requirements for the **Mask variable** option, so the value could be exposed in the job logs.
- Scans may not run in unprotected feature branches if you select the **Protect variable** option.
- If you don't select these options, consider creating credentials with read-only permissions and rotating them regularly.

Scanning images in external private registries is not supported when FIPS mode is enabled.

#### Create and use a Trivy Java database mirror

When the `trivy` scanner is used and a `jar` file is encountered in a container image being scanned, `trivy` downloads an additional `trivy-java-db` vulnerability database. By default, the `trivy-java-db` database is hosted as an [OCI artifact](https://oras.land/docs/quickstart/) at `ghcr.io/aquasecurity/trivy-java-db:1`.
If this registry is [not accessible](#running-container-scanning-in-an-offline-environment) or responds with `TOOMANYREQUESTS`, one solution is to mirror the `trivy-java-db` to a more accessible container registry: ```yaml mirror trivy java db: image: name: ghcr.io/oras-project/oras:v1.1.0 entrypoint: [""] script: - oras login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY - oras pull ghcr.io/aquasecurity/trivy-java-db:1 - oras push $CI_REGISTRY_IMAGE:1 --config /dev/null:application/vnd.aquasec.trivy.config.v1+json javadb.tar.gz:application/vnd.aquasec.trivy.javadb.layer.v1.tar+gzip ``` The vulnerability database is not a regular Docker image, so it is not possible to pull it by using `docker pull`. The image shows an error if you go to it in the GitLab UI. If the container registry is `gitlab.example.com/trivy-java-db-mirror`, then the container scanning job should be configured in the following way. Do not add the tag `:1` at the end, it is added by `trivy`: ```yaml include: - template: Jobs/Container-Scanning.gitlab-ci.yml container_scanning: variables: CS_TRIVY_JAVA_DB: gitlab.example.com/trivy-java-db-mirror ``` ## Scanning archive formats {{< history >}} - Scanning tar files [introduced](https://gitlab.com/gitlab-org/security-products/analyzers/container-scanning/-/merge_requests/3151) in GitLab 18.0. {{< /history >}} Container Scanning supports images in archive formats (`.tar`, `.tar.gz`). Such images may be created, for example, using `docker save` or `docker buildx build`. To scan an archive file, set the environment variable `CS_IMAGE` to the format `archive://path/to/archive`: - The `archive://` scheme prefix specifies that the analyzer is to scan an archive. - `path/to/archive` specifies the path to the archive to scan, whether an absolute path or a relative path. Container Scanning supports tar image files following the [Docker Image Specification](https://github.com/moby/docker-image-spec). OCI tarballs are not supported. 
For more information regarding supported formats, see [Trivy tar file support](https://trivy.dev/v0.48/docs/target/container_image/#tar-files). ### Building supported tar files Container Scanning uses metadata from the tar file for image naming. When building tar image files, ensure the image is tagged: ```shell # Pull or build an image with a name and a tag docker pull image:latest # OR docker build . -t image:latest # Then export to tar using docker save docker save image:latest -o image-latest.tar # Or build an image with a tag using buildx build docker buildx create --name container --driver=docker-container docker buildx build -t image:latest --builder=container -o type=docker,dest=- . > image-latest.tar # With podman podman build -t image:latest . podman save -o image-latest.tar image:latest ``` ### Image name Container Scanning determines the image name by first evaluating the archive's `manifest.json` and using the first item in `RepoTags`. If this is not found, `index.json` is used to fetch the `io.containerd.image.name` annotation. If this is not found, the archive filename is used instead. - `manifest.json` is defined in [Docker Image Specification v1.1.0](https://github.com/moby/docker-image-spec/blob/v1.1.0/v1.1.md#combined-image-json--filesystem-changeset-format) and created by using the command `docker save`. - `index.json` format is defined in the [OCI image specification v1.1.1](https://github.com/opencontainers/image-spec/blob/v1.1.1/spec.md). `io.containerd.image.name` is [available in containerd v1.3.0 and later](https://github.com/containerd/containerd/blob/v1.3.0/images/annotations.go) when using `ctr image export`. ### Scanning archives built in a previous job To scan an archive built in a CI/CD job, you must pass the archive artifact from the build job to the container scanning job. 
Use the [`artifacts:paths`](../../../ci/yaml/_index.md#artifactspaths) and [`dependencies`](../../../ci/yaml/_index.md#dependencies) keywords to pass artifacts from one job to a following one: ```yaml build_job: script: - docker build . -t image:latest - docker save image:latest -o image-latest.tar artifacts: paths: - "image-latest.tar" container_scanning: variables: CS_IMAGE: "archive://image-latest.tar" dependencies: - build_job ``` ### Scanning archives from the project repository To scan an archive found in your project repository, ensure that your [Git strategy](../../../ci/runners/configure_runners.md#git-strategy) enables access to your repository. Set the `GIT_STRATEGY` keyword to either `clone` or `fetch` in the `container_scanning` job because it is set to `none` by default. ```yaml container_scanning: variables: GIT_STRATEGY: fetch ``` ## Running the standalone container scanning tool It's possible to run the [GitLab container scanning tool](https://gitlab.com/gitlab-org/security-products/analyzers/container-scanning) against a Docker container without needing to run it within the context of a CI job. To scan an image directly, follow these steps: 1. Run [Docker Desktop](https://www.docker.com/products/docker-desktop/) or [Docker Machine](https://github.com/docker/machine). 1. Run the analyzer's Docker image, passing the image and tag you want to analyze in the `CI_APPLICATION_REPOSITORY` and `CI_APPLICATION_TAG` variables: ```shell docker run \ --interactive --rm \ --volume "$PWD":/tmp/app \ -e CI_PROJECT_DIR=/tmp/app \ -e CI_APPLICATION_REPOSITORY=registry.gitlab.com/gitlab-org/security-products/dast/webgoat-8.0@sha256 \ -e CI_APPLICATION_TAG=bc09fe2e0721dfaeee79364115aeedf2174cce0947b9ae5fe7c33312ee019a4e \ registry.gitlab.com/security-products/container-scanning ``` The results are stored in `gl-container-scanning-report.json`. 
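To get a quick summary of the findings, you can query the report with `jq` (assuming `jq` is installed). A tiny sample report is inlined here so the command can be tried on its own; in practice, use the file produced by the scan:

```shell
# Stand-in for the report produced by the scan
cat > gl-container-scanning-report.json <<'EOF'
{"vulnerabilities":[{"id":"a","severity":"High"},{"id":"b","severity":"High"},{"id":"c","severity":"Low"}]}
EOF

# Count findings per severity, for example "High: 2"
jq -r '.vulnerabilities | group_by(.severity) | map("\(.[0].severity): \(length)") | .[]' gl-container-scanning-report.json
```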
## Reports JSON format

The container scanning tool emits JSON reports that the [GitLab Runner](https://docs.gitlab.com/runner/) recognizes through the [`artifacts:reports`](../../../ci/yaml/_index.md#artifactsreports) keyword in the CI configuration file. When the CI job finishes, the runner uploads these reports to GitLab, where they are available as CI job artifacts.

In GitLab Ultimate, these reports can be viewed in the corresponding [pipeline](../detect/security_scanning_results.md) and become part of the [vulnerability report](../vulnerability_report/_index.md).

These reports must follow a format defined in the [security report schemas](https://gitlab.com/gitlab-org/security-products/security-report-schemas/). See:

- [Latest schema for the container scanning report](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/container-scanning-report-format.json).
- [Example container scanning report](https://gitlab.com/gitlab-examples/security/security-reports/-/blob/master/samples/container-scanning.json).

### CycloneDX Software Bill of Materials

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/396381) in GitLab 15.11.

{{< /history >}}

In addition to the [JSON report file](#reports-json-format), the [Container Scanning](https://gitlab.com/gitlab-org/security-products/analyzers/container-scanning) tool outputs a [CycloneDX](https://cyclonedx.org/) Software Bill of Materials (SBOM) for the scanned image. This CycloneDX SBOM is named `gl-sbom-report.cdx.json` and is saved in the same directory as the JSON report file. This feature is only supported when the `Trivy` analyzer is used.

This report can be viewed in the [Dependency List](../dependency_list/_index.md).

You can download CycloneDX SBOMs [the same way as other job artifacts](../../../ci/jobs/job_artifacts.md#download-job-artifacts).
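As a quick way to inspect the SBOM, you can list its components with `jq` (assuming `jq` is available). The sample file below stands in for the real `gl-sbom-report.cdx.json` artifact:

```shell
# Minimal stand-in for the SBOM artifact produced by the scanning job
cat > gl-sbom-report.cdx.json <<'EOF'
{"bomFormat":"CycloneDX","specVersion":"1.4","components":[{"type":"library","name":"openssl","version":"1.1.1n"}]}
EOF

# List component names and versions
jq -r '.components[] | "\(.name) \(.version)"' gl-sbom-report.cdx.json
```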
#### License Information in CycloneDX Reports

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/472064) in GitLab 18.0.

{{< /history >}}

Container scanning can include license information in CycloneDX reports. This feature is disabled by default to maintain backward compatibility.

To enable license scanning in your container scanning results, set the `CS_INCLUDE_LICENSES` variable in your `.gitlab-ci.yml` file:

```yaml
container_scanning:
  variables:
    CS_INCLUDE_LICENSES: "true"
```

After you enable this feature, the generated CycloneDX report includes license information for components detected in your container images. You can view this license information on the dependency list page or in the downloadable CycloneDX job artifact.

Only SPDX licenses are supported. However, licenses that are not SPDX-compliant are still ingested, without any user-facing error.

## End-of-life operating system detection

Container scanning includes the ability to detect and report when your container images are using operating systems that have reached their end-of-life (EOL). Operating systems that have reached EOL no longer receive security updates, leaving them vulnerable to newly discovered security issues.

The EOL detection feature uses Trivy to identify operating systems that are no longer supported by their respective distributions. When an EOL operating system is detected, it's reported as a vulnerability in your container scanning report alongside other security findings.

To enable EOL detection, set `CS_REPORT_OS_EOL` to `"true"`.

## Container Scanning for Registry

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/2340) in GitLab 17.1 [with a flag](../../../administration/feature_flags/_index.md) named `enable_container_scanning_for_registry`.
Disabled by default. - [Enabled on GitLab Self-Managed, and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/issues/443827) in GitLab 17.2. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/443827) in GitLab 17.2. Feature flag `enable_container_scanning_for_registry` removed. {{< /history >}} When a container image is pushed with the `latest` tag, a container scanning job is automatically triggered by the security policy bot in a new pipeline against the default branch. Unlike regular container scanning, the scan results do not include a security report. Instead, Container Scanning for Registry relies on [Continuous Vulnerability Scanning](../continuous_vulnerability_scanning/_index.md) to inspect the components detected by the scan. When security findings are identified, GitLab populates the [vulnerability report](../vulnerability_report/_index.md) with these findings. Vulnerabilities can be viewed under the **Container registry vulnerabilities** tab of the vulnerability report page. {{< alert type="note" >}} Container Scanning for Registry populates the vulnerability report only when a new advisory is published to the [GitLab Advisory Database](../gitlab_advisory_database/_index.md). Support for populating the vulnerability report with all present advisory data, instead of only newly-detected data, is proposed in [epic 11219](https://gitlab.com/groups/gitlab-org/-/epics/11219). {{< /alert >}} ### Prerequisites - You must have at least the Maintainer role in a project to enable Container Scanning for Registry. - The project being used must not be empty. If you are utilizing an empty project solely for storing container images, this feature won't function as intended. As a workaround, ensure the project contains an initial commit on the default branch. - By default there is a limit of `50` scans per project per day. 
- You must [configure container registry notifications](../../../administration/packages/container_registry.md#configure-container-registry-notifications). ### Enabling Container Scanning for Registry To enable container scanning for the GitLab Container Registry: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Secure > Security configuration**. 1. Scroll down to the **Container Scanning For Registry** section and turn on the toggle. ### Use with offline or air-gapped environments To use Container Scanning for Registry in an offline or air-gapped environment, you must use a local copy of the container scanning analyzer image. Because this feature is managed by the GitLab Security Policy Bot, the analyzer image cannot be configured by editing the `.gitlab-ci.yml` file. Instead, you must override the default scanner image by setting the `CS_ANALYZER_IMAGE` CI/CD variable in the GitLab UI. The dynamically-created scanning job inherits variables defined in the UI. You can set the variable at the project, group, or instance level. To configure a custom scanner image: 1. On the left sidebar, select **Search or go to** and find your project or group. 1. Select **Settings** > **CI/CD**. 1. Expand the **Variables** section. 1. Select **Add variable** and fill in the details: - Key: `CS_ANALYZER_IMAGE` - Value: The full URL to your mirrored container scanning image. For example, `my.local.registry:5000/analyzers/container-scanning:7`. 1. Select **Add variable**. The GitLab Security Policy Bot will now use the specified image when it triggers a scan. ## Vulnerabilities database All analyzer images are [updated daily](https://gitlab.com/gitlab-org/security-products/analyzers/container-scanning/-/blob/master/README.md#image-updates). 
The images use data from upstream advisory databases: - AlmaLinux Security Advisory - Amazon Linux Security Center - Arch Linux Security Tracker - SUSE CVRF - CWE Advisories - Debian Security Bug Tracker - GitHub Security Advisory - Go Vulnerability Database - CBL-Mariner Vulnerability Data - NVD - OSV - Red Hat OVAL v2 - Red Hat Security Data API - Photon Security Advisories - Rocky Linux UpdateInfo - Ubuntu CVE Tracker (only data sources from mid 2021 and later) In addition to the sources provided by these scanners, GitLab maintains the following vulnerability databases: - The proprietary [GitLab Advisory Database](https://gitlab.com/gitlab-org/security-products/gemnasium-db). - The open source [GitLab Advisory Database (Open Source Edition)](https://gitlab.com/gitlab-org/advisories-community). In the GitLab Ultimate tier, the data from the [GitLab Advisory Database](https://gitlab.com/gitlab-org/security-products/gemnasium-db) is merged in to augment the data from the external sources. In the GitLab Premium and Free tiers, the data from the [GitLab Advisory Database (Open Source Edition)](https://gitlab.com/gitlab-org/advisories-community) is merged in to augment the data from the external sources. This augmentation currently only applies to the analyzer images for the Trivy scanner. Database update information for other analyzers is available in the [maintenance table](../detect/vulnerability_scanner_maintenance.md). ## Solutions for vulnerabilities (auto-remediation) {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Some vulnerabilities can be fixed by applying the solution that GitLab automatically generates. To enable remediation support, the scanning tool must have access to the `Dockerfile` specified by the [`CS_DOCKERFILE_PATH`](#available-cicd-variables) CI/CD variable. 
To ensure that the scanning tool has access to this file, it's necessary to set [`GIT_STRATEGY: fetch`](../../../ci/runners/configure_runners.md#git-strategy) in your `.gitlab-ci.yml` file by following the instructions described in this document's [overriding the container scanning template](#overriding-the-container-scanning-template) section. Read more about the [solutions for vulnerabilities](../vulnerabilities/_index.md#resolve-a-vulnerability). ## Troubleshooting ### `docker: Error response from daemon: failed to copy xattrs` When the runner uses the `docker` executor and NFS is used (for example, `/var/lib/docker` is on an NFS mount), container scanning might fail with an error like the following: ```plaintext docker: Error response from daemon: failed to copy xattrs: failed to set xattr "security.selinux" on /path/to/file: operation not supported. ``` This is a result of a bug in Docker which is now [fixed](https://github.com/containerd/continuity/pull/138 "fs: add WithAllowXAttrErrors CopyOpt"). To prevent the error, ensure the Docker version that the runner is using is `18.09.03` or higher. For more information, see [issue #10241](https://gitlab.com/gitlab-org/gitlab/-/issues/10241 "Investigate why Container Scanning is not working with NFS mounts"). ### Getting warning message `gl-container-scanning-report.json: no matching files` For information on this, see the [general Application Security troubleshooting section](../../../ci/jobs/job_artifacts_troubleshooting.md#error-message-no-files-to-upload). ### `unexpected status code 401 Unauthorized: Not Authorized` when scanning an image from AWS ECR This might happen when AWS region is not configured and the scanner cannot retrieve an authorization token. 
When you set `SECURE_LOG_LEVEL` to `debug`, you see a log message like the following:

```shell
[35mDEBUG[0m failed to get authorization token: MissingRegion: could not find region configuration
```

To resolve this, add the `AWS_DEFAULT_REGION` to your CI/CD variables:

```yaml
variables:
  AWS_DEFAULT_REGION: <AWS_REGION_FOR_ECR>
```

### `unable to open a file: open /home/gitlab/.cache/trivy/ee/db/metadata.json: no such file or directory`

The compressed Trivy database is stored in the `/tmp` folder of the container and it is extracted to `/home/gitlab/.cache/trivy/{ee|ce}/db` at runtime. This error can happen if you have a volume mount for the `/tmp` directory in your runner configuration.

To resolve this, instead of binding the `/tmp` folder, bind specific files or folders in `/tmp` (for example `/tmp/myfile.txt`).

### Resolving `context deadline exceeded` error

This error means a timeout occurred. To resolve it, add the `TRIVY_TIMEOUT` environment variable to the `container_scanning` job with a sufficiently long duration.

## Changes

Changes to the container scanning analyzer can be found in the project's [changelog](https://gitlab.com/gitlab-org/security-products/analyzers/container-scanning/-/blob/master/CHANGELOG.md).

### Container Scanning v6.x: outdated vulnerability database error

Using Container Scanning with the `registry.gitlab.com/security-products/container-scanning/grype:6` and `registry.gitlab.com/security-products/container-scanning/grype:6-fips` analyzer images may fail with an outdated vulnerability database error, for example:

`1 error occurred: * the vulnerability database was built 6 days ago (max allowed age is 5 days)`

This happens when one of these Container Scanning images is copied to your own registry but not kept up to date with the upstream image, which is rebuilt daily.
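To avoid the error, refresh your copy on a schedule, in the same way as [automating container scanning vulnerability database updates with a pipeline](#automating-container-scanning-vulnerability-database-updates-with-a-pipeline). A sketch adapted for the Grype image; the target path is a placeholder:

```yaml
variables:
  SOURCE_IMAGE: registry.gitlab.com/security-products/container-scanning/grype:6
  TARGET_IMAGE: $CI_REGISTRY/namespace/container-scanning-grype

image: docker:latest

update-grype-image:
  services:
    - docker:dind
  script:
    - docker pull $SOURCE_IMAGE
    - docker tag $SOURCE_IMAGE $TARGET_IMAGE
    - echo "$CI_REGISTRY_PASSWORD" | docker login $CI_REGISTRY --username $CI_REGISTRY_USER --password-stdin
    - docker push $TARGET_IMAGE
```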
--- stage: Application Security Testing group: Composition Analysis info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Container Scanning description: Image vulnerability scanning, configuration, customization, and reporting. breadcrumbs: - doc - user - application_security - container_scanning --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Changed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/86092) the major analyzer version from `4` to `5` in GitLab 15.0. - [Moved](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/86783) from GitLab Ultimate to GitLab Free in 15.0. - Container Scanning variables that reference Docker [renamed](https://gitlab.com/gitlab-org/gitlab/-/issues/357264) in GitLab 15.4. - Container Scanning template [moved](https://gitlab.com/gitlab-org/gitlab/-/issues/381665) from `Security/Container-Scanning.gitlab-ci.yml` to `Jobs/Container-Scanning.gitlab-ci.yml` in GitLab 15.6. {{< /history >}} Security vulnerabilities in container images create risk throughout your application lifecycle. Container Scanning detects these risks early, before they reach production environments. When vulnerabilities appear in your base images or operating system's packages, Container Scanning identifies them and provides a remediation path for those that it can. - <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For an overview, see [Container Scanning - Advanced Security Testing](https://www.youtube.com/watch?v=C0jn2eN5MAs). - <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For a video walkthrough, see [How to set up Container Scanning using GitLab](https://youtu.be/h__mcXpil_4?si=w_BVG68qnkL9x4l1). 
- For an introductory tutorial, see [Scan a Docker container for vulnerabilities](../../../tutorials/container_scanning/_index.md).

Container Scanning is often considered part of Software Composition Analysis (SCA). SCA covers inspecting the items your code uses. These items typically include application and system dependencies that are almost always imported from external sources, rather than written by you. GitLab offers both Container Scanning and [Dependency Scanning](../dependency_scanning/_index.md) to ensure coverage for all of these dependency types. To cover as much of your risk area as possible, we encourage you to use all of the security scanners. For a comparison of these features, see [Dependency Scanning compared to Container Scanning](../comparison_dependency_and_container_scanning.md).

GitLab integrates with the [Trivy](https://github.com/aquasecurity/trivy) security scanner to perform vulnerability static analysis in containers.

{{< alert type="warning" >}}

The Grype analyzer is no longer maintained, except for limited fixes as explained in our [statement of support](https://about.gitlab.com/support/statement-of-support/#version-support). The current major version of the Grype analyzer image continues to be updated with the latest advisory database and operating system packages until GitLab 19.0, at which point the analyzer will stop working.
{{< /alert >}} ## Features | Features | In Free and Premium | In Ultimate | |-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------| | Customize Settings ([Variables](#available-cicd-variables), [Overriding](#overriding-the-container-scanning-template), [offline environment support](#running-container-scanning-in-an-offline-environment), etc) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | [View JSON Report](#reports-json-format) as a CI job artifact | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | Generate a [CycloneDX SBOM JSON report](#cyclonedx-software-bill-of-materials) as a CI job artifact | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | Ability to enable container scanning via an MR in the GitLab UI | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | [UBI Image Support](#fips-enabled-images) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | Support for Trivy | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | [End-of-life Operating System Detection](#end-of-life-operating-system-detection) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | Inclusion of GitLab Advisory Database | Limited to the time-delayed content from GitLab [advisories-communities](https://gitlab.com/gitlab-org/advisories-community/) project | Yes - all the latest content from [Gemnasium DB](https://gitlab.com/gitlab-org/security-products/gemnasium-db) | | Presentation of 
Report data in Merge Request and Security tab of the CI pipeline job | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Solutions for vulnerabilities (auto-remediation)](#solutions-for-vulnerabilities-auto-remediation) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| Support for the [vulnerability allow list](#vulnerability-allowlisting) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Access to Dependency List page](../dependency_list/_index.md) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |

## Getting started

Enable the Container Scanning analyzer in your CI/CD pipeline. When a pipeline runs, the images your application depends on are scanned for vulnerabilities. You can customize Container Scanning by using CI/CD variables.

Prerequisites:

- The test stage is required in the `.gitlab-ci.yml` file.
- With self-managed runners you need a GitLab Runner with the `docker` or `kubernetes` executor on Linux/amd64. If you're using the instance runners on GitLab.com, this is enabled by default.
- An image matching the [supported distributions](#supported-distributions).
- [Build and push](../../packages/container_registry/build_and_push_images.md#use-gitlab-cicd) the Docker image to your project's container registry.
- If you're using a third-party container registry, you might need to provide authentication credentials through the `CS_REGISTRY_USER` and `CS_REGISTRY_PASSWORD` [configuration variables](#available-cicd-variables). For more details on how to use these variables, see [authenticate to a remote registry](#authenticate-to-a-remote-registry).

See below for [user and project-specific requirements](#prerequisites).

To enable the analyzer, either:

- Enable Auto DevOps, which includes container scanning.
- Use a preconfigured merge request.
- Create a [scan execution policy](../policies/scan_execution_policies.md) that enforces container scanning.
- Edit the `.gitlab-ci.yml` file manually.

### Use a preconfigured merge request

This method automatically prepares a merge request that includes the container scanning template in the `.gitlab-ci.yml` file. You then merge the merge request to enable container scanning.

{{< alert type="note" >}}

This method works best with no existing `.gitlab-ci.yml` file, or with a minimal configuration file. If you have a complex GitLab configuration file it might not be parsed successfully, and an error might occur. In that case, use the [manual](#edit-the-gitlab-ciyml-file-manually) method instead.

{{< /alert >}}

To enable Container Scanning:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Secure > Security configuration**.
1. In the **Container Scanning** row, select **Configure with a merge request**.
1. Select **Create merge request**.
1. Review the merge request, then select **Merge**.

Pipelines now include a Container Scanning job.

### Edit the `.gitlab-ci.yml` file manually

This method requires you to manually edit the existing `.gitlab-ci.yml` file. Use this method if your GitLab CI/CD configuration file is complex or you need to use non-default options.

To enable Container Scanning:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Build > Pipeline editor**.
1. If no `.gitlab-ci.yml` file exists, select **Configure pipeline**, then delete the example content.
1. Copy and paste the following to the bottom of the `.gitlab-ci.yml` file. If an `include` line already exists, add only the `template` line below it.

   ```yaml
   include:
     - template: Jobs/Container-Scanning.gitlab-ci.yml
   ```

1. Select the **Validate** tab, then select **Validate pipeline**. The message **Simulation completed successfully** confirms the file is valid.
1. Select the **Edit** tab.
1. Complete the fields.
Do not use the default branch for the **Branch** field. 1. Select the **Start a new merge request with these changes** checkbox, then select **Commit changes**. 1. Complete the fields according to your standard workflow, then select **Create merge request**. 1. Review and edit the merge request according to your standard workflow, wait until the pipeline passes, then select **Merge**. Pipelines now include a Container Scanning job. ## Understanding the results You can review vulnerabilities in a pipeline: 1. On the left sidebar, select **Search or go to** and find your project. 1. On the left sidebar, select **Build > Pipelines**. 1. Select the pipeline. 1. Select the **Security** tab. 1. Select a vulnerability to view its details, including: - Description: Explains the cause of the vulnerability, its potential impact, and recommended remediation steps. - Status: Indicates whether the vulnerability has been triaged or resolved. - Severity: Categorized into six levels based on impact. [Learn more about severity levels](../vulnerabilities/severities.md). - CVSS score: Provides a numeric value that maps to severity. - EPSS: Shows the likelihood of a vulnerability being exploited in the wild. - Has Known Exploit (KEV): Indicates that a given vulnerability has been exploited. - Project: Highlights the project where the vulnerability was identified. - Report type: Explains the output type. - Scanner: Identifies which analyzer detected the vulnerability. - Image: Provides the image attributed to the vulnerability - Namespace: Identifies the workspace attributed to the vulnerability. - Links: Evidence of the vulnerability being cataloged in various advisory databases. - Identifiers: A list of references used to classify the vulnerability, such as CVE identifiers. For more details, see [Pipeline security report](../vulnerability_report/pipeline.md). 
Additional ways to see Container Scanning results: - [Vulnerability report](../vulnerability_report/_index.md): Shows confirmed vulnerabilities on the default branch. - [Container scanning report artifact](../../../ci/yaml/artifacts_reports.md#artifactsreportscontainer_scanning) ## Roll out After you are confident in the Container Scanning results for a single project, you can extend its implementation to additional projects: - Use [enforced scan execution](../detect/security_configuration.md#create-a-shared-configuration) to apply Container Scanning settings across groups. - If you have unique requirements, Container Scanning can be run in [offline environments](#running-container-scanning-in-an-offline-environment). ## Supported distributions The following Linux distributions are supported: - Alma Linux - Alpine Linux - Amazon Linux - CentOS - CBL-Mariner - Debian - Distroless - Oracle Linux - Photon OS - Red Hat (RHEL) - Rocky Linux - SUSE - Ubuntu ### FIPS-enabled images GitLab also offers [FIPS-enabled Red Hat UBI](https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image) versions of the container-scanning images. You can therefore replace standard images with FIPS-enabled images. To configure the images, set the `CS_IMAGE_SUFFIX` to `-fips` or modify the `CS_ANALYZER_IMAGE` variable to the standard tag plus the `-fips` extension. {{< alert type="note" >}} The `-fips` flag is automatically added to `CS_ANALYZER_IMAGE` when FIPS mode is enabled in the GitLab instance. {{< /alert >}} Container scanning of images in authenticated registries is not supported when FIPS mode is enabled. When `CI_GITLAB_FIPS_MODE` is `"true"`, and `CS_REGISTRY_USER` or `CS_REGISTRY_PASSWORD` is set, the analyzer exits with an error and does not perform the scan. ## Configuration ### Customizing analyzer behavior To customize Container Scanning, use [CI/CD variables](#available-cicd-variables). 
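For example, assuming you want the report to include only findings of severity High or above, a minimal sketch using the `CS_SEVERITY_THRESHOLD` variable (documented in the variables table in this section):

```yaml
include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

container_scanning:
  variables:
    CS_SEVERITY_THRESHOLD: "HIGH"
```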
#### Enable verbose output

Enable verbose output when you need to see in detail what the container scanning job does, for example when troubleshooting.

In the following example, the Container Scanning template is included and verbose output is enabled.

```yaml
include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

variables:
  SECURE_LOG_LEVEL: 'debug'
```

#### Scan an image in a remote registry

To scan images located in a registry other than the project's, use the following `.gitlab-ci.yml`:

```yaml
include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

container_scanning:
  variables:
    CS_IMAGE: example.com/user/image:tag
```

##### Authenticate to a remote registry

Scanning an image in a private registry requires authentication. Provide the username in the `CS_REGISTRY_USER` variable, and the password in the `CS_REGISTRY_PASSWORD` variable.

For example, to scan an image from AWS Elastic Container Registry:

```yaml
container_scanning:
  before_script:
    - ruby -r open-uri -e "IO.copy_stream(URI.open('https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip'), 'awscliv2.zip')"
    - unzip awscliv2.zip
    - sudo ./aws/install
    - aws --version
    - export AWS_ECR_PASSWORD=$(aws ecr get-login-password --region region)

include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

variables:
  CS_IMAGE: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<image>:<tag>
  CS_REGISTRY_USER: AWS
  CS_REGISTRY_PASSWORD: "$AWS_ECR_PASSWORD"
  AWS_DEFAULT_REGION: <region>
```

Authenticating to a remote registry is not supported when FIPS mode is enabled.

#### Report language-specific findings

The `CS_DISABLE_LANGUAGE_VULNERABILITY_SCAN` CI/CD variable controls whether the scan reports findings related to programming languages. For more information about the supported languages, see [Language-specific Packages](https://aquasecurity.github.io/trivy/latest/docs/coverage/language/#supported-languages) in the Trivy documentation.
By default, the report only includes packages managed by the Operating System (OS) package manager (for example, `yum`, `apt`, `apk`, `tdnf`). To report security findings in non-OS packages, set `CS_DISABLE_LANGUAGE_VULNERABILITY_SCAN` to `"false"`: ```yaml include: - template: Jobs/Container-Scanning.gitlab-ci.yml container_scanning: variables: CS_DISABLE_LANGUAGE_VULNERABILITY_SCAN: "false" ``` When you enable this feature, you may see [duplicate findings](../terminology/_index.md#duplicate-finding) in the [vulnerability report](../vulnerability_report/_index.md) if [Dependency Scanning](../dependency_scanning/_index.md) is enabled for your project. This happens because GitLab can't automatically deduplicate findings across different types of scanning tools. To understand which types of dependencies are likely to be duplicated, see [Dependency Scanning compared to Container Scanning](../comparison_dependency_and_container_scanning.md). #### Running jobs in merge request pipelines See [Use security scanning tools with merge request pipelines](../detect/security_configuration.md#use-security-scanning-tools-with-merge-request-pipelines). #### Available CI/CD variables To customize Container Scanning, use CI/CD variables. The following table lists CI/CD variables specific to Container Scanning. You can also use any of the [predefined CI/CD variables](../../../ci/variables/predefined_variables.md). {{< alert type="warning" >}} Test customization of GitLab analyzers in a merge request before merging these changes to the default branch. Failure to do so can give unexpected results, including a large number of false positives. 
{{< /alert >}} | CI/CD Variable | Default | Description | |------------------------------------------|---------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | `ADDITIONAL_CA_CERT_BUNDLE` | `""` | Bundle of CA certs that you want to trust. See [Using a custom SSL CA certificate authority](#using-a-custom-ssl-ca-certificate-authority) for more details. | | `CI_APPLICATION_REPOSITORY` | `$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG` | Docker repository URL for the image to be scanned. | | `CI_APPLICATION_TAG` | `$CI_COMMIT_SHA` | Docker repository tag for the image to be scanned. | | `CS_ANALYZER_IMAGE` | `registry.gitlab.com/security-products/container-scanning:8` | Docker image of the analyzer. Do not use the `:latest` tag with analyzer images provided by GitLab. | | `CS_DEFAULT_BRANCH_IMAGE` | `""` | The name of the `CS_IMAGE` on the default branch. See [Setting the default branch image](#setting-the-default-branch-image) for more details. | | `CS_DISABLE_DEPENDENCY_LIST` | `"false"` | {{< icon name="warning" >}} **[Removed](https://gitlab.com/gitlab-org/gitlab/-/issues/439782)** in GitLab 17.0. | | `CS_DISABLE_LANGUAGE_VULNERABILITY_SCAN` | `"true"` | Disable scanning for language-specific packages installed in the scanned image. | | `CS_DOCKER_INSECURE` | `"false"` | Allow access to secure Docker registries using HTTPS without validating the certificates. | | `CS_DOCKERFILE_PATH` | `Dockerfile` | The path to the `Dockerfile` to use for generating remediations. By default, the scanner looks for a file named `Dockerfile` in the root directory of the project. 
You should configure this variable only if your `Dockerfile` is in a non-standard location, such as a subdirectory. See [Solutions for vulnerabilities](#solutions-for-vulnerabilities-auto-remediation) for more details. | | `CS_INCLUDE_LICENSES` | `""` | If set, this variable includes licenses for each component. It is only applicable to cyclonedx reports and those licenses are provided by [trivy](https://trivy.dev/v0.60/docs/scanner/license/) | | `CS_IGNORE_STATUSES` | `""` | Force the analyzer to ignore findings with specified statuses in a comma-delimited list. The following values are allowed: `unknown,not_affected,affected,fixed,under_investigation,will_not_fix,fix_deferred,end_of_life`. <sup>1</sup> | | `CS_IGNORE_UNFIXED` | `"false"` | Ignore findings that are not fixed. Ignored findings are not included in the report. | | `CS_IMAGE` | `$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG` | The Docker image to be scanned. If set, this variable overrides the `$CI_APPLICATION_REPOSITORY` and `$CI_APPLICATION_TAG` variables. | | `CS_IMAGE_SUFFIX` | `""` | Suffix added to `CS_ANALYZER_IMAGE`. If set to `-fips`, `FIPS-enabled` image is used for scan. See [FIPS-enabled images](#fips-enabled-images) for more details. | | `CS_QUIET` | `""` | If set, this variable disables output of the [vulnerabilities table](#container-scanning-job-log-format) in the job log. [Introduced](https://gitlab.com/gitlab-org/security-products/analyzers/container-scanning/-/merge_requests/50) in GitLab 15.1. | | `CS_REGISTRY_INSECURE` | `"false"` | Allow access to insecure registries (HTTP only). Should only be set to `true` when testing the image locally. Works with all scanners, but the registry must listen on port `80/tcp` for Trivy to work. | | `CS_REGISTRY_PASSWORD` | `$CI_REGISTRY_PASSWORD` | Password for accessing a Docker registry requiring authentication. The default is only set if `$CS_IMAGE` resides at [`$CI_REGISTRY`](../../../ci/variables/predefined_variables.md). 
Not supported when FIPS mode is enabled. | | `CS_REGISTRY_USER` | `$CI_REGISTRY_USER` | Username for accessing a Docker registry requiring authentication. The default is only set if `$CS_IMAGE` resides at [`$CI_REGISTRY`](../../../ci/variables/predefined_variables.md). Not supported when FIPS mode is enabled. | | `CS_REPORT_OS_EOL` | `"false"` | Enable EOL detection | | `CS_REPORT_OS_EOL_SEVERITY` | `"Medium"` | Severity level assigned to EOL OS findings when `CS_REPORT_OS_EOL` is enabled. EOL findings are always reported regardless of `CS_SEVERITY_THRESHOLD`. Supported levels are `UNKNOWN`, `LOW`, `MEDIUM`, `HIGH`, and `CRITICAL`. | | `CS_SEVERITY_THRESHOLD` | `UNKNOWN` | Severity level threshold. The scanner outputs vulnerabilities with severity level higher than or equal to this threshold. Supported levels are `UNKNOWN`, `LOW`, `MEDIUM`, `HIGH`, and `CRITICAL`. | | `CS_TRIVY_JAVA_DB` | `"registry.gitlab.com/gitlab-org/security-products/dependencies/trivy-java-db"` | Specify an alternate location for the [trivy-java-db](https://github.com/aquasecurity/trivy-java-db) vulnerability database. | | `CS_TRIVY_DETECTION_PRIORITY` | `"precise"` | Scan using the defined Trivy [detection priority](https://trivy.dev/latest/docs/scanner/vulnerability/#detection-priority). The following values are allowed: `precise` or `comprehensive`. | | `SECURE_LOG_LEVEL` | `info` | Set the minimum logging level. Messages of this logging level or higher are output. From highest to lowest severity, the logging levels are: `fatal`, `error`, `warn`, `info`, `debug`. | | `TRIVY_TIMEOUT` | `5m0s` | Set the timeout for the scan. | | `TRIVY_PLATFORM` | `linux/amd64` | Set platform in the format `os/arch` if image is multi-platform capable. | **Footnotes**: 1. Fix status information is highly dependent on accurate fix availability data from the software vendor and container image operating system package metadata. It is also subject to interpretation by individual container scanners. 
In cases where a container scanner misreports the availability of a fixed package for a vulnerability, using `CS_IGNORE_STATUSES` can lead to false positive or false negative filtering of findings when this setting is enabled.

### Overriding the container scanning template

If you want to override the job definition (for example, to change properties like `variables`), you must declare and override a job after the template inclusion, and then specify any additional keys.

This example sets `GIT_STRATEGY` to `fetch`:

```yaml
include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

container_scanning:
  variables:
    GIT_STRATEGY: fetch
```

### Setting the default branch image

By default, container scanning assumes that the image naming convention stores any branch-specific identifiers in the image tag rather than the image name. When the image name differs between the default branch and a non-default branch, previously-detected vulnerabilities show up as newly detected in merge requests.

When the same image has different names on the default branch and a non-default branch, you can use the `CS_DEFAULT_BRANCH_IMAGE` variable to indicate that image's name on the default branch. GitLab then correctly determines if a vulnerability already exists when running scans on non-default branches.

As an example, suppose the following:

- Non-default branches publish images with the naming convention `$CI_REGISTRY_IMAGE/$CI_COMMIT_BRANCH:$CI_COMMIT_SHA`.
- The default branch publishes images with the naming convention `$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA`.
In this example, you can use the following CI/CD configuration to ensure that vulnerabilities aren't duplicated:

```yaml
include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

container_scanning:
  variables:
    CS_DEFAULT_BRANCH_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  before_script:
    - export CS_IMAGE="$CI_REGISTRY_IMAGE/$CI_COMMIT_BRANCH:$CI_COMMIT_SHA"
    - |
      if [ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]; then
        export CS_IMAGE="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
      fi
```

`CS_DEFAULT_BRANCH_IMAGE` should remain the same for a given `CS_IMAGE`. If it changes, a duplicate set of vulnerabilities is created, which must be manually dismissed.

When using [Auto DevOps](../../../topics/autodevops/_index.md), `CS_DEFAULT_BRANCH_IMAGE` is automatically set to `$CI_REGISTRY_IMAGE/$CI_DEFAULT_BRANCH:$CI_APPLICATION_TAG`.

### Using a custom SSL CA certificate authority

You can use the `ADDITIONAL_CA_CERT_BUNDLE` CI/CD variable to configure a custom SSL CA certificate authority, which is used to verify the peer when fetching Docker images from a registry which uses HTTPS. The `ADDITIONAL_CA_CERT_BUNDLE` value should contain the [text representation of the X.509 PEM public-key certificate](https://www.rfc-editor.org/rfc/rfc7468#section-5.1). For example, to configure this value in the `.gitlab-ci.yml` file, use the following:

```yaml
container_scanning:
  variables:
    ADDITIONAL_CA_CERT_BUNDLE: |
      -----BEGIN CERTIFICATE-----
      MIIGqTCCBJGgAwIBAgIQI7AVxxVwg2kch4d56XNdDjANBgkqhkiG9w0BAQsFADCB
      ...
      jWgmPqF3vUbZE0EyScetPJquRFRKIesyJuBFMAs=
      -----END CERTIFICATE-----
```

The `ADDITIONAL_CA_CERT_BUNDLE` value can also be configured as a [custom variable in the UI](../../../ci/variables/_index.md#for-a-project), either as a `file`, which requires the path to the certificate, or as a variable, which requires the text representation of the certificate.
### Scanning a multi-arch image You can use the `TRIVY_PLATFORM` CI/CD variable to configure the container scan to run against a specific operating system and architecture. For example, to configure this value in the `.gitlab-ci.yml` file, use the following: ```yaml container_scanning: # Use an arm64 SaaS runner to scan this natively tags: ["saas-linux-small-arm64"] variables: TRIVY_PLATFORM: "linux/arm64" ``` ### Vulnerability allowlisting {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} To allowlist specific vulnerabilities, follow these steps: 1. Set `GIT_STRATEGY: fetch` in your `.gitlab-ci.yml` file by following the instructions in [overriding the container scanning template](#overriding-the-container-scanning-template). 1. Define the allowlisted vulnerabilities in a YAML file named `vulnerability-allowlist.yml`. This must use the format described in [`vulnerability-allowlist.yml` data format](#vulnerability-allowlistyml-data-format). 1. Add the `vulnerability-allowlist.yml` file to the root folder of your project's Git repository. #### `vulnerability-allowlist.yml` data format The `vulnerability-allowlist.yml` file is a YAML file that specifies a list of CVE IDs of vulnerabilities that are **allowed** to exist, because they're false positives, or they're not applicable. If a matching entry is found in the `vulnerability-allowlist.yml` file, the following happens: - The vulnerability **is not included** when the analyzer generates the `gl-container-scanning-report.json` file. - The Security tab of the pipeline **does not show** the vulnerability. It is not included in the JSON file, which is the source of truth for the Security tab. 
Example `vulnerability-allowlist.yml` file: ```yaml generalallowlist: CVE-2019-8696: CVE-2014-8166: cups CVE-2017-18248: images: registry.gitlab.com/gitlab-org/security-products/dast/webgoat-8.0@sha256: CVE-2018-4180: your.private.registry:5000/centos: CVE-2015-1419: libxml2 CVE-2015-1447: ``` This example excludes from `gl-container-scanning-report.json`: 1. All vulnerabilities with CVE IDs: `CVE-2019-8696`, `CVE-2014-8166`, `CVE-2017-18248`. 1. All vulnerabilities found in the `registry.gitlab.com/gitlab-org/security-products/dast/webgoat-8.0@sha256` container image with CVE ID `CVE-2018-4180`. 1. All vulnerabilities found in `your.private.registry:5000/centos` container with CVE IDs `CVE-2015-1419`, `CVE-2015-1447`. ##### File format - `generalallowlist` block allows you to specify CVE IDs globally. All vulnerabilities with matching CVE IDs are excluded from the scan report. - `images` block allows you to specify CVE IDs for each container image independently. All vulnerabilities from the given image with matching CVE IDs are excluded from the scan report. The image name is retrieved from one of the environment variables used to specify the Docker image to be scanned, such as `$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG` or `CS_IMAGE`. The image provided in this block **must** match this value and **must not** include the tag value. For example, if you specify the image to be scanned using `CS_IMAGE=alpine:3.7`, then you would use `alpine` in the `images` block, but you cannot use `alpine:3.7`. You can specify container image in multiple ways: - as image name only (such as `centos`). - as full image name with registry hostname (such as `your.private.registry:5000/centos`). - as full image name with registry hostname and sha256 label (such as `registry.gitlab.com/gitlab-org/security-products/dast/webgoat-8.0@sha256`). {{< alert type="note" >}} The string after CVE ID (`cups` and `libxml2` in the previous example) is an optional comment format. 
It has **no impact** on the handling of vulnerabilities. You can include comments to describe the vulnerability. {{< /alert >}} ##### Container scanning job log format You can verify the results of your scan and the correctness of your `vulnerability-allowlist.yml` file by looking at the logs that are produced by the container scanning analyzer in `container_scanning` job details. The log contains a list of found vulnerabilities as a table, for example: ```plaintext +------------+-------------------------+------------------------+-----------------------+------------------------------------------------------------------------+ | STATUS | CVE SEVERITY | PACKAGE NAME | PACKAGE VERSION | CVE DESCRIPTION | +------------+-------------------------+------------------------+-----------------------+------------------------------------------------------------------------+ | Approved | High CVE-2019-3462 | apt | 1.4.8 | Incorrect sanitation of the 302 redirect field in HTTP transport metho | | | | | | d of apt versions 1.4.8 and earlier can lead to content injection by a | | | | | | MITM attacker, potentially leading to remote code execution on the ta | | | | | | rget machine. | +------------+-------------------------+------------------------+-----------------------+------------------------------------------------------------------------+ | Unapproved | Medium CVE-2020-27350 | apt | 1.4.8 | APT had several integer overflows and underflows while parsing .deb pa | | | | | | ckages, aka GHSL-2020-168 GHSL-2020-169, in files apt-pkg/contrib/extr | | | | | | acttar.cc, apt-pkg/deb/debfile.cc, and apt-pkg/contrib/arfile.cc. 
This | | | | | | issue affects: apt 1.2.32ubuntu0 versions prior to 1.2.32ubuntu0.2; 1 | | | | | | .6.12ubuntu0 versions prior to 1.6.12ubuntu0.2; 2.0.2ubuntu0 versions | | | | | | prior to 2.0.2ubuntu0.2; 2.1.10ubuntu0 versions prior to 2.1.10ubuntu0 | | | | | | .1; | +------------+-------------------------+------------------------+-----------------------+------------------------------------------------------------------------+ | Unapproved | Medium CVE-2020-3810 | apt | 1.4.8 | Missing input validation in the ar/tar implementations of APT before v | | | | | | ersion 2.1.2 could result in denial of service when processing special | | | | | | ly crafted deb files. | +------------+-------------------------+------------------------+-----------------------+------------------------------------------------------------------------+ ``` Vulnerabilities in the log are marked as `Approved` when the corresponding CVE ID is added to the `vulnerability-allowlist.yml` file. ### Running container scanning in an offline environment {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab Self-Managed {{< /details >}} For instances in an environment with limited, restricted, or intermittent access to external resources through the internet, some adjustments are required for the container scanning job to successfully run. For more information, see [Offline environments](../offline_deployments/_index.md). #### Requirements for offline container scanning To use container scanning in an offline environment, you need: - GitLab Runner with the [`docker` or `kubernetes` executor](#getting-started). - To configure a local Docker container registry with copies of the container scanning images. 
You can find these images in their respective registries: | GitLab Analyzer | Container registry | | --- | --- | | [Container-Scanning](https://gitlab.com/gitlab-org/security-products/analyzers/container-scanning) | [Container-Scanning container registry](https://gitlab.com/security-products/container-scanning/container_registry/) | GitLab Runner has a [default `pull policy` of `always`](https://docs.gitlab.com/runner/executors/docker.html#using-the-always-pull-policy), meaning the runner tries to pull Docker images from the GitLab container registry even if a local copy is available. The GitLab Runner [`pull_policy` can be set to `if-not-present`](https://docs.gitlab.com/runner/executors/docker.html#using-the-if-not-present-pull-policy) in an offline environment if you prefer using only locally available Docker images. However, we recommend keeping the pull policy setting to `always` if not in an offline environment, as this enables the use of updated scanners in your CI/CD pipelines. ##### Support for Custom Certificate Authorities Support for custom certificate authorities for Trivy was introduced in version [4.0.0](https://gitlab.com/gitlab-org/security-products/analyzers/container-scanning/-/releases/4.0.0). #### Make GitLab container scanning analyzer images available inside your Docker registry For container scanning, import the following images from `registry.gitlab.com` into your [local Docker container registry](../../packages/container_registry/_index.md): ```plaintext registry.gitlab.com/security-products/container-scanning:8 registry.gitlab.com/security-products/container-scanning/trivy:8 ``` The process for importing Docker images into a local offline Docker registry depends on **your network security policy**. Consult your IT staff to find an accepted and approved process by which you can import or temporarily access external resources. 
These scanners are [periodically updated](../detect/vulnerability_scanner_maintenance.md), and you may be able to make occasional updates on your own. For more information, see [the specific steps on how to update an image with a pipeline](#automating-container-scanning-vulnerability-database-updates-with-a-pipeline). For details on saving and transporting Docker images as a file, see the Docker documentation on [`docker save`](https://docs.docker.com/reference/cli/docker/image/save/), [`docker load`](https://docs.docker.com/reference/cli/docker/image/load/), [`docker export`](https://docs.docker.com/reference/cli/docker/container/export/), and [`docker import`](https://docs.docker.com/reference/cli/docker/image/import/). #### Set container scanning CI/CD variables to use local container scanner analyzers {{< alert type="note" >}} The methods described here apply to `container_scanning` jobs that are defined in your `.gitlab-ci.yml` file. These methods do not work for the Container Scanning for Registry feature, which is managed by a bot and does not use the `.gitlab-ci.yml` file. To configure automatic Container Scanning for Registry in an offline environment, [define the `CS_ANALYZER_IMAGE` variable in the GitLab UI](#use-with-offline-or-air-gapped-environments) instead. {{< /alert >}} 1. [Override the container scanning template](#overriding-the-container-scanning-template) in your `.gitlab-ci.yml` file to refer to the Docker images hosted on your local Docker container registry: ```yaml include: - template: Jobs/Container-Scanning.gitlab-ci.yml container_scanning: image: $CI_REGISTRY/namespace/container-scanning ``` 1. If your local Docker container registry is running securely over `HTTPS`, but you're using a self-signed certificate, then you must set `CS_DOCKER_INSECURE: "true"` in the `container_scanning` section of your `.gitlab-ci.yml`. 
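Combining both steps, a minimal sketch of the resulting configuration (the registry path `$CI_REGISTRY/namespace/container-scanning` is a hypothetical example; `CS_DOCKER_INSECURE` is only needed for a self-signed certificate):

```yaml
include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

container_scanning:
  image: $CI_REGISTRY/namespace/container-scanning
  variables:
    CS_DOCKER_INSECURE: "true"
```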
#### Automating container scanning vulnerability database updates with a pipeline We recommend that you set up a [scheduled pipeline](../../../ci/pipelines/schedules.md) to fetch the latest vulnerabilities database on a preset schedule. Automating this with a pipeline means you do not have to do it manually each time. You can use the following `.gitlab-ci.yml` example as a template. ```yaml variables: SOURCE_IMAGE: registry.gitlab.com/security-products/container-scanning:8 TARGET_IMAGE: $CI_REGISTRY/namespace/container-scanning image: docker:latest update-scanner-image: services: - docker:dind script: - docker pull $SOURCE_IMAGE - docker tag $SOURCE_IMAGE $TARGET_IMAGE - echo "$CI_REGISTRY_PASSWORD" | docker login $CI_REGISTRY --username $CI_REGISTRY_USER --password-stdin - docker push $TARGET_IMAGE ``` The previous template works for a GitLab Docker registry running on a local installation. However, if you're using a non-GitLab Docker registry, you must change the `$CI_REGISTRY` value and the `docker login` credentials to match your local registry's details. #### Scan images in external private registries To scan an image in an external private registry, you must configure access credentials so the container scanning analyzer can authenticate itself before attempting to access the image to scan. If you use the GitLab [Container Registry](../../packages/container_registry/_index.md), the `CS_REGISTRY_USER` and `CS_REGISTRY_PASSWORD` [configuration variables](#available-cicd-variables) are set automatically and you can skip this configuration. 
This example shows the configuration needed to scan images in a private [Google Container Registry](https://cloud.google.com/artifact-registry):

```yaml
include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

container_scanning:
  variables:
    CS_REGISTRY_USER: _json_key
    CS_REGISTRY_PASSWORD: "$GCP_CREDENTIALS"
    CS_IMAGE: "gcr.io/path-to-your-registry/image:tag"
```

Before you commit this configuration, [add a CI/CD variable](../../../ci/variables/_index.md#for-a-project) for `GCP_CREDENTIALS` containing the JSON key, as described in the [Google Cloud Platform Container Registry documentation](https://cloud.google.com/container-registry/docs/advanced-authentication#json-key). Also:

- The value of the variable may not fit the masking requirements for the **Mask variable** option, so the value could be exposed in the job logs.
- Scans may not run in unprotected feature branches if you select the **Protect variable** option.
- Consider creating credentials with read-only permissions and rotating them regularly if the options aren't selected.

Scanning images in external private registries is not supported when FIPS mode is enabled.

#### Create and use a Trivy Java database mirror

When the `trivy` scanner is used and a `jar` file is encountered in a container image being scanned, `trivy` downloads an additional `trivy-java-db` vulnerability database. By default, the `trivy-java-db` database is hosted as an [OCI artifact](https://oras.land/docs/quickstart/) at `ghcr.io/aquasecurity/trivy-java-db:1`.
If this registry is [not accessible](#running-container-scanning-in-an-offline-environment) or responds with `TOOMANYREQUESTS`, one solution is to mirror the `trivy-java-db` to a more accessible container registry: ```yaml mirror trivy java db: image: name: ghcr.io/oras-project/oras:v1.1.0 entrypoint: [""] script: - oras login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY - oras pull ghcr.io/aquasecurity/trivy-java-db:1 - oras push $CI_REGISTRY_IMAGE:1 --config /dev/null:application/vnd.aquasec.trivy.config.v1+json javadb.tar.gz:application/vnd.aquasec.trivy.javadb.layer.v1.tar+gzip ``` The vulnerability database is not a regular Docker image, so it is not possible to pull it by using `docker pull`. The image shows an error if you go to it in the GitLab UI. If the container registry is `gitlab.example.com/trivy-java-db-mirror`, then the container scanning job should be configured in the following way. Do not add the tag `:1` at the end, it is added by `trivy`: ```yaml include: - template: Jobs/Container-Scanning.gitlab-ci.yml container_scanning: variables: CS_TRIVY_JAVA_DB: gitlab.example.com/trivy-java-db-mirror ``` ## Scanning archive formats {{< history >}} - Scanning tar files [introduced](https://gitlab.com/gitlab-org/security-products/analyzers/container-scanning/-/merge_requests/3151) in GitLab 18.0. {{< /history >}} Container Scanning supports images in archive formats (`.tar`, `.tar.gz`). Such images may be created, for example, using `docker save` or `docker buildx build`. To scan an archive file, set the environment variable `CS_IMAGE` to the format `archive://path/to/archive`: - The `archive://` scheme prefix specifies that the analyzer is to scan an archive. - `path/to/archive` specifies the path to the archive to scan, whether an absolute path or a relative path. Container Scanning supports tar image files following the [Docker Image Specification](https://github.com/moby/docker-image-spec). OCI tarballs are not supported. 
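Putting this together, a minimal sketch of a job that scans an archive (the path `image-latest.tar` is a hypothetical example; see the following sections for how to produce and pass such an archive):

```yaml
include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

container_scanning:
  variables:
    CS_IMAGE: "archive://image-latest.tar"
```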
For more information regarding supported formats, see [Trivy tar file support](https://trivy.dev/v0.48/docs/target/container_image/#tar-files). ### Building supported tar files Container Scanning uses metadata from the tar file for image naming. When building tar image files, ensure the image is tagged: ```shell # Pull or build an image with a name and a tag docker pull image:latest # OR docker build . -t image:latest # Then export to tar using docker save docker save image:latest -o image-latest.tar # Or build an image with a tag using buildx build docker buildx create --name container --driver=docker-container docker buildx build -t image:latest --builder=container -o type=docker,dest=- . > image-latest.tar # With podman podman build -t image:latest . podman save -o image-latest.tar image:latest ``` ### Image name Container Scanning determines the image name by first evaluating the archive's `manifest.json` and using the first item in `RepoTags`. If this is not found, `index.json` is used to fetch the `io.containerd.image.name` annotation. If this is not found, the archive filename is used instead. - `manifest.json` is defined in [Docker Image Specification v1.1.0](https://github.com/moby/docker-image-spec/blob/v1.1.0/v1.1.md#combined-image-json--filesystem-changeset-format) and created by using the command `docker save`. - `index.json` format is defined in the [OCI image specification v1.1.1](https://github.com/opencontainers/image-spec/blob/v1.1.1/spec.md). `io.containerd.image.name` is [available in containerd v1.3.0 and later](https://github.com/containerd/containerd/blob/v1.3.0/images/annotations.go) when using `ctr image export`. ### Scanning archives built in a previous job To scan an archive built in a CI/CD job, you must pass the archive artifact from the build job to the container scanning job. 
Use the [`artifacts:paths`](../../../ci/yaml/_index.md#artifactspaths) and [`dependencies`](../../../ci/yaml/_index.md#dependencies) keywords to pass artifacts from one job to a following one: ```yaml build_job: script: - docker build . -t image:latest - docker save image:latest -o image-latest.tar artifacts: paths: - "image-latest.tar" container_scanning: variables: CS_IMAGE: "archive://image-latest.tar" dependencies: - build_job ``` ### Scanning archives from the project repository To scan an archive found in your project repository, ensure that your [Git strategy](../../../ci/runners/configure_runners.md#git-strategy) enables access to your repository. Set the `GIT_STRATEGY` keyword to either `clone` or `fetch` in the `container_scanning` job because it is set to `none` by default. ```yaml container_scanning: variables: GIT_STRATEGY: fetch ``` ## Running the standalone container scanning tool It's possible to run the [GitLab container scanning tool](https://gitlab.com/gitlab-org/security-products/analyzers/container-scanning) against a Docker container without needing to run it within the context of a CI job. To scan an image directly, follow these steps: 1. Run [Docker Desktop](https://www.docker.com/products/docker-desktop/) or [Docker Machine](https://github.com/docker/machine). 1. Run the analyzer's Docker image, passing the image and tag you want to analyze in the `CI_APPLICATION_REPOSITORY` and `CI_APPLICATION_TAG` variables: ```shell docker run \ --interactive --rm \ --volume "$PWD":/tmp/app \ -e CI_PROJECT_DIR=/tmp/app \ -e CI_APPLICATION_REPOSITORY=registry.gitlab.com/gitlab-org/security-products/dast/webgoat-8.0@sha256 \ -e CI_APPLICATION_TAG=bc09fe2e0721dfaeee79364115aeedf2174cce0947b9ae5fe7c33312ee019a4e \ registry.gitlab.com/security-products/container-scanning ``` The results are stored in `gl-container-scanning-report.json`. 
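As a sketch of how you might post-process that report file, the following summarizes findings by severity. The report here is fabricated so the example is self-contained; a real `gl-container-scanning-report.json` follows the container scanning report schema, with a top-level `vulnerabilities` array whose entries carry a `severity` field:

```shell
# Fabricated report, standing in for the scanner's real output.
cat > gl-container-scanning-report.json <<'EOF'
{"vulnerabilities":[{"severity":"Critical"},{"severity":"High"},{"severity":"High"}]}
EOF

# Count findings per severity level.
summary=$(python3 - <<'PY'
import collections, json
with open("gl-container-scanning-report.json") as f:
    report = json.load(f)
counts = collections.Counter(v["severity"] for v in report["vulnerabilities"])
print(", ".join(f"{sev}: {n}" for sev, n in sorted(counts.items())))
PY
)
echo "$summary"
```

A summary like this can be useful in a follow-up CI job that fails the pipeline when the critical count exceeds a threshold.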
## Reports JSON format

The container scanning tool emits JSON reports which the [GitLab Runner](https://docs.gitlab.com/runner/) recognizes through the [`artifacts:reports`](../../../ci/yaml/_index.md#artifactsreports) keyword in the CI configuration file. After the CI job finishes, the runner uploads these reports to GitLab, where they are available as CI job artifacts. In GitLab Ultimate, these reports can be viewed in the corresponding [pipeline](../detect/security_scanning_results.md) and become part of the [vulnerability report](../vulnerability_report/_index.md).

These reports must follow a format defined in the [security report schemas](https://gitlab.com/gitlab-org/security-products/security-report-schemas/). See:

- [Latest schema for the container scanning report](https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/container-scanning-report-format.json).
- [Example container scanning report](https://gitlab.com/gitlab-examples/security/security-reports/-/blob/master/samples/container-scanning.json)

### CycloneDX Software Bill of Materials

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/396381) in GitLab 15.11.

{{< /history >}}

In addition to the [JSON report file](#reports-json-format), the [Container Scanning](https://gitlab.com/gitlab-org/security-products/analyzers/container-scanning) tool outputs a [CycloneDX](https://cyclonedx.org/) Software Bill of Materials (SBOM) for the scanned image. This CycloneDX SBOM is named `gl-sbom-report.cdx.json` and is saved in the same directory as the JSON report file. This feature is only supported when the `Trivy` analyzer is used.

This report can be viewed in the [Dependency List](../dependency_list/_index.md).

You can download CycloneDX SBOMs [the same way as other job artifacts](../../../ci/jobs/job_artifacts.md#download-job-artifacts).
#### License Information in CycloneDX Reports

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/472064) in GitLab 18.0.

{{< /history >}}

Container scanning can include license information in CycloneDX reports. This feature is disabled by default to maintain backward compatibility.

To enable license scanning in your container scanning results, set the `CS_INCLUDE_LICENSES` variable in your `.gitlab-ci.yml` file:

```yaml
container_scanning:
  variables:
    CS_INCLUDE_LICENSES: "true"
```

After enabling this feature, the generated CycloneDX report includes license information for components detected in your container images. You can view this license information in the dependency list page or as part of the downloadable CycloneDX job artifact.

Only SPDX licenses are supported. However, licenses that are not SPDX-compliant are still ingested without any user-facing error.

## End-of-life operating system detection

Container scanning can detect and report when your container images use operating systems that have reached their end of life (EOL). Operating systems that have reached EOL no longer receive security updates, leaving them vulnerable to newly discovered security issues.

The EOL detection feature uses Trivy to identify operating systems that are no longer supported by their respective distributions. When an EOL operating system is detected, it's reported as a vulnerability in your container scanning report alongside other security findings.

To enable EOL detection, set `CS_REPORT_OS_EOL` to `"true"`.

## Container Scanning for Registry

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/2340) in GitLab 17.1 [with a flag](../../../administration/feature_flags/_index.md) named `enable_container_scanning_for_registry`.
Disabled by default. - [Enabled on GitLab Self-Managed, and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/issues/443827) in GitLab 17.2. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/443827) in GitLab 17.2. Feature flag `enable_container_scanning_for_registry` removed. {{< /history >}} When a container image is pushed with the `latest` tag, a container scanning job is automatically triggered by the security policy bot in a new pipeline against the default branch. Unlike regular container scanning, the scan results do not include a security report. Instead, Container Scanning for Registry relies on [Continuous Vulnerability Scanning](../continuous_vulnerability_scanning/_index.md) to inspect the components detected by the scan. When security findings are identified, GitLab populates the [vulnerability report](../vulnerability_report/_index.md) with these findings. Vulnerabilities can be viewed under the **Container registry vulnerabilities** tab of the vulnerability report page. {{< alert type="note" >}} Container Scanning for Registry populates the vulnerability report only when a new advisory is published to the [GitLab Advisory Database](../gitlab_advisory_database/_index.md). Support for populating the vulnerability report with all present advisory data, instead of only newly-detected data, is proposed in [epic 11219](https://gitlab.com/groups/gitlab-org/-/epics/11219). {{< /alert >}} ### Prerequisites - You must have at least the Maintainer role in a project to enable Container Scanning for Registry. - The project being used must not be empty. If you are utilizing an empty project solely for storing container images, this feature won't function as intended. As a workaround, ensure the project contains an initial commit on the default branch. - By default there is a limit of `50` scans per project per day. 
- You must [configure container registry notifications](../../../administration/packages/container_registry.md#configure-container-registry-notifications). ### Enabling Container Scanning for Registry To enable container scanning for the GitLab Container Registry: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Secure > Security configuration**. 1. Scroll down to the **Container Scanning For Registry** section and turn on the toggle. ### Use with offline or air-gapped environments To use Container Scanning for Registry in an offline or air-gapped environment, you must use a local copy of the container scanning analyzer image. Because this feature is managed by the GitLab Security Policy Bot, the analyzer image cannot be configured by editing the `.gitlab-ci.yml` file. Instead, you must override the default scanner image by setting the `CS_ANALYZER_IMAGE` CI/CD variable in the GitLab UI. The dynamically-created scanning job inherits variables defined in the UI. You can set the variable at the project, group, or instance level. To configure a custom scanner image: 1. On the left sidebar, select **Search or go to** and find your project or group. 1. Select **Settings** > **CI/CD**. 1. Expand the **Variables** section. 1. Select **Add variable** and fill in the details: - Key: `CS_ANALYZER_IMAGE` - Value: The full URL to your mirrored container scanning image. For example, `my.local.registry:5000/analyzers/container-scanning:7`. 1. Select **Add variable**. The GitLab Security Policy Bot will now use the specified image when it triggers a scan. ## Vulnerabilities database All analyzer images are [updated daily](https://gitlab.com/gitlab-org/security-products/analyzers/container-scanning/-/blob/master/README.md#image-updates). 
The images use data from upstream advisory databases: - AlmaLinux Security Advisory - Amazon Linux Security Center - Arch Linux Security Tracker - SUSE CVRF - CWE Advisories - Debian Security Bug Tracker - GitHub Security Advisory - Go Vulnerability Database - CBL-Mariner Vulnerability Data - NVD - OSV - Red Hat OVAL v2 - Red Hat Security Data API - Photon Security Advisories - Rocky Linux UpdateInfo - Ubuntu CVE Tracker (only data sources from mid 2021 and later) In addition to the sources provided by these scanners, GitLab maintains the following vulnerability databases: - The proprietary [GitLab Advisory Database](https://gitlab.com/gitlab-org/security-products/gemnasium-db). - The open source [GitLab Advisory Database (Open Source Edition)](https://gitlab.com/gitlab-org/advisories-community). In the GitLab Ultimate tier, the data from the [GitLab Advisory Database](https://gitlab.com/gitlab-org/security-products/gemnasium-db) is merged in to augment the data from the external sources. In the GitLab Premium and Free tiers, the data from the [GitLab Advisory Database (Open Source Edition)](https://gitlab.com/gitlab-org/advisories-community) is merged in to augment the data from the external sources. This augmentation currently only applies to the analyzer images for the Trivy scanner. Database update information for other analyzers is available in the [maintenance table](../detect/vulnerability_scanner_maintenance.md). ## Solutions for vulnerabilities (auto-remediation) {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Some vulnerabilities can be fixed by applying the solution that GitLab automatically generates. To enable remediation support, the scanning tool must have access to the `Dockerfile` specified by the [`CS_DOCKERFILE_PATH`](#available-cicd-variables) CI/CD variable. 
To ensure that the scanning tool has access to this file, it's necessary to set [`GIT_STRATEGY: fetch`](../../../ci/runners/configure_runners.md#git-strategy) in your `.gitlab-ci.yml` file by following the instructions described in this document's [overriding the container scanning template](#overriding-the-container-scanning-template) section. Read more about the [solutions for vulnerabilities](../vulnerabilities/_index.md#resolve-a-vulnerability). ## Troubleshooting ### `docker: Error response from daemon: failed to copy xattrs` When the runner uses the `docker` executor and NFS is used (for example, `/var/lib/docker` is on an NFS mount), container scanning might fail with an error like the following: ```plaintext docker: Error response from daemon: failed to copy xattrs: failed to set xattr "security.selinux" on /path/to/file: operation not supported. ``` This is a result of a bug in Docker which is now [fixed](https://github.com/containerd/continuity/pull/138 "fs: add WithAllowXAttrErrors CopyOpt"). To prevent the error, ensure the Docker version that the runner is using is `18.09.03` or higher. For more information, see [issue #10241](https://gitlab.com/gitlab-org/gitlab/-/issues/10241 "Investigate why Container Scanning is not working with NFS mounts"). ### Getting warning message `gl-container-scanning-report.json: no matching files` For information on this, see the [general Application Security troubleshooting section](../../../ci/jobs/job_artifacts_troubleshooting.md#error-message-no-files-to-upload). ### `unexpected status code 401 Unauthorized: Not Authorized` when scanning an image from AWS ECR This might happen when AWS region is not configured and the scanner cannot retrieve an authorization token. 
When you set `SECURE_LOG_LEVEL` to `debug`, you see a log message like the following:

```shell
[35mDEBUG[0m failed to get authorization token: MissingRegion: could not find region configuration
```

To resolve this, add `AWS_DEFAULT_REGION` to your CI/CD variables:

```yaml
variables:
  AWS_DEFAULT_REGION: <AWS_REGION_FOR_ECR>
```

### `unable to open a file: open /home/gitlab/.cache/trivy/ee/db/metadata.json: no such file or directory`

The compressed Trivy database is stored in the `/tmp` folder of the container, and it is extracted to `/home/gitlab/.cache/trivy/{ee|ce}/db` at runtime. This error can happen if you have a volume mount for the `/tmp` directory in your runner configuration.

To resolve this, instead of binding the `/tmp` folder, bind specific files or folders in `/tmp` (for example, `/tmp/myfile.txt`).

### Resolving `context deadline exceeded` error

This error means a timeout occurred. To resolve it, add the `TRIVY_TIMEOUT` environment variable to the `container_scanning` job with a sufficiently long duration.

## Changes

Changes to the container scanning analyzer can be found in the project's [changelog](https://gitlab.com/gitlab-org/security-products/analyzers/container-scanning/-/blob/master/CHANGELOG.md).

### Container Scanning v6.x: outdated vulnerability database error

Using Container Scanning with `registry.gitlab.com/security-products/container-scanning/grype:6` and `registry.gitlab.com/security-products/container-scanning/grype:6-fips` analyzer images may fail with an outdated vulnerability database error, for example:

`1 error occurred: * the vulnerability database was built 6 days ago (max allowed age is 5 days)`

This happens when one of the Container Scanning images above is copied to a user's own repository and is not updated to the latest image (images are rebuilt daily).
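As a sketch, the `context deadline exceeded` timeout fix described in the troubleshooting section above looks like this in `.gitlab-ci.yml`. The `30m` value is an example; choose a duration long enough for your largest images:

```yaml
include:
  - template: Jobs/Container-Scanning.gitlab-ci.yml

container_scanning:
  variables:
    TRIVY_TIMEOUT: "30m"  # example duration; increase for very large images
```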
---

# API Security

Protection, analysis, testing, scanning, and discovery.

- Source: <https://docs.gitlab.com/user/application_security/api_security> (extracted 2025-08-13)
- Repository: <https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/_index.md>
- Stage: Application Security Testing, Group: Dynamic Analysis
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

API Security refers to the measures taken to secure and protect web Application Programming Interfaces (APIs) from unauthorized access, misuse, and attacks. APIs are a crucial component of modern application development as they allow applications to interact with each other and exchange data. However, this also makes them attractive to attackers and vulnerable to security threats if not properly secured.

In this section, we discuss GitLab features that can be used to ensure the security of web APIs in your application. Some of the features discussed are specific to web APIs and others are more general solutions that are also used with web API applications.

- [SAST](../sast/_index.md) identifies vulnerabilities by analyzing the application's codebase.
- [Dependency Scanning](../dependency_scanning/_index.md) reviews a project's third-party dependencies for known vulnerabilities (for example, CVEs).
- [Container Scanning](../container_scanning/_index.md) analyzes container images to identify known OS package vulnerabilities and installed language dependencies.
- [API Discovery](api_discovery/_index.md) examines an application containing a REST API and infers an OpenAPI specification for that API. OpenAPI specification documents are used by other GitLab security tools.
- [API security testing analyzer](../api_security_testing/_index.md) performs dynamic analysis security testing of web APIs. It can identify various security vulnerabilities in your application, including the OWASP Top 10.
- [API Fuzzing](../api_fuzzing/_index.md) performs fuzz testing of a web API. Fuzz testing looks for issues in an application that are not previously known and don't map to classic vulnerability types such as SQL Injection.
---

# API Discovery

- Source: <https://docs.gitlab.com/user/application_security/api_security/api_discovery> (extracted 2025-08-13)
- Repository: <https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_security/_index.md>
- Stage: Application Security Testing, Group: Dynamic Analysis
{{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/9302) in GitLab 15.9. The API Discovery feature is in [beta](../../../../policy/development_stages_support.md). {{< /history >}} API Discovery analyzes your application and produces an OpenAPI document describing the web APIs it exposes. This schema document can then be used by the [API security testing analyzer](../../api_security_testing/_index.md) or [API Fuzzing](../../api_fuzzing/_index.md) to perform security scans of the web API. ## Supported frameworks - [Java Spring-Boot](#java-spring-boot) ## When does API Discovery run? API Discovery runs as a standalone job in your pipeline. The resulting OpenAPI document is captured as a job artifact so it can be used by other jobs in later stages. API Discovery runs in the `test` stage by default. The `test` stage was chosen as it typically executes before the stages used by other security features such as API security testing and API fuzzing. ## Example API Discovery configurations The following projects demonstrate API Discovery: - [Example Java Spring Boot v2 Pet Store](https://gitlab.com/gitlab-org/security-products/demos/api-discovery/java-spring-boot-v2-petstore) ## Java Spring-Boot [Spring Boot](https://spring.io/projects/spring-boot/) is a popular framework for creating stand-alone, production-grade Spring-based applications. ### Supported Applications - Spring Boot: v2.X (>= 2.1) - Java: 11, 17 (LTS versions) - Executable JARs API Discovery supports Spring Boot major version 2, minor versions 1 and later. Versions 2.0.X are not supported due to known bugs which affect API Discovery and were fixed in 2.1. Major version 3 is planned to be supported in the future. Support for major version 1 is not planned. API Discovery is tested with and officially supports LTS versions of the Java runtime. 
Other versions may also work, and bug reports from non-LTS versions are welcome.

Only applications that are built as Spring Boot [executable JARs](https://docs.spring.io/spring-boot/redirect.html?page=executable-jar#appendix.executable-jar.nested-jars.jar-structure) are supported.

### Configure as pipeline job

The easiest way to run API Discovery is through a pipeline job based on our CI template. With this method, you provide a container image that has the required dependencies installed (such as an appropriate Java runtime). See [Image Requirements](#image-requirements) for more information.

1. Upload a container image that meets the [image requirements](#image-requirements) to a container registry. If the container registry requires authentication, see [this help section](../../../../ci/docker/using_docker_images.md#access-an-image-from-a-private-container-registry).
1. In a job in the `build` stage, build your application and configure the resulting Spring Boot executable JAR as a job artifact.
1. Include the API Discovery template in your `.gitlab-ci.yml` file.

   ```yaml
   include:
     - template: Security/API-Discovery.gitlab-ci.yml
   ```

   Only a single `include` statement is allowed per `.gitlab-ci.yml` file. If you are including other files, combine them into a single `include` statement.

   ```yaml
   include:
     - template: Security/API-Discovery.gitlab-ci.yml
     - template: Security/DAST-API.gitlab-ci.yml
   ```

1. Create a new job that extends from `.api_discovery_java_spring_boot`. The default stage is `test`, which you can optionally change to any value.

   ```yaml
   api_discovery:
     extends: .api_discovery_java_spring_boot
   ```

1. Configure the `image` for the job.

   ```yaml
   api_discovery:
     extends: .api_discovery_java_spring_boot
     image: eclipse-temurin:17-jre-alpine
   ```

1. Provide the Java class path needed by your application. This includes your compatible build artifact from step 2, along with any additional dependencies.
   For this example, the build artifact is `build/libs/spring-boot-app-0.0.0.jar` and contains all needed dependencies. The variable `API_DISCOVERY_JAVA_CLASSPATH` is used to provide the class path.

   ```yaml
   api_discovery:
     extends: .api_discovery_java_spring_boot
     image: eclipse-temurin:17-jre-alpine
     variables:
       API_DISCOVERY_JAVA_CLASSPATH: build/libs/spring-boot-app-0.0.0.jar
   ```

1. Optional. If the image provided is missing a dependency needed by API Discovery, it can be added using a `before_script`. In this example, the `eclipse-temurin:17-jre-alpine` container doesn't include `curl`, which is required by API Discovery. The dependency can be installed using the Alpine package manager `apk`:

   ```yaml
   api_discovery:
     extends: .api_discovery_java_spring_boot
     image: eclipse-temurin:17-jre-alpine
     variables:
       API_DISCOVERY_JAVA_CLASSPATH: build/libs/spring-boot-app-0.0.0.jar
     before_script:
       - apk add --no-cache curl
   ```

1. Optional. If the image provided doesn't automatically set the `JAVA_HOME` environment variable, or include `java` in the path, the `API_DISCOVERY_JAVA_HOME` variable can be used.

   ```yaml
   api_discovery:
     extends: .api_discovery_java_spring_boot
     image: eclipse-temurin:17-jre-alpine
     variables:
       API_DISCOVERY_JAVA_CLASSPATH: build/libs/spring-boot-app-0.0.0.jar
       API_DISCOVERY_JAVA_HOME: /opt/java
   ```

1. Optional. If the package registry at `API_DISCOVERY_PACKAGES` is not public, provide a token that has read access to the GitLab API and registry using the `API_DISCOVERY_PACKAGE_TOKEN` variable. This is not required if you are using `gitlab.com` and have not customized the `API_DISCOVERY_PACKAGES` variable. The following example uses a [custom CI/CD variable](../../../../ci/variables/_index.md#define-a-cicd-variable-in-the-ui) named `GITLAB_READ_TOKEN` to store the token.
```yaml api_discovery: extends: .api_discovery_java_spring_boot image: eclipse-temurin:17-jre-alpine variables: API_DISCOVERY_JAVA_CLASSPATH: build/libs/spring-boot-app-0.0.0.jar API_DISCOVERY_PACKAGE_TOKEN: $GITLAB_READ_TOKEN ``` After the API Discovery job has successfully run, the OpenAPI document is available as a job artifact called `gl-api-discovery-openapi.json`. #### Image requirements - Linux container image. - Java versions 11 or 17 are officially supported, but other versions are likely compatible as well. - The `curl` command. - A shell at `/bin/sh` (like `busybox`, `sh`, or `bash`). ### Available CI/CD variables | CI/CD variable | Description | |---------------------------------------------|--------------------| | `API_DISCOVERY_DISABLED` | Disables the API Discovery job when using template job rules. | | `API_DISCOVERY_DISABLED_FOR_DEFAULT_BRANCH` | Disables the API Discovery job for default branch pipelines when using template job rules. | | `API_DISCOVERY_JAVA_CLASSPATH` | Java class-path that includes target Spring Boot application. (`build/libs/sample-0.0.0.jar`) | | `API_DISCOVERY_JAVA_HOME` | If provided is used to set `JAVA_HOME`. | | `API_DISCOVERY_PACKAGES` | GitLab Project Package API Prefix (defaults to `$CI_API_V4_URL/projects/42503323/packages`). | | `API_DISCOVERY_PACKAGE_TOKEN` | GitLab token for calling the GitLab package API. Only needed when `API_DISCOVERY_PACKAGES` is set to a non-public project. | | `API_DISCOVERY_VERSION` | API Discovery version to use (defaults to `1`). Can be used to pin a version by providing the full version number `1.1.0`. | ## Get support or request an improvement To get support for your particular problem, use the [getting help channels](https://about.gitlab.com/get-help/). The [GitLab issue tracker on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues) is the right place for bugs and feature proposals about API Discovery. 
Use `~"Category:API Security"` label when opening a new issue regarding API Discovery to ensure it is quickly reviewed by the right people. [Search the issue tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) for similar entries before submitting your own, there's a good chance somebody else had the same issue or feature proposal. Show your support with an emoji reaction or join the discussion. When experiencing a behavior not working as expected, consider providing contextual information: - GitLab version if using a GitLab Self-Managed instance. - `.gitlab-ci.yml` job definition. - Full job console output. - Framework in use with version (for example "Spring Boot v2.3.2"). - Language runtime with version (for example "Eclipse Temurin v17.0.1"). <!-- - Scanner log file is available as a job artifact named `gl-api-discovery.log`. --> {{< alert type="warning" >}} **Sanitize data attached to a support issue**. Remove sensitive information, including: credentials, passwords, tokens, keys, and secrets. {{< /alert >}}
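For reference, the configuration steps described earlier combine into a pipeline definition along these lines. This is a sketch: the build image, build command, and jar path are assumptions that depend on your project:

```yaml
include:
  - template: Security/API-Discovery.gitlab-ci.yml

build:
  stage: build
  image: eclipse-temurin:17-jdk   # assumed build image
  script:
    - ./gradlew bootJar           # assumed build command
  artifacts:
    paths:
      - build/libs/spring-boot-app-0.0.0.jar

api_discovery:
  extends: .api_discovery_java_spring_boot
  image: eclipse-temurin:17-jre-alpine
  variables:
    API_DISCOVERY_JAVA_CLASSPATH: build/libs/spring-boot-app-0.0.0.jar
  before_script:
    - apk add --no-cache curl     # curl is required by API Discovery
```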
--- stage: Application Security Testing group: Dynamic Analysis info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: API Discovery breadcrumbs: - doc - user - application_security - api_security - api_discovery --- {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/9302) in GitLab 15.9. The API Discovery feature is in [beta](../../../../policy/development_stages_support.md). {{< /history >}} API Discovery analyzes your application and produces an OpenAPI document describing the web APIs it exposes. This schema document can then be used by the [API security testing analyzer](../../api_security_testing/_index.md) or [API Fuzzing](../../api_fuzzing/_index.md) to perform security scans of the web API. ## Supported frameworks - [Java Spring-Boot](#java-spring-boot) ## When does API Discovery run? API Discovery runs as a standalone job in your pipeline. The resulting OpenAPI document is captured as a job artifact so it can be used by other jobs in later stages. API Discovery runs in the `test` stage by default. The `test` stage was chosen as it typically executes before the stages used by other security features such as API security testing and API fuzzing. ## Example API Discovery configurations The following projects demonstrate API Discovery: - [Example Java Spring Boot v2 Pet Store](https://gitlab.com/gitlab-org/security-products/demos/api-discovery/java-spring-boot-v2-petstore) ## Java Spring-Boot [Spring Boot](https://spring.io/projects/spring-boot/) is a popular framework for creating stand-alone, production-grade Spring-based applications. 
### Supported Applications - Spring Boot: v2.X (>= 2.1) - Java: 11, 17 (LTS versions) - Executable JARs API Discovery supports Spring Boot major version 2, minor versions 1 and later. Versions 2.0.X are not supported due to known bugs which affect API Discovery and were fixed in 2.1. Major version 3 is planned to be supported in the future. Support for major version 1 is not planned. API Discovery is tested with and officially supports LTS versions of the Java runtime. Other versions may work also, and bug reports from non-LTS versions are welcome. Only applications that are built as Spring Boot [executable JARs](https://docs.spring.io/spring-boot/redirect.html?page=executable-jar#appendix.executable-jar.nested-jars.jar-structure) are supported. ### Configure as pipeline job The easiest way to run API Discovery is through a pipeline job based on our CI template. When running in this method, you provide a container image that has the required dependencies installed (such as an appropriate Java runtime). See [Image Requirements](#image-requirements) for more information. 1. A container image that meets the [image requirements](#image-requirements) is uploaded to a container registry. If the container registry requires authentication see [this help section](../../../../ci/docker/using_docker_images.md#access-an-image-from-a-private-container-registry). 1. In a job in the `build` stage, build your application and configure the resulting Spring Boot executable JAR as a job artifact. 1. Include the API Discovery template in your `.gitlab-ci.yml` file. ```yaml include: - template: Security/API-Discovery.gitlab-ci.yml ``` Only a single `include` statement is allowed per `.gitlab-ci.yml` file. If you are including other files, combine them into a single `include` statement. ```yaml include: - template: Security/API-Discovery.gitlab-ci.yml - template: Security/DAST-API.gitlab-ci.yml ``` 1. Create a new job that extends from `.api_discovery_java_spring_boot`. 
   The default stage is `test`, which can optionally be changed to any value.

   ```yaml
   api_discovery:
     extends: .api_discovery_java_spring_boot
   ```

1. Configure the `image` for the job.

   ```yaml
   api_discovery:
     extends: .api_discovery_java_spring_boot
     image: eclipse-temurin:17-jre-alpine
   ```

1. Provide the Java class path needed by your application. This includes your compatible build artifact from step 2, along with any additional dependencies. For this example, the build artifact is `build/libs/spring-boot-app-0.0.0.jar` and contains all needed dependencies. The variable `API_DISCOVERY_JAVA_CLASSPATH` is used to provide the class path.

   ```yaml
   api_discovery:
     extends: .api_discovery_java_spring_boot
     image: eclipse-temurin:17-jre-alpine
     variables:
       API_DISCOVERY_JAVA_CLASSPATH: build/libs/spring-boot-app-0.0.0.jar
   ```

1. Optional. If the image provided is missing a dependency needed by API Discovery, it can be added using a `before_script`. In this example, the `eclipse-temurin:17-jre-alpine` container doesn't include `curl`, which is required by API Discovery. The dependency can be installed using the Alpine package manager `apk`:

   ```yaml
   api_discovery:
     extends: .api_discovery_java_spring_boot
     image: eclipse-temurin:17-jre-alpine
     variables:
       API_DISCOVERY_JAVA_CLASSPATH: build/libs/spring-boot-app-0.0.0.jar
     before_script:
       - apk add --no-cache curl
   ```

1. Optional. If the image provided doesn't automatically set the `JAVA_HOME` environment variable, or include `java` in the path, the `API_DISCOVERY_JAVA_HOME` variable can be used.

   ```yaml
   api_discovery:
     extends: .api_discovery_java_spring_boot
     image: eclipse-temurin:17-jre-alpine
     variables:
       API_DISCOVERY_JAVA_CLASSPATH: build/libs/spring-boot-app-0.0.0.jar
       API_DISCOVERY_JAVA_HOME: /opt/java
   ```

1. Optional. If the package registry at `API_DISCOVERY_PACKAGES` is not public, provide a token that has read access to the GitLab API and registry using the `API_DISCOVERY_PACKAGE_TOKEN` variable.
   This is not required if you are using `gitlab.com` and have not customized the `API_DISCOVERY_PACKAGES` variable. The following example uses a [custom CI/CD variable](../../../../ci/variables/_index.md#define-a-cicd-variable-in-the-ui) named `GITLAB_READ_TOKEN` to store the token.

   ```yaml
   api_discovery:
     extends: .api_discovery_java_spring_boot
     image: eclipse-temurin:17-jre-alpine
     variables:
       API_DISCOVERY_JAVA_CLASSPATH: build/libs/spring-boot-app-0.0.0.jar
       API_DISCOVERY_PACKAGE_TOKEN: $GITLAB_READ_TOKEN
   ```

After the API Discovery job has successfully run, the OpenAPI document is available as a job artifact called `gl-api-discovery-openapi.json`.

#### Image requirements

- Linux container image.
- Java versions 11 or 17 are officially supported, but other versions are likely compatible as well.
- The `curl` command.
- A shell at `/bin/sh` (like `busybox`, `sh`, or `bash`).

### Available CI/CD variables

| CI/CD variable                              | Description |
|---------------------------------------------|-------------|
| `API_DISCOVERY_DISABLED`                    | Disables the API Discovery job when using template job rules. |
| `API_DISCOVERY_DISABLED_FOR_DEFAULT_BRANCH` | Disables the API Discovery job for default branch pipelines when using template job rules. |
| `API_DISCOVERY_JAVA_CLASSPATH`              | Java class path that includes the target Spring Boot application (for example, `build/libs/sample-0.0.0.jar`). |
| `API_DISCOVERY_JAVA_HOME`                   | If provided, used to set `JAVA_HOME`. |
| `API_DISCOVERY_PACKAGES`                    | GitLab project package API prefix (defaults to `$CI_API_V4_URL/projects/42503323/packages`). |
| `API_DISCOVERY_PACKAGE_TOKEN`               | GitLab token for calling the GitLab package API. Only needed when `API_DISCOVERY_PACKAGES` is set to a non-public project. |
| `API_DISCOVERY_VERSION`                     | API Discovery version to use (defaults to `1`). Can be used to pin a version by providing the full version number `1.1.0`.
## Get support or request an improvement

To get support for your particular problem, use the [getting help channels](https://about.gitlab.com/get-help/).

The [GitLab issue tracker on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues) is the right place for bugs and feature proposals about API Discovery. Use the `~"Category:API Security"` label when opening a new issue regarding API Discovery to ensure it is quickly reviewed by the right people.

[Search the issue tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) for similar entries before submitting your own; there's a good chance somebody else had the same issue or feature proposal. Show your support with an emoji reaction or join the discussion.

When experiencing a behavior not working as expected, consider providing contextual information:

- GitLab version if using a GitLab Self-Managed instance.
- `.gitlab-ci.yml` job definition.
- Full job console output.
- Framework in use with version (for example, "Spring Boot v2.3.2").
- Language runtime with version (for example, "Eclipse Temurin v17.0.1").

<!-- - Scanner log file is available as a job artifact named `gl-api-discovery.log`. -->

{{< alert type="warning" >}}

**Sanitize data attached to a support issue**. Remove sensitive information, including: credentials, passwords, tokens, keys, and secrets.

{{< /alert >}}
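Putting the configuration steps from earlier on this page together, a complete pipeline might look like the following sketch. The build image, the Gradle wrapper command, and the artifact path are illustrative assumptions; substitute the build command and JAR path used by your project.

```yaml
stages:
  - build
  - test

include:
  - template: Security/API-Discovery.gitlab-ci.yml

# Build the Spring Boot executable JAR and save it as a job artifact
# (build image and command are assumptions for this sketch).
build:
  stage: build
  image: eclipse-temurin:17-jdk
  script:
    - ./gradlew bootJar
  artifacts:
    paths:
      - build/libs/spring-boot-app-0.0.0.jar

# Run API Discovery against the built JAR.
api_discovery:
  extends: .api_discovery_java_spring_boot
  image: eclipse-temurin:17-jre-alpine
  variables:
    API_DISCOVERY_JAVA_CLASSPATH: build/libs/spring-boot-app-0.0.0.jar
  before_script:
    - apk add --no-cache curl  # curl is required by API Discovery but missing from this image
```

The `build` job's artifact is downloaded automatically by the `api_discovery` job in the later `test` stage, which is why the class path can point at the JAR built in the previous stage.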
https://docs.gitlab.com/user/application_security/api_security_testing
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/_index.md
2025-08-13
API security testing analyzer
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Changed](https://gitlab.com/groups/gitlab-org/-/epics/4254) in GitLab 15.6 to the default analyzer for on-demand API security testing scans.
- [Renamed](https://gitlab.com/gitlab-org/gitlab/-/issues/457449) in GitLab 17.0 from "DAST API analyzer" to "API security testing analyzer".

{{< /history >}}

Perform Dynamic Application Security Testing (DAST) of web APIs to help discover bugs and potential security issues that other QA processes may miss. Use API security testing in addition to other [GitLab Secure](../_index.md) security scanners and your own test processes. You can run DAST API tests either as part of your CI/CD workflow, [on-demand](../dast/on-demand_scan.md), or both.

{{< alert type="warning" >}}

Do not run API security testing against a production server. Not only can it perform any function that the API can, it may also trigger bugs in the API. This includes actions like modifying and deleting data. Only run API security testing against a test server.

{{< /alert >}}

{{< alert type="note" >}}

DAST API has been re-branded to API Security Testing. As part of this re-branding, the template name and variable prefixes have also been updated. The old template and variable names continue to work until the next major release, 18.0 in May 2025.

{{< /alert >}}

## Getting started

Get started with API security testing by editing your CI/CD configuration.
Prerequisites: - You have a web API using one of the supported API types: - REST API - SOAP - GraphQL - Form bodies, JSON, or XML - You have an API specification in one of the following formats: - [OpenAPI v2 or v3 Specification](configuration/enabling_the_analyzer.md#openapi-specification) - [GraphQL Schema](configuration/enabling_the_analyzer.md#graphql-schema) - [HTTP Archive (HAR)](configuration/enabling_the_analyzer.md#http-archive-har) - [Postman Collection v2.0 or v2.1](configuration/enabling_the_analyzer.md#postman-collection) Each scan supports exactly one specification. To scan more than one specification, use multiple scans. - You have a [GitLab Runner](../../../ci/runners/_index.md) available, with the [`docker` executor](https://docs.gitlab.com/runner/executors/docker.html) on Linux/amd64. - You have a deployed target application. For more details, see the [deployment options](#application-deployment-options). - The `dast` stage is added to your CI/CD pipeline definition, after the `deploy` stage. For example: ```yaml stages: - build - test - deploy - dast ``` To enable API security testing, you must alter your GitLab CI/CD configuration YAML based on the unique needs of your environment. You can specify the API you want to scan using: - [OpenAPI v2 or v3 Specification](configuration/enabling_the_analyzer.md#openapi-specification) - [GraphQL Schema](configuration/enabling_the_analyzer.md#graphql-schema) - [HTTP Archive (HAR)](configuration/enabling_the_analyzer.md#http-archive-har) - [Postman Collection v2.0 or v2.1](configuration/enabling_the_analyzer.md#postman-collection) ## Understanding the results To view the output of a security scan: 1. On the left sidebar, select **Search or go to** and find your project. 1. On the left sidebar, select **Build > Pipelines**. 1. Select the pipeline. 1. Select the **Security** tab. 1. 
Select a vulnerability to view its details, including:

   - Status: Indicates whether the vulnerability has been triaged or resolved.
   - Description: Explains the cause of the vulnerability, its potential impact, and recommended remediation steps.
   - Severity: Categorized into six levels based on impact. [Learn more about severity levels](../vulnerabilities/severities.md).
   - Scanner: Identifies which analyzer detected the vulnerability.
   - Method: Establishes the vulnerable server interaction type.
   - URL: Shows the location of the vulnerability.
   - Evidence: Describes the test case used to prove the presence of a given vulnerability.
   - Identifiers: A list of references used to classify the vulnerability, such as CWE identifiers.

You can also download the security scan results:

- In the pipeline's **Security** tab, select **Download results**.

For more details, see the [pipeline security report](../vulnerability_report/pipeline.md).

{{< alert type="note" >}}

Findings are generated on feature branches. When they are merged into the default branch, they become vulnerabilities. This distinction is important when evaluating your security posture.

{{< /alert >}}

## Optimization

To get the most out of API security testing, follow these recommendations:

- Configure runners to use the [always pull policy](https://docs.gitlab.com/runner/executors/docker.html#using-the-always-pull-policy) to run the latest versions of the analyzers.
- By default, API security testing downloads all artifacts defined by previous jobs in the pipeline. If your DAST job does not rely on `environment_url.txt` to define the URL under test or any other files created in previous jobs, you should not download artifacts. To avoid downloading artifacts, extend the analyzer CI/CD job to specify no dependencies.
For example, for the API security testing analyzer, add the following to your `.gitlab-ci.yml` file:

```yaml
api_security:
  dependencies: []
```

To configure API security testing for your particular application or environment, see the full list of [configuration options](configuration/_index.md).

## Roll out

When run in your CI/CD pipeline, API security testing runs in the `dast` stage by default. To ensure API security testing examines the latest code, ensure your CI/CD pipeline deploys changes to a test environment in a stage before the `dast` stage.

If your pipeline is configured to deploy to the same web server on each run, running a pipeline while another is still running could cause a race condition in which one pipeline overwrites the code from another. The API to be scanned should be excluded from changes for the duration of an API security testing scan. The only changes to the API should be from the API security testing scanner. Changes made to the API (for example, by users, scheduled tasks, database changes, code changes, other pipelines, or other scanners) during a scan could cause inaccurate results.
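As a concrete starting point, a minimal configuration that enables the scan in the `dast` stage might look like the following sketch. The template name and `APISEC_TARGET_URL` variable appear elsewhere on this page; the `APISEC_OPENAPI` variable name and the specification path are assumptions (based on the `APISEC_` prefix) to adapt to your project.

```yaml
stages:
  - build
  - test
  - deploy
  - dast          # API security testing runs here, after the app is deployed

include:
  - template: API-Security.gitlab-ci.yml

api_security:
  variables:
    # URL of the deployed test instance to scan (never a production server).
    APISEC_TARGET_URL: https://test-deployment.example.com
    # Path to the API specification describing the operations to test
    # (variable name is an assumption; see the configuration options page).
    APISEC_OPENAPI: openapi.json
```

Because the scan runs after the `deploy` stage, it always exercises the code deployed by the current pipeline rather than a stale environment.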
### Example API security testing scanning configurations The following projects demonstrate API security testing scanning: - [Example OpenAPI v3 Specification project](https://gitlab.com/gitlab-org/security-products/demos/api-dast/openapi-v3-example) - [Example OpenAPI v2 Specification project](https://gitlab.com/gitlab-org/security-products/demos/api-dast/openapi-example) - [Example HTTP Archive (HAR) project](https://gitlab.com/gitlab-org/security-products/demos/api-dast/har-example) - [Example Postman Collection project](https://gitlab.com/gitlab-org/security-products/demos/api-dast/postman-example) - [Example GraphQL project](https://gitlab.com/gitlab-org/security-products/demos/api-dast/graphql-example) - [Example SOAP project](https://gitlab.com/gitlab-org/security-products/demos/api-dast/soap-example) - [Authentication Token using Selenium](https://gitlab.com/gitlab-org/security-products/demos/api-dast/auth-token-selenium) ### Application deployment options API security testing requires a deployed application to be available to scan. Depending on the complexity of the target application, there are a few options as to how to deploy and configure the API security testing template. #### Review apps Review apps are the most involved method of deploying your DAST target application. To assist in the process, GitLab created a review app deployment using Google Kubernetes Engine (GKE). This example can be found in the [Review Apps - GKE](https://gitlab.com/gitlab-org/security-products/demos/dast/review-app-gke) project, plus detailed instructions to configure review apps for DAST in the [README.md](https://gitlab.com/gitlab-org/security-products/demos/dast/review-app-gke/-/blob/master/README.md). #### Docker Services If your application uses Docker containers you have another option for deploying and scanning with DAST. 
After your Docker build job completes and your image is added to your container registry, you can use the image as a [service](../../../ci/services/_index.md).

By using service definitions in your `.gitlab-ci.yml`, you can scan services with the DAST analyzer. When adding a `services` section to the job, the `alias` is used to define the hostname that can be used to access the service. In the following example, the `alias: yourapp` portion of the `dast` job definition means that the URL to the deployed application uses `yourapp` as the hostname (`https://yourapp/`).

```yaml
stages:
  - build
  - dast

include:
  - template: API-Security.gitlab-ci.yml

# Deploys the container to the GitLab container registry
deploy:
  services:
    - name: docker:dind
      alias: dind
  image: docker:20.10.16
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest

api_security:
  services: # use services to link your app container to the dast job
    - name: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
      alias: yourapp
  variables:
    APISEC_TARGET_URL: https://yourapp
```

Most applications depend on multiple services such as databases or caching services. By default, services defined in the `services` fields cannot communicate with each other. To allow communication between services, enable the `FF_NETWORK_PER_BUILD` [feature flag](https://docs.gitlab.com/runner/configuration/feature-flags.html#available-feature-flags).
```yaml
variables:
  FF_NETWORK_PER_BUILD: "true" # enable network per build so all services can communicate on the same network

services: # use services to link the container to the dast job
  - name: mongo:latest
    alias: mongo
  - name: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    alias: yourapp
```

## Get support or request an improvement

To get support for your particular problem, use the [getting help channels](https://about.gitlab.com/get-help/).

The [GitLab issue tracker on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues) is the right place for bugs and feature proposals about API Security and API security testing. Use the `~"Category:API Security"` label when opening a new issue regarding API security testing to ensure it is quickly reviewed by the right people.

[Search the issue tracker](https://gitlab.com/gitlab-org/gitlab/-/issues) for similar entries before submitting your own; there's a good chance somebody else had the same issue or feature proposal. Show your support with an emoji reaction or join the discussion.

When experiencing a behavior not working as expected, consider providing contextual information:

- GitLab version if using a GitLab Self-Managed instance.
- `.gitlab-ci.yml` job definition.
- Full job console output.
- Scanner log file available as a job artifact named `gl-api-security-scanner.log`.

{{< alert type="warning" >}}

**Sanitize data attached to a support issue**. Remove sensitive information, including: credentials, passwords, tokens, keys, and secrets.

{{< /alert >}}

## Glossary

- Assert: Assertions are detection modules used by checks to trigger a vulnerability. Many assertions have configurations. A check can use multiple Assertions. For example, Log Analysis, Response Analysis, and Status Code are common Assertions used together by checks. Checks with multiple Assertions allow them to be turned on and off.
- Check: Performs a specific type of test, or performs a check for a type of vulnerability.
For example, the SQL Injection Check performs DAST testing for SQL Injection vulnerabilities. The API security testing scanner is comprised of several checks. Checks can be turned on and off in a profile. - Profile: A configuration file has one or more testing profiles, or sub-configurations. You may have a profile for feature branches and another with extra testing for a main branch.
https://docs.gitlab.com/user/application_security/performance
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/performance.md
2025-08-13
Performance tuning and testing speed
Security tools that perform dynamic analysis testing, such as API security testing, perform testing by sending requests to an instance of your running application. The requests are engineered to test for specific vulnerabilities that might exist in your application. The speed of a dynamic analysis test depends on the following:

- How many requests per second can be sent to your application by our tooling
- How fast your application responds to requests
- How many requests must be sent to test the application
- How many operations your API is comprised of
- How many fields are in each operation (think JSON bodies, headers, query string, cookies, etc.)

If the API security testing job still takes longer than expected after following the advice in this performance guide, reach out to support for further assistance.

## Diagnosing performance issues

The first step to resolving performance issues is to understand what is contributing to the slower-than-expected testing time. Some common issues we see are:

- API security testing is running on a low-vCPU runner
- The application is deployed to a slow/single-CPU instance and is not able to keep up with the testing load
- The application contains a slow operation that impacts the overall test speed (> 1/2 second)
- The application contains an operation that returns a large amount of data (> 500K+)
- The application contains a large number of operations (> 40)

### The application contains a slow operation that impacts the overall test speed (> 1/2 second)

The API security testing job output contains helpful information about how fast we are testing, how fast each operation being tested responds, and summary information. Let's take a look at some sample output to see how it can be used in tracking down performance issues:

```shell
API SECURITY: Loaded 10 operations from: assets/har-large-response/large_responses.har
API SECURITY:
API SECURITY: Testing operation [1/10]: 'GET http://target:7777/api/large_response_json'.
API SECURITY: - Parameters: (Headers: 4, Query: 0, Body: 0)
API SECURITY: - Request body size: 0 Bytes (0 bytes)
API SECURITY:
API SECURITY: Finished testing operation 'GET http://target:7777/api/large_response_json'.
API SECURITY: - Excluded Parameters: (Headers: 0, Query: 0, Body: 0)
API SECURITY: - Performed 767 requests
API SECURITY: - Average response body size: 130 MB
API SECURITY: - Average call time: 2 seconds and 82.69 milliseconds (2.082693 seconds)
API SECURITY: - Time to complete: 14 minutes, 8 seconds and 788.36 milliseconds (848.788358 seconds)
```

This job console output snippet starts by telling us how many operations were found (10), followed by notifications that testing has started on a specific operation and a summary of the operation has been completed. The summary is the most interesting part of this log output. In the summary, we can see that it took API security testing 767 requests to fully test this operation and its related fields. We can also see that the average response time was 2 seconds and the time to complete was 14 minutes for this one operation.

An average response time of 2 seconds is a good initial indicator that this specific operation takes a long time to test. Further, we can see that the response body size is quite large. The large body size is the culprit here; transferring that much data on each request is what takes the majority of that 2 seconds.

For this issue, the team might decide to:

- Use a runner with more vCPUs, as this allows API security testing to parallelize the work being performed. This helps lower the test time, but getting the test down under 10 minutes might still be problematic without moving to a high-CPU machine due to how long the operation takes to test. While larger runners are more costly, you also pay for fewer minutes if the job executions are quicker.
- [Exclude this operation](#excluding-slow-operations) from API security testing.
While this is the simplest, it has the downside of a gap in security test coverage. - [Exclude the operation from feature branch API security testing, but include it in the default branch test](#excluding-operations-in-feature-branches-but-not-default-branch). - [Split up API security testing into multiple jobs](#splitting-a-test-into-multiple-jobs). The likely solution is to use a combination of these solutions to reach an acceptable test time, assuming your team's requirements are in the 5-7 minute range. ## Addressing performance issues The following sections document various options for addressing performance issues for API security testing: - [Using a larger runner](#using-a-larger-runner) - [Excluding slow operations](#excluding-slow-operations) - [Splitting a test into multiple jobs](#splitting-a-test-into-multiple-jobs) - [Excluding operations in feature branches, but not default branch](#excluding-operations-in-feature-branches-but-not-default-branch) ### Using a larger runner One of the easiest performance boosts can be achieved using a [larger runner](../../../ci/runners/hosted_runners/linux.md#machine-types-available-for-linux---x86-64) with API security testing. This table shows statistics collected during benchmarking of a Java Spring Boot REST API. In this benchmark, the target and API security testing share a single runner instance. | Hosted runner on Linux tag | Requests per Second | |------------------------------------|-----------| | `saas-linux-small-amd64` (default) | 255 | | `saas-linux-medium-amd64` | 400 | As we can see from this table, increasing the size of the runner and vCPU count can have a large impact on testing speed/performance. Here is an example job definition for API security testing that adds a `tags` section to use the medium SaaS runner on Linux. The job extends the job definition included through the API security testing template. 
```yaml api_security: tags: - saas-linux-medium-amd64 ``` In the `gl-api-security-scanner.log` file you can search for the string `Starting work item processor` to inspect the reported max DOP (degree of parallelism). The max DOP should be greater than or equal to the number of vCPUs assigned to the runner. If unable to identify the problem, open a ticket with support to assist. Example log entry: `17:00:01.084 [INF] <Peach.Web.Core.Services.WebRunnerMachine> Starting work item processor with 4 max DOP` ### Excluding slow operations In the case of one or two slow operations, the team might decide to skip testing the operations. Excluding the operation is done using the `APISEC_EXCLUDE_PATHS` configuration [variable as explained in this section.](configuration/customizing_analyzer_settings.md#exclude-paths) In this example, we have an operation that returns a large amount of data. The operation is `GET http://target:7777/api/large_response_json`. To exclude it we provide the `APISEC_EXCLUDE_PATHS` configuration variable with the path portion of our operation URL `/api/large_response_json`. To verify the operation is excluded, run the API security testing job and review the job console output. It includes a list of included and excluded operations at the end of the test. ```yaml api_security: variables: APISEC_EXCLUDE_PATHS: /api/large_response_json ``` {{< alert type="warning" >}} Excluding operations from testing could allow some vulnerabilities to go undetected. {{< /alert >}} ### Splitting a test into multiple jobs Splitting a test into multiple jobs is supported by API security testing through the use of [`APISEC_EXCLUDE_PATHS`](configuration/customizing_analyzer_settings.md#exclude-paths) and [`APISEC_EXCLUDE_URLS`](configuration/customizing_analyzer_settings.md#exclude-urls). When splitting a test up, a good pattern is to disable the `dast_api` job and replace it with two jobs with identifying names. 
In this example we have two jobs, each job is testing a version of the API, so our names reflect that. However, this technique can be applied to any situation, not just with versions of an API. The rules we are using in the `APISEC_v1` and `APISEC_v2` jobs are copied from the [API security testing template](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Security/API-Security.gitlab-ci.yml). ```yaml # Disable the main dast_api job api_security: rules: - if: $CI_COMMIT_BRANCH when: never APISEC_v1: extends: dast_api variables: APISEC_EXCLUDE_PATHS: /api/v1/** rules: - if: $APISEC_DISABLED == 'true' || $APISEC_DISABLED == '1' when: never - if: $APISEC_DISABLED_FOR_DEFAULT_BRANCH == 'true' && $CI_DEFAULT_BRANCH == $CI_COMMIT_REF_NAME when: never - if: $APISEC_DISABLED_FOR_DEFAULT_BRANCH == '1' && $CI_DEFAULT_BRANCH == $CI_COMMIT_REF_NAME when: never - if: $CI_COMMIT_BRANCH && $CI_GITLAB_FIPS_MODE == "true" variables: APISEC_IMAGE_SUFFIX: "-fips" - if: $CI_COMMIT_BRANCH APISEC_v2: variables: APISEC_EXCLUDE_PATHS: /api/v2/** rules: - if: $APISEC_DISABLED == 'true' || $APISEC_DISABLED == '1' when: never - if: $APISEC_DISABLED_FOR_DEFAULT_BRANCH == 'true' && $CI_DEFAULT_BRANCH == $CI_COMMIT_REF_NAME when: never - if: $APISEC_DISABLED_FOR_DEFAULT_BRANCH == '1' && $CI_DEFAULT_BRANCH == $CI_COMMIT_REF_NAME when: never - if: $CI_COMMIT_BRANCH && $CI_GITLAB_FIPS_MODE == "true" variables: APISEC_IMAGE_SUFFIX: "-fips" - if: $CI_COMMIT_BRANCH ``` ### Excluding operations in feature branches, but not default branch In the case of one or two slow operations, the team might decide to skip testing the operations, or exclude them from feature branch tests, but include them for default branch tests. Excluding the operation is done using the `APISEC_EXCLUDE_PATHS` configuration [variable as explained in this section.](configuration/customizing_analyzer_settings.md#exclude-paths) In this example, we have an operation that returns a large amount of data. 
The operation is `GET http://target:7777/api/large_response_json`. To exclude it we provide the `APISEC_EXCLUDE_PATHS` configuration variable with the path portion of our operation URL `/api/large_response_json`. Our configuration disables the main `dast_api` job and creates two new jobs `APISEC_main` and `APISEC_branch`. The `APISEC_branch` is set up to exclude the long operation and only run on non-default branches (for example, feature branches). The `APISEC_main` branch is set up to only execute on the default branch (`main` in this example). The `APISEC_branch` jobs run faster, allowing for quick development cycles, while the `APISEC_main` job which only runs on default branch builds, takes longer to run. To verify the operation is excluded, run the API security testing job and review the job console output. It includes a list of included and excluded operations at the end of the test. ```yaml # Disable the main job so we can create two jobs with # different names api_security: rules: - if: $CI_COMMIT_BRANCH when: never # API security testing for feature branch work, excludes /api/large_response_json APISEC_branch: extends: dast_api variables: APISEC_EXCLUDE_PATHS: /api/large_response_json rules: - if: $APISEC_DISABLED == 'true' || $APISEC_DISABLED == '1' when: never - if: $APISEC_DISABLED_FOR_DEFAULT_BRANCH == 'true' && $CI_DEFAULT_BRANCH == $CI_COMMIT_REF_NAME when: never - if: $APISEC_DISABLED_FOR_DEFAULT_BRANCH == '1' && $CI_DEFAULT_BRANCH == $CI_COMMIT_REF_NAME when: never - if: $CI_COMMIT_BRANCH && $CI_GITLAB_FIPS_MODE == "true" variables: APISEC_IMAGE_SUFFIX: "-fips" - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH when: never - if: $CI_COMMIT_BRANCH # API security testing for default branch (main in our case) # Includes the long running operations APISEC_main: extends: dast_api rules: - if: $APISEC_DISABLED == 'true' || $APISEC_DISABLED == '1' when: never - if: $APISEC_DISABLED_FOR_DEFAULT_BRANCH == 'true' && $CI_DEFAULT_BRANCH == $CI_COMMIT_REF_NAME 
when: never - if: $APISEC_DISABLED_FOR_DEFAULT_BRANCH == '1' && $CI_DEFAULT_BRANCH == $CI_COMMIT_REF_NAME when: never - if: $CI_COMMIT_BRANCH && $CI_GITLAB_FIPS_MODE == "true" variables: APISEC_IMAGE_SUFFIX: "-fips" - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH ```
---
stage: Application Security Testing
group: Dynamic Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Performance tuning and testing speed
breadcrumbs:
  - doc
  - user
  - application_security
  - api_security_testing
---

Security tools that perform dynamic analysis testing, such as API security testing, work by sending requests to an instance of your running application. The requests are engineered to test for specific vulnerabilities that might exist in your application. The speed of a dynamic analysis test depends on the following:

- How many requests per second our tooling can send to your application
- How fast your application responds to requests
- How many requests must be sent to test the application
- How many operations your API comprises
- How many fields are in each operation (JSON bodies, headers, query string, cookies, and so on)

If the API security testing job still takes longer than expected after following the advice in this performance guide, reach out to support for further assistance.

## Diagnosing performance issues

The first step to resolving performance issues is to understand what is contributing to the slower-than-expected testing time. Some common issues we see are:

- API security testing is running on a low-vCPU runner
- The application is deployed to a slow or single-CPU instance and is not able to keep up with the testing load
- The application contains a slow operation that impacts the overall test speed (> 1/2 second)
- The application contains an operation that returns a large amount of data (> 500 KB)
- The application contains a large number of operations (> 40)

### The application contains a slow operation that impacts the overall test speed (> 1/2 second)

The API security testing job output contains helpful information about how fast we are testing, how fast each operation being tested responds, and summary information. Let's take a look at some sample output to see how it can be used in tracking down performance issues:

```shell
API SECURITY: Loaded 10 operations from: assets/har-large-response/large_responses.har
API SECURITY:
API SECURITY: Testing operation [1/10]: 'GET http://target:7777/api/large_response_json'.
API SECURITY:  - Parameters: (Headers: 4, Query: 0, Body: 0)
API SECURITY:  - Request body size: 0 Bytes (0 bytes)
API SECURITY:
API SECURITY: Finished testing operation 'GET http://target:7777/api/large_response_json'.
API SECURITY:  - Excluded Parameters: (Headers: 0, Query: 0, Body: 0)
API SECURITY:  - Performed 767 requests
API SECURITY:  - Average response body size: 130 MB
API SECURITY:  - Average call time: 2 seconds and 82.69 milliseconds (2.082693 seconds)
API SECURITY:  - Time to complete: 14 minutes, 8 seconds and 788.36 milliseconds (848.788358 seconds)
```

This job console output snippet starts by telling us how many operations were found (10), followed by notifications that testing has started on a specific operation and a summary of the completed operation. The summary is the most interesting part of this log output. In the summary, we can see that it took API security testing 767 requests to fully test this operation and its related fields. We can also see that the average response time was 2 seconds and the time to complete was 14 minutes for this one operation.

An average response time of 2 seconds is a good initial indicator that this specific operation takes a long time to test. Further, we can see that the response body size is quite large. The large body size is the culprit here; transferring that much data on each request is what takes the majority of that 2 seconds.

For this issue, the team might decide to:

- Use a runner with more vCPUs, as this allows API security testing to parallelize the work being performed. This helps lower the test time, but getting the test under 10 minutes might still be problematic without moving to a high-CPU machine due to how long the operation takes to test. While larger runners are more costly, you also pay for fewer minutes if the job executions are quicker.
- [Exclude this operation](#excluding-slow-operations) from API security testing. While this is the simplest option, it has the downside of a gap in security test coverage.
- [Exclude the operation from feature branch API security testing, but include it in the default branch test](#excluding-operations-in-feature-branches-but-not-default-branch).
- [Split up API security testing into multiple jobs](#splitting-a-test-into-multiple-jobs).

The likely solution is to use a combination of these solutions to reach an acceptable test time, assuming your team's requirements are in the 5-7 minute range.
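As a sketch of such a combined approach, the job definition below pairs a larger runner with an exclusion for the slow operation. The runner tag and excluded path are illustrative values reused from the examples in this guide; substitute your own.

```yaml
api_security:
  # Larger runner: more vCPUs let the analyzer parallelize requests
  tags:
    - saas-linux-medium-amd64
  variables:
    # Skip the slow, large-response operation during this test
    APISEC_EXCLUDE_PATHS: /api/large_response_json
```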
## Addressing performance issues

The following sections document various options for addressing performance issues for API security testing:

- [Using a larger runner](#using-a-larger-runner)
- [Excluding slow operations](#excluding-slow-operations)
- [Splitting a test into multiple jobs](#splitting-a-test-into-multiple-jobs)
- [Excluding operations in feature branches, but not default branch](#excluding-operations-in-feature-branches-but-not-default-branch)

### Using a larger runner

One of the easiest performance boosts can be achieved by using a [larger runner](../../../ci/runners/hosted_runners/linux.md#machine-types-available-for-linux---x86-64) with API security testing. This table shows statistics collected during benchmarking of a Java Spring Boot REST API. In this benchmark, the target and API security testing share a single runner instance.

| Hosted runner on Linux tag         | Requests per second |
|------------------------------------|---------------------|
| `saas-linux-small-amd64` (default) | 255                 |
| `saas-linux-medium-amd64`          | 400                 |

As we can see from this table, increasing the size of the runner and vCPU count can have a large impact on testing speed and performance.

Here is an example job definition for API security testing that adds a `tags` section to use the medium SaaS runner on Linux. The job extends the job definition included through the API security testing template.

```yaml
api_security:
  tags:
    - saas-linux-medium-amd64
```

In the `gl-api-security-scanner.log` file, you can search for the string `Starting work item processor` to inspect the reported max DOP (degree of parallelism). The max DOP should be greater than or equal to the number of vCPUs assigned to the runner. If you are unable to identify the problem, open a ticket with support for assistance.

Example log entry: `17:00:01.084 [INF] <Peach.Web.Core.Services.WebRunnerMachine> Starting work item processor with 4 max DOP`

### Excluding slow operations

In the case of one or two slow operations, the team might decide to skip testing them. Excluding an operation is done using the [`APISEC_EXCLUDE_PATHS` configuration variable](configuration/customizing_analyzer_settings.md#exclude-paths).

In this example, we have an operation that returns a large amount of data: `GET http://target:7777/api/large_response_json`. To exclude it, we provide the `APISEC_EXCLUDE_PATHS` configuration variable with the path portion of the operation URL, `/api/large_response_json`. To verify the operation is excluded, run the API security testing job and review the job console output. It includes a list of included and excluded operations at the end of the test.

```yaml
api_security:
  variables:
    APISEC_EXCLUDE_PATHS: /api/large_response_json
```

{{< alert type="warning" >}}

Excluding operations from testing could allow some vulnerabilities to go undetected.

{{< /alert >}}

### Splitting a test into multiple jobs

Splitting a test into multiple jobs is supported by API security testing through the use of [`APISEC_EXCLUDE_PATHS`](configuration/customizing_analyzer_settings.md#exclude-paths) and [`APISEC_EXCLUDE_URLS`](configuration/customizing_analyzer_settings.md#exclude-urls). When splitting a test up, a good pattern is to disable the `dast_api` job and replace it with two jobs with identifying names. In this example we have two jobs, each testing a version of the API, so our names reflect that. However, this technique can be applied to any situation, not just versions of an API.

The rules we are using in the `APISEC_v1` and `APISEC_v2` jobs are copied from the [API security testing template](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Security/API-Security.gitlab-ci.yml).
```yaml
# Disable the main dast_api job
api_security:
  rules:
    - if: $CI_COMMIT_BRANCH
      when: never

APISEC_v1:
  extends: dast_api
  variables:
    APISEC_EXCLUDE_PATHS: /api/v1/**
  rules:
    - if: $APISEC_DISABLED == 'true' || $APISEC_DISABLED == '1'
      when: never
    - if: $APISEC_DISABLED_FOR_DEFAULT_BRANCH == 'true' && $CI_DEFAULT_BRANCH == $CI_COMMIT_REF_NAME
      when: never
    - if: $APISEC_DISABLED_FOR_DEFAULT_BRANCH == '1' && $CI_DEFAULT_BRANCH == $CI_COMMIT_REF_NAME
      when: never
    - if: $CI_COMMIT_BRANCH && $CI_GITLAB_FIPS_MODE == "true"
      variables:
        APISEC_IMAGE_SUFFIX: "-fips"
    - if: $CI_COMMIT_BRANCH

APISEC_v2:
  extends: dast_api
  variables:
    APISEC_EXCLUDE_PATHS: /api/v2/**
  rules:
    - if: $APISEC_DISABLED == 'true' || $APISEC_DISABLED == '1'
      when: never
    - if: $APISEC_DISABLED_FOR_DEFAULT_BRANCH == 'true' && $CI_DEFAULT_BRANCH == $CI_COMMIT_REF_NAME
      when: never
    - if: $APISEC_DISABLED_FOR_DEFAULT_BRANCH == '1' && $CI_DEFAULT_BRANCH == $CI_COMMIT_REF_NAME
      when: never
    - if: $CI_COMMIT_BRANCH && $CI_GITLAB_FIPS_MODE == "true"
      variables:
        APISEC_IMAGE_SUFFIX: "-fips"
    - if: $CI_COMMIT_BRANCH
```

### Excluding operations in feature branches, but not default branch

In the case of one or two slow operations, the team might decide to skip testing them, or to exclude them from feature branch tests but include them in default branch tests. Excluding an operation is done using the [`APISEC_EXCLUDE_PATHS` configuration variable](configuration/customizing_analyzer_settings.md#exclude-paths).

In this example, we have an operation that returns a large amount of data: `GET http://target:7777/api/large_response_json`. To exclude it, we provide the `APISEC_EXCLUDE_PATHS` configuration variable with the path portion of the operation URL, `/api/large_response_json`. Our configuration disables the main `dast_api` job and creates two new jobs, `APISEC_main` and `APISEC_branch`.
The `APISEC_branch` job is set up to exclude the long operation and only run on non-default branches (for example, feature branches). The `APISEC_main` job is set up to only execute on the default branch (`main` in this example). The `APISEC_branch` job runs faster, allowing for quick development cycles, while the `APISEC_main` job, which only runs on default branch builds, takes longer to run.

To verify the operation is excluded, run the API security testing job and review the job console output. It includes a list of included and excluded operations at the end of the test.

```yaml
# Disable the main job so we can create two jobs with
# different names
api_security:
  rules:
    - if: $CI_COMMIT_BRANCH
      when: never

# API security testing for feature branch work, excludes /api/large_response_json
APISEC_branch:
  extends: dast_api
  variables:
    APISEC_EXCLUDE_PATHS: /api/large_response_json
  rules:
    - if: $APISEC_DISABLED == 'true' || $APISEC_DISABLED == '1'
      when: never
    - if: $APISEC_DISABLED_FOR_DEFAULT_BRANCH == 'true' && $CI_DEFAULT_BRANCH == $CI_COMMIT_REF_NAME
      when: never
    - if: $APISEC_DISABLED_FOR_DEFAULT_BRANCH == '1' && $CI_DEFAULT_BRANCH == $CI_COMMIT_REF_NAME
      when: never
    - if: $CI_COMMIT_BRANCH && $CI_GITLAB_FIPS_MODE == "true"
      variables:
        APISEC_IMAGE_SUFFIX: "-fips"
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: never
    - if: $CI_COMMIT_BRANCH

# API security testing for default branch (main in our case)
# Includes the long running operations
APISEC_main:
  extends: dast_api
  rules:
    - if: $APISEC_DISABLED == 'true' || $APISEC_DISABLED == '1'
      when: never
    - if: $APISEC_DISABLED_FOR_DEFAULT_BRANCH == 'true' && $CI_DEFAULT_BRANCH == $CI_COMMIT_REF_NAME
      when: never
    - if: $APISEC_DISABLED_FOR_DEFAULT_BRANCH == '1' && $CI_DEFAULT_BRANCH == $CI_COMMIT_REF_NAME
      when: never
    - if: $CI_COMMIT_BRANCH && $CI_GITLAB_FIPS_MODE == "true"
      variables:
        APISEC_IMAGE_SUFFIX: "-fips"
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```
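When more than one operation needs excluding, `APISEC_EXCLUDE_PATHS` accepts several paths in a single value. A minimal sketch, assuming the semicolon separator described in the exclude-paths documentation, with illustrative paths:

```yaml
api_security:
  variables:
    # Multiple excluded paths in one semicolon-separated value;
    # entries can use wildcards (assumed paths for illustration)
    APISEC_EXCLUDE_PATHS: /api/large_response_json;/api/slow/**
```

Verify the resulting exclusions in the list of included and excluded operations printed at the end of the job console output.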
https://docs.gitlab.com/user/application_security/troubleshooting
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/troubleshooting.md
2025-08-13
doc/user/application_security/api_security_testing
[ "doc", "user", "application_security", "api_security_testing" ]
troubleshooting.md
Application Security Testing
Dynamic Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Troubleshooting API security testing jobs
## API security testing job times out after N hours

For larger repositories, the API security testing job could time out on the [small hosted runner on Linux](../../../ci/runners/hosted_runners/linux.md#machine-types-available-for-linux---x86-64), which is set by default. If this happens in your jobs, you should scale up to a [larger runner](performance.md#using-a-larger-runner).

See the following documentation sections for assistance:

- [Performance tuning and testing speed](performance.md)
- [Using a larger runner](performance.md#using-a-larger-runner)
- [Excluding operations by path](configuration/customizing_analyzer_settings.md#exclude-paths)
- [Excluding slow operations](performance.md#excluding-slow-operations)

## API security testing job takes too long to complete

See [Performance tuning and testing speed](performance.md).

## Error: `Error waiting for DAST API 'http://127.0.0.1:5000' to become available`

A bug exists in versions of the API security testing analyzer prior to v1.6.196 that can cause a background process to fail under certain conditions. The solution is to update to a newer version of the API security testing analyzer. The version information can be found in the job details for the `dast_api` job.

If the issue occurs with versions v1.6.196 or greater, contact Support and provide the following information:

1. Reference this troubleshooting section and ask for the issue to be escalated to the Dynamic Analysis Team.
1. The full console output of the job.
1. The `gl-api-security-scanner.log` file available as a job artifact. In the right-hand panel of the job details page, select the **Browse** button.
1. The `dast_api` job definition from your `.gitlab-ci.yml` file.

**Error message**

- In [GitLab 15.6 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/376078), `Error waiting for DAST API 'http://127.0.0.1:5000' to become available`
- In GitLab 15.5 and earlier, `Error waiting for API Security 'http://127.0.0.1:5000' to become available`.

## `Failed to start scanner session (version header not found)`

The API security testing engine outputs an error message when it cannot establish a connection with the scanner application component. The error message is shown in the job output window of the `dast_api` job. A common cause of this issue is changing the `APISEC_API` variable from its default.

**Error message**

- `Failed to start scanner session (version header not found).`

**Solution**

- Remove the `APISEC_API` variable from the `.gitlab-ci.yml` file. The value is inherited from the API security testing CI/CD template. We recommend this method instead of manually setting a value.
- If removing the variable is not possible, check to see if this value has changed in the latest version of the [API security testing CI/CD template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Security/API-Security.gitlab-ci.yml). If so, update the value in the `.gitlab-ci.yml` file.

## `Failed to start session with scanner. Please retry, and if the problem persists reach out to support.`

The API security testing engine outputs an error message when it cannot establish a connection with the scanner application component. The error message is shown in the job output window of the `dast_api` job. A common cause of this issue is that the background component cannot use the selected port because it is already in use. This error can occur intermittently if timing plays a part (a race condition). This issue occurs most often with Kubernetes environments when other services are mapped into the container, causing port conflicts.
Before proceeding with a solution, it is important to confirm that the error message was produced because the port was already taken. To confirm this was the cause:

1. Go to the job console.
1. Look for the artifact `gl-api-security-scanner.log`. You can either download all artifacts by selecting **Download** and then search for the file, or start searching directly by selecting **Browse**.
1. Open the file `gl-api-security-scanner.log` in a text editor.
1. If the error message was produced because the port was already taken, you should see in the file a message like the following:

   - In [GitLab 15.5 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/367734):

     ```log
     Failed to bind to address http://127.0.0.1:5500: address already in use.
     ```

   - In GitLab 15.4 and earlier:

     ```log
     Failed to bind to address http://[::]:5000: address already in use.
     ```

The text `http://[::]:5000` in the previous message could be different in your case, for instance it could be `http://[::]:5500` or `http://127.0.0.1:5500`. As long as the remaining parts of the error message are the same, it is safe to assume the port was already taken.

If you did not find evidence that the port was already taken, check other troubleshooting sections which also address the same error message shown in the job console output. If there are no more options, feel free to [get support or request an improvement](_index.md#get-support-or-request-an-improvement) through the proper channels.

Once you have confirmed the issue was produced because the port was already taken, use the configuration variable `APISEC_API_PORT`, introduced in [GitLab 15.5](https://gitlab.com/gitlab-org/gitlab/-/issues/367734). This configuration variable allows setting a fixed port number for the scanner background component.

**Solution**

1. Ensure your `.gitlab-ci.yml` file defines the configuration variable `APISEC_API_PORT`.
1. Update the value of `APISEC_API_PORT` to any available port number greater than 1024. We recommend checking that the new value is not in use by GitLab. See the full list of ports used by GitLab in [Package defaults](../../../administration/package_information/defaults.md#ports).

## `Application cannot determine the base URL for the target API`

The API security testing engine outputs an error message when it cannot determine the target API after inspecting the OpenAPI document. This error message is shown when the target API has not been set in the `.gitlab-ci.yml` file, is not available in the `environment_url.txt` file, and could not be computed using the OpenAPI document.

There is an order of precedence in which the API security testing engine tries to get the target API when checking the different sources. First, it tries to use `APISEC_TARGET_URL`. If the environment variable has not been set, the engine attempts to use the `environment_url.txt` file. If there is no `environment_url.txt` file, the engine uses the OpenAPI document contents and the URL provided in `APISEC_OPENAPI` (if a URL is provided) to try to compute the target API.

The best-suited solution depends on whether or not your target API changes for each deployment. In static environments, the target API is the same for each deployment; in this case refer to the [static environment solution](#static-environment-solution). If the target API changes for each deployment, a [dynamic environment solution](#dynamic-environment-solutions) should be applied.

## API security testing job excludes some paths from operations

If you find that some paths are being excluded from operations, make sure that:

- The variable `DAST_API_EXCLUDE_URLS` is not configured to exclude operations you want to test.
- The `consumes` array is defined and has a valid type in the target definition JSON file.

For an example definition, see the [example project target definition file](https://gitlab.com/gitlab-org/security-products/demos/api-dast/openapi-example/-/blob/12e2b039d08208f1dd38a1e7c52b0bda848bb449/rest_target_openapi.json?plain=1#L13).

### Static environment solution

This solution is for pipelines in which the target API URL doesn't change (is static).

**Add environment variable**

For environments where the target API remains the same, we recommend you specify the target URL by using the `APISEC_TARGET_URL` environment variable. In your `.gitlab-ci.yml`, add a variable `APISEC_TARGET_URL`. The variable must be set to the base URL of the API testing target. For example:

```yaml
stages:
  - dast

include:
  - template: API-Security.gitlab-ci.yml

variables:
  APISEC_TARGET_URL: http://test-deployment/
  APISEC_OPENAPI: test-api-specification.json
```

### Dynamic environment solutions

In a dynamic environment, your target API changes for each different deployment. In this case, there is more than one possible solution; we recommend you use the `environment_url.txt` file when dealing with dynamic environments.

**Use environment_url.txt**

To support dynamic environments in which the target API URL changes during each pipeline, the API security testing engine supports the use of an `environment_url.txt` file that contains the URL to use. This file is not checked into the repository; instead, it's created during the pipeline by the job that deploys the test target and collected as an artifact that can be used by later jobs in the pipeline. The job that creates the `environment_url.txt` file must run before the API security testing engine job.

1. Modify the test target deployment job, adding the base URL in an `environment_url.txt` file at the root of your project.
1. Modify the test target deployment job, collecting the `environment_url.txt` file as an artifact.
Example:

```yaml
deploy-test-target:
  script:
    # Perform deployment steps
    # Create environment_url.txt (example)
    - echo http://${CI_PROJECT_ID}-${CI_ENVIRONMENT_SLUG}.example.org > environment_url.txt
  artifacts:
    paths:
      - environment_url.txt
```

## Use OpenAPI with an invalid schema

There are cases where the document is autogenerated with an invalid schema, or cannot be edited manually in a timely manner. In those scenarios, API security testing can perform a relaxed validation when you set the variable `APISEC_OPENAPI_RELAXED_VALIDATION`. We recommend providing a fully compliant OpenAPI document to prevent unexpected behaviors.

### Edit a non-compliant OpenAPI file

To detect and correct elements that don't comply with the OpenAPI specifications, we recommend using an editor. An editor commonly provides document validation and suggestions to create a schema-compliant OpenAPI document. Suggested editors include:

| Editor | OpenAPI 2.0 | OpenAPI 3.0.x | OpenAPI 3.1.x |
|--------|-------------|---------------|---------------|
| [Stoplight Studio](https://stoplight.io/solutions) | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="check-circle" >}} YAML, JSON |
| [Swagger Editor](https://editor.swagger.io/) | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="dotted-circle" >}} YAML, JSON |

If your OpenAPI document is generated manually, load your document in the editor and fix anything that is non-compliant. If your document is generated automatically, load it in your editor to identify the issues in the schema, then go to the application and perform the corrections based on the framework you are using.

### Enable OpenAPI relaxed validation

Relaxed validation is meant for cases when the OpenAPI document cannot meet OpenAPI specifications, but it still has enough content to be consumed by different tools.
Validation is still performed, but less strictly with regard to the document schema. API security testing can still try to consume an OpenAPI document that does not fully comply with OpenAPI specifications. To instruct API security testing to perform a relaxed validation, set the variable `APISEC_OPENAPI_RELAXED_VALIDATION` to any value, for example:

```yaml
stages:
  - dast

include:
  - template: API-Security.gitlab-ci.yml

variables:
  APISEC_PROFILE: Quick
  APISEC_TARGET_URL: http://test-deployment/
  APISEC_OPENAPI: test-api-specification.json
  APISEC_OPENAPI_RELAXED_VALIDATION: 'On'
```

## `No operation in the OpenAPI document is consuming any supported media type`

API security testing uses the media types specified in the OpenAPI document to generate requests. If no request can be created due to the lack of supported media types, an error is thrown.

**Error message**

- `Error, no operation in the OpenApi document is consuming any supported media type. Check 'OpenAPI Specification' to check the supported media types.`

**Solution**

1. Review the supported media types in the [OpenAPI Specification](configuration/enabling_the_analyzer.md#openapi-specification) section.
1. Edit your OpenAPI document, allowing at least one operation to accept any of the supported media types. Alternatively, a supported media type could be set at the OpenAPI document level and applied to all operations. This step may require changes in your application to ensure the supported media type is accepted by the application.

## Error: `The SSL connection could not be established, see inner exception.`

API security testing is compatible with a broad range of TLS configurations, including outdated protocols and ciphers. Despite broad support, you might encounter connection errors, like this:

```plaintext
Error, error occurred trying to download `<URL>`: There was an error when retrieving content from Uri:' <URL>'. Error:The SSL connection could not be established, see inner exception.
```

This error occurs because API security testing could not establish a secure connection with the server at the given URL. To resolve the issue:

If the host in the error message supports non-TLS connections, change `https://` to `http://` in your configuration. For example, if an error occurs with the following configuration:

```yaml
stages:
  - dast

include:
  - template: API-Security.gitlab-ci.yml

variables:
  APISEC_TARGET_URL: https://test-deployment/
  APISEC_OPENAPI: https://specs/openapi.json
```

Change the prefix of `APISEC_OPENAPI` from `https://` to `http://`:

```yaml
stages:
  - dast

include:
  - template: API-Security.gitlab-ci.yml

variables:
  APISEC_TARGET_URL: https://test-deployment/
  APISEC_OPENAPI: http://specs/openapi.json
```

If you cannot use a non-TLS connection to access the URL, contact the Support team for help. You can expedite the investigation with the [testssl.sh tool](https://testssl.sh/). From a machine with a bash shell and connectivity to the affected server:

1. Download the latest release `zip` or `tar.gz` file from <https://github.com/drwetter/testssl.sh/releases> and extract it.
1. Run `./testssl.sh --log https://specs`.
1. Attach the log file to your support ticket.

## `ERROR: Job failed: failed to pull image`

This error message occurs when pulling an image from a container registry that requires authentication to access (it is not public). In the job console output, the error looks like:

```plaintext
Running with gitlab-runner 15.6.0~beta.186.ga889181a (a889181a)
  on blue-2.shared.runners-manager.gitlab.com/default XxUrkriX
Resolving secrets 00:00
Preparing the "docker+machine" executor 00:06
Using Docker executor with image registry.gitlab.com/security-products/api-security:2 ...
Starting service registry.example.com/my-target-app:latest ...
Pulling docker image registry.example.com/my-target-app:latest ...
WARNING: Failed to pull image with policy "always": Error response from daemon: Get https://registry.example.com/my-target-app/manifests/latest: unauthorized (manager.go:237:0s)
ERROR: Job failed: failed to pull image "registry.example.com/my-target-app:latest" with specified policies [always]: Error response from daemon: Get https://registry.example.com/my-target-app/manifests/latest: unauthorized (manager.go:237:0s)
```

**Error message**

- In GitLab 15.9 and earlier, `ERROR: Job failed: failed to pull image` followed by `Error response from daemon: Get IMAGE: unauthorized`.

**Solution**

Authentication credentials are provided using the methods outlined in the [Access an image from a private container registry](../../../ci/docker/using_docker_images.md#access-an-image-from-a-private-container-registry) documentation section. The method used is dictated by your container registry provider and its configuration. If you're using a container registry provided by a third party, such as a cloud provider (Azure, Google Cloud (GCP), AWS, and so on), check the provider's documentation for information on how to authenticate to their container registries.

The following example uses the [statically defined credentials](../../../ci/docker/using_docker_images.md#use-statically-defined-credentials) authentication method. In this example, the container registry is `registry.example.com` and the image is `my-target-app:latest`.

1. Read how to [Determine your `DOCKER_AUTH_CONFIG` data](../../../ci/docker/using_docker_images.md#determine-your-docker_auth_config-data) to understand how to compute the variable value for `DOCKER_AUTH_CONFIG`. The configuration variable `DOCKER_AUTH_CONFIG` contains the Docker JSON configuration to provide the appropriate authentication information.
   For example, to access the private container registry `registry.example.com` with the credentials `abcdefghijklmn`, the Docker JSON looks like:

   ```json
   {
       "auths": {
           "registry.example.com": {
               "auth": "abcdefghijklmn"
           }
       }
   }
   ```

1. Add `DOCKER_AUTH_CONFIG` as a CI/CD variable. Instead of adding the configuration variable directly in your `.gitlab-ci.yml` file, you should create a project [CI/CD variable](../../../ci/variables/_index.md#for-a-project).
1. Rerun your job. The statically defined credentials are now used to sign in to the private container registry `registry.example.com` and pull the image `my-target-app:latest`. If successful, the job console shows output like:

   ```log
   Running with gitlab-runner 15.6.0~beta.186.ga889181a (a889181a)
     on blue-4.shared.runners-manager.gitlab.com/default J2nyww-s
   Resolving secrets 00:00
   Preparing the "docker+machine" executor 00:56
   Using Docker executor with image registry.gitlab.com/security-products/api-security:2 ...
   Starting service registry.example.com/my-target-app:latest ...
   Authenticating with credentials from $DOCKER_AUTH_CONFIG
   Pulling docker image registry.example.com/my-target-app:latest ...
   Using docker image sha256:139c39668e5e4417f7d0eb0eeb74145ba862f4f3c24f7c6594ecb2f82dc4ad06 for registry.example.com/my-target-app:latest with digest registry.example.com/my-target-app@sha256:2b69fc7c3627dbd0ebaa17674c264fcd2f2ba21ed9552a472acf8b065d39039c ...
   Waiting for services to be up and running (timeout 30 seconds)...
   ```

## Differing vulnerability results between consecutive scans

Consecutive scans may return differing vulnerability findings even in the absence of code or configuration changes. This is primarily due to the unpredictability of the target environment and its state, and the parallelization of requests sent by the scanner.
Multiple requests are sent in parallel by the scanner to optimize scan time, which means that the exact order in which the target server responds to the requests is not predetermined. Timing attack vulnerabilities, such as OS command or SQL injection, are detected by the length of time between request and response. They may be reported when the server is under load and unable to service responses to the tests within their given thresholds. The same scan executed when the server is not under load may not return positive findings for these vulnerabilities, leading to differing results.

Profiling the target server, [performance tuning and testing speed](performance.md), and establishing baselines for optimal server performance during testing may be helpful in identifying where false positives may appear due to the aforementioned factors.

## `sudo: The "no new privileges" flag is set, which prevents sudo from running as root.`

Starting with v5 of the analyzer, a non-root user is used by default. This requires the use of `sudo` when performing privileged operations. This error occurs with a specific container daemon setup that prevents running containers from obtaining new permissions. In most settings, this is not the default configuration; it's something specifically configured, often as part of a security hardening guide.

**Error message**

This issue can be identified by the error message generated when a `before_script` or `APISEC_PRE_SCRIPT` is executed:

```shell
$ sudo apk add nodejs
sudo: The "no new privileges" flag is set, which prevents sudo from running as root.
sudo: If sudo is running in a container, you may need to adjust the container configuration to disable the flag.
```

**Solution**

This issue can be worked around in the following ways:

- Run the container as the `root` user. You should test this configuration as it may not work in all cases.
  This can be done by modifying the CI/CD configuration and checking the job output to make sure that `whoami` returns `root` and not `gitlab`. If `gitlab` is displayed, use another workaround. After testing has confirmed the change is successful, the `before_script` can be removed.

  ```yaml
  api_security:
    image:
      name: $SECURE_ANALYZERS_PREFIX/$APISEC_IMAGE:$APISEC_VERSION$APISEC_IMAGE_SUFFIX
      docker:
        user: root
    before_script:
      - whoami
  ```

  _Example job console output:_

  ```log
  Executing "step_script" stage of the job script
  Using docker image sha256:8b95f188b37d6b342dc740f68557771bb214fe520a5dc78a88c7a9cc6a0f9901 for registry.gitlab.com/security-products/api-security:5 with digest registry.gitlab.com/security-products/api-security@sha256:092909baa2b41db8a7e3584f91b982174772abdfe8ceafc97cf567c3de3179d1 ...
  $ whoami
  root
  $ /peach/analyzer-api-security
  17:17:14 [INF] API Security: Gitlab API Security
  17:17:14 [INF] API Security: -------------------
  17:17:14 [INF] API Security:
  17:17:14 [INF] API Security: version: 5.7.0
  ```

- Wrap the container and add any dependencies at build time. This option has the benefit of running with lower privileges than root, which may be a requirement for some customers.

  1. Create a new `Dockerfile` that wraps the existing image.

     ```dockerfile
     ARG SECURE_ANALYZERS_PREFIX
     ARG APISEC_IMAGE
     ARG APISEC_VERSION
     ARG APISEC_IMAGE_SUFFIX

     FROM $SECURE_ANALYZERS_PREFIX/$APISEC_IMAGE:$APISEC_VERSION$APISEC_IMAGE_SUFFIX

     USER root
     RUN pip install ...
     RUN apk add ...
     USER gitlab
     ```

  1. Build the new image and push it to your local container registry before the API security testing job starts. The image should be removed after the `api_security` job has completed.

     ```shell
     TARGET_IMAGE=apisec-$CI_COMMIT_SHA
     docker build -t $TARGET_IMAGE \
       --build-arg "SECURE_ANALYZERS_PREFIX=$SECURE_ANALYZERS_PREFIX" \
       --build-arg "APISEC_IMAGE=$APISEC_IMAGE" \
       --build-arg "APISEC_VERSION=$APISEC_VERSION" \
       --build-arg "APISEC_IMAGE_SUFFIX=$APISEC_IMAGE_SUFFIX" \
       .
     docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
     docker push $TARGET_IMAGE
     ```

  1. Extend the `api_security` job and use the new image name.

     ```yaml
     api_security:
       image: apisec-$CI_COMMIT_SHA
     ```

  1. Remove the temporary container from the registry. For information on removing container images, see [Delete container registry images](../../packages/container_registry/delete_container_registry_images.md).

- Change the GitLab Runner configuration, disabling the no-new-privileges flag. This could have security implications and should be discussed with your operations and security teams.

## `Index was outside the bounds of the array. at Peach.Web.Runner.Services.RunnerOptions.GetHeaders()`

This error message indicates that the API security testing analyzer is unable to parse the value of the `APISEC_REQUEST_HEADERS` or `APISEC_REQUEST_HEADERS_BASE64` configuration variable.

**Error message**

This issue can be identified by two error messages. The first error message is seen in the job console output and the second in the `gl-api-security-scanner.log` file.

_Error message from job console:_

```plaintext
05:48:38 [ERR] API Security: Testing failed: An unexpected exception occurred: Index was outside the bounds of the array.
```

_Error message from `gl-api-security-scanner.log`:_

```plaintext
08:45:43.616 [ERR] <Peach.Web.Core.Services.WebRunnerMachine> Unexpected exception in WebRunnerMachine::Run()
System.IndexOutOfRangeException: Index was outside the bounds of the array.
   at Peach.Web.Runner.Services.RunnerOptions.GetHeaders() in /builds/gitlab-org/security-products/analyzers/api-fuzzing-src/web/PeachWeb/Runner/Services/RunnerOptions.cs:line 362
   at Peach.Web.Runner.Services.RunnerService.Start(Job job, IRunnerOptions options) in /builds/gitlab-org/security-products/analyzers/api-fuzzing-src/web/PeachWeb/Runner/Services/RunnerService.cs:line 67
   at Peach.Web.Core.Services.WebRunnerMachine.Run(IRunnerOptions runnerOptions, CancellationToken token) in /builds/gitlab-org/security-products/analyzers/api-fuzzing-src/web/PeachWeb/Core/Services/WebRunnerMachine.cs:line 321
08:45:43.634 [WRN] <Peach.Web.Core.Services.WebRunnerMachine> * Session failed: An unexpected exception occurred: Index was outside the bounds of the array.
08:45:43.677 [INF] <Peach.Web.Core.Services.WebRunnerMachine> Finished testing. Performed a total of 0 requests.
```

**Solution**

This issue occurs due to a malformed `APISEC_REQUEST_HEADERS` or `APISEC_REQUEST_HEADERS_BASE64` variable. The expected format is one or more headers in `Header: value` form, separated by commas. The solution is to correct the syntax to match what is expected.

_Valid examples:_

- `Authorization: Bearer XYZ`
- `X-Custom: Value,Authorization: Bearer XYZ`

_Invalid examples:_

- `Header:,value`
- `HeaderA: value,HeaderB:,HeaderC: value`
- `Header`
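The expected format described above can be sketched as a small shell check. This is only an illustration of the `Header: value` rule (each comma-separated entry needs a non-empty name and value); it is not the analyzer's actual parser, and `check_headers` is a hypothetical helper name:

```shell
#!/bin/bash
# Sketch of the expected header-string format: each comma-separated
# entry must be "Name: value" with a non-empty name and value.
# Illustrative only -- not the analyzer's parser.
check_headers() {
  local IFS=','
  local entry
  for entry in $1; do
    if [[ ! "$entry" =~ ^[^:,]+:[[:space:]]*[^,]+$ ]]; then
      echo "malformed entry: '$entry'"
      return 1
    fi
  done
  echo "ok"
}

check_headers 'X-Custom: Value,Authorization: Bearer XYZ'   # prints: ok
check_headers 'HeaderA: value,HeaderB:,HeaderC: value'      # prints: malformed entry: 'HeaderB:'
```

Running a check like this locally before setting `APISEC_REQUEST_HEADERS` can save a failed pipeline iteration.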
---
stage: Application Security Testing
group: Dynamic Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Troubleshooting API security testing jobs
---

## API security testing job times out after N hours

For larger repositories, the API security testing job could time out on the [small hosted runner on Linux](../../../ci/runners/hosted_runners/linux.md#machine-types-available-for-linux---x86-64), which is the default. If this happens in your jobs, you should scale up to a [larger runner](performance.md#using-a-larger-runner).

See the following documentation sections for assistance:

- [Performance tuning and testing speed](performance.md)
- [Using a larger runner](performance.md#using-a-larger-runner)
- [Excluding operations by path](configuration/customizing_analyzer_settings.md#exclude-paths)
- [Excluding slow operations](performance.md#excluding-slow-operations)

## API security testing job takes too long to complete

See [Performance tuning and testing speed](performance.md).

## Error: `Error waiting for DAST API 'http://127.0.0.1:5000' to become available`

A bug exists in versions of the API security testing analyzer prior to v1.6.196 that can cause a background process to fail under certain conditions. The solution is to update to a newer version of the API security testing analyzer. The version information can be found in the job details for the `dast_api` job.

If the issue occurs with version v1.6.196 or greater, contact Support and provide the following information:

1. Reference this troubleshooting section and ask for the issue to be escalated to the Dynamic Analysis team.
1. The full console output of the job.
1. The `gl-api-security-scanner.log` file available as a job artifact.
   In the right-hand panel of the job details page, select the **Browse** button.
1. The `dast_api` job definition from your `.gitlab-ci.yml` file.

**Error message**

- In [GitLab 15.6 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/376078), `Error waiting for DAST API 'http://127.0.0.1:5000' to become available`
- In GitLab 15.5 and earlier, `Error waiting for API Security 'http://127.0.0.1:5000' to become available`.

## `Failed to start scanner session (version header not found)`

The API security testing engine outputs this error message when it cannot establish a connection with the scanner application component. The error message is shown in the job output window of the `dast_api` job. A common cause of this issue is changing the `APISEC_API` variable from its default.

**Error message**

- `Failed to start scanner session (version header not found).`

**Solution**

- Remove the `APISEC_API` variable from the `.gitlab-ci.yml` file. The value is then inherited from the API security testing CI/CD template. We recommend this method instead of manually setting a value.
- If removing the variable is not possible, check whether the value has changed in the latest version of the [API security testing CI/CD template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Security/API-Security.gitlab-ci.yml). If so, update the value in the `.gitlab-ci.yml` file.

## `Failed to start session with scanner. Please retry, and if the problem persists reach out to support.`

The API security testing engine outputs this error message when it cannot establish a connection with the scanner application component. The error message is shown in the job output window of the `dast_api` job. A common cause for this issue is that the background component cannot use the selected port because it's already in use. This error can occur intermittently if timing plays a part (race condition).
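As a quick supplementary check, you can probe from inside the job whether something is already listening on a given local port before the scanner starts. This is an illustrative diagnostic you could run in a `before_script`, not part of the API security testing template, and the port number shown is only an example:

```shell
#!/bin/bash
# Illustrative probe: reports whether a TCP port on localhost already
# has a listener, using bash's /dev/tcp redirection. The subshell
# closes the connection automatically on exit.
port_in_use() {
  if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    echo "in use"
  else
    echo "free"
  fi
}

port_in_use 5500   # example port; prints "in use" when a listener holds it
```

If the probe reports `in use` for the scanner's port, that corroborates the `address already in use` evidence described in the log file below.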
This issue occurs most often with Kubernetes environments when other services are mapped into the container, causing port conflicts. Before proceeding with a solution, it is important to confirm that the error message was produced because the port was already taken.

To confirm this was the cause:

1. Go to the job console.
1. Look for the artifact `gl-api-security-scanner.log`. You can either download all artifacts by selecting **Download** and then search for the file, or directly start searching by selecting **Browse**.
1. Open the file `gl-api-security-scanner.log` in a text editor.
1. If the error message was produced because the port was already taken, you should see in the file a message like the following:

   - In [GitLab 15.5 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/367734):

     ```log
     Failed to bind to address http://127.0.0.1:5500: address already in use.
     ```

   - In GitLab 15.4 and earlier:

     ```log
     Failed to bind to address http://[::]:5000: address already in use.
     ```

The text `http://[::]:5000` in the previous message could be different in your case; for instance, it could be `http://[::]:5500` or `http://127.0.0.1:5500`. As long as the remaining parts of the error message are the same, it is safe to assume the port was already taken.

If you did not find evidence that the port was already taken, check other troubleshooting sections which also address the same error message shown in the job console output. If there are no more options, feel free to [get support or request an improvement](_index.md#get-support-or-request-an-improvement) through the proper channels.

After you have confirmed the issue was produced because the port was already taken, use the configuration variable `APISEC_API_PORT` ([introduced in GitLab 15.5](https://gitlab.com/gitlab-org/gitlab/-/issues/367734)). This configuration variable allows setting a fixed port number for the scanner background component.

**Solution**

1. 
Ensure your `.gitlab-ci.yml` file defines the configuration variable `APISEC_API_PORT`. 1. Update the value of `APISEC_API_PORT` to any available port number greater than 1024. We recommend checking that the new value is not in used by GitLab. See the full list of ports used by GitLab in [Package defaults](../../../administration/package_information/defaults.md#ports) ## `Application cannot determine the base URL for the target API` The API security testing engine outputs an error message when it cannot determine the target API after inspecting the OpenAPI document. This error message is shown when the target API has not been set in the `.gitlab-ci.yml` file, it is not available in the `environment_url.txt` file, and it could not be computed using the OpenAPI document. There is a order of precedence in which the API security testing engine tries to get the target API when checking the different sources. First, it tries to use the `APISEC_TARGET_URL`. If the environment variable has not been set, then the API security testing engine attempts to use the `environment_url.txt` file. If there is no file `environment_url.txt`, then the API security testing engine uses the OpenAPI document contents and the URL provided in `APISEC_OPENAPI` (if a URL is provided) to try to compute the target API. The best-suited solution depends on whether or not your target API changes for each deployment. In static environments, the target API is the same for each deployment, in this case refer to the [static environment solution](#static-environment-solution). If the target API changes for each deployment a [dynamic environment solution](#dynamic-environment-solutions) should be applied. ## API security testing job excludes some paths from operations If you find that some paths are being excluded from operations, make sure that: - The variable `DAST_API_EXCLUDE_URLS` is not configured to exclude operations you want to test. 
- The `consumes` array is defined and has a valid type in the target definition JSON file. For an example definition, see the [example project target definition file](https://gitlab.com/gitlab-org/security-products/demos/api-dast/openapi-example/-/blob/12e2b039d08208f1dd38a1e7c52b0bda848bb449/rest_target_openapi.json?plain=1#L13). ### Static environment solution This solution is for pipelines in which the target API URL doesn't change (is static). **Add environmental variable** For environments where the target API remains the same, we recommend you specify the target URL by using the `APISEC_TARGET_URL` environment variable. In your `.gitlab-ci.yml`, add a variable `APISEC_TARGET_URL`. The variable must be set to the base URL of API testing target. For example: ```yaml stages: - dast include: - template: API-Security.gitlab-ci.yml variables: APISEC_TARGET_URL: http://test-deployment/ APISEC_OPENAPI: test-api-specification.json ``` ### Dynamic environment solutions In a dynamic environment your target API changes for each different deployment. In this case, there is more than one possible solution, we recommend you use the `environment_url.txt` file when dealing with dynamic environments. **Use environment_url.txt** To support dynamic environments in which the target API URL changes during each pipeline, API security testing engine supports the use of an `environment_url.txt` file that contains the URL to use. This file is not checked into the repository, instead it's created during the pipeline by the job that deploys the test target and collected as an artifact that can be used by later jobs in the pipeline. The job that creates the `environment_url.txt` file must run before the API security testing engine job. 1. Modify the test target deployment job adding the base URL in an `environment_url.txt` file at the root of your project. 1. Modify the test target deployment job collecting the `environment_url.txt` as an artifact. 
Example: ```yaml deploy-test-target: script: # Perform deployment steps # Create environment_url.txt (example) - echo http://${CI_PROJECT_ID}-${CI_ENVIRONMENT_SLUG}.example.org > environment_url.txt artifacts: paths: - environment_url.txt ``` ## Use OpenAPI with an invalid schema There are cases where the document is autogenerated with an invalid schema or cannot be edited manually in a timely manner. In those scenarios, the API security testing is able to perform a relaxed validation by setting the variable `APISEC_OPENAPI_RELAXED_VALIDATION`. We recommend providing a fully compliant OpenAPI document to prevent unexpected behaviors. ### Edit a non-compliant OpenAPI file To detect and correct elements that don't comply with the OpenAPI specifications, we recommend using an editor. An editor commonly provides document validation, and suggestions to create a schema-compliant OpenAPI document. Suggested editors include: | Editor | OpenAPI 2.0 | OpenAPI 3.0.x | OpenAPI 3.1.x | |--------|-------------|---------------|---------------| | [Stoplight Studio](https://stoplight.io/solutions) | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="check-circle" >}} YAML, JSON | | [Swagger Editor](https://editor.swagger.io/) | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="dotted-circle" >}} YAML, JSON | If your OpenAPI document is generated manually, load your document in the editor and fix anything that is non-compliant. If your document is generated automatically, load it in your editor to identify the issues in the schema, then go to the application and perform the corrections based on the framework you are using. ### Enable OpenAPI relaxed validation Relaxed validation is meant for cases when the OpenAPI document cannot meet OpenAPI specifications, but it still has enough content to be consumed by different tools. 
A validation is performed but less strictly in regards to document schema. API security testing can still try to consume an OpenAPI document that does not fully comply with OpenAPI specifications. To instruct API security testing to perform a relaxed validation, set the variable `APISEC_OPENAPI_RELAXED_VALIDATION` to any value, for example: ```yaml stages: - dast include: - template: API-Security.gitlab-ci.yml variables: APISEC_PROFILE: Quick APISEC_TARGET_URL: http://test-deployment/ APISEC_OPENAPI: test-api-specification.json APISEC_OPENAPI_RELAXED_VALIDATION: 'On' ``` ## `No operation in the OpenAPI document is consuming any supported media type` API security testing uses the specified media types in the OpenAPI document to generate requests. If no request can be created due to the lack of supported media types, then an error is thrown. **Error message** - `Error, no operation in the OpenApi document is consuming any supported media type. Check 'OpenAPI Specification' to check the supported media types.` **Solution** 1. Review supported media types in the [OpenAPI Specification](configuration/enabling_the_analyzer.md#openapi-specification) section. 1. Edit your OpenAPI document, allowing at least a given operation to accept any of the supported media types. Alternatively, a supported media type could be set in the OpenAPI document level and get applied to all operations. This step may require changes in your application to ensure the supported media type is accepted by the application. ## Error: `The SSL connection could not be established, see inner exception.` API security testing is compatible with a broad range of TLS configurations, including outdated protocols and ciphers. Despite broad support, you might encounter connection errors, like this: ```plaintext Error, error occurred trying to download `<URL>`: There was an error when retrieving content from Uri:' <URL>'. Error:The SSL connection could not be established, see inner exception. 
``` This error occurs because API security testing could not establish a secure connection with the server at the given URL. To resolve the issue: If the host in the error message supports non-TLS connections, change `https://` to `http://` in your configuration. For example, if an error occurs with the following configuration: ```yaml stages: - dast include: - template: API-Security.gitlab-ci.yml variables: APISEC_TARGET_URL: https://test-deployment/ APISEC_OPENAPI: https://specs/openapi.json ``` Change the prefix of `APISEC_OPENAPI` from `https://` to `http://`: ```yaml stages: - dast include: - template: API-Security.gitlab-ci.yml variables: APISEC_TARGET_URL: https://test-deployment/ APISEC_OPENAPI: http://specs/openapi.json ``` If you cannot use a non-TLS connection to access the URL, contact the Support team for help. You can expedite the investigation with the [testssl.sh tool](https://testssl.sh/). From a machine with a bash shell and connectivity to the affected server: 1. Download the latest release `zip` or `tar.gz` file and extract from <https://github.com/drwetter/testssl.sh/releases>. 1. Run `./testssl.sh --log https://specs`. 1. Attach the log file to your support ticket. ## `ERROR: Job failed: failed to pull image` This error message occurs when pulling an image from a container registry that requires authentication to access (it is not public). In the job console output the error looks like: ```plaintext Running with gitlab-runner 15.6.0~beta.186.ga889181a (a889181a) on blue-2.shared.runners-manager.gitlab.com/default XxUrkriX Resolving secrets 00:00 Preparing the "docker+machine" executor 00:06 Using Docker executor with image registry.gitlab.com/security-products/api-security:2 ... Starting service registry.example.com/my-target-app:latest ... Pulling docker image registry.example.com/my-target-app:latest ... 
WARNING: Failed to pull image with policy "always": Error response from daemon: Get https://registry.example.com/my-target-app/manifests/latest: unauthorized (manager.go:237:0s) ERROR: Job failed: failed to pull image "registry.example.com/my-target-app:latest" with specified policies [always]: Error response from daemon: Get https://registry.example.com/my-target-app/manifests/latest: unauthorized (manager.go:237:0s) ``` **Error message** - In GitLab 15.9 and earlier, `ERROR: Job failed: failed to pull image` followed by `Error response from daemon: Get IMAGE: unauthorized`. **Solution** Authentication credentials are provided using the methods outlined in the [Access an image from a private container registry](../../../ci/docker/using_docker_images.md#access-an-image-from-a-private-container-registry) documentation section. The method used is dictated by your container registry provider and its configuration. If your using a container registry provided by a 3rd party, such as a cloud provider (Azure, Google Could (GCP), AWS and so on), check the providers documentation for information on how to authenticate to their container registries. The following example uses the [statically defined credentials](../../../ci/docker/using_docker_images.md#use-statically-defined-credentials) authentication method. In this example the container registry is `registry.example.com` and image is `my-target-app:latest`. 1. Read how to [Determine your `DOCKER_AUTH_CONFIG` data](../../../ci/docker/using_docker_images.md#determine-your-docker_auth_config-data) to understand how to compute the variable value for `DOCKER_AUTH_CONFIG`. The configuration variable `DOCKER_AUTH_CONFIG` contains the Docker JSON configuration to provide the appropriate authentication information. 
   For example, to access the private container registry `registry.example.com` with the credentials `abcdefghijklmn`, the Docker JSON looks like:

   ```json
   {
       "auths": {
           "registry.example.com": {
               "auth": "abcdefghijklmn"
           }
       }
   }
   ```

1. Add `DOCKER_AUTH_CONFIG` as a CI/CD variable. Instead of adding the configuration variable directly in your `.gitlab-ci.yml` file, you should create a project [CI/CD variable](../../../ci/variables/_index.md#for-a-project).

1. Rerun your job. The statically defined credentials are now used to sign in to the private container registry `registry.example.com` and pull the image `my-target-app:latest`. If the pull succeeds, the job console shows output like:

   ```log
   Running with gitlab-runner 15.6.0~beta.186.ga889181a (a889181a)
     on blue-4.shared.runners-manager.gitlab.com/default J2nyww-s
   Resolving secrets 00:00
   Preparing the "docker+machine" executor 00:56
   Using Docker executor with image registry.gitlab.com/security-products/api-security:2 ...
   Starting service registry.example.com/my-target-app:latest ...
   Authenticating with credentials from $DOCKER_AUTH_CONFIG
   Pulling docker image registry.example.com/my-target-app:latest ...
   Using docker image sha256:139c39668e5e4417f7d0eb0eeb74145ba862f4f3c24f7c6594ecb2f82dc4ad06 for registry.example.com/my-target-app:latest with digest registry.example.com/my-target-app@sha256:2b69fc7c3627dbd0ebaa17674c264fcd2f2ba21ed9552a472acf8b065d39039c ...
   Waiting for services to be up and running (timeout 30 seconds)...
   ```

## Differing vulnerability results between consecutive scans

Consecutive scans may return differing vulnerability findings even in the absence of code or configuration changes. This is primarily due to the unpredictability of the target environment and its state, and the parallelization of requests sent by the scanner.
Multiple requests are sent in parallel by the scanner to optimize scan time, which means the exact order in which the target server responds to the requests is not predetermined.

Timing attack vulnerabilities, such as OS command or SQL injection, are detected by the length of time between request and response. These may be reported when the server is under load and unable to respond to the tests within their given thresholds. The same scans run when the server is not under load may not return positive findings for these vulnerabilities, leading to differing results.

Profiling the target server (see [Performance tuning and testing speed](performance.md)) and establishing baselines for optimal server performance during testing may help identify where false positives appear due to these factors.

## `sudo: The "no new privileges" flag is set, which prevents sudo from running as root.`

Starting with v5 of the analyzer, a non-root user is used by default. This requires the use of `sudo` when performing privileged operations. This error occurs with a specific container daemon setup that prevents running containers from obtaining new permissions. In most settings this is not the default configuration; it's something specifically configured, often as part of a security hardening guide.

**Error message**

This issue can be identified by the error message generated when a `before_script` or `APISEC_PRE_SCRIPT` is executed:

```shell
$ sudo apk add nodejs
sudo: The "no new privileges" flag is set, which prevents sudo from running as root.
sudo: If sudo is running in a container, you may need to adjust the container configuration to disable the flag.
```

**Solution**

This issue can be worked around in the following ways:

- Run the container as the `root` user. You should test this configuration, as it may not work in all cases.
This can be done by modifying the CI/CD configuration and checking the job output to make sure that `whoami` returns `root` and not `gitlab`. If `gitlab` is displayed, use another workaround. After testing has confirmed the change is successful, the `before_script` can be removed.

```yaml
api_security:
  image:
    name: $SECURE_ANALYZERS_PREFIX/$APISEC_IMAGE:$APISEC_VERSION$APISEC_IMAGE_SUFFIX
    docker:
      user: root
  before_script:
    - whoami
```

_Example job console output:_

```log
Executing "step_script" stage of the job script
Using docker image sha256:8b95f188b37d6b342dc740f68557771bb214fe520a5dc78a88c7a9cc6a0f9901 for registry.gitlab.com/security-products/api-security:5 with digest registry.gitlab.com/security-products/api-security@sha256:092909baa2b41db8a7e3584f91b982174772abdfe8ceafc97cf567c3de3179d1 ...
$ whoami
root
$ /peach/analyzer-api-security
17:17:14 [INF] API Security: Gitlab API Security
17:17:14 [INF] API Security: -------------------
17:17:14 [INF] API Security:
17:17:14 [INF] API Security: version: 5.7.0
```

- Wrap the container and add any dependencies at build time. This option has the benefit of running with lower privileges than root, which may be a requirement for some customers.

1. Create a new `Dockerfile` that wraps the existing image.

```dockerfile
ARG SECURE_ANALYZERS_PREFIX
ARG APISEC_IMAGE
ARG APISEC_VERSION
ARG APISEC_IMAGE_SUFFIX

FROM $SECURE_ANALYZERS_PREFIX/$APISEC_IMAGE:$APISEC_VERSION$APISEC_IMAGE_SUFFIX

USER root
RUN pip install ...
RUN apk add ...
USER gitlab
```

1. Build the new image and push it to your local container registry before the API Security Testing job starts. The image should be removed after the `api_security` job has been completed.

```shell
TARGET_IMAGE=apisec-$CI_COMMIT_SHA
docker build -t $TARGET_IMAGE \
  --build-arg "SECURE_ANALYZERS_PREFIX=$SECURE_ANALYZERS_PREFIX" \
  --build-arg "APISEC_IMAGE=$APISEC_IMAGE" \
  --build-arg "APISEC_VERSION=$APISEC_VERSION" \
  --build-arg "APISEC_IMAGE_SUFFIX=$APISEC_IMAGE_SUFFIX" \
  .
docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
docker push $TARGET_IMAGE
```

1. Extend the `api_security` job and use the new image name.

```yaml
api_security:
  image: apisec-$CI_COMMIT_SHA
```

1. Remove the temporary container image from the registry. See [this documentation page](../../packages/container_registry/delete_container_registry_images.md) for information on removing container images.

- Change the GitLab Runner configuration to disable the no-new-privileges flag. This could have security implications and should be discussed with your operations and security teams.

## `Index was outside the bounds of the array. at Peach.Web.Runner.Services.RunnerOptions.GetHeaders()`

This error message indicates that the API security testing analyzer is unable to parse the value of the `APISEC_REQUEST_HEADERS` or `APISEC_REQUEST_HEADERS_BASE64` configuration variable.

**Error message**

This issue can be identified by two error messages, the first in the job console output and the second in the `gl-api-security-scanner.log` file.

_Error message from job console:_

```plaintext
05:48:38 [ERR] API Security: Testing failed: An unexpected exception occurred: Index was outside the bounds of the array.
```

_Error message from `gl-api-security-scanner.log`:_

```plaintext
08:45:43.616 [ERR] <Peach.Web.Core.Services.WebRunnerMachine> Unexpected exception in WebRunnerMachine::Run()
System.IndexOutOfRangeException: Index was outside the bounds of the array.
at Peach.Web.Runner.Services.RunnerOptions.GetHeaders() in /builds/gitlab-org/security-products/analyzers/api-fuzzing-src/web/PeachWeb/Runner/Services/RunnerOptions.cs:line 362
at Peach.Web.Runner.Services.RunnerService.Start(Job job, IRunnerOptions options) in /builds/gitlab-org/security-products/analyzers/api-fuzzing-src/web/PeachWeb/Runner/Services/RunnerService.cs:line 67
at Peach.Web.Core.Services.WebRunnerMachine.Run(IRunnerOptions runnerOptions, CancellationToken token) in /builds/gitlab-org/security-products/analyzers/api-fuzzing-src/web/PeachWeb/Core/Services/WebRunnerMachine.cs:line 321
08:45:43.634 [WRN] <Peach.Web.Core.Services.WebRunnerMachine> * Session failed: An unexpected exception occurred: Index was outside the bounds of the array.
08:45:43.677 [INF] <Peach.Web.Core.Services.WebRunnerMachine> Finished testing. Performed a total of 0 requests.
```

**Solution**

This issue occurs due to a malformed `APISEC_REQUEST_HEADERS` or `APISEC_REQUEST_HEADERS_BASE64` variable. The expected format is one or more headers in `Header: value` form, separated by commas. The solution is to correct the syntax to match what is expected.

_Valid examples:_

- `Authorization: Bearer XYZ`
- `X-Custom: Value,Authorization: Bearer XYZ`

_Invalid examples:_

- `Header:,value`
- `HeaderA: value,HeaderB:,HeaderC: value`
- `Header`
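The same format rules apply when the headers are supplied through `APISEC_REQUEST_HEADERS_BASE64`. A minimal sketch of building and checking such a value (the header names and values are illustrative, not part of the analyzer):

```python
import base64


def encode_request_headers(headers: dict[str, str]) -> str:
    """Join headers into the expected 'Header: value' comma-separated
    format, then base64-encode the result for the *_BASE64 variable."""
    joined = ",".join(f"{name}: {value}" for name, value in headers.items())
    return base64.b64encode(joined.encode("utf-8")).decode("ascii")


# Hypothetical header values for illustration only.
encoded = encode_request_headers({
    "Authorization": "Bearer XYZ",
    "X-Custom": "Value",
})
print(encoded)
# Decoding shows the plain comma-separated form the analyzer expects.
print(base64.b64decode(encoded).decode("utf-8"))
```

Decoding the variable value yourself before a scan is a quick way to confirm the plain form matches the valid examples above.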
---
title: Open redirect
stage: Application Security Testing
group: Dynamic Analysis
url: https://docs.gitlab.com/user/application_security/api_security_testing/open_redirect_check
---
## Description

Identify open redirects and determine if they can be abused by attackers.

## Remediation

Unvalidated redirects and forwards are possible when a web application accepts untrusted input that could cause the web application to redirect the request to a URL contained within untrusted input. By modifying untrusted URL input to a malicious site, an attacker may successfully launch a phishing scam and steal user credentials. Because the server name in the modified link is identical to the original site, phishing attempts may have a more trustworthy appearance. Unvalidated redirect and forward attacks can also be used to maliciously craft a URL that would pass the application's access control check and then forward the attacker to privileged functions that they would usually not be able to access.

## Links

- [OWASP](https://owasp.org/Top10/A01_2021-Broken_Access_Control/)
- [CWE](https://cwe.mitre.org/data/definitions/601.html)
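A common remediation is to never redirect to a raw user-supplied URL, and instead validate the target against an allow-list of hosts. A minimal sketch (the function name and allowed hosts are illustrative, not part of the check):

```python
from urllib.parse import urlparse

# Illustrative allow-list of hosts the application may redirect to.
ALLOWED_HOSTS = {"example.com", "www.example.com"}


def safe_redirect_target(url: str, fallback: str = "/") -> str:
    """Return the URL only if it is a relative path or points at an
    allowed host over HTTP(S); otherwise return a safe fallback."""
    parsed = urlparse(url)
    if not parsed.scheme and not parsed.netloc:
        return url  # relative path, stays on this site
    if parsed.scheme in ("http", "https") and parsed.hostname in ALLOWED_HOSTS:
        return url
    return fallback
```

Note that scheme-relative URLs such as `//evil.example.net/` carry a network location without a scheme, so both checks are needed.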
---
title: HTML injection
stage: Application Security Testing
group: Dynamic Analysis
url: https://docs.gitlab.com/user/application_security/api_security_testing/html_injection_check
---
## Description

Check for XSS via HTML injection into all fields that support strings. This includes portions of the HTTP request such as the path, query, and headers, and also body parameters such as XML fields, JSON fields, and so on. Detection is performed by monitoring responses for the injected value in known HTML-enabled fields.

## Remediation

Cross-site scripting (XSS) is an attack technique that involves echoing attacker-supplied code into a user's browser instance. A browser instance can be a standard web browser client, or a browser object embedded in a software product such as the browser within WinAmp, an RSS reader, or an email client. The code itself is usually written in HTML/JavaScript, but may also extend to VBScript, ActiveX, Java, Flash, or any other browser-supported technology.

When an attacker gets a user's browser to execute their code, the code runs within the security context (or zone) of the hosting web site. With this level of privilege, the code has the ability to read, modify, and transmit any sensitive data accessible by the browser. A cross-site scripted user could have their account hijacked (cookie theft), their browser redirected to another location, or possibly be shown fraudulent content delivered by the web site they are visiting. Cross-site scripting attacks essentially compromise the trust relationship between a user and the web site. Applications using browser object instances which load content from the file system may execute code under the local machine zone, allowing for system compromise.

There are three types of cross-site scripting attacks: non-persistent, persistent, and DOM-based.

Non-persistent attacks and DOM-based attacks require a user to either visit a specially crafted link laced with malicious code, or visit a malicious web page containing a web form, which when posted to the vulnerable site, mounts the attack. Using a malicious form often takes place when the vulnerable resource only accepts HTTP POST requests. In such a case, the form can be submitted automatically, without the victim's knowledge (for example, by using JavaScript). Upon clicking the malicious link or submitting the malicious form, the XSS payload is echoed back, interpreted by the user's browser, and executed. Another technique to send almost arbitrary requests (GET and POST) is by using an embedded client, such as Adobe Flash.

Persistent attacks occur when the malicious code is submitted to a web site where it's stored for a period of time. Examples of an attacker's favorite targets often include message board posts, web mail messages, and web chat software. The unsuspecting user is not required to interact with any additional site or link (for example, an attacker site or a malicious link sent via email), just view the web page containing the code.

## Links

- [OWASP](https://owasp.org/Top10/A03_2021-Injection/)
- [CWE](https://cwe.mitre.org/data/definitions/79.html)
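The standard server-side defense is to encode user-controlled values before echoing them into HTML. A minimal sketch using Python's standard library (illustrative only; prefer your framework's auto-escaping templates where available):

```python
import html


def render_comment(user_input: str) -> str:
    """Encode HTML metacharacters so injected markup is displayed as
    text instead of being interpreted by the browser."""
    return f"<p>{html.escape(user_input, quote=True)}</p>"


# An injected script tag is neutralized into inert text.
print(render_comment('<script>alert("xss")</script>'))
```

`quote=True` also escapes quote characters, which matters when values land inside HTML attribute values rather than element bodies.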
---
title: Authentication token
stage: Application Security Testing
group: Dynamic Analysis
url: https://docs.gitlab.com/user/application_security/api_security_testing/authentication_token_check
---
## Description

Perform various authentication token checks, such as removing the token or changing it to an invalid value.

## Remediation

API tokens must be unpredictable (random enough) to prevent guessing attacks, where an attacker is able to guess or predict a valid API token through statistical analysis techniques. For this purpose, a good PRNG (Pseudo Random Number Generator) must be used.

The authentication token may have been:

- modified to an invalid value.
- removed from the request.
- changed so that it does not match length requirements.
- configured as a signature.

An API operation failed to properly restrict access using an authentication token. This allows an attacker to bypass authentication, gaining access to information or even the ability to modify data.

## Links

- [OWASP](https://owasp.org/Top10/A07_2021-Identification_and_Authentication_Failures/)
- [CWE](https://cwe.mitre.org/data/definitions/285.html)
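For token generation, Python's `secrets` module provides a cryptographically secure random source suitable for unpredictable API tokens. A minimal sketch (the token length is an illustrative choice, not a requirement of the check):

```python
import secrets


def new_api_token(nbytes: int = 32) -> str:
    """Generate an unpredictable, URL-safe API token from a CSPRNG.

    32 random bytes encode to a 43-character URL-safe base64 string,
    giving 256 bits of entropy against guessing attacks."""
    return secrets.token_urlsafe(nbytes)


token = new_api_token()
print(token)
```

Modules like `random` are not suitable here: their output is statistically predictable, which is exactly the weakness this check targets.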
---
title: Path traversal
stage: Application Security Testing
group: Dynamic Analysis
url: https://docs.gitlab.com/user/application_security/api_security_testing/path_traversal_check
---
## Description

Many file operations are intended to take place within a restricted directory. By using special elements such as `..` and `/` separators, attackers can escape outside of the restricted location to access files or directories that are elsewhere on the system. One of the most common special elements is the `../` sequence, which in most modern operating systems is interpreted as the parent directory of the current location. This is referred to as relative path traversal. Path traversal also covers the use of absolute pathnames such as `/usr/local/bin`, which may also be useful in accessing unexpected files. This is referred to as absolute path traversal.

In many programming languages, the injection of a null byte (`0` or `NULL`) may allow an attacker to truncate a generated filename to widen the scope of attack. For example, the software may add `.txt` to any pathname, thus limiting the attacker to text files, but a null injection may effectively remove this restriction.

This check modifies parameters in the request (path, query string, headers, JSON, XML, and so on) to try to access restricted files and files outside of the web root. Logs and responses are then analyzed to try to detect if the file was successfully accessed.

## Remediation

The path traversal attack technique allows an attacker access to files, directories, and commands that potentially reside outside the web document root directory. An attacker may manipulate a URL in such a way that the web site will execute or reveal the contents of arbitrary files anywhere on the web server. Any device that exposes an HTTP-based interface is potentially vulnerable to path traversal.

Most web sites restrict user access to a specific portion of the file system, typically called the "web document root" or "CGI root" directory. These directories contain the files intended for user access and the executables necessary to drive web application functionality. To access files or execute commands anywhere on the file system, path traversal attacks utilize special-character sequences.

The most basic path traversal attack uses the `../` special-character sequence to alter the resource location requested in the URL. Although most popular web servers prevent this technique from escaping the web document root, alternate encodings of the `../` sequence may help bypass the security filters. These method variations include valid and invalid Unicode encoding (`..%u2216` or `..%c0%af`) of the forward slash character, backslash characters (`..\`) on Windows-based servers, URL-encoded characters (`%2e%2e%2f`), and double URL encoding (`..%255c`) of the backslash character.

Even if the web server properly restricts path traversal attempts in the URL path, a web application itself may still be vulnerable due to improper handling of user-supplied input. This is a common problem of web applications that use template mechanisms or load static text from files. In variations of the attack, the original URL parameter value is substituted with the file name of one of the web application's dynamic scripts. Consequently, the results can reveal source code because the file is interpreted as text instead of an executable script. These techniques often employ additional special characters such as the dot (`.`) to reveal the listing of the current working directory, or `%00` NULL characters in order to bypass rudimentary file extension checks.

## Links

- [OWASP](https://owasp.org/Top10/A01_2021-Broken_Access_Control/)
- [CWE](https://cwe.mitre.org/data/definitions/22.html)
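On the remediation side, user-supplied file names should be resolved and checked for containment in the intended directory before any file operation. A minimal sketch (the base directory is illustrative; requires Python 3.9+ for `is_relative_to`):

```python
from pathlib import Path

# Illustrative restricted directory; files must stay under it.
BASE_DIR = Path("/var/www/uploads")


def resolve_upload(name: str) -> Path:
    """Resolve a user-supplied name against the base directory and
    reject anything that escapes it (../ sequences, absolute paths)."""
    candidate = (BASE_DIR / name).resolve()
    if not candidate.is_relative_to(BASE_DIR.resolve()):
        raise ValueError(f"path traversal attempt: {name!r}")
    return candidate
```

Resolving before the containment check is essential: comparing unresolved strings is exactly what the alternate encodings described above are designed to slip past.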
---
title: CORS
stage: Application Security Testing
group: Dynamic Analysis
url: https://docs.gitlab.com/user/application_security/api_security_testing/cors_check
---
## Description

Check for CORS misconfiguration, including overly permissive allow-lists of accepted Origin headers or failure to validate the Origin header. Also checks for allowing credentials on potentially invalid or dangerous Origins, and for missing headers that could potentially result in cache poisoning.

## Remediation

A misconfigured CORS implementation may be overly permissive in which domains should be trusted and at what level of trust. This could allow an untrusted domain to forge the Origin header and launch various types of attacks, such as cross-site request forgery or cross-site scripting. An attacker could potentially steal a victim's credentials or send malicious requests on behalf of a victim. The victim may not even be aware that an attack is being launched.

## Links

- [OWASP](https://owasp.org/Top10/A01_2021-Broken_Access_Control/)
- [CWE](https://cwe.mitre.org/data/definitions/942.html)
https://docs.gitlab.com/user/application_security/api_security_testing/os_command_injection_check
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_security_testing/os_command_injection_check.md
2025-08-13
os_command_injection_check.md
Application Security Testing
Dynamic Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
# OS command injection
## Description

Check for OS command injection vulnerabilities. An OS command injection attack consists of the insertion or "injection" of an OS command via the input data from the client to the application. A successful OS command injection exploit can run arbitrary commands, giving an attacker the ability to read, write, and delete data. Depending on the user the commands run as, this can also include administrative functions.

This check modifies parameters in the request (path, query string, headers, JSON, XML, and so on) to try to execute an OS command. Both standard injections and blind injections are performed. Blind injections cause delays in the response when successful.

## Remediation

It is possible to execute arbitrary OS commands on the target application server. OS command injection is a critical vulnerability that can lead to a full system compromise.

User input should never be used in constructing commands or command arguments to functions which execute OS commands. This includes filenames supplied by user uploads or downloads.

Ensure your application does not:

- Use user-supplied information in the process name to execute.
- Use user-supplied information in an OS command execution function which does not escape shell meta-characters.
- Use user-supplied information in arguments to OS commands.

The application should have a hardcoded set of arguments that are to be passed to OS commands. If filenames are being passed to these functions, it is recommended that a hash of the filename be used instead, or some other unique identifier.

It is strongly recommended that a native library that implements the same functionality be used instead of OS system commands, due to the risk of unknown attacks against third-party commands.

## Links

- [OWASP](https://owasp.org/Top10/A03_2021-Injection/)
- [CWE](https://cwe.mitre.org/data/definitions/78.html)
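The safest pattern described above — never letting user input reach a shell — can be sketched in Python. The payload and commands below are hypothetical examples, not the scanner's own code:

```python
# When user input is passed as one element of an argument vector, no shell
# ever interprets it, so shell metacharacters stay inert.
import subprocess

user_input = "app.log; rm -rf /"  # a typical injection payload

# Unsafe pattern (do not use): a shell string lets ";" start a second command.
#   subprocess.run(f"wc -l {user_input}", shell=True)

# Safe pattern: no shell is involved, so ";" is just a literal character.
result = subprocess.run(["echo", user_input], capture_output=True, text=True)
print(result.stdout.strip())  # → app.log; rm -rf /
```

Here the payload is echoed back verbatim as a single argument; under `shell=True` the same string would have executed `rm -rf /` as a second command.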
https://docs.gitlab.com/user/application_security/api_security_testing/xml_injection_check
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_security_testing/xml_injection_check.md
xml_injection_check.md
# XML Injection Check
## Description

Check for XML serialization/injection vulnerabilities.

## Remediation

XML injection is an attack technique used to manipulate or compromise the logic of an XML application or service. The injection of unintended XML content and/or structures into an XML message can alter the intended logic of the application. Further, XML injection can cause the insertion of malicious content into the resulting message/document.

## Links

- [OWASP](https://owasp.org/Top10/A03_2021-Injection/)
- [CWE](https://cwe.mitre.org/data/definitions/91.html)
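A minimal Python sketch of the remediation: build XML through a serializer so user-supplied markup is escaped instead of parsed. The element names and payload are illustrative:

```python
# Serializing user input through an XML library escapes markup characters,
# so injected tags become inert text rather than new document structure.
import xml.etree.ElementTree as ET

user_input = "</name><role>admin</role><name>"  # attempted structure injection

root = ET.Element("user")
ET.SubElement(root, "name").text = user_input
xml_bytes = ET.tostring(root)
print(xml_bytes.decode())
# The payload's angle brackets are emitted as &lt; and &gt;, so no
# <role> element is actually added to the document.
```

String concatenation (`f"<name>{user_input}</name>"`) would instead have produced a real `<role>admin</role>` element — exactly the unintended-structure case described above.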
https://docs.gitlab.com/user/application_security/api_security_testing/checks
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_security_testing/_index.md
_index.md
# API security testing vulnerability checks
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Renamed](https://gitlab.com/gitlab-org/gitlab/-/issues/457449) from **DAST API vulnerability checks** to **API security testing vulnerability checks** in GitLab 17.0.

{{< /history >}}

[API security testing](../_index.md) provides vulnerability checks that are used to scan for vulnerabilities in the API under test.

## Passive checks

| Check | Severity | Type | Profiles |
|:------|:---------|:-----|:---------|
| [Application information check](application_information_check.md) | Medium | Passive | Passive, Passive-Quick, Active-Quick, Active-Full, Quick, Full |
| [Cleartext authentication check](cleartext_authentication_check.md) | High | Passive | Passive, Passive-Quick, Active-Quick, Active-Full, Quick, Full |
| [JSON hijacking](json_hijacking_check.md) | Medium | Passive | Passive, Passive-Quick, Active-Quick, Active-Full, Quick, Full |
| [Sensitive information](sensitive_information_disclosure_check.md) | High | Passive | Passive, Passive-Quick, Active-Quick, Active-Full, Quick, Full |
| [Session cookie](session_cookie_check.md) | Medium | Passive | Passive, Passive-Quick, Active-Quick, Active-Full, Quick, Full |

## Active checks

| Check | Severity | Type | Profiles |
|:------|:---------|:-----|:---------|
| [CORS](cors_check.md) | Medium | Active | Active-Full, Full |
| [DNS rebinding](dns_rebinding_check.md) | Medium | Active | Active-Full, Full |
| [Framework debug mode](framework_debug_mode_check.md) | High | Active | Active-Quick, Active-Full, Quick, Full |
| [Heartbleed OpenSSL vulnerability](heartbleed_open_ssl_check.md) | High | Active | Active-Full, Full |
| [HTML injection check](html_injection_check.md) | Medium | Active | Active-Quick, Active-Full, Quick, Full |
| [Insecure HTTP methods](insecure_http_methods_check.md) | Medium | Active | Active-Quick, Active-Full, Quick, Full |
| [JSON injection](json_injection_check.md) | Medium | Active | Active-Quick, Active-Full, Quick, Full |
| [Open redirect](open_redirect_check.md) | Medium | Active | Active-Full, Full |
| [OS command injection](os_command_injection_check.md) | High | Active | Active-Quick, Active-Full, Quick, Full |
| [Path traversal](path_traversal_check.md) | High | Active | Active-Full, Full |
| [Sensitive file](sensitive_file_disclosure_check.md) | Medium | Active | Active-Full, Full |
| [Shellshock](shellshock_check.md) | High | Active | Active-Full, Full |
| [SQL injection](sql_injection_check.md) | High | Active | Active-Quick, Active-Full, Quick, Full |
| [TLS configuration](tls_server_configuration_check.md) | High | Active | Active-Full, Full |
| [Authentication token](authentication_token_check.md) | High | Active | Active-Quick, Active-Full, Quick, Full |
| [XML external entity](xml_external_entity_check.md) | High | Active | Active-Full, Full |
| [XML injection](xml_injection_check.md) | Medium | Active | Active-Quick, Active-Full, Quick, Full |

## API security testing checks by profile

### Passive-Quick

- [Application information check](application_information_check.md)
- [Cleartext authentication check](cleartext_authentication_check.md)
- [JSON hijacking](json_hijacking_check.md)
- [Sensitive information](sensitive_information_disclosure_check.md)
- [Session cookie](session_cookie_check.md)

### Active-Quick

- [Application information check](application_information_check.md)
- [Cleartext authentication check](cleartext_authentication_check.md)
- [Framework debug mode](framework_debug_mode_check.md)
- [HTML injection check](html_injection_check.md)
- [Insecure HTTP methods](insecure_http_methods_check.md)
- [JSON hijacking](json_hijacking_check.md)
- [JSON injection](json_injection_check.md)
- [OS command injection](os_command_injection_check.md)
- [Sensitive information](sensitive_information_disclosure_check.md)
- [Session cookie](session_cookie_check.md)
- [SQL injection](sql_injection_check.md)
- [Authentication token](authentication_token_check.md)
- [XML injection](xml_injection_check.md)

### Active-Full

- [Application information check](application_information_check.md)
- [Cleartext authentication check](cleartext_authentication_check.md)
- [CORS](cors_check.md)
- [DNS rebinding](dns_rebinding_check.md)
- [Framework debug mode](framework_debug_mode_check.md)
- [Heartbleed OpenSSL vulnerability](heartbleed_open_ssl_check.md)
- [HTML injection check](html_injection_check.md)
- [Insecure HTTP methods](insecure_http_methods_check.md)
- [JSON hijacking](json_hijacking_check.md)
- [JSON injection](json_injection_check.md)
- [Open redirect](open_redirect_check.md)
- [OS command injection](os_command_injection_check.md)
- [Path traversal](path_traversal_check.md)
- [Sensitive file](sensitive_file_disclosure_check.md)
- [Sensitive information](sensitive_information_disclosure_check.md)
- [Session cookie](session_cookie_check.md)
- [Shellshock](shellshock_check.md)
- [SQL injection](sql_injection_check.md)
- [TLS configuration](tls_server_configuration_check.md)
- [Authentication token](authentication_token_check.md)
- [XML injection](xml_injection_check.md)
- [XML external entity](xml_external_entity_check.md)

### Quick

- [Application information check](application_information_check.md)
- [Cleartext authentication check](cleartext_authentication_check.md)
- [Framework debug mode](framework_debug_mode_check.md)
- [HTML injection check](html_injection_check.md)
- [Insecure HTTP methods](insecure_http_methods_check.md)
- [JSON hijacking](json_hijacking_check.md)
- [JSON injection](json_injection_check.md)
- [OS command injection](os_command_injection_check.md)
- [Sensitive information](sensitive_information_disclosure_check.md)
- [Session cookie](session_cookie_check.md)
- [SQL injection](sql_injection_check.md)
- [Authentication token](authentication_token_check.md)
- [XML injection](xml_injection_check.md)

### Full

- [Application information check](application_information_check.md)
- [Cleartext authentication check](cleartext_authentication_check.md)
- [CORS](cors_check.md)
- [DNS rebinding](dns_rebinding_check.md)
- [Framework debug mode](framework_debug_mode_check.md)
- [Heartbleed OpenSSL vulnerability](heartbleed_open_ssl_check.md)
- [HTML injection check](html_injection_check.md)
- [Insecure HTTP methods](insecure_http_methods_check.md)
- [JSON hijacking](json_hijacking_check.md)
- [JSON injection](json_injection_check.md)
- [Open redirect](open_redirect_check.md)
- [OS command injection](os_command_injection_check.md)
- [Path traversal](path_traversal_check.md)
- [Sensitive file](sensitive_file_disclosure_check.md)
- [Sensitive information](sensitive_information_disclosure_check.md)
- [Session cookie](session_cookie_check.md)
- [Shellshock](shellshock_check.md)
- [SQL injection](sql_injection_check.md)
- [TLS configuration](tls_server_configuration_check.md)
- [Authentication token](authentication_token_check.md)
- [XML injection](xml_injection_check.md)
- [XML external entity](xml_external_entity_check.md)
https://docs.gitlab.com/user/application_security/api_security_testing/json_injection_check
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_security_testing/json_injection_check.md
json_injection_check.md
# JSON injection
## Description

Check for JSON serialization/injection vulnerabilities.

## Remediation

JSON injection is an attack technique used to manipulate or compromise the logic of a JSON application or service. The injection of unintended JSON content and/or structures into a JSON message can alter the intended logic of the application. Further, JSON injection can cause the insertion of malicious content into the resulting message/document.

## Links

- [OWASP](https://owasp.org/Top10/A03_2021-Injection/)
- [CWE](https://cwe.mitre.org/data/definitions/929.html)
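A short Python sketch of the safe pattern: build JSON with a serializer rather than string formatting, so quotes in user input cannot terminate a string and smuggle in new keys. The key names and payload are illustrative:

```python
# json.dumps escapes embedded quotes, keeping user input confined to a
# single string value instead of letting it add document structure.
import json

user_input = 'alice", "role": "admin'  # attempted key injection

# Unsafe: naive string formatting produces valid JSON with an extra key.
unsafe = '{"user": "%s"}' % user_input
print(json.loads(unsafe))  # {'user': 'alice', 'role': 'admin'}

# Safe: the serializer escapes the quotes, so the payload stays one value.
safe = json.dumps({"user": user_input})
print(json.loads(safe))    # {'user': 'alice", "role": "admin'}
```

The unsafe version silently grants a `role` of `admin` — the "insertion of malicious content" described above — while the serialized version preserves the payload as inert data.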
https://docs.gitlab.com/user/application_security/api_security_testing/sensitive_information_disclosure_check
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_security_testing/sensitive_information_disclosure_check.md
sensitive_information_disclosure_check.md
# Sensitive information disclosure
## Description

Sensitive information disclosure check. This includes credit card numbers, health records, personal information, and so on.

## Remediation

Sensitive information leakage is an application weakness where an application reveals sensitive, user-specific data. Sensitive data may be used by an attacker to exploit its users. Therefore, leakage of sensitive data should be limited or prevented whenever possible.

Information leakage, in its most common form, is the result of differences in page responses for valid versus invalid data. Pages that provide different responses based on the validity of the data can also lead to information leakage, specifically when data deemed confidential is revealed as a result of the web application's design.

Examples of sensitive data include (but are not limited to): account numbers, user identifiers (driver's license number, passport number, Social Security number, and so on), and user-specific information (passwords, sessions, addresses). Information leakage in this context deals with exposure of key user data deemed confidential, or secret, that should not be exposed in plain view, even to the user. Credit card numbers and other heavily regulated information are prime examples of user data that needs to be further protected from exposure or leakage, even with proper encryption and access controls already in place.

## Links

- [OWASP](https://owasp.org/Top10/A01_2021-Broken_Access_Control/)
- [CWE](https://cwe.mitre.org/data/definitions/200.html)
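One common mitigation is masking regulated values before they reach responses or logs. The following Python sketch uses a deliberately simple, illustrative regex — not a complete or production-grade card-number detector — that keeps only the last four digits:

```python
# Mask long digit runs (13-16 digits, a rough card-number shape),
# preserving only the last four digits for reference.
import re

def mask_pan(text: str) -> str:
    return re.sub(
        r"\b(\d{9,12})(\d{4})\b",
        lambda m: "*" * len(m.group(1)) + m.group(2),
        text,
    )

print(mask_pan("card 4111111111111111 on file"))
# → card ************1111 on file
```

Real deployments would pair detection like this with proper data classification, rather than relying on pattern matching alone.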
https://docs.gitlab.com/user/application_security/api_security_testing/sql_injection_check
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_security_testing/sql_injection_check.md
sql_injection_check.md
# SQL injection
## Description

Check for SQL and NoSQL injection vulnerabilities. A SQL injection attack consists of the insertion or "injection" of a SQL query via the input data from the client to the application. A successful SQL injection exploit can read sensitive data from the database, modify database data (Insert/Update/Delete), execute administration operations on the database (such as shutdown of the DBMS), recover the content of a given file present on the DBMS file system, and in some cases issue commands to the operating system. SQL injection attacks are a type of injection attack, in which SQL commands are injected into data-plane input in order to effect the execution of predefined SQL commands.

This check modifies parameters in the request (path, query string, headers, JSON, XML, and so on) to try to create a syntax error in the SQL or NoSQL query. Logs and responses are then analyzed to try to detect if an error occurred. If an error is detected, there is a high likelihood that a vulnerability exists.

## Remediation

The software constructs all or part of an SQL command using externally-influenced input from an upstream component, but it does not neutralize, or incorrectly neutralizes, special elements that could modify the intended SQL command when it is sent to a downstream component.

Without sufficient removal or quoting of SQL syntax in user-controllable inputs, the generated SQL query can cause those inputs to be interpreted as SQL instead of ordinary user data. This can be used to alter query logic to bypass security checks, or to insert additional statements that modify the back-end database, possibly including execution of system commands.

SQL injection has become a common issue with database-driven websites. The flaw is easily detected, and easily exploited, and as such, any site or software package with even a minimal user base is likely to be subject to an attempted attack of this kind. This flaw depends on the fact that SQL makes no real distinction between the control and data planes.

## Links

- [OWASP](https://owasp.org/Top10/A03_2021-Injection/)
- [CWE](https://cwe.mitre.org/data/definitions/930.html)
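The remediation boils down to parameterized queries, which keep user input on the data plane. A self-contained Python sketch using an in-memory SQLite database (the schema, values, and payload are illustrative only):

```python
# A placeholder (?) binds the payload as one literal value; string
# interpolation lets the payload rewrite the query's logic.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"  # classic injection string

# Unsafe: the payload rewrites the WHERE clause, returning every row.
leaked = conn.execute(
    f"SELECT * FROM users WHERE name = '{payload}'").fetchall()
print(leaked)  # all rows leak

# Safe: the placeholder treats the payload as a single literal string.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (payload,)).fetchall()
print(safe)    # [] — nothing matches the literal payload
```

This directly demonstrates the control-plane/data-plane point above: the placeholder forces the payload to remain data, so the query logic cannot be altered.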
https://docs.gitlab.com/user/application_security/api_security_testing/insecure_http_methods_check
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_security_testing/insecure_http_methods_check.md
2025-08-13
doc/user/application_security/api_security_testing/checks
[ "doc", "user", "application_security", "api_security_testing", "checks" ]
insecure_http_methods_check.md
Application Security Testing
Dynamic Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Insecure HTTP methods
null
## Description Checks to see if HTTP methods like OPTIONS and TRACE are enabled on any target endpoints. ## Remediation The resource tested supports the OPTIONS HTTP method. Usually, this is considered a security misconfiguration as it leaks supported HTTP methods, aiding information gathering about a specific server or resource. However, there is a subset of the API community looking to use OPTIONS as a method to self-discover resource operations. If this is the intended use for enabling OPTIONS, then this issue can be considered a false positive. The resource tested supports the TRACE HTTP method. In combination with other cross-domain vulnerabilities in web browsers, sensitive information can be leaked from headers. It's recommended that the TRACE method be disabled in your server/framework. ## Links - [OWASP](https://owasp.org/Top10/A05_2021-Security_Misconfiguration/) - [CWE](https://cwe.mitre.org/data/definitions/200.html)
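One straightforward mitigation is an explicit allow-list of methods enforced before a request ever reaches a handler. A minimal, framework-agnostic sketch (the allow-list contents and the helper name `dispatch` are illustrative assumptions, not any particular server's API):

```python
# Hypothetical allow-list: only the methods the API actually serves.
ALLOWED_METHODS = {"GET", "POST", "PUT", "DELETE"}

def dispatch(method):
    # Reject OPTIONS, TRACE, and anything else outside the allow-list
    # with 405 Method Not Allowed before routing the request.
    if method.upper() not in ALLOWED_METHODS:
        return 405
    return 200

print(dispatch("TRACE"))  # 405
print(dispatch("GET"))    # 200
```

Most servers and frameworks expose an equivalent setting (for example, per-route method lists), which is preferable to hand-rolled dispatch in production.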
https://docs.gitlab.com/user/application_security/api_security_testing/framework_debug_mode_check
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_security_testing/framework_debug_mode_check.md
2025-08-13
doc/user/application_security/api_security_testing/checks
[ "doc", "user", "application_security", "api_security_testing", "checks" ]
framework_debug_mode_check.md
Application Security Testing
Dynamic Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Framework debug mode
null
## Description Checks to see if debug mode is enabled in various frameworks such as Flask and ASP.NET. This check has a low false positive rate. ## Remediation The Flask or ASP.NET framework was identified with debug mode enabled. This gives an attacker the ability to download any file on the file system, among other capabilities. This is a high severity issue that is easy for an attacker to exploit. ## Links - [OWASP](https://owasp.org/Top10/A05_2021-Security_Misconfiguration/) - [CWE-23: Relative Path Traversal](https://cwe.mitre.org/data/definitions/23.html) - [CWE-285: Improper Authorization](https://cwe.mitre.org/data/definitions/285.html)
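In practice the fix is to never hard-code debug mode on. A minimal Flask-style sketch, assuming the conventional `FLASK_DEBUG` environment variable (the `app.run` call is shown only as a comment):

```python
import os

def debug_enabled():
    # Debug mode must be opted into via the environment, so production
    # deployments default to the safe setting.
    return os.environ.get("FLASK_DEBUG", "0") == "1"

# app.run(debug=debug_enabled())   # never app.run(debug=True)
```

The design choice is that the dangerous setting requires a deliberate, per-environment opt-in rather than a code change to turn off.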
https://docs.gitlab.com/user/application_security/api_security_testing/dns_rebinding_check
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_security_testing/dns_rebinding_check.md
2025-08-13
doc/user/application_security/api_security_testing/checks
[ "doc", "user", "application_security", "api_security_testing", "checks" ]
dns_rebinding_check.md
Application Security Testing
Dynamic Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
DNS rebinding
null
## Description Check for DNS rebinding. This check verifies that the server validates that the Host header of the request exists and matches the expected hostname, to avoid attacks via malicious DNS entries. ## Remediation DNS rebinding allows a malicious host to spoof or redirect a request to an alternate IP address, potentially allowing an attacker to bypass security authentication or authorization. DNS resolution on its own does not constitute a valid authentication mechanism. Servers should validate that the Host header of the request matches the expected hostname of the server. In cases where the hostname is missing or does not match the expected value, the server should return a 400 response. The X-Forwarded-Host header is sometimes used instead of the Host header in cases where the request is being forwarded. In these cases, the X-Forwarded-Host header should also be validated if it is being used to determine the Host of the original request. ## Links - [OWASP](https://owasp.org/Top10/A05_2021-Security_Misconfiguration/) - [CWE](https://cwe.mitre.org/data/definitions/350.html)
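The remediation steps above translate into a small validation routine run on every request. A minimal sketch (the `EXPECTED_HOSTS` value and the `headers` dict shape are illustrative assumptions):

```python
EXPECTED_HOSTS = {"api.example.com"}  # hypothetical expected hostname(s)

def validate_host(headers):
    # Return 400 when the Host header is missing or does not match the
    # expected hostname; any port suffix is stripped before comparison.
    host = headers.get("Host", "").split(":")[0].lower()
    if host not in EXPECTED_HOSTS:
        return 400
    # If a proxy sets X-Forwarded-Host, validate it the same way.
    forwarded = headers.get("X-Forwarded-Host")
    if forwarded and forwarded.split(":")[0].lower() not in EXPECTED_HOSTS:
        return 400
    return 200
```

Many frameworks offer this as configuration (for example, an allowed-hosts setting), which should be preferred where available.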
https://docs.gitlab.com/user/application_security/api_security_testing/application_information_check
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_security_testing/application_information_check.md
2025-08-13
doc/user/application_security/api_security_testing/checks
[ "doc", "user", "application_security", "api_security_testing", "checks" ]
application_information_check.md
Application Security Testing
Dynamic Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Application information disclosure
null
## Description Application information disclosure check. This includes information such as version numbers, database error messages, and stack traces. ## Remediation Application information disclosure is an application weakness where an application reveals sensitive data, such as technical details of the web application or environment. Application data may be used by an attacker to exploit the target web application, its hosting network, or its users. Therefore, leakage of sensitive data should be limited or prevented whenever possible. Information disclosure, in its most common form, is the result of one or more of the following conditions: a failure to scrub out HTML or script comments containing sensitive information, or improper application or server configurations. Failure to scrub HTML or script comments prior to a push to the production environment can result in the leak of sensitive, contextual information such as server directory structure, SQL query structure, and internal network information. Often a developer will leave comments within the HTML and script code to help facilitate debugging or integration during the pre-production phase. Although there is no harm in allowing developers to include inline comments within the content they develop, these comments should all be removed prior to the content's public release. Software version numbers and verbose error messages (such as ASP.NET version numbers) are examples of improper server configurations. This information is useful to an attacker by providing detailed insight as to the framework, languages, or pre-built functions being utilized by a web application. Most default server configurations provide software version numbers and verbose error messages for debugging and troubleshooting purposes. Configuration changes can be made to disable these features, preventing the display of this information. 
## Links - [OWASP](https://owasp.org/Top10/A05_2021-Security_Misconfiguration/) - [CWE](https://cwe.mitre.org/data/definitions/200.html)
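A common server-side mitigation for the error-message half of this problem is a catch-all handler that logs details internally but returns only a generic response. A minimal, framework-agnostic sketch (the response shape and message are illustrative assumptions):

```python
import logging
import traceback

def safe_error_response():
    # Log the full stack trace server-side for debugging, but return only
    # a generic message so no stack trace, framework name, or version
    # number reaches the client.
    logging.error("unhandled error:\n%s", traceback.format_exc())
    return {"error": "internal server error"}, 500

try:
    1 / 0  # stand-in for any unexpected failure
except ZeroDivisionError:
    body, status = safe_error_response()
```

Comment scrubbing and server-banner suppression are handled separately, typically in the build pipeline and server configuration.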
https://docs.gitlab.com/user/application_security/api_security_testing/cleartext_authentication_check
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_security_testing/cleartext_authentication_check.md
2025-08-13
doc/user/application_security/api_security_testing/checks
[ "doc", "user", "application_security", "api_security_testing", "checks" ]
cleartext_authentication_check.md
Application Security Testing
Dynamic Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Cleartext authentication
null
## Description This check looks for cleartext authentication, such as HTTP Basic authentication without TLS. ## Remediation Authentication credentials are transported over an unencrypted channel (HTTP). This exposes the transmitted credentials to any attacker who can monitor (sniff) the network traffic during transmission. Sensitive information such as credentials should always be transmitted over encrypted channels such as HTTPS. ## Links - [OWASP](https://owasp.org/Top10/A02_2021-Cryptographic_Failures/) - [CWE](https://cwe.mitre.org/data/definitions/319.html)
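One server-side safeguard is to refuse to process credentials that arrive over plain HTTP at all. A minimal sketch (the `scheme`/`headers` parameters and the 403 response are illustrative assumptions; a redirect to HTTPS without the credentials is another common choice):

```python
def check_auth_transport(scheme, headers):
    # Refuse to accept credentials on an unencrypted channel: if the
    # request carries an Authorization header but arrived over plain
    # HTTP, reject it instead of processing the credentials.
    if "Authorization" in headers and scheme != "https":
        return 403
    return 200
```

This is a backstop; the primary fix is to serve the API only over HTTPS (for example, by redirecting all HTTP traffic and setting HSTS).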
https://docs.gitlab.com/user/application_security/api_security_testing/heartbleed_open_ssl_check
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_security_testing/heartbleed_open_ssl_check.md
2025-08-13
doc/user/application_security/api_security_testing/checks
[ "doc", "user", "application_security", "api_security_testing", "checks" ]
heartbleed_open_ssl_check.md
Application Security Testing
Dynamic Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Heartbleed OpenSSL vulnerability
null
## Description Check for the Heartbleed OpenSSL vulnerability. ## Remediation The Heartbleed vulnerability is a serious bug in the popular OpenSSL cryptographic library. OpenSSL is used to encrypt and decrypt communications and secure Internet traffic. This vulnerability allows an attacker to steal protected information that should not be accessible under other circumstances, such as the secret keys used to encrypt sensitive information. Anyone with access to the target API can use the Heartbleed vulnerability to read memory from protected systems by taking advantage of vulnerable versions of the OpenSSL library. ## Links - [OWASP](https://owasp.org/Top10/A06_2021-Vulnerable_and_Outdated_Components/) - [CWE](https://cwe.mitre.org/data/definitions/119.html)
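The affected releases are OpenSSL 1.0.1 through 1.0.1f (CVE-2014-0160, fixed in 1.0.1g). As a minimal self-check, Python's `ssl` module exposes the OpenSSL build it is linked against, and the version number encoding lets you test the vulnerable range directly:

```python
import ssl

def linked_openssl_is_heartbleed_vulnerable():
    # OPENSSL_VERSION_NUMBER uses the 0xMNNFFPPS encoding, so
    # 1.0.1 == 0x1000100f and the fixed 1.0.1g == 0x1000107f.
    number = ssl.OPENSSL_VERSION_NUMBER
    return 0x1000100F <= number < 0x1000107F

print(ssl.OPENSSL_VERSION, linked_openssl_is_heartbleed_vulnerable())
```

This only inspects the local build; probing a remote server requires sending a malformed TLS heartbeat, which is what dedicated scanners (and this check) do.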
https://docs.gitlab.com/user/application_security/api_security_testing/tls_server_configuration_check
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_security_testing/tls_server_configuration_check.md
2025-08-13
doc/user/application_security/api_security_testing/checks
[ "doc", "user", "application_security", "api_security_testing", "checks" ]
tls_server_configuration_check.md
Application Security Testing
Dynamic Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
TLS server configuration
null
## Description Check for various TLS server configuration issues. Checks the TLS versions, HMACs, ciphers, and compression algorithms supported by the server. ## Remediation Insufficient transport layer protection allows communication to be exposed to untrusted third parties, providing an attack vector to compromise a web application and/or steal sensitive information. Websites typically use Secure Sockets Layer/Transport Layer Security (SSL/TLS) to provide encryption at the transport layer. However, unless the website is configured to use SSL/TLS and configured to use SSL/TLS properly, the website may be vulnerable to traffic interception and modification. SSL/TLS as a protocol has gone through several revisions over the years. Each new version adds features and fixes weaknesses in the protocol. Over time, some versions of the protocol have been broken so badly that supporting them becomes a vulnerability. It's recommended to support only the most recent TLS versions, such as TLS 1.3 (2018) and TLS 1.2 (2008). Compression has been linked to side-channel attacks on TLS connections. Disabling compression can prevent these attacks. One attack in particular, CRIME ("Compression Ratio Info-leak Made Easy"), can be prevented. CRIME is an attack that targets clients, but if the server does not support compression the attack is mitigated. Historically, high-grade cryptography was restricted from export outside the United States. Because of this, websites were configured to support weak cryptographic options for those clients that were restricted to only using weak ciphers. Weak ciphers are vulnerable to attack because of the relative ease of breaking them: less than two weeks on a typical home computer and a few seconds using dedicated hardware. Today, all modern browsers and websites use much stronger encryption, but some websites are still configured to support outdated weak ciphers. 
Because of this, an attacker may be able to force the client to downgrade to a weaker cipher when connecting to the website, allowing the attacker to break the weak encryption. For this reason, the server should be configured to only accept strong ciphers and not provide service to any client that requests using a weaker cipher. In addition, some websites are misconfigured to choose a weaker cipher even when the client will support a much stronger one. OWASP offers a guide to testing for SSL/TLS issues, including weak cipher support and misconfiguration, and there are other resources and tools as well. ## Links - [OWASP](https://owasp.org/Top10/A02_2021-Cryptographic_Failures/) - [CWE](https://cwe.mitre.org/data/definitions/934.html)
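These recommendations map directly onto TLS library configuration. A minimal sketch with Python's standard `ssl` module, enforcing a TLS 1.2 floor and disabling compression (certificate loading is omitted; the file name in the comment is a placeholder):

```python
import ssl

# Server-side context that refuses legacy protocol versions and
# disables TLS compression (mitigating CRIME-style attacks).
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.options |= ssl.OP_NO_COMPRESSION
# ctx.load_cert_chain("server.pem")  # certificate setup omitted
```

Equivalent settings exist for nginx, Apache, and other servers (protocol and cipher directives); the design principle is the same: set an explicit floor rather than relying on defaults.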
https://docs.gitlab.com/user/application_security/api_security_testing/xml_external_entity_check
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_security_testing/xml_external_entity_check.md
2025-08-13
doc/user/application_security/api_security_testing/checks
[ "doc", "user", "application_security", "api_security_testing", "checks" ]
xml_external_entity_check.md
Application Security Testing
Dynamic Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
XML external entity
null
## Description Check for XML DTD processing vulnerabilities. ## Remediation An XML external entity attack is a type of attack against an application that parses XML input. This attack occurs when XML input containing a reference to an external entity is processed by a weakly configured XML parser. This attack may lead to the disclosure of confidential data, denial of service, server-side request forgery, port scanning from the perspective of the machine where the parser is located, and other system impacts. ## Links - [OWASP](https://owasp.org/Top10/A03_2021-Injection/) - [CWE](https://cwe.mitre.org/data/definitions/611.html)
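The usual remediation is to disable DTD processing in the parser entirely. A minimal Python sketch that rejects any document carrying a DOCTYPE before parsing, the same defense-in-depth approach the defusedxml project takes (the payload string and helper name are illustrative):

```python
import xml.etree.ElementTree as ET

# A classic XXE payload, shown only to exercise the guard below.
XXE_PAYLOAD = (
    '<?xml version="1.0"?>'
    '<!DOCTYPE r [<!ENTITY x SYSTEM "file:///etc/passwd">]>'
    '<r>&x;</r>'
)

def parse_untrusted(xml_text):
    # External entities are declared inside a DTD, so refusing any
    # document with a DOCTYPE blocks XXE outright.
    if "<!DOCTYPE" in xml_text:
        raise ValueError("DTD processing is not allowed")
    return ET.fromstring(xml_text)
```

Where a DTD is genuinely needed, configure the specific parser to forbid external entity resolution instead of the blanket rejection shown here.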
https://docs.gitlab.com/user/application_security/api_security_testing/json_hijacking_check
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_security_testing/json_hijacking_check.md
2025-08-13
doc/user/application_security/api_security_testing/checks
[ "doc", "user", "application_security", "api_security_testing", "checks" ]
json_hijacking_check.md
Application Security Testing
Dynamic Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
JSON hijacking
null
## Description Checks for JSON data potentially vulnerable to hijacking. This check looks for a GET request that returns a JSON array, which could potentially be hijacked and read by a malicious website. ## Remediation JSON hijacking allows an attacker to send a GET request via a malicious website or similar attack vector and utilize a user's stored credentials to retrieve sensitive or protected data to which that user has access. A JSON array on its own is valid JavaScript, so a malicious GET request to a resource that returns only a JavaScript array can allow the attacker to use a malicious script to read the data in the array from the request. GET requests should never return a JSON array, even if the resource requires authentication to access. Consider using POST instead of GET for this request, or wrapping the array in a JSON object. ## Links - [OWASP](https://owasp.org/Top10/A01_2021-Broken_Access_Control/) - [CWE](https://cwe.mitre.org/data/definitions/352.html)
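The object-wrapping remediation is a one-line change in most serialization layers. A minimal sketch (the `"data"` key name is an illustrative convention, not a requirement):

```python
import json

def safe_json_response(items):
    # A bare top-level array is valid JavaScript and can be hijacked;
    # wrapping it in an object makes the response a syntax error when
    # pulled in as a <script> source.
    return json.dumps({"data": items})

print(safe_json_response([1, 2, 3]))  # {"data": [1, 2, 3]}
```

Clients then read the array from the wrapping object, which costs nothing in expressiveness while removing the hijackable shape.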
https://docs.gitlab.com/user/application_security/api_security_testing/session_cookie_check
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_security_testing/session_cookie_check.md
2025-08-13
doc/user/application_security/api_security_testing/checks
[ "doc", "user", "application_security", "api_security_testing", "checks" ]
session_cookie_check.md
Application Security Testing
Dynamic Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Session cookie
null
## Description Verify that the session cookie has the correct flags and expiration. ## Remediation HTTP is a stateless protocol, so websites commonly use cookies to store session IDs that uniquely identify a user from request to request. Consequently, each session ID's confidentiality must be maintained in order to prevent multiple users from accessing the same account. A stolen session ID can be used to view another user's account or perform a fraudulent transaction. - One part of securing session IDs is to mark them to expire and to require the correct set of flags, ensuring they are not transmitted in the clear or accessible from scripting. - HttpOnly is an additional flag included in a Set-Cookie HTTP response header. Using the HttpOnly flag when generating a cookie helps mitigate the risk of client-side script accessing the protected cookie (if the browser supports it). If the HttpOnly flag (optional) is included in the HTTP response header, the cookie cannot be accessed through client-side script (again, if the browser supports this flag). As a result, even if a cross-site scripting (XSS) flaw exists, and a user accidentally accesses a link that exploits this flaw, the browser will not reveal the cookie to a third party. - The Secure attribute for sensitive cookies in HTTPS sessions is not set, which could cause the user agent to send those cookies in plaintext over an HTTP session. - A session-related cookie was identified being used on an insecure transport protocol. Insecure transport protocols are those that do not make use of SSL/TLS to secure the connection. An example of such a protocol is 'http'. - Insufficient Session Expiration occurs when a web application permits an attacker to reuse old session credentials or session IDs for authorization. Insufficient Session Expiration increases a website's exposure to attacks that steal or reuse a user's session identifiers. 
## Links - [OWASP](https://owasp.org/Top10/A07_2021-Identification_and_Authentication_Failures/) - [CWE](https://cwe.mitre.org/data/definitions/930.html)
--- stage: Application Security Testing group: Dynamic Analysis info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Session cookie breadcrumbs: - doc - user - application_security - api_security_testing - checks --- ## Description Verify session cookie has correct flags and expiration. ## Remediation HTTP is a stateless protocol, so websites commonly use cookies to store session IDs that uniquely identify a user from request to request. Consequently, each session ID's confidentiality must be maintained in order to prevent multiple users from accessing the same account. A stolen session ID can be used to view another user's account or perform a fraudulent transaction. - One part of securing session IDs is to properly mark them to expire and also require the correct set of flags to ensure they are not transmitted in the clear or accessible from scripting. - HttpOnly is an additional flag included in a Set-Cookie HTTP response header. Using the HttpOnly flag when generating a cookie helps mitigate the risk of client-side script accessing the protected cookie (if the browser supports it). If the HttpOnly flag (optional) is included in the HTTP response header, the cookie cannot be accessed through client-side script (again, if the browser supports this flag). As a result, even if a cross-site scripting (XSS) flaw exists, and a user accidentally accesses a link that exploits this flaw, the browser will not reveal the cookie to a third party. - The Secure attribute for sensitive cookies in HTTPS sessions is not set, which could cause the user agent to send those cookies in plaintext over an HTTP session. - A session-related cookie was identified being used on an insecure transport protocol. Insecure transport protocols are those that do not make use of SSL/TLS to secure the connection. An example of such a protocol is 'http'. - Insufficient Session Expiration occurs when a web application permits an attacker to reuse old session credentials or session IDs for authorization. Insufficient Session Expiration increases a website's exposure to attacks that steal or reuse a user's session identifiers. ## Links - [OWASP](https://owasp.org/Top10/A07_2021-Identification_and_Authentication_Failures/) - [CWE](https://cwe.mitre.org/data/definitions/930.html)
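The flag and expiration requirements above can be illustrated with a short sketch. The helper name `audit_session_cookie` is illustrative only, not part of the analyzer; it simply mirrors the three checks described (Secure, HttpOnly, expiration):

```python
from http import cookies

def audit_session_cookie(set_cookie_header: str) -> list:
    """Return a list of problems with a session cookie's attributes,
    mirroring the checks described above: Secure, HttpOnly, expiration."""
    problems = []
    jar = cookies.SimpleCookie()
    jar.load(set_cookie_header)
    for name, morsel in jar.items():
        if not morsel["secure"]:
            problems.append(f"{name}: missing Secure flag")
        if not morsel["httponly"]:
            problems.append(f"{name}: missing HttpOnly flag")
        if not (morsel["expires"] or morsel["max-age"]):
            problems.append(f"{name}: no expiration (Expires/Max-Age) set")
    return problems

# A bare cookie fails all three checks; a properly marked one passes.
print(audit_session_cookie("sessionid=abc123"))
print(audit_session_cookie("sessionid=abc123; Secure; HttpOnly; Max-Age=3600"))
```

A real scanner would run this kind of audit against every `Set-Cookie` header observed during a scan session.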
https://docs.gitlab.com/user/application_security/api_security_testing/shellshock_check
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_security_testing/shellshock_check.md
2025-08-13
doc/user/application_security/api_security_testing/checks
[ "doc", "user", "application_security", "api_security_testing", "checks" ]
shellshock_check.md
Application Security Testing
Dynamic Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Shellshock
null
## Description Check for Shellshock vulnerabilities. ## Remediation The Shellshock vulnerability takes advantage of a bug in Bash in which Bash incorrectly executes trailing commands when it imports a function definition stored in an environment variable. Any environment that allows defining Bash environment variables could be vulnerable to this bug, for example an Apache web server using the mod_cgi and mod_cgid modules. A known-good request was modified to include malicious content. The malicious content includes a Shellshock attack in which the server-side application returns a specific text (evidence) in the response headers. ## Links - [OWASP](https://owasp.org/Top10/A03_2021-Injection/) - [CWE](https://cwe.mitre.org/data/definitions/78.html)
--- stage: Application Security Testing group: Dynamic Analysis info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Shellshock breadcrumbs: - doc - user - application_security - api_security_testing - checks --- ## Description Check for Shellshock vulnerabilities. ## Remediation The Shellshock vulnerability takes advantage of a bug in Bash in which Bash incorrectly executes trailing commands when it imports a function definition stored in an environment variable. Any environment that allows defining Bash environment variables could be vulnerable to this bug, for example an Apache web server using the mod_cgi and mod_cgid modules. A known-good request was modified to include malicious content. The malicious content includes a Shellshock attack in which the server-side application returns a specific text (evidence) in the response headers. ## Links - [OWASP](https://owasp.org/Top10/A03_2021-Injection/) - [CWE](https://cwe.mitre.org/data/definitions/78.html)
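As a hedged illustration of how such a probe works (a generic sketch, not the analyzer's actual implementation), the classic detection technique sends a Bash function definition with a trailing command that echoes a unique marker, then looks for that marker in the response:

```python
import uuid

def build_shellshock_probe(marker: str) -> str:
    """Classic Shellshock payload shape: a function definition followed by
    a trailing command. A patched bash ignores the trailing command; a
    vulnerable bash executes it when importing the environment variable."""
    return "() { :; }; echo; echo shellshock-" + marker

def shows_shellshock_evidence(headers: dict, body: str, marker: str) -> bool:
    """The target is flagged only if the unique marker echoed by the
    payload appears in the response headers or body."""
    evidence = "shellshock-" + marker
    return any(evidence in str(v) for v in headers.values()) or evidence in body

# A unique marker per scan avoids false positives from unrelated content.
marker = uuid.uuid4().hex
probe = build_shellshock_probe(marker)
```

The probe string would typically be placed in attacker-influenced headers such as `User-Agent`, `Referer`, or `Cookie`, because CGI servers copy those into environment variables.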
https://docs.gitlab.com/user/application_security/api_security_testing/sensitive_file_disclosure_check
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_security_testing/sensitive_file_disclosure_check.md
2025-08-13
doc/user/application_security/api_security_testing/checks
[ "doc", "user", "application_security", "api_security_testing", "checks" ]
sensitive_file_disclosure_check.md
Application Security Testing
Dynamic Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Sensitive file disclosure
null
## Description Check for sensitive file disclosure. This check looks for files that may contain sensitive information. Examples include .htaccess, .htpasswd, .bash_history, etc. ## Remediation Information leakage is an application weakness where an application reveals sensitive data, such as technical details of the web application, environment, or user-specific data. Sensitive data may be used by an attacker to exploit the target web application, its hosting network, or its users. Therefore, leakage of sensitive data should be limited or prevented whenever possible. Information leakage, in its most common form, is the result of one or more of the following conditions: a failure to scrub out HTML/script comments containing sensitive information, improper application or server configurations, or differences in page responses for valid versus invalid data. In the case of this failure, one or more files and/or folders are accessible that should not be. This can include files common in home folders, such as command histories, or files that contain secrets such as passwords. ## Links - [OWASP](https://owasp.org/Top10/A01_2021-Broken_Access_Control/) - [CWE](https://cwe.mitre.org/data/definitions/200.html)
--- stage: Application Security Testing group: Dynamic Analysis info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Sensitive file disclosure breadcrumbs: - doc - user - application_security - api_security_testing - checks --- ## Description Check for sensitive file disclosure. This check looks for files that may contain sensitive information. Examples include .htaccess, .htpasswd, .bash_history, etc. ## Remediation Information leakage is an application weakness where an application reveals sensitive data, such as technical details of the web application, environment, or user-specific data. Sensitive data may be used by an attacker to exploit the target web application, its hosting network, or its users. Therefore, leakage of sensitive data should be limited or prevented whenever possible. Information leakage, in its most common form, is the result of one or more of the following conditions: a failure to scrub out HTML/script comments containing sensitive information, improper application or server configurations, or differences in page responses for valid versus invalid data. In the case of this failure, one or more files and/or folders are accessible that should not be. This can include files common in home folders, such as command histories, or files that contain secrets such as passwords. ## Links - [OWASP](https://owasp.org/Top10/A01_2021-Broken_Access_Control/) - [CWE](https://cwe.mitre.org/data/definitions/200.html)
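The idea behind this check can be sketched as a probe over a wordlist of well-known sensitive paths. The helper name and the `fetch` callback below are illustrative assumptions, not the analyzer's API:

```python
# Well-known files that commonly leak secrets or command history.
SENSITIVE_PATHS = [".htaccess", ".htpasswd", ".bash_history", ".git/config", ".env"]

def find_exposed_files(base_url: str, fetch) -> list:
    """Probe each sensitive path under base_url.

    `fetch(url)` must return `(status_code, body)`; a 200 response with a
    non-empty body is treated as a disclosure.
    """
    exposed = []
    for path in SENSITIVE_PATHS:
        url = base_url.rstrip("/") + "/" + path
        status, body = fetch(url)
        if status == 200 and body:
            exposed.append(url)
    return exposed
```

A production check would also compare responses against a known-404 baseline, because some servers return 200 with an error page for every path.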
https://docs.gitlab.com/user/application_security/api_security_testing/overriding_analyzer_jobs
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_security_testing/overriding_analyzer_jobs.md
2025-08-13
doc/user/application_security/api_security_testing/configuration
[ "doc", "user", "application_security", "api_security_testing", "configuration" ]
overriding_analyzer_jobs.md
Application Security Testing
Dynamic Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Overriding API security testing jobs
null
To override a job definition (for example, to change properties like `variables`, `dependencies`, or [`rules`](../../../../ci/yaml/_index.md#rules)), declare a job with the same name as the DAST job to override. Place this new job after the template inclusion and specify any additional keys under it. For example, this sets the target API's base URL: ```yaml include: - template: Security/API-Security.gitlab-ci.yml api_security: variables: APISEC_TARGET_URL: https://target/api ```
--- type: reference, howto stage: Application Security Testing group: Dynamic Analysis info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Overriding API security testing jobs breadcrumbs: - doc - user - application_security - api_security_testing - configuration --- To override a job definition (for example, to change properties like `variables`, `dependencies`, or [`rules`](../../../../ci/yaml/_index.md#rules)), declare a job with the same name as the DAST job to override. Place this new job after the template inclusion and specify any additional keys under it. For example, this sets the target API's base URL: ```yaml include: - template: Security/API-Security.gitlab-ci.yml api_security: variables: APISEC_TARGET_URL: https://target/api ```
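The same mechanism can override `rules`. For example, this sketch (using standard GitLab CI/CD rules syntax) runs the scan only on the default branch:

```yaml
include:
  - template: Security/API-Security.gitlab-ci.yml

api_security:
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```

Note that an override replaces the template's own `rules` entirely, so include any conditions from the template that you still need.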
https://docs.gitlab.com/user/application_security/api_security_testing/variables
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/application_security/api_security_testing/variables.md
2025-08-13
doc/user/application_security/api_security_testing/configuration
[ "doc", "user", "application_security", "api_security_testing", "configuration" ]
variables.md
Application Security Testing
Dynamic Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Available CI/CD variables and configuration files
null
{{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/450445) template name from `DAST-API.gitlab-ci.yml` to `API-Security.gitlab-ci.yml` and variable prefix from `DAST_API_` to `APISEC_` in GitLab 17.1. {{< /history >}} ## Available CI/CD variables | CI/CD variable | Description | |---------------------------------------------------------------------------------------------|-------------| | `SECURE_ANALYZERS_PREFIX` | Specify the Docker registry base address from which to download the analyzer. | | `APISEC_DISABLED` | Set to 'true' or '1' to disable API security testing scanning. | | `APISEC_DISABLED_FOR_DEFAULT_BRANCH` | Set to 'true' or '1' to disable API security testing scanning for only the default (production) branch. | | `APISEC_VERSION` | Specify API security testing container version. Defaults to `3`. | | `APISEC_IMAGE_SUFFIX` | Specify a container image suffix. Defaults to none. | | `APISEC_API_PORT` | Specify the communication port number used by the API security testing engine. Defaults to `5500`. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/367734) in GitLab 15.5. | | `APISEC_TARGET_URL` | Base URL of API testing target. | | `APISEC_TARGET_CHECK_SKIP` | Disable waiting for target to become available. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/442699) in GitLab 17.1. | | `APISEC_TARGET_CHECK_STATUS_CODE` | Provide the expected status code for target availability check. If not provided, any non-500 status code is acceptable. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/442699) in GitLab 17.1. | | [`APISEC_CONFIG`](#configuration-files) | API security testing configuration file. Defaults to `.gitlab-dast-api.yml`. | | [`APISEC_PROFILE`](#configuration-files) | Configuration profile to use during testing. Defaults to `Quick`. 
| | [`APISEC_EXCLUDE_PATHS`](customizing_analyzer_settings.md#exclude-paths) | Exclude API URL paths from testing. | | [`APISEC_EXCLUDE_URLS`](customizing_analyzer_settings.md#exclude-urls) | Exclude API URL from testing. | | [`APISEC_EXCLUDE_PARAMETER_ENV`](customizing_analyzer_settings.md#exclude-parameters) | JSON string containing excluded parameters. | | [`APISEC_EXCLUDE_PARAMETER_FILE`](customizing_analyzer_settings.md#exclude-parameters) | Path to a JSON file containing excluded parameters. | | [`APISEC_REQUEST_HEADERS`](customizing_analyzer_settings.md#request-headers) | A comma-separated (`,`) list of headers to include on each scan request. Consider using `APISEC_REQUEST_HEADERS_BASE64` when storing secret header values in a [masked variable](../../../../ci/variables/_index.md#mask-a-cicd-variable), which has character set restrictions. | | [`APISEC_REQUEST_HEADERS_BASE64`](customizing_analyzer_settings.md#request-headers) | A comma-separated (`,`) list of headers to include on each scan request, Base64-encoded. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/378440) in GitLab 15.6. | | [`APISEC_OPENAPI`](enabling_the_analyzer.md#openapi-specification) | OpenAPI specification file or URL. | | [`APISEC_OPENAPI_RELAXED_VALIDATION`](enabling_the_analyzer.md#openapi-specification) | Relax document validation. Default is disabled. | | [`APISEC_OPENAPI_ALL_MEDIA_TYPES`](enabling_the_analyzer.md#openapi-specification) | Use all supported media types instead of one when generating requests. Causes test duration to be longer. Default is disabled. | | [`APISEC_OPENAPI_MEDIA_TYPES`](enabling_the_analyzer.md#openapi-specification) | Colon (`:`) separated media types accepted for testing. Default is disabled. | | [`APISEC_HAR`](enabling_the_analyzer.md#http-archive-har) | HTTP Archive (HAR) file. | | [`APISEC_GRAPHQL`](enabling_the_analyzer.md#graphql-schema) | Path to GraphQL endpoint, for example `/api/graphql`. 
[Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/352780) in GitLab 15.4. | | [`APISEC_GRAPHQL_SCHEMA`](enabling_the_analyzer.md#graphql-schema) | A URL or filename for a GraphQL schema in JSON format. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/352780) in GitLab 15.4. | | [`APISEC_POSTMAN_COLLECTION`](enabling_the_analyzer.md#postman-collection) | Postman Collection file. | | [`APISEC_POSTMAN_COLLECTION_VARIABLES`](enabling_the_analyzer.md#postman-variables) | Path to a JSON file to extract Postman variable values. Support for comma-separated (`,`) files was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/356312) in GitLab 15.1. | | [`APISEC_OVERRIDES_FILE`](customizing_analyzer_settings.md#overrides) | Path to a JSON file containing overrides. | | [`APISEC_OVERRIDES_ENV`](customizing_analyzer_settings.md#overrides) | JSON string containing headers to override. | | [`APISEC_OVERRIDES_CMD`](customizing_analyzer_settings.md#overrides) | Overrides command. | | [`APISEC_OVERRIDES_CMD_VERBOSE`](customizing_analyzer_settings.md#overrides) | When set to any value, it logs overrides command output to the `gl-api-security-scanner.log` job artifact file. | | `APISEC_PER_REQUEST_SCRIPT` | Full path and filename for a per-request script. [See demo project for examples.](https://gitlab.com/gitlab-org/security-products/demos/api-dast/auth-with-request-example) [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/13691) in GitLab 17.2. | | `APISEC_PRE_SCRIPT` | Run user command or script before scan session starts. `sudo` must be used for privileged operations like installing packages. | | `APISEC_POST_SCRIPT` | Run user command or script after scan session has finished. `sudo` must be used for privileged operations like installing packages. | | [`APISEC_OVERRIDES_INTERVAL`](customizing_analyzer_settings.md#overrides) | How often to run overrides command in seconds. Defaults to `0` (once). 
| | [`APISEC_HTTP_USERNAME`](customizing_analyzer_settings.md#http-basic-authentication) | Username for HTTP authentication. | | [`APISEC_HTTP_PASSWORD`](customizing_analyzer_settings.md#http-basic-authentication) | Password for HTTP authentication. Consider using `APISEC_HTTP_PASSWORD_BASE64` instead. | | [`APISEC_HTTP_PASSWORD_BASE64`](customizing_analyzer_settings.md#http-basic-authentication) | Password for HTTP authentication, base64-encoded. [Introduced](https://gitlab.com/gitlab-org/security-products/analyzers/api-fuzzing-src/-/merge_requests/702) in GitLab 15.4. | | `APISEC_SERVICE_START_TIMEOUT` | How long to wait for target API to become available in seconds. Default is 300 seconds. | | `APISEC_TIMEOUT` | How long to wait for API responses in seconds. Default is 30 seconds. | | `APISEC_SUCCESS_STATUS_CODES` | Specify a comma-separated (`,`) list of HTTP success status codes that determine whether an API security testing scanning job has passed. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/442219) in GitLab 17.1. Example: `'200, 201, 204'` | ## Configuration files To get you started quickly, GitLab provides the configuration file [`gitlab-dast-api-config.yml`](https://gitlab.com/gitlab-org/security-products/analyzers/dast/-/blob/master/config/gitlab-dast-api-config.yml). This file has several testing profiles that perform various numbers of tests. The run time of each profile increases as the test numbers go up. To use a configuration file, add it to your repository's root as `.gitlab/gitlab-dast-api-config.yml`. ### Profiles The following profiles are pre-defined in the default configuration file. Profiles can be added, removed, and modified by creating a custom configuration. 
#### Passive - Application Information Check - Cleartext Authentication Check - JSON Hijacking Check - Sensitive Information Check - Session Cookie Check #### Quick - Application Information Check - Cleartext Authentication Check - Framework Debug Mode Check - HTML Injection Check - Insecure Http Methods Check - JSON Hijacking Check - JSON Injection Check - Sensitive Information Check - Session Cookie Check - SQL Injection Check - Token Check - XML Injection Check #### Full - Application Information Check - Cleartext Authentication Check - CORS Check - DNS Rebinding Check - Framework Debug Mode Check - HTML Injection Check - Insecure Http Methods Check - JSON Hijacking Check - JSON Injection Check - Open Redirect Check - Sensitive File Check - Sensitive Information Check - Session Cookie Check - SQL Injection Check - TLS Configuration Check - Token Check - XML Injection Check
--- stage: Application Security Testing group: Dynamic Analysis info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Available CI/CD variables and configuration files breadcrumbs: - doc - user - application_security - api_security_testing - configuration --- {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/450445) template name from `DAST-API.gitlab-ci.yml` to `API-Security.gitlab-ci.yml` and variable prefix from `DAST_API_` to `APISEC_` in GitLab 17.1. {{< /history >}} ## Available CI/CD variables | CI/CD variable | Description | |---------------------------------------------------------------------------------------------|-------------| | `SECURE_ANALYZERS_PREFIX` | Specify the Docker registry base address from which to download the analyzer. | | `APISEC_DISABLED` | Set to 'true' or '1' to disable API security testing scanning. | | `APISEC_DISABLED_FOR_DEFAULT_BRANCH` | Set to 'true' or '1' to disable API security testing scanning for only the default (production) branch. | | `APISEC_VERSION` | Specify API security testing container version. Defaults to `3`. | | `APISEC_IMAGE_SUFFIX` | Specify a container image suffix. Defaults to none. | | `APISEC_API_PORT` | Specify the communication port number used by the API security testing engine. Defaults to `5500`. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/367734) in GitLab 15.5. | | `APISEC_TARGET_URL` | Base URL of API testing target. | | `APISEC_TARGET_CHECK_SKIP` | Disable waiting for target to become available. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/442699) in GitLab 17.1. | | `APISEC_TARGET_CHECK_STATUS_CODE` | Provide the expected status code for target availability check. 
If not provided, any non-500 status code is acceptable. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/442699) in GitLab 17.1. | | [`APISEC_CONFIG`](#configuration-files) | API security testing configuration file. Defaults to `.gitlab-dast-api.yml`. | | [`APISEC_PROFILE`](#configuration-files) | Configuration profile to use during testing. Defaults to `Quick`. | | [`APISEC_EXCLUDE_PATHS`](customizing_analyzer_settings.md#exclude-paths) | Exclude API URL paths from testing. | | [`APISEC_EXCLUDE_URLS`](customizing_analyzer_settings.md#exclude-urls) | Exclude API URL from testing. | | [`APISEC_EXCLUDE_PARAMETER_ENV`](customizing_analyzer_settings.md#exclude-parameters) | JSON string containing excluded parameters. | | [`APISEC_EXCLUDE_PARAMETER_FILE`](customizing_analyzer_settings.md#exclude-parameters) | Path to a JSON file containing excluded parameters. | | [`APISEC_REQUEST_HEADERS`](customizing_analyzer_settings.md#request-headers) | A comma-separated (`,`) list of headers to include on each scan request. Consider using `APISEC_REQUEST_HEADERS_BASE64` when storing secret header values in a [masked variable](../../../../ci/variables/_index.md#mask-a-cicd-variable), which has character set restrictions. | | [`APISEC_REQUEST_HEADERS_BASE64`](customizing_analyzer_settings.md#request-headers) | A comma-separated (`,`) list of headers to include on each scan request, Base64-encoded. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/378440) in GitLab 15.6. | | [`APISEC_OPENAPI`](enabling_the_analyzer.md#openapi-specification) | OpenAPI specification file or URL. | | [`APISEC_OPENAPI_RELAXED_VALIDATION`](enabling_the_analyzer.md#openapi-specification) | Relax document validation. Default is disabled. | | [`APISEC_OPENAPI_ALL_MEDIA_TYPES`](enabling_the_analyzer.md#openapi-specification) | Use all supported media types instead of one when generating requests. Causes test duration to be longer. Default is disabled. 
| | [`APISEC_OPENAPI_MEDIA_TYPES`](enabling_the_analyzer.md#openapi-specification) | Colon (`:`) separated media types accepted for testing. Default is disabled. | | [`APISEC_HAR`](enabling_the_analyzer.md#http-archive-har) | HTTP Archive (HAR) file. | | [`APISEC_GRAPHQL`](enabling_the_analyzer.md#graphql-schema) | Path to GraphQL endpoint, for example `/api/graphql`. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/352780) in GitLab 15.4. | | [`APISEC_GRAPHQL_SCHEMA`](enabling_the_analyzer.md#graphql-schema) | A URL or filename for a GraphQL schema in JSON format. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/352780) in GitLab 15.4. | | [`APISEC_POSTMAN_COLLECTION`](enabling_the_analyzer.md#postman-collection) | Postman Collection file. | | [`APISEC_POSTMAN_COLLECTION_VARIABLES`](enabling_the_analyzer.md#postman-variables) | Path to a JSON file to extract Postman variable values. Support for comma-separated (`,`) files was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/356312) in GitLab 15.1. | | [`APISEC_OVERRIDES_FILE`](customizing_analyzer_settings.md#overrides) | Path to a JSON file containing overrides. | | [`APISEC_OVERRIDES_ENV`](customizing_analyzer_settings.md#overrides) | JSON string containing headers to override. | | [`APISEC_OVERRIDES_CMD`](customizing_analyzer_settings.md#overrides) | Overrides command. | | [`APISEC_OVERRIDES_CMD_VERBOSE`](customizing_analyzer_settings.md#overrides) | When set to any value, it logs overrides command output to the `gl-api-security-scanner.log` job artifact file. | | `APISEC_PER_REQUEST_SCRIPT` | Full path and filename for a per-request script. [See demo project for examples.](https://gitlab.com/gitlab-org/security-products/demos/api-dast/auth-with-request-example) [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/13691) in GitLab 17.2. | | `APISEC_PRE_SCRIPT` | Run user command or script before scan session starts. 
`sudo` must be used for privileged operations like installing packages. | | `APISEC_POST_SCRIPT` | Run user command or script after scan session has finished. `sudo` must be used for privileged operations like installing packages. | | [`APISEC_OVERRIDES_INTERVAL`](customizing_analyzer_settings.md#overrides) | How often to run overrides command in seconds. Defaults to `0` (once). | | [`APISEC_HTTP_USERNAME`](customizing_analyzer_settings.md#http-basic-authentication) | Username for HTTP authentication. | | [`APISEC_HTTP_PASSWORD`](customizing_analyzer_settings.md#http-basic-authentication) | Password for HTTP authentication. Consider using `APISEC_HTTP_PASSWORD_BASE64` instead. | | [`APISEC_HTTP_PASSWORD_BASE64`](customizing_analyzer_settings.md#http-basic-authentication) | Password for HTTP authentication, base64-encoded. [Introduced](https://gitlab.com/gitlab-org/security-products/analyzers/api-fuzzing-src/-/merge_requests/702) in GitLab 15.4. | | `APISEC_SERVICE_START_TIMEOUT` | How long to wait for target API to become available in seconds. Default is 300 seconds. | | `APISEC_TIMEOUT` | How long to wait for API responses in seconds. Default is 30 seconds. | | `APISEC_SUCCESS_STATUS_CODES` | Specify a comma-separated (`,`) list of HTTP success status codes that determine whether an API security testing scanning job has passed. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/442219) in GitLab 17.1. Example: `'200, 201, 204'` | ## Configuration files To get you started quickly, GitLab provides the configuration file [`gitlab-dast-api-config.yml`](https://gitlab.com/gitlab-org/security-products/analyzers/dast/-/blob/master/config/gitlab-dast-api-config.yml). This file has several testing profiles that perform various numbers of tests. The run time of each profile increases as the test numbers go up. To use a configuration file, add it to your repository's root as `.gitlab/gitlab-dast-api-config.yml`. 
### Profiles The following profiles are pre-defined in the default configuration file. Profiles can be added, removed, and modified by creating a custom configuration. #### Passive - Application Information Check - Cleartext Authentication Check - JSON Hijacking Check - Sensitive Information Check - Session Cookie Check #### Quick - Application Information Check - Cleartext Authentication Check - Framework Debug Mode Check - HTML Injection Check - Insecure Http Methods Check - JSON Hijacking Check - JSON Injection Check - Sensitive Information Check - Session Cookie Check - SQL Injection Check - Token Check - XML Injection Check #### Full - Application Information Check - Cleartext Authentication Check - CORS Check - DNS Rebinding Check - Framework Debug Mode Check - HTML Injection Check - Insecure Http Methods Check - JSON Hijacking Check - JSON Injection Check - Open Redirect Check - Sensitive File Check - Sensitive Information Check - Session Cookie Check - SQL Injection Check - TLS Configuration Check - Token Check - XML Injection Check
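Putting several of the variables above together, a job override might look like the following sketch. The target URL and specification filename are placeholders; all variable names come from the table above:

```yaml
include:
  - template: Security/API-Security.gitlab-ci.yml

api_security:
  variables:
    APISEC_TARGET_URL: https://target/api
    APISEC_OPENAPI: test-api-specification.json
    APISEC_PROFILE: Passive
    APISEC_SERVICE_START_TIMEOUT: "300"
```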