---
stage: Monitor
group: Analytics Instrumentation
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Service Ping development guidelines
---
Service Ping is a GitLab process that collects and sends a weekly payload to GitLab.
The payload provides important high-level data that helps our product, support,
and sales teams understand how GitLab is used. The data helps to:
- Compare counts month over month (or week over week) to get a rough sense of how an instance uses
different product features.
- Collect other facts that help us classify and understand GitLab installations.
- Calculate our stage monthly active users (SMAU), which helps to measure the success of our stages
and features.
Service Ping information is not anonymous. It's linked to the instance's hostname, but does
not contain project names, usernames, or any other specific data.
Service Ping is enabled by default. However, you can [disable](../../../administration/settings/usage_statistics.md#enable-or-disable-service-ping) certain metrics on any GitLab Self-Managed instance. When Service Ping is enabled, GitLab gathers data from the other instances and can show your instance's usage statistics to your users.
## Service Ping terminology
We use the following terminology to describe the Service Ping components:
- **Service Ping**: the process that collects and generates a JSON payload.
- **Service Data**: the contents of the Service Ping JSON payload. This includes metrics.
- **Metrics**: primarily made up of row counts for different tables in an instance's database. Each
metric has a corresponding [metric definition](../metrics/metrics_dictionary.md#metrics-definition-and-validation)
in a YAML file.
- **MAU**: monthly active users.
- **WAU**: weekly active users.
### Known issues
- Service Ping delivers only [metrics](../_index.md#metric), not individual events.
- A metric must be present and instrumented in the codebase of a GitLab version to be delivered in Service Ping for that version.
## Service Ping request flow
The following example shows a basic request/response flow between a GitLab instance, the Versions Application, the Licenses Application, Salesforce, the GitLab GCP Bucket, the GitLab Snowflake Data Warehouse, and Tableau:
```mermaid
sequenceDiagram
participant GitLab Instance
participant Versions Application
participant Licenses Application
participant Salesforce
participant GCP Bucket
participant Snowflake DW
participant Tableau Dashboards
GitLab Instance->>Versions Application: Send Service Ping
loop Process usage data
Versions Application->>Versions Application: Parse usage data
Versions Application->>Versions Application: Write to database
Versions Application->>Versions Application: Update license ping time
end
loop Process data for Salesforce
Versions Application-xLicenses Application: Request Zuora subscription id
Licenses Application-xVersions Application: Zuora subscription id
Versions Application-xSalesforce: Request Zuora account id by Zuora subscription id
Salesforce-xVersions Application: Zuora account id
Versions Application-xSalesforce: Usage data for the Zuora account
end
Versions Application->>GCP Bucket: Export Versions database
GCP Bucket->>Snowflake DW: Import data
Snowflake DW->>Snowflake DW: Transform data using dbt
Snowflake DW->>Tableau Dashboards: Data available for querying
Versions Application->>GitLab Instance: DevOps Score (Conversational Development Index)
```
## How Service Ping works
1. The Service Ping [cron job](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/workers/gitlab_service_ping_worker.rb#L24) is set in Sidekiq to run weekly.
1. When the cron job runs, it calls [`Gitlab::Usage::ServicePingReport.for(output: :all_metrics_values)`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/service_ping/submit_service.rb).
1. `Gitlab::Usage::ServicePingReport.for(output: :all_metrics_values)` [cascades down](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data.rb) to about 400 other counter method calls.
1. The responses of all method calls are [merged together](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/usage_data.rb#L68) into a single JSON payload.
1. The JSON payload is then [posted to the Versions application](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/service_ping/submit_service.rb#L20) (a minimal sketch of this step follows the list).
If a firewall exception is needed, allow outbound `TCP` traffic to hostname `version.gitlab.com`
on port `443`; the required URL is <https://version.gitlab.com/>.
1. If an error occurs during reporting, it is reported to the Versions application along with the following information:
- `uuid` - GitLab instance unique identifier
- `hostname` - GitLab instance hostname
- `version` - current GitLab instance version
- `elapsed` - time elapsed between the start of the Service Ping reporting process and the moment the error occurred
- `message` - the error message
```ruby
{
"uuid"=>"02333324-1cd7-4c3b-a45b-a4993f05fb1d",
"hostname"=>"127.0.0.1",
"version"=>"14.7.0-pre",
"elapsed"=>0.006946,
"message"=>'PG::UndefinedColumn: ERROR: column \"non_existent_attribute\" does not exist\nLINE 1: SELECT COUNT(non_existent_attribute) FROM \"issues\" /*applica...'
}
```
1. Finally, timing metadata used for diagnostic purposes is submitted to the Versions application. It consists of a list of metric identifiers and the time it took to calculate each metric:
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/37911) in GitLab 15.0 [with a flag](../../../administration/feature_flags/_index.md), enabled by default.
> - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/295289) in GitLab 15.2. [Feature flag `measure_service_ping_metric_collection`](https://gitlab.com/gitlab-org/gitlab/-/issues/358128) removed.
```ruby
{
"metadata"=>
{
"uuid"=>"0000000-0000-0000-0000-000000000000",
"metrics"=>
[{"name"=>"version", "time_elapsed"=>1.1811964213848114e-05},
{"name"=>"installation_type", "time_elapsed"=>0.00017242692410945892},
{"name"=>"license_billable_users", "time_elapsed"=>0.009520471096038818},
....
{"name"=>"counts.clusters_platforms_eks",
"time_elapsed"=>0.05638605775311589},
{"name"=>"counts.clusters_platforms_gke",
"time_elapsed"=>0.40995341585949063},
{"name"=>"counts.clusters_platforms_user",
"time_elapsed"=>0.06410990096628666},
{"name"=>"counts.clusters_management_project",
"time_elapsed"=>0.24020783510059118}
]
}
}
```
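The following is a minimal, hypothetical sketch of the submission step: building a payload and posting it as JSON to the Versions application. The `/usage_data` path and the payload fields are illustrative assumptions; the real logic lives in the linked `SubmitService`.
```ruby
# Hypothetical sketch of the submission step, not the actual SubmitService code.
# The '/usage_data' path and the payload fields are illustrative assumptions.
require 'net/http'
require 'json'
require 'time'
require 'uri'

payload = {
  uuid: '0000000-0000-0000-0000-000000000000',
  recorded_at: Time.now.utc.iso8601
}

uri = URI('https://version.gitlab.com/usage_data')
response = Net::HTTP.post(uri, payload.to_json, 'Content-Type' => 'application/json')

raise "Service Ping submission failed: #{response.code}" unless response.is_a?(Net::HTTPSuccess)
```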
### On a Geo secondary site
We also collect metrics specific to [Geo](../../../administration/geo/_index.md) secondary sites to send with Service Ping.
1. The [Geo secondary service ping cron job](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/workers/geo/secondary_usage_data_cron_worker.rb) is set in Sidekiq to run weekly.
1. When the cron job runs, it calls [`SecondaryUsageData.update_metrics!`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/models/geo/secondary_usage_data.rb#L33). This collects the relevant metrics from Prometheus and stores the data in the Geo secondary tracking database for transmission to the primary site during a [Geo node status update](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/models/geo_node_status.rb#L105).
1. Geo node status data is sent with the JSON payload in the process described above. The following is an example of the payload where each object in the array represents a Geo node:
```ruby
[
{
"git_fetch_event_count_weekly"=>nil,
"git_push_event_count_weekly"=>nil,
... other geo node status fields
}
]
```
## Example Service Ping payload
The following is example content of the Service Ping payload.
```json
{
"uuid": "0000000-0000-0000-0000-000000000000",
"hostname": "example.com",
"version": "12.10.0-pre",
"installation_type": "omnibus-gitlab",
"active_user_count": 999,
"recorded_at": "2020-04-17T07:43:54.162+00:00",
"edition": "EEU",
"license_md5": "00000000000000000000000000000000",
"license_sha256": "0000000000000000000000000000000000000000000000000000000000000000",
"license_id": null,
"historical_max_users": 999,
"licensee": {
"Name": "ABC, Inc.",
"Email": "email@example.com",
"Company": "ABC, Inc."
},
"license_user_count": 999,
"license_starts_at": "2020-01-01",
"license_expires_at": "2021-01-01",
"license_plan": "ultimate",
"license_add_ons": {
},
"license_trial": false,
"counts": {
"assignee_lists": 999,
"boards": 999,
"ci_builds": 999,
...
},
"container_registry_enabled": true,
"dependency_proxy_enabled": false,
"gitlab_shared_runners_enabled": true,
"gravatar_enabled": true,
"influxdb_metrics_enabled": true,
"ldap_enabled": false,
"mattermost_enabled": false,
"omniauth_enabled": true,
"prometheus_enabled": false,
"prometheus_metrics_enabled": false,
"reply_by_email_enabled": "incoming+%{key}@incoming.gitlab.com",
"signup_enabled": true,
"projects_with_expiration_policy_disabled": 999,
"projects_with_expiration_policy_enabled": 999,
...
"elasticsearch_enabled": true,
"license_trial_ends_on": null,
"geo_enabled": false,
"git": {
"version": {
"major": 2,
"minor": 26,
"patch": 1
}
},
"gitaly": {
"version": "12.10.0-rc1-93-g40980d40",
"servers": 56,
"clusters": 14,
"filesystems": [
"EXT_2_3_4"
]
},
"gitlab_pages": {
"enabled": true,
"version": "1.17.0"
},
"container_registry_server": {
"vendor": "gitlab",
"version": "2.9.1-gitlab",
"db_enabled": false
},
"database": {
"adapter": "postgresql",
"version": "9.6.15",
"pg_system_id": 6842684531675334351,
"flavor": "Cloud SQL for PostgreSQL"
},
"analytics_unique_visits": {
"g_analytics_contribution": 999,
...
},
"usage_activity_by_stage": {
"configure": {
"project_clusters_enabled": 999,
...
},
"create": {
"merge_requests": 999,
...
},
"manage": {
"events": 999,
...
},
"monitor": {
"clusters": 999,
...
},
"package": {
"projects_with_packages": 999
},
"plan": {
"issues": 999,
...
},
"release": {
"deployments": 999,
...
},
"secure": {
"user_container_scanning_jobs": 999,
...
},
"verify": {
"ci_builds": 999,
...
}
},
"usage_activity_by_stage_monthly": {
"configure": {
"project_clusters_enabled": 999,
...
},
"create": {
"merge_requests": 999,
...
},
"manage": {
"events": 999,
...
},
"monitor": {
"clusters": 999,
...
},
"package": {
"projects_with_packages": 999
},
"plan": {
"issues": 999,
...
},
"release": {
"deployments": 999,
...
},
"secure": {
"user_container_scanning_jobs": 999,
...
},
"verify": {
"ci_builds": 999,
...
}
},
"topology": {
"duration_s": 0.013836685999194742,
"application_requests_per_hour": 4224,
"query_apdex_weekly_average": 0.996,
"failures": [],
"nodes": [
{
"node_memory_total_bytes": 33269903360,
"node_memory_utilization": 0.35,
"node_cpus": 16,
"node_cpu_utilization": 0.2,
"node_uname_info": {
"machine": "x86_64",
"sysname": "Linux",
"release": "4.19.76-linuxkit"
},
"node_services": [
{
"name": "web",
"process_count": 16,
"process_memory_pss": 233349888,
"process_memory_rss": 788220927,
"process_memory_uss": 195295487,
"server": "puma"
},
{
"name": "sidekiq",
"process_count": 1,
"process_memory_pss": 734080000,
"process_memory_rss": 750051328,
"process_memory_uss": 731533312
},
...
],
...
},
...
]
}
}
```
## Export Service Ping data
Rake tasks exist to export Service Ping data in different formats.
- The Rake tasks export the raw SQL queries for `count`, `distinct_count`, and `sum`.
- The Rake tasks export the Redis counter class or the line of the Redis block for `redis_usage_data`.
- The Rake tasks calculate the `alt_usage_data` metrics.
In the home directory of your local GitLab installation, run the following Rake tasks for either the YAML or the JSON version:
```shell
# for YAML export of SQL queries
bin/rake gitlab:usage_data:dump_sql_in_yaml
# for JSON export of SQL queries
bin/rake gitlab:usage_data:dump_sql_in_json
# for JSON export of non-SQL data
bin/rake gitlab:usage_data:dump_non_sql_in_json
# You may pipe the output into a file
bin/rake gitlab:usage_data:dump_sql_in_yaml > ~/Desktop/usage-metrics-2020-09-02.yaml
```
## Fallback values for Service Ping
We return fallback values in these cases:
| Case | Value |
|-----------------------------|-------|
| Deprecated metric ([removed with version 14.3](https://gitlab.com/gitlab-org/gitlab/-/issues/335894)) | `-1000` |
| Timeouts, general failures | `-1` |
| Standard errors in counters | `-2` |
| Histogram metrics failure | `{ '-1' => -1 }` |
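A rough sketch of how such a fallback might be applied around a counter follows; the `with_fallback` helper is hypothetical and not GitLab's actual implementation:
```ruby
# Hypothetical helper illustrating the fallback pattern: a failing counter
# returns a sentinel value instead of raising.
def with_fallback(fallback = -1)
  yield
rescue StandardError
  fallback
end

issues_count = with_fallback(-2) { Issue.count } # => -2 if the query raises
```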
## Monitoring
The state of the Service Ping reporting process is monitored with a [Tableau dashboard](https://10az.online.tableau.com/#/site/gitlab/workbooks/2327447/views).
## Related topics
- [Analytics Instrumentation Direction](https://about.gitlab.com/direction/monitor/analytics-instrumentation/)
- [Data Analysis Process](https://handbook.gitlab.com/handbook/business-technology/data-team/organization/analytics/#data-analysis-process)
- [Data for Product Managers](https://handbook.gitlab.com/handbook/business-technology/data-team/programs/data-for-product-managers/)
- [Data Infrastructure](https://handbook.gitlab.com/handbook/business-technology/data-team/platform/infrastructure/)
---
stage: Platforms
group: Scalability
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Rails request SLIs (service level indicators)
---
{{< alert type="note" >}}
This SLI is used for service monitoring, but not for [error budgets for stage groups](../stage_group_observability/_index.md#error-budget)
by default.
{{< /alert >}}
The request Apdex SLI and the error rate SLI are [SLIs defined in the application](_index.md).
The request Apdex measures the duration of successful requests as an indicator for
application performance. This includes the REST and GraphQL API, and the
regular controller endpoints.
The error rate measures unsuccessful requests as an indicator for
server misbehavior. This includes the REST API, and the
regular controller endpoints.
1. `gitlab_sli_rails_request_apdex_total`: This counter gets
incremented for every request that did not result in a response
with a `5xx` status code. It ensures slow failures are not
counted twice, because the request is already counted in the error SLI.
1. `gitlab_sli_rails_request_apdex_success_total`: This counter gets
incremented for every successful request that performed faster than
the [defined target duration depending on the endpoint's urgency](#adjusting-request-urgency).
1. `gitlab_sli_rails_request_error_total`: This counter gets
incremented for every request that resulted in a response
with a `5xx` status code.
1. `gitlab_sli_rails_request_total`: This counter gets
incremented for every request.
These counters are labeled with:
1. `endpoint_id`: The identifier of the Rails controller or the
Grape API endpoint.
1. `feature_category`: The feature category specified for that
controller or API endpoint.
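Conceptually, a single request feeds these four counters as in the sketch below. The `record_request` method and its parameters are hypothetical; the real instrumentation is wired into the Rails request lifecycle.
```ruby
# Hypothetical sketch of how one request relates to the four counters.
def record_request(duration:, status:, target_duration:, labels:)
  error = status >= 500

  # Error-rate SLI: every request counts, and 5xx responses count as errors.
  Gitlab::Metrics::Sli::ErrorRate[:rails_request].increment(labels: labels, error: error)

  # Apdex SLI: only non-5xx requests count, so slow failures are not counted twice.
  unless error
    Gitlab::Metrics::Sli::Apdex[:rails_request].increment(
      labels: labels,
      success: duration <= target_duration
    )
  end
end
```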
## Request Apdex SLO
These counters can be combined into a success ratio. The objective for
this ratio is defined in the service catalog per service. For this SLI to meet SLO,
the ratio recorded must be higher than:
- [Web: 0.998](https://gitlab.com/gitlab-com/runbooks/blob/master/metrics-catalog/services/web.jsonnet#L19)
- [API: 0.995](https://gitlab.com/gitlab-com/runbooks/blob/master/metrics-catalog/services/api.jsonnet#L19)
- [Git: 0.998](https://gitlab.com/gitlab-com/runbooks/blob/master/metrics-catalog/services/git.jsonnet#L22)
For example, for the web service, we want at least 99.8% of requests
to be faster than their target duration.
We use these targets for alerting and service monitoring. Set durations taking
these targets into account, so we don't cause alerts. The goal, however, is to
set the urgency to a target that satisfies our users.
Both successful measurements and unsuccessful ones affect the
error budget for stage groups.
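For example, with made-up counter values, the recorded ratio is computed like this:
```ruby
# Made-up values, for illustration only.
apdex_success_total = 99_800.0
apdex_total         = 100_000.0

apdex_success_total / apdex_total # => 0.998, exactly meeting the web target
```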
## Adjusting request urgency
Not all endpoints perform the same type of work, so it is possible to
define different urgency levels for different endpoints. An endpoint with a
lower urgency can have a longer request duration than endpoints with high urgency.
Long-running requests are more expensive for our infrastructure. While serving
one request, the thread remains occupied for the duration of that request. The thread
can handle nothing else. Due to Ruby's Global VM Lock, the thread might keep the
lock and stall other requests handled by the same Puma worker
process. The request is, in fact, a noisy neighbor for other requests
handled by the worker. We cap the upper bound for a target duration at 5 seconds
for this reason.
## Decreasing the urgency (setting a higher target duration)
You can decrease the urgency on an existing endpoint on
a case-by-case basis. Take the following into account:
1. Apdex is about perceived performance. If a user is actively waiting
for the result of a request, waiting 5 seconds might not be
acceptable. However, if the endpoint is used by an automation
requiring a lot of data, 5 seconds could be acceptable.
A product manager can help to identify how an endpoint is used.
1. The workload for some endpoints can sometimes differ greatly
depending on the parameters specified by the caller. The urgency
needs to accommodate those differences. In some cases, you could
define a separate [application SLI](_index.md#defining-a-new-sli)
for what the endpoint is doing.
When an endpoint turns into a no-op in certain cases, making it
very fast, we should ignore these fast requests when setting the
target. For example, if the `MergeRequests::DraftsController` is
hit for every merge request being viewed, but rarely renders
anything, then we should pick a target that
still accommodates the endpoint actually performing work.
1. Consider the dependent resources consumed by the endpoint. If the endpoint
loads a lot of data from Gitaly or the database, and this causes
unsatisfactory performance, consider optimizing the
way the data is loaded rather than increasing the target duration
by lowering the urgency.
In these cases, it might be appropriate to temporarily decrease
urgency to make the endpoint meet SLO, if this is bearable for the
infrastructure. In such cases, create a code comment linking to an issue.
If the endpoint consumes a lot of CPU time, we should also consider
this: these kinds of requests are the kind of noisy neighbors we
should try to keep as short as possible.
1. Traffic characteristics should also be taken into account. If the
traffic to the endpoint sometimes bursts, like CI traffic spinning up a
big batch of jobs hitting the same endpoint, then having these
endpoints take five seconds is unacceptable from an infrastructure point of
view. We cannot scale up the fleet fast enough to accommodate
the incoming slow requests alongside the regular traffic.
When lowering the urgency for an existing endpoint, involve a
[Scalability team member](https://handbook.gitlab.com/handbook/engineering/infrastructure/team/scalability/)
in the review. We can use request rates and durations available in the
logs to come up with a recommendation. You can pick a threshold
using the same process as for
[increasing urgency](#increasing-urgency-setting-a-lower-target-duration),
picking a duration that is higher than the SLO for the service.
We shouldn't set the longest durations on endpoints in the merge
requests that introduce them, because we don't yet have data to support
the decision.
## Increasing urgency (setting a lower target duration)
When increasing the urgency, we must make sure the endpoint
still meets SLO for the fleet that handles the request. You can use the
information in the logs to check:
1. Open [this table in Kibana](https://log.gprd.gitlab.net/goto/bbb6465c68eb83642269e64a467df3df).
1. The table loads information for the busiest endpoints by
default. To speed up the response, add both:
- A filter for `json.meta.caller_id.keyword`.
- The identifier you're interested in, for example:
```ruby
Projects::RawController#show
```
or:
```plaintext
GET /api/:version/projects/:id/snippets/:snippet_id/raw
```
1. Check the [appropriate percentile duration](#request-apdex-slo) for
the service handling the endpoint. The overall duration should
be lower than your intended target.
1. If the overall duration is below the intended target, check the peaks over time
in [this graph](https://log.gprd.gitlab.net/goto/9319c4a402461d204d13f3a4924a89fc)
in Kibana. Here, the percentile in question should not peak above
the target duration we want to set.
As decreasing a threshold too much could result in alerts for
Apdex degradation, also involve a Scalability team member in
the merge request.
## How to adjust the urgency
You can specify urgency similarly to how endpoints
[get a feature category](../feature_categorization/_index.md). Endpoints without a
specific target use the default urgency: a 1s target duration. These configurations
are available:
| Urgency | Duration in seconds | Notes |
|------------|---------------------|-----------------------------------------------|
| `:high` | [0.25s](https://gitlab.com/gitlab-org/gitlab/-/blob/2f7a38fe48934b78f04233c4d2c81cde88a06da7/lib/gitlab/endpoint_attributes/config.rb#L8) | |
| `:medium` | [0.5s](https://gitlab.com/gitlab-org/gitlab/-/blob/2f7a38fe48934b78f04233c4d2c81cde88a06da7/lib/gitlab/endpoint_attributes/config.rb#L9) | |
| `:default` | [1s](https://gitlab.com/gitlab-org/gitlab/-/blob/2f7a38fe48934b78f04233c4d2c81cde88a06da7/lib/gitlab/endpoint_attributes/config.rb#L10) | The default when nothing is specified. |
| `:low` | [5s](https://gitlab.com/gitlab-org/gitlab/-/blob/2f7a38fe48934b78f04233c4d2c81cde88a06da7/lib/gitlab/endpoint_attributes/config.rb#L11) | |
### Rails controller
An urgency can be specified for all actions in a controller:
```ruby
class Boards::ListsController < ApplicationController
urgency :high
end
```
To also specify the urgency for certain actions in a controller:
```ruby
class Boards::ListsController < ApplicationController
urgency :high, [:index, :show]
end
```
A custom RSpec matcher is available to check an endpoint's request urgency in controller specs:
```ruby
specify do
expect(get(:index, params: request_params)).to have_request_urgency(:medium)
end
```
### Grape endpoints
To specify the urgency for an entire API class:
```ruby
module API
class Issues < ::API::Base
urgency :low
end
end
```
To specify the urgency for certain actions in an API class:
```ruby
module API
class Issues < ::API::Base
urgency :medium, [
'/groups/:id/issues',
'/groups/:id/issues_statistics'
]
end
end
```
Or, we can specify the urgency per endpoint:
```ruby
get 'client/features', urgency: :low do
# endpoint logic
end
```
The custom RSpec matcher also works with Grape endpoint specs:
```ruby
specify do
expect(get(api('/avatar'), params: { email: 'public@example.com' })).to have_request_urgency(:medium)
end
```
{{< alert type="warning" >}}
We can't specify the urgency at the namespace level. The directive is ignored when doing so.
{{< /alert >}}
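For example, in this illustrative sketch (the route is made up), the namespace-level directive has no effect:
```ruby
# Illustrative only: `urgency` declared at the namespace level is ignored.
# Declare it on the class or on the individual route instead.
module API
  class Issues < ::API::Base
    namespace 'projects/:id' do
      urgency :low # has no effect here

      get 'issues_summary' do
        # endpoint logic
      end
    end
  end
end
```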
### Error budget attribution and ownership
This SLI is used for service level monitoring. It feeds into the
[error budget for stage groups](../stage_group_observability/_index.md#error-budget).
For more information, read the epic for
[defining custom SLIs and incorporating them into error budgets](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/525).
The endpoints for the SLI feed into a group's error budget based on the
[feature category declared on it](../feature_categorization/_index.md).
To know which endpoints are included for your group, you can see the
request rates on the
[group dashboard for your group](https://dashboards.gitlab.net/dashboards/f/stage-groups/stage-groups).
In the **Budget Attribution** row, the **Puma Apdex** log link shows you
how many requests are not meeting a 1s or 5s target.
For more information about the content of the dashboard, see
[Dashboards for stage groups](../stage_group_observability/_index.md). For more information
about our exploration of the error budget itself, see
[issue 1365](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1365).
---
stage: Platforms
group: Scalability
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: GitLab Application Service Level Indicators (SLIs)
---
It is possible to define [Service Level Indicators (SLIs)](https://en.wikipedia.org/wiki/Service_level_indicator)
directly in the Ruby codebase. This keeps the definition of operations
and their success close to the implementation and allows the people
building features to easily define how these features should be
monitored.
## Existing SLIs
1. [`rails_request`](rails_request.md)
1. `global_search_apdex`
1. `global_search_error_rate`
1. `global_search_indexing_apdex`
1. [`sidekiq_execution`](sidekiq_execution.md)
## Defining a new SLI
An SLI can be defined with the `Gitlab::Metrics::Sli::Apdex` or
`Gitlab::Metrics::Sli::ErrorRate` class. When you define an SLI, two
[Prometheus counters](https://prometheus.io/docs/concepts/metric_types/#counter)
are emitted from the Rails application. Both counters work in broadly the same way and contain a total operation count. `Apdex` uses a success rate to calculate a success ratio, and `ErrorRate` uses an error rate to calculate an error ratio.
The following metrics are defined:
- `Gitlab::Metrics::Sli::Apdex.new('foo')` defines:
- `gitlab_sli_foo_apdex_total` for the total number of measurements.
- `gitlab_sli_foo_apdex_success_total` for the number of successful
measurements.
- `Gitlab::Metrics::Sli::ErrorRate.new('foo')` defines:
- `gitlab_sli_foo_total` for the total number of measurements.
- `gitlab_sli_foo_error_total` for the number of error
measurements. Because this metric is an error rate,
errors are divided by the total number.
As shown here, the two SLIs can share a base name (`foo` in this example). We
recommend this when they refer to the same operation.
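For example, a minimal sketch defining both SLIs over the same operation:
```ruby
# Both SLIs share the base name `foo`, so they describe the same operation.
Gitlab::Metrics::Sli::Apdex.new('foo')     # emits gitlab_sli_foo_apdex_total and gitlab_sli_foo_apdex_success_total
Gitlab::Metrics::Sli::ErrorRate.new('foo') # emits gitlab_sli_foo_total and gitlab_sli_foo_error_total
```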
You should use `Apdex` to measure the performance of successful operations. You don't have to measure the performance of a failing request because that performance should be tracked with `ErrorRate`. For example, you can measure whether a request is performing within a specified latency threshold.
You should use `ErrorRate` to measure the rate of unsuccessful operations. For example, you can measure whether a failed request returns an HTTP status greater than or equal to `500`.
Before the first scrape, it is important to have
[initialized the SLI with all possible label combinations](https://prometheus.io/docs/practices/instrumentation/#avoid-missing-metrics).
This avoids confusing results when using these counters in calculations.
To initialize an SLI, use the `.initialize_sli` class method, for
example:
```ruby
Gitlab::Metrics::Sli::Apdex.initialize_sli(:received_email, [
{
feature_category: :team_planning,
email_type: :create_issue
},
{
feature_category: :service_desk,
email_type: :service_desk
},
{
feature_category: :code_review_workflow,
email_type: :create_merge_request
}
])
```
Metrics must be initialized before they get scraped for the first time.
This currently happens during the `on_master_start` [lifecycle event](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/cluster/lifecycle_events.rb).
Since this delays application readiness until metrics initialization returns, make sure the overhead
this adds is understood and acceptable.
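A sketch of what that initialization wiring might look like, assuming the block-style lifecycle hook; the registration point and the second label value are assumptions:
```ruby
# Assumed wiring: initialize all label combinations when the master process
# starts, so the counters exist before Prometheus scrapes them the first time.
Gitlab::Cluster::LifecycleEvents.on_master_start do
  Gitlab::Metrics::Sli::ErrorRate.initialize_sli(:merge, [
    { merge_type: :fast_forward },
    { merge_type: :merge_commit } # hypothetical label value
  ])
end
```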
## Tracking operations for an SLI
Tracking an operation in the newly defined SLI can be done like this:
```ruby
Gitlab::Metrics::Sli::Apdex[:received_email].increment(
labels: {
feature_category: :service_desk,
email_type: :service_desk
},
success: issue_created?
)
```
Calling `#increment` on this SLI increments the total Prometheus counter:
```promql
gitlab_sli:received_email_apdex:total{ feature_category='service_desk', email_type='service_desk' }
```
If the `success:` argument passed is truthy, then the success counter is also
incremented:
```promql
gitlab_sli:received_email_apdex:success_total{ feature_category='service_desk', email_type='service_desk' }
```
For error rate SLIs, the equivalent argument is called `error:`:
```ruby
Gitlab::Metrics::Sli::ErrorRate[:merge].increment(
labels: {
merge_type: :fast_forward
},
error: !merge_success?
)
```
## Using the SLI in service monitoring and alerts
When the application is emitting metrics for a new SLI, they need
to be consumed in the [metrics catalog](https://gitlab.com/gitlab-com/runbooks/-/tree/master/metrics-catalog)
to result in alerts and to be included in the error budget for stage
groups and GitLab.com's overall availability.
Start by adding the new SLI to the
[Application-SLI library](https://gitlab.com/gitlab-com/runbooks/-/blob/d109886dfd5170793eeb8de3d69aafd4a9da78f6/metrics-catalog/gitlab-slis/library.libsonnet#L4).
After that, add the following information:
- `name`: the name of the SLI as defined in code. For example
`received_email`.
- `significantLabels`: an array of Prometheus labels that belong to the
metrics. For example: `["email_type"]`. If the significant labels
for the SLI include `feature_category`, the metrics will also
feed into the
[error budgets for stage groups](../stage_group_observability/_index.md#error-budget).
- `featureCategory`: if the SLI applies to a single feature category,
you can specify it statically through this field to feed the SLI
into the error budgets for stage groups.
- `description`: a Markdown string explaining the SLI. It will
be shown on dashboards and alerts.
- `kind`: the kind of indicator. For example `sliDefinition.apdexKind`.
When done, run `make generate` to generate recording rules for
the new SLI. This command creates recordings for all services
emitting these metrics aggregated over `significantLabels`.
Open up a merge request with these changes and request review from a Scalability
team member.
When these changes are merged and the aggregations in
[Mimir](https://dashboards.gitlab.net/explore?schemaVersion=1&panes=%7B%22m95%22%3A%7B%22datasource%22%3A%22e58c2f51-20f8-4f4b-ad48-2968782ca7d6%22%2C%22queries%22%3A%5B%7B%22refId%22%3A%22A%22%2C%22expr%22%3A%22%22%2C%22range%22%3Atrue%2C%22instant%22%3Atrue%2C%22datasource%22%3A%7B%22type%22%3A%22prometheus%22%2C%22uid%22%3A%22e58c2f51-20f8-4f4b-ad48-2968782ca7d6%22%7D%7D%5D%2C%22range%22%3A%7B%22from%22%3A%22now-6h%22%2C%22to%22%3A%22now%22%7D%7D%7D&orgId=1) are recorded, query Mimir to see
the success ratio of the new aggregated metrics. For example:
```promql
sum by (environment, stage, type)(application_sli_aggregation:rails_request:apdex:success:rate_1h)
/
sum by (environment, stage, type)(application_sli_aggregation:rails_request:apdex:weight:score_1h)
```
This shows the success ratio, which can guide you to set an
appropriate SLO when adding this SLI to a service.
Then, add the SLI to the appropriate service
catalog file. For example, the [`web` service](https://gitlab.com/gitlab-com/runbooks/-/blob/2b7be37a006c236bd684a4e6a1fbf4c66158292a/metrics-catalog/services/web.jsonnet#L198):
```jsonnet
rails_requests:
sliLibrary.get('rails_request_apdex')
.generateServiceLevelIndicator({ job: 'gitlab-rails' })
```
To pass extra selectors and override properties of the SLI, see the
[service monitoring documentation](https://gitlab.com/gitlab-com/runbooks/blob/master/metrics-catalog/README.md).
SLIs with statically defined feature categories can already receive
alerts about the SLI in specified Slack channels. For more information, read the
[alert routing documentation](https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/uncategorized/alert-routing.md).
In [this project](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/614),
we are extending this so that alerts for SLIs with a `feature_category`
label in the source metrics can also be routed.
For any questions, don't hesitate to create an issue in
[the Scalability issue tracker](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues)
or come find us in
[#g_scalability](https://gitlab.slack.com/archives/CMMF8TKR9) on Slack.
# Sidekiq execution SLIs (service level indicators)
{{< history >}}
- [Introduced](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/700) in GitLab 16.0. This version of the Sidekiq execution SLIs replaces the old version; you can now drill down by worker in the [Application SLI Violations dashboard](https://dashboards.gitlab.net/d/general-application-sli-violations/general-application-sli-violations?orgId=1&var-PROMETHEUS_DS=Global&var-environment=gprd&var-stage=main&var-product_stage=All&var-stage_group=All&var-component=sidekiq_execution) for stage groups.
{{< /history >}}
{{< alert type="note" >}}
This SLI is used for service monitoring, but not for [error budgets for stage groups](../stage_group_observability/_index.md#error-budget)
by default.
{{< /alert >}}
The Sidekiq execution Apdex measures the duration of successful job completions as an indicator of
application performance.
The error rate measures unsuccessful job completions, where an exception occurs, as an indicator of
server misbehavior.
- `gitlab_sli_sidekiq_execution_apdex_total`: This counter gets
incremented for every successful job execution that does not result in an exception. This ensures failed jobs are not
counted twice, because the job is already counted in the error SLI.
- `gitlab_sli_sidekiq_execution_apdex_success_total`: This counter gets
incremented for every successful job that performed faster than
the [defined target duration depending on the job urgency](../sidekiq/worker_attributes.md#job-urgency).
- `gitlab_sli_sidekiq_execution_error_total`: This counter gets
incremented for every job that encountered an exception.
- `gitlab_sli_sidekiq_execution_total`: This counter gets
incremented for every job execution.
These counters are labeled with:
- `worker`: The identifier of the worker.
- `feature_category`: The feature category specified for that worker.
- `urgency`: The urgency attribute specified for that worker.
- `external_dependencies`: The boolean value `yes` or `no` based on the [external dependencies attribute](../sidekiq/worker_attributes.md#jobs-with-external-dependencies).
- `queue`: The queue in which the job is running.
For more information about these SLIs, see the [Sidekiq SLIs documentation](https://gitlab.com/gitlab-com/runbooks/-/blob/master/docs/sidekiq/sidekiq-slis.md) in runbooks.
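To illustrate how these counters and labels combine, a per-worker apdex ratio could be sketched as follows (illustrative only; the production recording rules live in the runbooks project):

```promql
# Share of recent job executions that met their duration target, per worker.
sum by (worker) (rate(gitlab_sli_sidekiq_execution_apdex_success_total[5m]))
/
sum by (worker) (rate(gitlab_sli_sidekiq_execution_apdex_total[5m]))
```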
## Adjusting job urgency
Not all workers perform the same type of work, so it is possible to
define different urgency levels for different jobs. A job with a
lower urgency can have a longer execution duration than jobs with high urgency.
For more information on the execution latency requirement and how to set a job's urgency, see the [Sidekiq worker attributes page](../sidekiq/worker_attributes.md#job-urgency).
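For example, a worker might declare its urgency like this (a minimal sketch; `SomeNotificationWorker` is hypothetical, while `urgency` is the worker attribute described on that page):

```ruby
class SomeNotificationWorker
  include ApplicationWorker

  # A :high urgency job has a stricter execution duration target in the
  # apdex calculation than a :low urgency job.
  urgency :high

  def perform(user_id)
    # ...
  end
end
```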
### Error budget attribution and ownership
This SLI is used for service level monitoring. It feeds into the
[error budget for stage groups](../stage_group_observability/_index.md#error-budget).
The workers for the SLI feed into a group's error budget based on the
[feature category declared on it](../feature_categorization/_index.md).
To know which workers are included for your group, see the
Sidekiq Completion Rate panel on the
[group dashboard for your group](https://dashboards.gitlab.net/dashboards/f/stage-groups/stage-groups).
In the **Budget Attribution** row, the **Sidekiq Execution Apdex** log link shows you
how many jobs are not meeting the 10-second or 300-second target.
## Jobs with external dependencies
Jobs with [external dependencies](../sidekiq/worker_attributes.md#jobs-with-external-dependencies) are excluded from
the Apdex and error ratio calculation.
# Labels
To allow for asynchronous issue handling, we use [milestones](https://gitlab.com/groups/gitlab-org/-/milestones)
and [labels](https://gitlab.com/gitlab-org/gitlab/-/labels). Leads and product managers handle most of the
scheduling into milestones. Labeling is a task for everyone. (For some projects, labels can be set only by GitLab team members and not by community contributors).
Most issues will have labels for at least one of the following:
- Type. For example: `~"type::feature"`, `~"type::bug"`, or `~"type::maintenance"`.
- Stage. For example: `~"devops::plan"` or `~"devops::create"`.
- Group. For example: `~"group::source code"`, `~"group::knowledge"`, or `~"group::editor"`.
- Category. For example: `~"Category:Code Analytics"`, `~"Category:DevOps Reports"`, or `~"Category:Templates"`.
- Feature. For example: `~wiki`, `~ldap`, `~api`, `~issues`, or `~"merge requests"`.
- Department: `~UX`, `~Quality`
- Team: `~"Technical Writing"`, `~Delivery`
- Specialization: `~frontend`, `~backend`, `~documentation`
- Release Scoping: `~Deliverable`, `~Stretch`, `~"Next Patch Release"`
- Priority: `~"priority::1"`, `~"priority::2"`, `~"priority::3"`, `~"priority::4"`
- Severity: `~"severity::1"`, `~"severity::2"`, `~"severity::3"`, `~"severity::4"`
Add `~"breaking change"` label if the issue can be considered as a [breaking change](../deprecation_guidelines/_index.md).
Add `~security` label if the issue is related to application security.
All labels, their meaning and priority are defined on the
[labels page](https://gitlab.com/gitlab-org/gitlab/-/labels).
If you come across an issue that has none of these, and you're allowed to set
labels, you can always add the type, stage, group, and often the category/feature labels.
## Type labels
Type labels are very important. They define what kind of issue this is. Every
issue should have one and only one.
The SSOT for type and subtype labels is [available in the handbook](https://handbook.gitlab.com/handbook/product/groups/product-analysis/engineering/metrics/#work-type-classification).
A number of type labels have a priority assigned to them, which automatically
makes them float to the top, depending on their importance.
Type labels are always lowercase, and can have any color, besides blue (which is
already reserved for category labels).
The descriptions on the [labels page](https://gitlab.com/groups/gitlab-org/-/labels)
explain what falls under each type label.
The GitLab handbook documents [when something is a bug](https://handbook.gitlab.com/handbook/product/product-processes/#bug-issues) and [when it is a feature request](https://handbook.gitlab.com/handbook/product/product-processes/#feature-issues).
## Stage labels
Stage labels specify which [stage](https://handbook.gitlab.com/handbook/product/categories/#hierarchy) the issue belongs to.
### Naming and color convention
Stage labels respect the `devops::<stage_key>` naming convention.
`<stage_key>` is the stage key as it is in the single source of truth for stages at
<https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/stages.yml>
with `_` replaced with a space.
For instance, the "Manage" stage is represented by the `~"devops::manage"` label in
the `gitlab-org` group since its key under `stages` is `manage`.
The current stage labels can be found by [searching the labels list for `devops::`](https://gitlab.com/groups/gitlab-org/-/labels?search=devops::).
These labels are [scoped labels](../../user/project/labels.md#scoped-labels)
and thus are mutually exclusive.
The Stage labels are used to generate the [direction pages](https://about.gitlab.com/direction/) automatically.
## Group labels
Group labels specify which [groups](https://handbook.gitlab.com/handbook/company/structure/#product-groups) the issue belongs to.
It's highly recommended to add a group label, as it's used by our triage
automation to
[infer the correct stage label](https://handbook.gitlab.com/handbook/engineering/infrastructure/engineering-productivity/triage-operations/#auto-labelling-of-issues-and-merge-requests).
### Naming and color convention
Group labels respect the `group::<group_key>` naming convention and
their color is `#A8D695`.
`<group_key>` is the group key as it is in the single source of truth for groups at
<https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/stages.yml>,
with `_` replaced with a space.
For instance, the "Pipeline Execution" group is represented by the
`~"group::pipeline execution"` label in the `gitlab-org` group since its key
under `stages.manage.groups` is `pipeline_execution`.
The current group labels can be found by [searching the labels list for `group::`](https://gitlab.com/groups/gitlab-org/-/labels?search=group::).
These labels are [scoped labels](../../user/project/labels.md#scoped-labels)
and thus are mutually exclusive.
You can find the groups listed in the [Product Stages, Groups, and Categories](https://handbook.gitlab.com/handbook/product/categories/) page.
We use the term group to map product requirements down from our product stages.
Because a team needs some way to collect the work its members are planning to be assigned to, we use the `~group::` labels to do so.
## Category labels
From the handbook's
[Product stages, groups, and categories](https://handbook.gitlab.com/handbook/product/categories/#hierarchy)
page:
> Categories are high-level capabilities that may be a standalone product at
another company, such as Portfolio Management, for example.
It's highly recommended to add a category label, as it's used by our triage
automation to
[infer the correct group and stage labels](https://handbook.gitlab.com/handbook/engineering/infrastructure/engineering-productivity/triage-operations/#auto-labelling-of-issues).
If you are an expert in a particular area, it makes it easier to find issues to
work on. You can also subscribe to those labels to receive an email each time an
issue is labeled with a category label corresponding to your expertise.
### Naming and color convention
Category labels respect the `Category:<Category Name>` naming convention and
their color is `#428BCA`.
`<Category Name>` is the category name as it is in the single source of truth for categories at
<https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/categories.yml>.
For instance, the "DevOps Reports" category is represented by the
`~"Category:DevOps Reports"` label in the `gitlab-org` group since its
`devops_reports.name` value is "DevOps Reports".
If a category's label doesn't respect this naming convention, it should be specified
with [the `label` attribute](https://handbook.gitlab.com/handbook/marketing/digital-experience/website/#category-attributes)
in <https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/categories.yml>.
## Feature labels
From the handbook's
[Product stages, groups, and categories](https://handbook.gitlab.com/handbook/product/categories/#hierarchy)
page:
> Features: Small, discrete functionalities, for example Issue weights. Some common
features are listed within parentheses to facilitate finding responsible PMs by keyword.
It's highly recommended to add a feature label if no category label applies, as
it's used by our triage automation to
[infer the correct group and stage labels](https://handbook.gitlab.com/handbook/engineering/infrastructure/engineering-productivity/triage-operations/#auto-labelling-of-issues).
If you are an expert in a particular area, it makes it easier to find issues to
work on. You can also subscribe to those labels to receive an email each time an
issue is labeled with a feature label corresponding to your expertise.
Examples of feature labels are `~wiki`, `~ldap`, `~api`, `~issues`, and `~"merge requests"`.
### Naming and color convention
Feature labels are all-lowercase.
## Workflow labels
Issues use the following workflow labels to specify the current issue status:
- `~"workflow::awaiting security release"`
- `~"workflow::blocked"`
- `~"workflow::complete"`
- `~"workflow::design"`
- `~"workflow::feature-flagged"`
- `~"workflow::in dev"`
- `~"workflow::in review"`
- `~"workflow::planning breakdown"`
- `~"workflow::problem validation"`
- `~"workflow::production"`
- `~"workflow::ready for design"`
- `~"workflow::ready for development"`
- `~"workflow::refinement"`
- `~"workflow::scheduling"`
- `~"workflow::solution validation"`
- `~"workflow::start"`
- `~"workflow::validation backlog"`
- `~"workflow::verification"`
## Facet labels
To track additional information or context about created issues, developers may
add _facet labels_. Facet labels are also sometimes used for issue prioritization
or for measurements (such as time to close). An example of a facet label is the
`~"customer"` label, which indicates customer interest.
## Department labels
The current department labels are:
- `~"UX"`
- `~"Quality"`
- `~"infrastructure"`
- `~"security"`
## Team labels
**Important**: Most of the historical team labels (like Manage or Plan) are
now deprecated in favor of [Group labels](#group-labels) and [Stage labels](#stage-labels).
Team labels specify what team is responsible for this issue.
Assigning a team label makes sure issues get the attention of the appropriate
people.
The current team labels are:
- `~"Delivery"`
- `~"Technical Writing"`
- `~"Engineering Productivity"`
- `~"Contributor Success"`
### Naming and color convention
Team labels are always capitalized so that they show up as the first label for
any issue.
## Specialization labels
These labels narrow the [specialization](https://handbook.gitlab.com/handbook/company/structure/#specialist) on a unit of work.
- `~"frontend"`
- `~"backend"`
- `~"documentation"`
## Release scoping labels
Release Scoping labels help us clearly communicate expectations of the work for the
release. There are three levels of Release Scoping labels:
- `~"Deliverable"`: Issues that are expected to be delivered in the current
milestone.
- `~"Stretch"`: Issues that are a stretch goal for delivering in the current
milestone. If these issues are not done in the current release, they will be
strongly considered for the next release.
- `~"Next Patch Release"`: Issues to put in the next patch release. Work on these
first, and follow the [patch release runbook](https://gitlab.com/gitlab-org/release/docs/-/blob/master/general/patch/engineers.md) to backport the bug fix to the current version.
Each issue scheduled for the current milestone should be labeled `~"Deliverable"`
or `~"Stretch"`. Any open issue for a previous milestone should be labeled
`~"Next Patch Release"`, or otherwise rescheduled to a different milestone.
## Priority labels
We have the following priority labels:
- `~"priority::1"`
- `~"priority::2"`
- `~"priority::3"`
- `~"priority::4"`
Refer to the issue triage [priority label](https://handbook.gitlab.com/handbook/engineering/infrastructure/engineering-productivity/issue-triage/#priority) section in our handbook to see how it's used.
## Severity labels
We have the following severity labels:
- `~"severity::1"`
- `~"severity::2"`
- `~"severity::3"`
- `~"severity::4"`
Refer to the issue triage [severity label](https://handbook.gitlab.com/handbook/engineering/infrastructure/engineering-productivity/issue-triage/#severity) section in our handbook to see how it's used.
## Label for community contributors
There are many issues that have a clear solution with uncontroversial benefit to GitLab users.
However, GitLab might not have the capacity for all these proposals in the current roadmap.
These issues are labeled `~"Seeking community contributions"` because we welcome merge requests to resolve them.
Community contributors can submit merge requests for any issue they want, but
the `~"Seeking community contributions"` label has a special meaning. It points to
changes that:
1. We already agreed on,
1. Are well-defined,
1. Are likely to get accepted by a maintainer.
We want to avoid a situation where a contributor picks a
`~"Seeking community contributions"` issue and then their merge request gets closed
because we realize that it does not fit our vision, or we want to solve it in a
different way.
We manually add the `~"Seeking community contributions"` label to issues
that fit the criteria described above.
We do not automatically add this label, because it requires human evaluation.
We recommend that people who have never contributed to any open source project
look for issues labeled `~"Seeking community contributions"` with a
[weight of 1](https://gitlab.com/groups/gitlab-org/-/issues?sort=created_date&state=opened&label_name[]=Seeking+community+contributions&assignee_id=None&weight=1) or the `~"quick win"`
[label](https://gitlab.com/gitlab-org/gitlab/-/issues?scope=all&state=opened&label_name[]=quick%20win&assignee_id=None)
attached.
More experienced contributors are very welcome to tackle
[any of them](https://gitlab.com/groups/gitlab-org/-/issues?sort=created_date&state=opened&label_name[]=Seeking+community+contributions&assignee_id=None).
For more complex features that have a weight of 2 or more and a clear scope, we recommend looking at issues
with the [label `~"Community Challenge"`](https://gitlab.com/gitlab-org/gitlab/-/issues?sort=created_date&state=opened&label_name[]=Seeking+community+contributions&label_name[]=Community+challenge).
If your MR for a `~"Community Challenge"` issue gets merged, you also have a chance to win custom
GitLab merchandise.
If you've decided that you would like to work on an issue, @-mention
the [appropriate product manager](https://handbook.gitlab.com/handbook/product/how-to-engage/)
as soon as possible. The product manager will then pull in appropriate GitLab team
members to further discuss scope, design, and technical considerations. This will
ensure that your contribution is aligned with the GitLab product and minimize
any rework and delay in getting it merged into main.
GitLab team members who apply the `~"Seeking community contributions"` label to an issue
should update the issue description with a responsible product manager, inviting
any potential community contributor to @-mention them as described above.
## Stewardship label
For issues related to the open source stewardship of GitLab,
there is the `~"stewardship"` label.
This label is to be used for issues in which the stewardship of GitLab
is a topic of discussion. For instance, if GitLab Inc. is planning to add
features from GitLab EE to GitLab CE, related issues would be labeled with
`~"stewardship"`.
A recent example of this was the issue for
[bringing the time tracking API to GitLab CE](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/25517#note_20019084).
## Technical debt and Deferred UX
To track things that can be improved in the GitLab codebase,
we use the `~"technical debt"` label in the [GitLab issue tracker](https://gitlab.com/gitlab-org/gitlab/-/issues).
We use the `~"Deferred UX"` label when we choose to deviate from the MVC, in a way that harms the user experience.
These labels should be added to issues that describe things that can be improved,
shortcuts that have been taken, features that need additional attention, and all
other things that have been left behind due to high velocity of development.
For example, code that needs refactoring should use the `~"technical debt"` label;
something that didn't ship according to our Design System guidelines should
use the `~"Deferred UX"` label.
Everyone can create an issue, though you may need to ask for a specific
label to be added if you do not have permission to do it yourself. Additional labels
can be combined with these labels, to make it easier to schedule
the improvements for a release.
Issues tagged with these labels have the same priority as issues
that describe a new feature to be introduced in GitLab, and should be scheduled
for a release by the appropriate person.
Make sure to mention the merge request that the `~"technical debt"` issue or
`~"Deferred UX"` issue is associated with in the description of the issue.
# Sidekiq Compatibility across Updates
The arguments for a Sidekiq job are stored in a queue while it is
scheduled for execution. During an online update, this could lead to
several possible situations:
1. An older version of the application publishes a job, which is executed by an
upgraded Sidekiq node.
1. A job is queued before an upgrade, but executed after an upgrade.
1. A job is queued by a node running the newer version of the application, but
executed on a node running an older version of the application.
## Adding new workers
On GitLab.com, we
[do not currently have a Sidekiq deployment in the canary stage](https://gitlab.com/gitlab-org/gitlab/-/issues/19239).
This means that a new worker that can be scheduled from an HTTP endpoint may
be scheduled from canary but not run on Sidekiq until the full
production deployment is complete. This can be several hours later than
scheduling the job. For some workers, this will not be a problem. For
others, particularly [latency-sensitive jobs](worker_attributes.md#latency-sensitive-jobs),
this will result in a poor user experience.
This only applies to new worker classes when they are first introduced.
As we recommend [using feature flags](../feature_flags/_index.md) as a general
development process, it's best to control the entire change (including
scheduling of the new Sidekiq worker) with a feature flag.
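As a minimal sketch, such a guard might look like this, assuming a hypothetical `schedule_example_worker` feature flag and the `ExampleWorker` class used elsewhere on this page:

```ruby
# Only enqueue the new worker while the flag is enabled, so scheduling
# can wait until the whole Sidekiq fleet runs the new code.
ExampleWorker.perform_async(project.id) if Feature.enabled?(:schedule_example_worker, project)
```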
## Changing the arguments for a worker
Jobs need to be backward and forward compatible between consecutive versions
of the application. Adding or removing an argument may cause problems.
During any deployment, there's a period of time where some application nodes have been updated while others haven't.
If an updated node queues a job with new arguments, but an older Sidekiq node processes it, the job will fail due to an argument mismatch.
For GitLab.com, this can occur if there are multiple deployments in the same milestone. Most self-managed deployments update all nodes sequentially in a single deployment cycle each release, so we need to spread the changes across multiple releases.
### Deprecate and remove an argument
**Before you remove arguments from the `perform_async` and `perform` methods**, deprecate them. The
following example deprecates and then removes `arg2`:
1. Provide a default value (usually `nil`) and use a comment to mark the
argument as deprecated in the coming minor release. (Release M)
```ruby
class ExampleWorker
# Keep arg2 parameter for backwards compatibility.
def perform(object_id, arg1, arg2 = nil)
# ...
end
end
```
1. One minor release later, stop using the argument in `perform_async`. (Release M+1)
```ruby
ExampleWorker.perform_async(object_id, arg1)
```
1. At the next major release, remove the value from the worker class. (Next major release)
```ruby
class ExampleWorker
def perform(object_id, arg1)
# ...
end
end
```
### Add an argument
There are two options for safely adding new arguments to Sidekiq workers:
- Set up a [multi-step release](#multi-step-release) in which the new argument is first added to the worker. Consider using a [parameter hash](#parameter-hash) for future flexibility.
- If a worker already uses a [parameter hash](#parameter-hash) for additional arguments, pass the new argument in the hash. Workers that don't use a parameter hash yet need to go through the multi-step release to add it first.
#### Multi-step release
This approach requires multiple releases.
1. Add the argument to the worker with a default value (Release M).
```ruby
class ExampleWorker
def perform(object_id, new_arg = nil)
# ...
end
end
```
1. Add the new argument to all the invocations of the worker (Release M+1).
```ruby
ExampleWorker.perform_async(object_id, new_arg)
```
1. Remove the default value (Release M+2).
```ruby
class ExampleWorker
def perform(object_id, new_arg)
# ...
end
end
```
#### Parameter hash
This approach doesn't require multiple releases if an existing worker already
uses a parameter hash.
1. Use a parameter hash in the worker to allow future flexibility.
```ruby
class ExampleWorker
def perform(object_id, params = {})
# ...
end
end
```
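With the parameter hash in place, callers can pass a new argument without another signature change. For example (a sketch; Sidekiq serializes arguments as JSON, so string keys survive the round trip while symbols do not):

```ruby
# Older Sidekiq nodes simply ignore hash keys they do not read yet.
ExampleWorker.perform_async(object_id, { 'new_arg' => new_arg })
```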
## Removing worker classes
To remove a worker class, follow these steps over three minor releases:
### In the minor release M
1. Remove any code that enqueues the jobs.
For example, if there is a UI component or an API endpoint that a user can interact with that results in the worker getting enqueued, make sure those surface areas are either removed or updated so that the worker is no longer enqueued.
This ensures that no new jobs for the worker class are being enqueued.
1. Ensure both the frontend and backend code no longer relies on any of the work that used to be done by the worker.
1. In the relevant worker classes, replace the contents of the `perform` method with a no-op, while keeping any arguments intact.
For example, if you're working with the following `ExampleWorker`:
```ruby
class ExampleWorker
def perform(object_id)
SomeService.run!(object_id)
end
end
```
Implementing the no-op might look like this:
```ruby
class ExampleWorker
def perform(object_id); end
end
```
By implementing this no-op, you can avoid unnecessary cycles once any deprecated jobs that are still enqueued eventually get processed.
### In the M+1 release
Add a migration (not a post-deployment migration) that uses `sidekiq_remove_jobs`:
```ruby
class RemoveMyDeprecatedWorkersJobInstances < Gitlab::Database::Migration[2.1]
# Always use `disable_ddl_transaction!` while using the `sidekiq_remove_jobs` method,
# as we had multiple production incidents due to `idle-in-transaction` timeout.
disable_ddl_transaction!
DEPRECATED_JOB_CLASSES = %w[
MyDeprecatedWorkerOne
MyDeprecatedWorkerTwo
]
def up
Gitlab::SidekiqSharding::Validator.allow_unrouted_sidekiq_calls do
# If the job has been scheduled via `sidekiq-cron`, we must also remove
# it from the scheduled worker set using the key used to define the cron
# schedule in config/initializers/1_settings.rb.
job_to_remove = Sidekiq::Cron::Job.find('my_deprecated_worker')
# The job may be removed entirely:
job_to_remove.destroy if job_to_remove
# The job may be disabled:
job_to_remove.disable! if job_to_remove
end
# Removes scheduled instances from Sidekiq queues
sidekiq_remove_jobs(job_klasses: DEPRECATED_JOB_CLASSES)
end
def down
# This migration removes any instances of deprecated workers and cannot be undone.
end
end
```
### In the M+2 release
Delete the worker class file and follow the guidance in our [Sidekiq queues documentation](_index.md#sidekiq-queues) around running Rake tasks to regenerate/update related files.
## Renaming queues
For the same reasons that removing workers is dangerous, care should be taken
when renaming queues.
When renaming queues, use the `sidekiq_queue_migrate` helper migration method
in a **post-deployment migration**:
```ruby
class MigrateTheRenamedSidekiqQueue < Gitlab::Database::Migration[2.1]
  restrict_gitlab_migration gitlab_schema: :gitlab_main
  disable_ddl_transaction!

  def up
    sidekiq_queue_migrate 'old_queue_name', to: 'new_queue_name'
  end

  def down
    sidekiq_queue_migrate 'new_queue_name', to: 'old_queue_name'
  end
end
```
You must rename the queue in a post-deployment migration, not in a standard
migration. Otherwise, the migration runs too early, before all the workers that
schedule these jobs have stopped running. See also [other examples](../database/post_deployment_migrations.md#use-cases).
## Renaming worker classes
We should treat this similarly to adding a new worker. That means we only start scheduling the newly-named worker after the Sidekiq deployment finishes.
To ensure backward and forward compatibility between consecutive versions
of the application, follow these steps over three minor releases:
1. Create the newly named worker, and have the old worker call the new worker's `#perform` method, as sketched after this list. Introduce a feature flag to control when we start scheduling the new worker. (Release M)
Any old worker jobs that are still in the queue delegate to the new worker. Once this version is deployed, it no longer matters which version of the job is scheduled or which Sidekiq processes it: an old Sidekiq uses the old worker's full implementation, while a new Sidekiq delegates to the new worker.
1. Enable the feature flag for GitLab.com, and after that prepare an MR to enable it by default. (Release M+1)
1. Remove the old worker class and the feature flag. (Release M+2)
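A minimal sketch of step 1 might look like the following; the worker names and the feature flag are hypothetical:
```ruby
class NewlyNamedWorker
  include ApplicationWorker

  def perform(object_id)
    # ... the full implementation now lives here ...
  end
end

class OldWorker
  include ApplicationWorker

  # Jobs still in the queue delegate to the new implementation.
  def perform(object_id)
    NewlyNamedWorker.new.perform(object_id)
  end
end

# At the scheduling site:
if Feature.enabled?(:schedule_newly_named_worker)
  NewlyNamedWorker.perform_async(object_id)
else
  OldWorker.perform_async(object_id)
end
```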
# Sidekiq limited capacity worker
{{< alert type="note" >}}
The following documentation for limited capacity worker relates to a specific
type of worker that usually does not take arguments but instead gets work from
a custom queue (for example, a PostgreSQL backlog of work). It cannot be used for
throttling normal Sidekiq workers. To restrict the concurrency of a normal
Sidekiq worker you can use a [concurrency limit](worker_attributes.md#concurrency-limit).
{{< /alert >}}
You can limit the number of concurrently running jobs for a worker class
by using the `LimitedCapacity::Worker` concern.
The worker must implement three methods:
- `perform_work`: The concern implements the usual `perform` method and calls
`perform_work` if there's any available capacity.
- `remaining_work_count`: Number of jobs that have work to perform.
- `max_running_jobs`: Maximum number of jobs allowed to run concurrently.
```ruby
class MyDummyWorker
  include ApplicationWorker
  include LimitedCapacity::Worker

  def perform_work(*args)
  end

  def remaining_work_count(*args)
    5
  end

  def max_running_jobs
    25
  end
end
```
To queue this worker, use
`MyDummyWorker.perform_with_capacity(*args)`. The `*args` passed to this worker
are passed on to the `perform_work` method. Because this job throttles
and re-enqueues itself, you are expected to always provide the same
`*args` in every usage. In practice, this type of worker is often not
used with arguments and must instead consume a workload stored
elsewhere (for example, in PostgreSQL). This design also means it is unsuitable to
take a normal Sidekiq workload with arguments and turn it into a
`LimitedCapacity::Worker`. Instead, you might need to
re-architect the work so that its queue is stored elsewhere.
A common use case for this kind of worker is one that runs periodically,
consuming a separate queue of work to be done (for example, from PostgreSQL). In that case,
you need an additional cron worker to start the worker periodically, for
example with the following scheduler:
```ruby
class ScheduleMyDummyCronWorker
  include ApplicationWorker
  include CronjobQueue

  def perform
    MyDummyWorker.perform_with_capacity
  end
end
```
## How many jobs are running?
The worker runs `max_running_jobs` jobs at almost all times.
The cron worker checks the remaining capacity on each execution and
schedules at most `max_running_jobs` jobs. On completion, those jobs
re-enqueue themselves immediately, but they do not on failure. The cron worker is in
charge of replacing those failed jobs.
## Handling errors and idempotence
This concern disables Sidekiq retries, logs the errors, and sends the job to the
dead queue. This is done to have only one source that produces jobs and because
the retry would occupy a slot with a job to perform in the distant future.
We let the cron worker enqueue new jobs; this can be seen as our retry and
back-off mechanism, because the job might fail again if executed immediately.
This means that for every failed job, we run at a lower capacity
until the cron worker fills the capacity again. If it is important for the
worker not to get a backlog, exceptions must be handled in `#perform_work` and
the job should not raise.
The jobs are deduplicated using the `:none` strategy, but the worker is not
marked as `idempotent!`.
## Metrics
This concern exposes three Prometheus metrics of gauge type with the worker class
name as label:
- `limited_capacity_worker_running_jobs`
- `limited_capacity_worker_max_running_jobs`
- `limited_capacity_worker_remaining_work_count`
# Sidekiq worker attributes
Worker classes can define certain attributes to control their behavior and add metadata.
Child classes inheriting from other workers also inherit these attributes, so you only
have to redefine them if you want to override their values.
## Job urgency
Jobs can have an `urgency` attribute set, which can be `:high`,
`:low`, or `:throttled`. These have the following targets:
| **Urgency** | **Queue Scheduling Target** | **Execution Latency Requirement** |
|--------------- | ----------------------------- | ------------------------------------ |
| `:high` | 10 seconds | 10 seconds |
| `:low` (default) | 1 minute | 5 minutes |
| `:throttled` | None | 5 minutes |
To set a job's urgency, use the `urgency` class method:
```ruby
class HighUrgencyWorker
  include ApplicationWorker

  urgency :high

  # ...
end
```
### Latency sensitive jobs
If a large number of background jobs get scheduled at once, queueing of jobs may
occur while jobs wait for a worker node to become available. This is standard
and gives the system resilience by allowing it to gracefully handle spikes in
traffic. Some jobs, however, are more sensitive to latency than others.
In general, latency-sensitive jobs perform operations that a user could
reasonably expect to happen synchronously, rather than asynchronously in a
background worker. A common example is a write following an action. Examples of
these jobs include:
1. A job which updates a merge request following a push to a branch.
1. A job which invalidates a cache of known branches for a project after a push
to the branch.
1. A job which recalculates the groups and projects a user can see after a
change in permissions.
1. A job which updates the status of a CI pipeline after a state change to a job
in the pipeline.
When these jobs are delayed, the user may perceive the delay as a bug: for
example, they may push a branch and then attempt to create a merge request for
that branch, but be told in the UI that the branch does not exist. We deem these
jobs to be `urgency :high`.
Extra effort is made to ensure that these jobs are started within a very short
period of time after being scheduled. However, to ensure throughput,
these jobs also have very strict execution duration requirements:
1. The median job execution time should be less than 1 second.
1. 99% of jobs should complete within 10 seconds.
If a worker cannot meet these expectations, then it cannot be treated as an
`urgency :high` worker: consider redesigning the worker, or splitting the
work between two different workers, one with `urgency :high` code that
executes quickly, and the other with `urgency :low`, which has no
execution latency requirements (but also has lower scheduling targets).
### Changing a queue's urgency
On GitLab.com, we run Sidekiq in several
[shards](https://dashboards.gitlab.net/d/sidekiq-shard-detail/sidekiq-shard-detail),
each of which represents a particular type of workload.
When changing a queue's urgency, or adding a new queue, we need to take
into account the expected workload on the new shard. If we're
changing an existing queue, there is also an effect on the old shard,
but that always reduces work.
To do this, we want to calculate the expected increase in total execution time
and RPS (throughput) for the new shard. We can get these values from:
- The [Queue Detail dashboard](https://dashboards.gitlab.net/d/sidekiq-queue-detail/sidekiq-queue-detail)
has values for the queue itself. For a new queue, we can look for
queues that have similar patterns or are scheduled in similar
circumstances.
- The [Shard Detail dashboard](https://dashboards.gitlab.net/d/sidekiq-shard-detail/sidekiq-shard-detail)
has Total Execution Time and Throughput (RPS). The Shard Utilization
panel displays if there is currently any excess capacity for this
shard.
We can then calculate the RPS * average runtime (estimated for new jobs)
for the queue we're changing, to estimate the relative increase in RPS and
execution time for the new shard:
```ruby
new_queue_consumption = queue_rps * queue_duration_avg
shard_consumption = shard_rps * shard_duration_avg
(new_queue_consumption / shard_consumption) * 100
```
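For example, with hypothetical numbers plugged in:
```ruby
queue_rps          = 5.0   # observed or estimated RPS for the queue
queue_duration_avg = 0.5   # average job duration, in seconds
shard_rps          = 300.0 # current throughput of the target shard
shard_duration_avg = 1.0   # average job duration on the target shard

new_queue_consumption = queue_rps * queue_duration_avg # => 2.5
shard_consumption     = shard_rps * shard_duration_avg # => 300.0

(new_queue_consumption / shard_consumption) * 100      # => ~0.83
```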
If we expect an increase of **less than 5%**, then no further action is needed.
Otherwise, ping `@gitlab-com/gl-infra/data-access/durability` on the merge request and ask
for a review.
## Jobs with External Dependencies
Most background jobs in the GitLab application communicate with other GitLab
services. For example, PostgreSQL, Redis, Gitaly, and Object Storage. These are considered
to be "internal" dependencies for a job.
However, some jobs are dependent on external services to complete
successfully. Some examples include:
1. Jobs which call web-hooks configured by a user.
1. Jobs which deploy an application to a Kubernetes cluster configured by a user.
These jobs have "external dependencies". This is important for the operation of
the background processing cluster in several ways:
1. Most external dependencies (such as web-hooks) do not provide SLOs, and
therefore we cannot guarantee the execution latencies of these jobs. Because we
cannot guarantee execution latency, we cannot guarantee throughput either. In
high-traffic environments, we therefore need to ensure that jobs with
external dependencies are separated from high urgency jobs, to protect
throughput on those queues.
1. Errors in jobs with external dependencies have higher alerting thresholds as
there is a likelihood that the cause of the error is external.
```ruby
class ExternalDependencyWorker
  include ApplicationWorker

  # Declares that this worker depends on
  # third-party, external services in order
  # to complete successfully
  worker_has_external_dependencies!

  # ...
end
```
A job cannot both be high urgency and have external dependencies.
## CPU-bound and Memory-bound Workers
Workers that are constrained by CPU or memory resource limitations should be
annotated with the `worker_resource_boundary` method.
Most workers tend to spend most of their time blocked, waiting on network responses
from other services such as Redis, PostgreSQL, and Gitaly. Since Sidekiq is a
multi-threaded environment, these jobs can be scheduled with high concurrency.
Some workers, however, spend large amounts of time _on-CPU_ running logic in
Ruby. Ruby MRI does not support true multi-threading - it relies on the
[GIL](https://thoughtbot.com/blog/untangling-ruby-threads#the-global-interpreter-lock)
to greatly simplify application development by only allowing one section of Ruby
code in a process to run at a time, no matter how many cores the machine
hosting the process has. For IO bound workers, this is not a problem, since most
of the threads are blocked in underlying libraries (which are outside of the
GIL).
If many threads are attempting to run Ruby code simultaneously, this leads
to contention on the GIL which has the effect of slowing down all
processes.
In high-traffic environments, knowing that a worker is CPU-bound allows us to
run it on a different fleet with lower concurrency. This ensures optimal
performance.
Likewise, if a worker uses large amounts of memory, we can run these on a
bespoke low concurrency, high memory fleet.
Memory-bound workers create heavy GC workloads, with pauses of
10-50 ms. This has an impact on the latency requirements for the
worker. For this reason, `memory` bound, `urgency :high` jobs are not
permitted and fail CI. In general, `memory` bound workers are
discouraged, and alternative approaches to processing the work should be
considered.
If a worker needs large amounts of both memory and CPU time, it should
be marked as memory-bound, due to the above restriction on high urgency
memory-bound workers.
## Declaring a Job as CPU-bound
This example shows how to declare a job as being CPU-bound.
```ruby
class CPUIntensiveWorker
  include ApplicationWorker

  # Declares that this worker will perform a lot of
  # calculations on-CPU.
  worker_resource_boundary :cpu

  # ...
end
```
## Determining whether a worker is CPU-bound
We use the following approach to determine whether a worker is CPU-bound:
- In the Sidekiq structured JSON logs, aggregate the worker `duration` and
`cpu_s` fields.
- `duration` refers to the total job execution duration, in seconds
- `cpu_s` is derived from the
[`Process::CLOCK_THREAD_CPUTIME_ID`](https://www.rubydoc.info/stdlib/core/Process:clock_gettime)
counter, and is a measure of time spent by the job on-CPU.
- Divide `cpu_s` by `duration` to get the percentage of time spent on-CPU (see the sketch after this list).
- If this ratio exceeds 33%, the worker is considered CPU-bound and should be
annotated as such.
- These values should not be used over small sample sizes, but
rather over fairly large aggregates.
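For illustration, the check over a large aggregate might look like this; the numbers are made up:
```ruby
# Aggregated over a large sample of completed jobs:
total_cpu_s    = 1_200.0 # sum of the `cpu_s` field, in seconds
total_duration = 3_000.0 # sum of the `duration` field, in seconds

on_cpu_ratio = total_cpu_s / total_duration # => 0.4

# A ratio above 0.33 suggests annotating the worker with
# `worker_resource_boundary :cpu`.
```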
## Feature category
All Sidekiq workers must define a known [feature category](../feature_categorization/_index.md#sidekiq-workers).
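For example, a worker declares its category with the `feature_category` class method; the value here is illustrative:
```ruby
class SomeWorker
  include ApplicationWorker

  feature_category :continuous_integration

  # ...
end
```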
## Job data consistency strategies
In GitLab 13.11 and earlier, Sidekiq workers would always send database queries to the primary
database node, both for reads and writes. This ensured that data integrity
was both guaranteed and immediate, since in a single-node scenario it is impossible to encounter
stale reads even for workers that read their own writes.
If a worker writes to the primary, but reads from a replica, however, the possibility
of reading a stale record is non-zero due to replicas potentially lagging behind the primary.
When the number of jobs that rely on the database increases, ensuring immediate data consistency
can put unsustainable load on the primary database server. We therefore added the ability to use
[Database Load Balancing for Sidekiq workers](../../administration/postgresql/database_load_balancing.md).
By configuring a worker's `data_consistency` field, we can then allow the scheduler to target read replicas
under several strategies outlined below.
### Trading immediacy for reduced primary load
We require Sidekiq workers to make an explicit decision around whether they need to use the
primary database node for all reads and writes, or whether reads can be served from replicas. This is
enforced by a RuboCop rule, which ensures that the `data_consistency` field is set.
Before `data_consistency` was introduced, the default behavior mimicked that of `:always`. Since jobs are
now enqueued along with the current database LSN, the replica (for `:sticky` or `:delayed`) is guaranteed
to be caught up to that point; otherwise, the job is retried or falls back to the primary. This means that the data
is consistent at least to the point at which the job was enqueued.
The table below shows the `data_consistency` attribute and its values, ordered by the degree to which
they prefer read replicas and wait for replicas to catch up:
| **Data consistency** | **Description** | **Guideline** |
|--------------|-----------------------------|----------|
| `:always` | The job is required to use the primary database for all queries. (Deprecated) | **Deprecated** Only needed for jobs that encounter edge cases around primary stickiness. |
| `:sticky` | The job prefers replicas, but switches to the primary for writes or when encountering replication lag. | This is the preferred option. It should be used for jobs that need to be executed as fast as possible. Replicas are guaranteed to be caught up to the point at which the job was enqueued in Sidekiq. |
| `:delayed` | The job prefers replicas, but switches to the primary for writes. When encountering replication lag before the job starts, the job is retried once. If the replica is still not up to date on the next retry, it switches to the primary. | It should be used for jobs where delaying execution further typically does not matter, such as cache expiration or web hooks execution. It should not be used for jobs where retry is disabled, such as cron jobs. |
In all cases workers read either from a replica that is fully caught up,
or from the primary node, so data consistency is always ensured.
To set a data consistency for a worker, use the `data_consistency` class method:
```ruby
class DelayedWorker
  include ApplicationWorker

  data_consistency :delayed

  # ...
end
```
### Overriding data consistency for a decomposed database
GitLab uses multiple decomposed databases. A Sidekiq worker's usage of the respective databases may be skewed towards
a particular database. For example, `PipelineProcessWorker` has a higher write traffic to the `ci` database compared to the
`main` database. In the event of edge cases around primary stickiness, having separate data consistency defined for each
database allows the worker to more efficiently use read replicas.
If the `overrides` keyword argument is set, the `Gitlab::Database::LoadBalancing::SidekiqServerMiddleware` loads the load
balancing strategy using the data consistency which most prefers the read replicas.
In increasing order of preference, they are: `:always`, `:sticky`, then `:delayed`.
The overrides only apply if the GitLab instance is using multiple databases or `Gitlab::Database.database_mode == Gitlab::Database::MODE_MULTIPLE_DATABASES`.
To set a data consistency for a worker, use the `data_consistency` class method with the `overrides` keyword argument:
```ruby
class MultipleDataConsistencyWorker
  include ApplicationWorker

  data_consistency :always, overrides: { ci: :sticky }

  # ...
end
```
### `feature_flag` property
The `feature_flag` property allows you to toggle a job's `data_consistency`,
which permits you to safely toggle load balancing capabilities for a specific job.
When `feature_flag` is disabled, the job defaults to `:always`, which means that the job always uses the primary database.
The `feature_flag` property does not allow the use of
[feature gates based on actors](../feature_flags/_index.md).
This means that the feature flag cannot be toggled only for particular
projects, groups, or users, but instead, you can safely use [percentage of time rollout](../feature_flags/_index.md).
Since we check the feature flag on both the Sidekiq client and server, rolling it out 10% of the time
likely results in 1% (0.1 [from client] × 0.1 [from server]) of jobs effectively using replicas.
Example:
```ruby
class DelayedWorker
  include ApplicationWorker

  data_consistency :delayed, feature_flag: :load_balancing_for_delayed_worker

  # ...
end
```
When using the `feature_flag` property with `overrides`, the job defaults to `:always` for all database connections.
When the feature flag is enabled, the configured data consistency is then applied to each database independently.
In the example below, when the flag is enabled, `main` database connections use the `:always` data consistency while
`ci` database connections use `:sticky` data consistency.
```ruby
class DelayedWorker
  include ApplicationWorker

  data_consistency :always, overrides: { ci: :sticky }, feature_flag: :load_balancing_for_delayed_worker

  # ...
end
```
### Data consistency with idempotent jobs
For [idempotent jobs](idempotent_jobs.md) that declare either `:sticky` or `:delayed` data consistency, we are
[preserving the latest WAL location](idempotent_jobs.md#preserve-the-latest-wal-location-for-idempotent-jobs) while deduplicating,
ensuring that we read from the replica that is fully caught up.
## Job pause control
With the `pause_control` property, you can conditionally pause job processing. If the strategy is active, the job
is stored in a separate `ZSET` and re-enqueued when the strategy becomes inactive. `PauseControl::ResumeWorker` is a cron
worker that checks if any paused jobs must be restarted.
To use `pause_control`, you can:
- Use one of the strategies defined in `lib/gitlab/sidekiq_middleware/pause_control/strategies/`.
- Define a custom strategy in `lib/gitlab/sidekiq_middleware/pause_control/strategies/` and add the strategy to `lib/gitlab/sidekiq_middleware/pause_control.rb`.
For example:
```ruby
module Gitlab
  module SidekiqMiddleware
    module PauseControl
      module Strategies
        class CustomStrategy < Base
          def should_pause?
            ApplicationSetting.current.elasticsearch_pause_indexing?
          end
        end
      end
    end
  end
end
```
```ruby
class PausedWorker
  include ApplicationWorker

  pause_control :custom_strategy

  # ...
end
```
{{< alert type="warning" >}}
If you want to remove the middleware for a worker, set the strategy to `:deprecated` to disable it and wait until
a required stop before removing it completely. That ensures that all paused jobs are resumed correctly.
{{< /alert >}}
## Concurrency limit
With the `concurrency_limit` property, you can limit a worker's concurrency. Jobs over the limit are put in
a separate `LIST` and re-enqueued when concurrency falls back under the limit. `ConcurrencyLimit::ResumeWorker` is a cron
worker that checks whether any throttled jobs should be re-enqueued.
The first job that crosses the defined concurrency limit initiates the throttling process for all other jobs of this class.
Until this happens, jobs are scheduled and executed as usual.
When throttling starts, newly scheduled jobs and jobs picked up for execution are added to the end of the `LIST`, so that
the execution order is preserved. As soon as the `LIST` is empty again, the throttling process ends.
Prometheus metrics are exposed to monitor workers using concurrency limit middleware:
- `sidekiq_concurrency_limit_deferred_jobs_total`
- `sidekiq_concurrency_limit_queue_jobs`
- `sidekiq_concurrency_limit_queue_jobs_total`
- `sidekiq_concurrency_limit_max_concurrent_jobs`
- `sidekiq_concurrency_limit_current_concurrent_jobs_total`
{{< alert type="warning" >}}
If there is a sustained workload over the limit, the `LIST` is going to grow until the limit is disabled or
the workload drops under the limit.
{{< /alert >}}
You should use a lambda to define the limit. If it returns `nil` or `0`, the limit won't be applied.
Negative numbers pause the execution.
```ruby
class LimitedWorker
  include ApplicationWorker

  concurrency_limit -> { 60 }

  # ...
end
```
```ruby
class LimitedWorker
  include ApplicationWorker

  concurrency_limit -> { ApplicationSetting.current.elasticsearch_concurrent_sidekiq_jobs }

  # ...
end
```
## Skip execution of workers in Geo secondary
On Geo secondary sites, database writes are disabled.
You must skip execution of workers that attempt database writes if those
workers can get enqueued on Geo secondary sites.
Conveniently, most workers do not get enqueued on Geo secondary sites, because
[most non-GET HTTP requests get proxied to the Geo primary site](https://gitlab.com/gitlab-org/gitlab/-/blob/v16.8.0-ee/workhorse/internal/upstream/routes.go#L382-L431),
and because Geo secondary sites
[disable most Sidekiq-Cron jobs](https://gitlab.com/gitlab-org/gitlab/-/blob/v16.8.0-ee/ee/lib/gitlab/geo/cron_manager.rb#L6-L26).
Ask a Geo engineer if you are unsure.
To skip execution, prepend the `::Geo::SkipSecondary` module to the worker class.
```ruby
class DummyWorker
  include ApplicationWorker
  prepend ::Geo::SkipSecondary

  # ...
end
```
# Sidekiq idempotent jobs
A job can fail for multiple reasons, such as network outages or bugs.
To address this, Sidekiq has a built-in retry mechanism that is
used by default by most workers within GitLab.
It's expected that a job can run again after a failure without major side-effects for the
application or users, which is why Sidekiq encourages
jobs to be [idempotent and transactional](https://github.com/mperham/sidekiq/wiki/Best-Practices#2-make-your-job-idempotent-and-transactional).
As a general rule, a worker can be considered idempotent if:
- It can safely run multiple times with the same arguments.
- Application side-effects are expected to happen only once
(or side-effects of a second run do not have an effect).
A good example of that would be a cache expiration worker.
A job scheduled for an idempotent worker is [deduplicated](#deduplication) when
an unstarted job with the same arguments is already in the queue.
## Ensuring a worker is idempotent
Use the following shared example to see the effects of running a job twice.
```ruby
it_behaves_like 'an idempotent worker'
```
The shared example requires `job_args` to be defined. If not given, it
calls the job without arguments.
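For example, a hypothetical worker whose `perform` method takes a project ID could define `job_args` in a `let` block:

```ruby
it_behaves_like 'an idempotent worker' do
  let(:project) { create(:project) }
  # Hypothetical arguments; `job_args` must match the worker's `perform` signature.
  let(:job_args) { [project.id] }
end
```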
When the shared example runs, there should be no mocking in place that would avoid
side-effects of the job. For example, allow the worker to call a service without
stubbing its execute method. This way, we can assert that the job is truly idempotent.
The shared examples include some basic tests. You can add more idempotency tests
specific to the worker in the shared examples block.
```ruby
it_behaves_like 'an idempotent worker' do
it 'checks the side-effects for multiple calls' do
# `perform_idempotent_work` will call the job's perform method 2 times
perform_idempotent_work
expect(model.state).to eq('state')
end
end
```
## Declaring a worker as idempotent
```ruby
class IdempotentWorker
include ApplicationWorker
# Declares a worker is idempotent and can
# safely run multiple times.
idempotent!
# ...
end
```
It's encouraged to only have the `idempotent!` call in the top-most worker class, even if
the `perform` method is defined in another class or module.
If the worker class isn't marked as idempotent, a cop fails. Consider skipping
the cop if you're not confident your job can safely run multiple times.
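For example, a sketch of skipping the cop inline, using the same disable comment that appears in other examples in these guidelines:

```ruby
class NonIdempotentWorker # rubocop:disable Scalability/IdempotentWorker
  include ApplicationWorker

  # Not marked `idempotent!`: running this job twice would repeat its side-effects.

  # ...
end
```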
## Deduplication
When a job for an idempotent worker is enqueued while another
unstarted job is already in the queue, GitLab drops the second
job. The work is skipped because the same work would be
done by the job that was scheduled first; by the time the second
job would execute, there would be nothing left for it to do.
### Strategies
GitLab supports two deduplication strategies:
- `until_executing`, which is the default strategy
- `until_executed`
More [deduplication strategies have been suggested](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/195).
If you are implementing a worker that could benefit from a different
strategy, comment in the issue.
#### Until Executing
This strategy takes a lock when a job is added to the queue, and removes that lock before the job starts.
For example, `AuthorizedProjectsWorker` takes a user ID. When the
worker runs, it recalculates a user's authorizations. GitLab schedules
this job each time an action potentially changes a user's
authorizations. If the same user is added to two projects at the
same time, the second job can be skipped if the first job hasn't
begun, because when the first job runs, it creates the
authorizations for both projects.
```ruby
module AuthorizedProjectUpdate
class UserRefreshOverUserRangeWorker
include ApplicationWorker
deduplicate :until_executing
idempotent!
# ...
end
end
```
#### Until Executed
This strategy takes a lock when a job is added to the queue, and removes that lock after the job finishes.
It can be used to prevent multiple instances of the same job from running simultaneously.
```ruby
module Ci
class BuildTraceChunkFlushWorker
include ApplicationWorker
deduplicate :until_executed
idempotent!
# ...
end
end
```
You can also pass the `if_deduplicated: :reschedule_once` option to re-run a job once after
the currently running job finishes, if deduplication happened at least once in the meantime.
This ensures that the latest result is always produced even if a race condition
happened. See [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/342123) for more information.
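Building on the earlier example, a sketch of passing the option:

```ruby
module Ci
  class BuildTraceChunkFlushWorker
    include ApplicationWorker

    # Re-run once after the current job finishes if deduplication happened.
    deduplicate :until_executed, if_deduplicated: :reschedule_once
    idempotent!

    # ...
  end
end
```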
### Scheduling jobs in the future
GitLab doesn't skip jobs scheduled in the future, as we assume that
the state has changed by the time the job is scheduled to
execute. Deduplication of jobs scheduled in the future is possible
for both `until_executed` and `until_executing` strategies.
If you do want to deduplicate jobs scheduled in the future,
you can specify this on the worker by passing the `including_scheduled: true` argument
when defining the deduplication strategy:
```ruby
module AuthorizedProjectUpdate
class UserRefreshOverUserRangeWorker
include ApplicationWorker
deduplicate :until_executing, including_scheduled: true
idempotent!
# ...
end
end
```
## Setting the deduplication time-to-live (TTL)
Deduplication depends on an idempotent key that is stored in Redis. This is usually
cleared by the configured deduplication strategy.
However, the key can remain until its TTL expires in certain cases, such as:
1. `until_executing` is used but the job was never enqueued or executed after the Sidekiq
client middleware was run.
1. `until_executed` is used but the job fails to finish due to retry exhaustion, gets
interrupted the maximum number of times, or gets lost.
The default value is 6 hours. During this time, jobs won't be enqueued even if the first
job never executed or finished.
The TTL can be configured with:
```ruby
class ProjectImportScheduleWorker
include ApplicationWorker
idempotent!
deduplicate :until_executing, ttl: 5.minutes
end
```
Duplicate jobs can happen when the TTL is reached, so make sure you lower this only for jobs
that can tolerate some duplication.
### Preserve the latest WAL location for idempotent jobs
Deduplication always takes into account the latest binary replication pointer, not the first one.
This is because we drop the second scheduling of the same job, so its Write-Ahead Log (WAL) location is lost.
Comparing only the old WAL location could lead to reading from a stale replica.
To support both deduplication and maintaining data consistency with load balancing,
we are preserving the latest WAL location for idempotent jobs in Redis.
This way we are always comparing the latest binary replication pointer,
making sure that we read from the replica that is fully caught up.
# Sidekiq logging
## Worker context
To have some more information about workers in the logs, we add
[metadata to the jobs in the form of an `ApplicationContext`](../logging.md#logging-context-metadata-through-rails-or-grape-requests).
In most cases, when scheduling a job from a request, this context is already
deduced from the request and added to the scheduled job.
When a job runs, the context that was active when it was scheduled
is restored. This causes the context to be propagated to any job
scheduled from within the running job.
All this means that in most cases, to add context to jobs, we don't
need to do anything.
There are however some instances when there would be no context
present when the job is scheduled, or the context that is present is
likely to be incorrect. For these instances, we've added RuboCop rules
to draw attention and avoid incorrect metadata in our logs.
As with most of our cops, there are perfectly valid reasons for disabling
them. In this case, the context from the request might be
correct, or you may have already specified a context in a way that
isn't picked up by the cops. In any case, when disabling the cops, leave a code comment
pointing to which context to use.
When you do provide objects to the context, make sure that the
route for namespaces and projects is pre-loaded. This can be done by using
the `.with_route` scope defined on all `Routable`s.
### Cron workers
The context is automatically cleared for workers in the cronjob queue
(`include CronjobQueue`), even when scheduling them from
requests. We do this to avoid incorrect metadata when other jobs are
scheduled from the cron worker.
Cron workers themselves run instance-wide, so they aren't scoped to
users, namespaces, projects, or other resources that should be added to
the context.
However, they often run services or schedule other jobs that do require context.
That is why there needs to be an indication of context somewhere in
the worker. This can be done by using one of the following methods
somewhere within the worker:
1. Wrap the code that schedules jobs in the `with_context` helper:
```ruby
def perform
deletion_cutoff = Gitlab::CurrentSettings
.deletion_adjourned_period.days.ago.to_date
projects = Project.with_route.with_namespace
.marked_for_deletion_before(deletion_cutoff)
projects.find_each(batch_size: 100).with_index do |project, index|
delay = index * INTERVAL
with_context(project: project) do
AdjournedProjectDeletionWorker.perform_in(delay, project.id)
end
end
end
```
1. Use a batch scheduling method that provides context:
```ruby
def schedule_projects_in_batch(projects)
ProjectImportScheduleWorker.bulk_perform_async_with_contexts(
projects,
arguments_proc: -> (project) { project.id },
context_proc: -> (project) { { project: project } }
)
end
```
Or, when scheduling with delays:
```ruby
diffs.each_batch(of: BATCH_SIZE) do |diffs, index|
DeleteDiffFilesWorker
.bulk_perform_in_with_contexts(index * 5.minutes,
diffs,
arguments_proc: -> (diff) { diff.id },
context_proc: -> (diff) { { project: diff.merge_request.target_project } })
end
```
### Jobs scheduled in bulk
Often, when scheduling jobs in bulk, these jobs should have a separate
context rather than the overarching context.
If that is the case, `bulk_perform_async` can be replaced by the
`bulk_perform_async_with_context` helper, and instead of
`bulk_perform_in` use `bulk_perform_in_with_context`.
For example:
```ruby
ProjectImportScheduleWorker.bulk_perform_async_with_contexts(
projects,
arguments_proc: -> (project) { project.id },
context_proc: -> (project) { { project: project } }
)
```
Each object from the enumerable in the first argument is yielded to two
procs:
- The `arguments_proc` which needs to return the list of arguments the
job needs to be scheduled with.
- The `context_proc` which needs to return a hash with the context
information for the job.
## Arguments logging
Sidekiq job arguments are logged by default, unless [`SIDEKIQ_LOG_ARGUMENTS`](../../administration/sidekiq/sidekiq_troubleshooting.md#log-arguments-to-sidekiq-jobs)
is disabled.
By default, the only arguments logged are numeric arguments, because
arguments of other types could contain sensitive information. To
override this, use `loggable_arguments` inside a worker with the indexes
of the arguments to be logged. (Numeric arguments do not need to be
specified here.)
For example:
```ruby
class MyWorker
include ApplicationWorker
loggable_arguments 1, 3
# object_id will be logged as it's numeric
# string_a will be logged due to the loggable_arguments call
# string_b will be filtered from logs
# string_c will be logged due to the loggable_arguments call
def perform(object_id, string_a, string_b, string_c)
end
end
```
# Sidekiq development guidelines
We use [Sidekiq](https://github.com/mperham/sidekiq) as our background
job processor. These guides are for writing jobs that work well on
GitLab.com and are consistent with our existing worker classes. For
information on administering GitLab, see [configuring Sidekiq](../../administration/sidekiq/_index.md).
There are pages with additional detail on the following topics:
1. [Compatibility across updates](compatibility_across_updates.md)
1. [Job idempotence and job deduplication](idempotent_jobs.md)
1. [Limited capacity worker: continuously performing work with a specified concurrency](limited_capacity_worker.md)
1. [Logging](logging.md)
1. [Worker attributes](worker_attributes.md)
1. **Job urgency** specifies queuing and execution SLOs
1. **Resource boundaries** and **external dependencies** for describing the workload
1. **Feature categorization**
1. **Database load balancing**
## ApplicationWorker
All workers should include `ApplicationWorker` instead of `Sidekiq::Worker`,
which adds some convenience methods and automatically sets the queue based on
the [routing rules](../../administration/sidekiq/processing_specific_job_classes.md#routing-rules).
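A minimal worker sketch (hypothetical name and arguments):

```ruby
class ProcessSomethingWorker
  include ApplicationWorker

  idempotent!

  def perform(something_id)
    # ...
  end
end
```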
## Sharding
All calls to Sidekiq APIs must account for sharding. To achieve this,
use the Sidekiq API within a `Sidekiq::Client.via` block to guarantee that the correct `Sidekiq.redis` pool is used.
Obtain the appropriate Redis pool by invoking the `Gitlab::SidekiqSharding::Router.get_shard_instance` method.
```ruby
pool_name, pool = Gitlab::SidekiqSharding::Router.get_shard_instance(worker_class.sidekiq_options['store'])
Sidekiq::Client.via(pool) do
...
end
```
Unrouted Sidekiq calls are caught by the validator in all API requests, Sidekiq jobs on the server-side and in tests.
We recommend writing application logic with the use of the `Gitlab::SidekiqSharding::Router`. However, since sharding is an
unreleased feature, if the component does not affect GitLab.com, it is acceptable to run it within a `.allow_unrouted_sidekiq_calls` scope like so:
```ruby
# Add a comment explaining why it is safe to allow unrouted Sidekiq calls in this case
Gitlab::SidekiqSharding::Validator.allow_unrouted_sidekiq_calls do
# your unrouted logic
end
```
A past example is the use of `allow_unrouted_sidekiq_calls` in [Geo Rake tasks](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/149958#note_1906072228)
as it does not affect GitLab.com. However, developers should write shard-aware code where possible, since
that is a prerequisite for sharding to be [released as a feature to users on GitLab Self-Managed](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/3430).
## Retries
Sidekiq defaults to using [25 retries](https://github.com/mperham/sidekiq/wiki/Error-Handling#automatic-job-retry),
with back-off between each retry. 25 retries means that the last retry
would happen around three weeks after the first attempt (assuming all 24
prior retries failed).
This means that a lot can happen in between the job being scheduled
and its execution. Therefore, we must guard workers so they don't
fail 25 times when the state changes after they are scheduled. For
example, a job should not fail when the project it was scheduled for
is deleted.
Instead of:
```ruby
def perform(project_id)
project = Project.find(project_id)
# ...
end
```
Do this:
```ruby
def perform(project_id)
project = Project.find_by_id(project_id)
return unless project
# ...
end
```
For most workers - especially [idempotent workers](idempotent_jobs.md) -
the default of 25 retries is more than sufficient. Many of our older
workers declare 3 retries, which used to be the default within the
GitLab application. 3 retries happen over the course of a couple of
minutes, so the jobs are prone to failing completely.
A lower retry count may be applicable if any of the below apply:
1. The worker contacts an external service and we do not provide
   guarantees on delivery. For example, webhooks (see the sketch after this list).
1. The worker is not idempotent and running it multiple times could
leave the system in an inconsistent state. For example, a worker that
posts a system note and then performs an action: if the second step
fails and the worker retries, the system note is posted again.
1. The worker is a cronjob that runs frequently. For example, if a cron
job runs every hour, then we don't need to retry beyond an hour
because we don't need two of the same job running at once.
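For example, a hypothetical webhook worker might declare a lower retry count directly:

```ruby
class WebHookExecutionWorker
  include ApplicationWorker

  # Webhooks come with no delivery guarantee, so a long retry tail adds little value.
  sidekiq_options retry: 3

  def perform(hook_id, data)
    # ...
  end
end
```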
Each retry for a worker is counted as a failure in our metrics. A worker
which always fails 9 times and succeeds on the 10th would have a 90%
error rate.
If you want to manually retry the worker without tracking the exception in Sentry,
use an exception class inherited from `Gitlab::SidekiqMiddleware::RetryError`.
```ruby
ServiceUnavailable = Class.new(::Gitlab::SidekiqMiddleware::RetryError)
def perform
...
raise ServiceUnavailable if external_service_unavailable?
end
```
## Failure handling
Failures are typically handled by Sidekiq itself, which takes advantage of the inbuilt retry mechanism mentioned above. You should allow exceptions to be raised so that Sidekiq can reschedule the job.
If you need to perform an action when a job fails after all of its retry attempts, add it in a `sidekiq_retries_exhausted` block.
```ruby
sidekiq_retries_exhausted do |msg, ex|
  project = Project.find_by_id(msg['args'].first)
  next unless project # `return` would raise a LocalJumpError inside this block

  project.perform_a_rollback # handle the permanent failure
end
def perform(project_id)
project = Project.find_by_id(project_id)
return unless project
project.some_action # throws an exception
end
```
## Concurrency Limit
To prevent system overload and ensure reliable operations, we strongly recommend setting a
[concurrency limit](worker_attributes.md#concurrency-limit) for all workers. Limiting the number of jobs each worker
can schedule helps mitigate the risk of overwhelming the system, which could lead to severe incidents.
This guidance applies to both GitLab.com and GitLab Self-Managed customers. A single worker scheduling thousands of jobs can easily disrupt the normal functioning of a GitLab Self-Managed instance.
{{< alert type="note" >}}
If Sidekiq has only 20 threads and the limit for a specific job is 200, the job can never reach that concurrency, so it is effectively not limited.
{{< /alert >}}
### Static Concurrency Limit
For a static limit, consider the following example:
```ruby
class LimitedWorker
include ApplicationWorker
concurrency_limit -> { 100 if Feature.enabled?(:concurrency_limit_some_worker, Feature.current_request) }
# ...
end
```
{{< alert type="warning" >}}
Use only boolean feature flags (fully on/off) when rolling out the concurrency limit.
Percentage-based rollouts with `Feature.current_request` can cause inconsistent behavior.
{{< /alert >}}
Alternatively, you can set a fixed limit directly:
```ruby
concurrency_limit -> { 250 }
```
{{< alert type="note" >}}
Keep in mind that using a static limit means any updates or changes require merging an MR and waiting for the next deployment to take effect.
{{< /alert >}}
### Instance-Configurable Concurrency Limit
If you want to allow instance administrators to control the concurrency limit:
```ruby
concurrency_limit -> { ApplicationSetting.current.some_feature_concurrent_sidekiq_jobs }
```
This approach also allows having separate limits for GitLab.com and GitLab Self-Managed instances. To achieve this, you can:
1. Create a migration to add the configuration option, with a default set to the GitLab Self-Managed limit (see the sketch after this list).
1. In the same MR, ship a migration to update the limit for GitLab.com only.
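A sketch of the first migration, with a hypothetical column name and default value:

```ruby
# db/migrate/20240101000000_add_some_feature_concurrent_sidekiq_jobs.rb (hypothetical)
class AddSomeFeatureConcurrentSidekiqJobs < Gitlab::Database::Migration[2.2]
  def change
    # Default to the GitLab Self-Managed limit; a follow-up migration
    # updates the value for GitLab.com only.
    add_column :application_settings, :some_feature_concurrent_sidekiq_jobs,
      :integer, default: 200, null: false
  end
end
```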
### How to pick the limit
To determine an appropriate limit, you can use the `sidekiq: Worker Concurrency Detail` dashboard as a guide in [Grafana](https://dashboards.gitlab.net/goto/z244H0YNR?orgId=1).
{{< alert type="note" >}}
The [concurrency limit may be momentarily exceeded](https://gitlab.com/gitlab-org/gitlab/-/issues/490936#note_2172737349) and should not be relied on as a strict limit.
{{< /alert >}}
## Deferring Sidekiq workers
Sidekiq workers can be deferred in two ways:
1. Manual: Feature flags can be used to explicitly defer a particular worker. For more details, see [deferring Sidekiq jobs](../feature_flags/_index.md#deferring-sidekiq-jobs).
1. Automatic: Similar to the [throttling mechanism](../database/batched_background_migrations.md#throttling-batched-migrations) in batched migrations, database health indicators are used to defer a Sidekiq worker.
To use the automatic deferring mechanism, a worker has to opt in by calling `defer_on_database_health_signal` with `gitlab_schema`, `tables` (used by the autovacuum database indicator), and `delay_by` (time to delay) as its parameters.
**Example**:
```ruby
module Chaos
class SleepWorker # rubocop:disable Scalability/IdempotentWorker
include ApplicationWorker
data_consistency :always
sidekiq_options retry: 3
include ChaosQueue
defer_on_database_health_signal :gitlab_main, [:users], 1.minute
def perform(duration_s)
Gitlab::Chaos.sleep(duration_s)
end
end
end
```
For deferred jobs, logs contain the following to indicate the source:
- `job_status`: `deferred`
- `job_deferred_by`: `feature_flag` or `database_health_check`
## Sidekiq Queues
Previously, each worker had its own queue, which was automatically set based on the
worker class name. For a worker named `ProcessSomethingWorker`, the queue name
would be `process_something`. You can now route workers to a specific queue using
[queue routing rules](../../administration/sidekiq/processing_specific_job_classes.md#routing-rules).
In GDK, new workers are routed to a queue named `default`.
If you're not sure what queue a worker uses,
you can find it using `SomeWorker.queue`. There is almost never a reason to
manually override the queue name using `sidekiq_options queue: :some_queue`.
After adding a new worker, run `bin/rake gitlab:sidekiq:all_queues_yml:generate`
to regenerate `app/workers/all_queues.yml` or `ee/app/workers/all_queues.yml` so that
it can be picked up by
[`sidekiq-cluster`](../../administration/sidekiq/extra_sidekiq_processes.md)
in installations that don't use routing rules. For more information about potential changes,
see [epic 596](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/596).
Additionally, run
`bin/rake gitlab:sidekiq:sidekiq_queues_yml:generate` to regenerate
`config/sidekiq_queues.yml`.
## Queue Namespaces
While different workers cannot share a queue, they can share a queue namespace.
Defining a queue namespace for a worker makes it possible to start a Sidekiq
process that automatically handles jobs for all workers in that namespace,
without needing to explicitly list all their queue names. If, for example, all
workers that are managed by `sidekiq-cron` use the `cronjob` queue namespace, we
can spin up a Sidekiq process specifically for these kinds of scheduled jobs.
If a new worker using the `cronjob` namespace is added later on, the Sidekiq
process also picks up jobs for that worker (after having been restarted),
without the need to change any configuration.
A queue namespace can be set using the `queue_namespace` DSL class method:
```ruby
class SomeScheduledTaskWorker
include ApplicationWorker
queue_namespace :cronjob
# ...
end
```
Behind the scenes, this sets `SomeScheduledTaskWorker.queue` to
`cronjob:some_scheduled_task`. Commonly used namespaces have their own
concern module that can easily be included into the worker class, and that may
set other Sidekiq options besides the queue namespace. `CronjobQueue`, for
example, sets the namespace, but also disables retries.
`bundle exec sidekiq` is namespace-aware, and listens on all
queues in a namespace (technically: all queues prefixed with the namespace name)
when a namespace is provided instead of a simple queue name in the `--queue`
(`-q`) option, or in the `:queues:` section in `config/sidekiq_queues.yml`.
Adding a worker to an existing namespace should be done with care, as
the extra jobs take resources away from jobs from workers that were already
there, if the resources available to the Sidekiq process handling the namespace
are not adjusted appropriately.
## Versioning
Version can be specified on each Sidekiq worker class.
This is then sent along when the job is created.
```ruby
class FooWorker
include ApplicationWorker
version 2
def perform(*args)
if job_version == 2
foo = args.first['foo']
else
foo = args.first
end
end
end
```
Under this schema, any worker is expected to be able to handle any job that was
enqueued by an older version of that worker. This means that when changing the
arguments a worker takes, you must increment the `version` (or set `version 1`
if this is the first time a worker's arguments are changing), but also make sure
that the worker is still able to handle jobs that were queued with any earlier
version of the arguments. From the worker's `perform` method, you can read
`self.job_version` if you want to specifically branch on job version, or you
can read the number or type of provided arguments.
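For example, with the worker above, version 2 callers enqueue the new argument shape, while version 1 jobs already in the queue are still handled by the same `perform` method:

```ruby
# Version 2 passes a hash with a 'foo' key; version 1 passed the value directly.
FooWorker.perform_async({ 'foo' => 'bar' })
```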
## Job size
GitLab stores Sidekiq jobs and their arguments in Redis. To avoid
excessive memory usage, we compress the arguments of Sidekiq jobs
if their original size is bigger than 100 KB.
After compression, if their size still exceeds 5 MB, it raises an
[`ExceedLimitError`](https://gitlab.com/gitlab-org/gitlab/-/blob/f3dd89e5e510ea04b43ffdcb58587d8f78a8d77c/lib/gitlab/sidekiq_middleware/size_limiter/exceed_limit_error.rb#L8)
error when scheduling the job.
If this happens, rely on other means of making the data
available in Sidekiq. There are possible workarounds such as:
- Rebuild the data in Sidekiq with data loaded from the database or
elsewhere.
- Store the data in [object storage](../file_storage.md#object-storage)
  before scheduling the job, and retrieve it inside the job (see the sketch after this list).
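A sketch of the object storage workaround, with hypothetical model and method names:

```ruby
class LargePayloadWorker
  include ApplicationWorker

  idempotent!

  def perform(upload_id)
    # `PayloadUpload` is a hypothetical model wrapping a file stored in
    # object storage before the job was scheduled.
    upload = PayloadUpload.find_by_id(upload_id)
    return unless upload

    process(upload.read_payload) # hypothetical method that fetches the data
  end
end
```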
## Job weights
Some jobs have a weight declared. This is only used when running Sidekiq
in the default execution mode - using
[`sidekiq-cluster`](../../administration/sidekiq/extra_sidekiq_processes.md)
does not account for weights.
As we are [moving towards using `sidekiq-cluster` in Free](https://gitlab.com/gitlab-org/gitlab/-/issues/34396), newly-added
workers do not need to have weights specified. They can use the
default weight, which is 1.
## Job parameters
Based on [Sidekiq's recommended best practices](https://github.com/sidekiq/sidekiq/wiki/Best-Practices#1-make-your-job-parameters-small-and-simple), parameters should be small and simple.
For a hash passed as a worker parameter, the keys should be strings and the values
should be of native JSON types. If these expectations are not met in Sidekiq versions 7.0 and later,
[exceptions are raised](https://github.com/sidekiq/sidekiq/blob/main/docs/7.0-Upgrade.md#strict-arguments).
We have disabled these exceptions
[and only display warnings](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/161262)
in development and test mode, to enable us to upgrade to this version.
Going forward, developers should ensure that the keys and values in worker parameters are of native JSON types.
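For example (hypothetical worker and arguments):

```ruby
# Good: string keys and native JSON values (strings, numbers, booleans, nil).
MyWorker.perform_async(project.id, { 'state' => 'active', 'count' => 5 })

# Bad: symbol keys and a Time object; Sidekiq 7.0 and later is strict about these.
MyWorker.perform_async(project.id, { state: :active, updated_at: Time.current })
```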
You are encouraged to add a test for code generating worker parameters. For example, this custom
RSpec matcher `param_containing_valid_native_json_types` (defined in `SidekiqJSONMatcher`)
tests the parameter expected to be an array of hashes:
```ruby
it 'passes a valid JSON parameter to MyWorker#perform_async' do
expect(MyWorker).to receive(:perform_async).with(param_containing_valid_native_json_types)
  method_calling_worker_perform_async
end
```
## Tests
Each Sidekiq worker must be tested using RSpec, just like any other class. These
tests should be placed in `spec/workers`.
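A skeleton spec might look like this (hypothetical worker and feature category):

```ruby
# spec/workers/process_something_worker_spec.rb
require 'spec_helper'

RSpec.describe ProcessSomethingWorker, feature_category: :shared do
  let_it_be(:project) { create(:project) }

  it_behaves_like 'an idempotent worker' do
    let(:job_args) { [project.id] }
  end
end
```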
## Interacting with Sidekiq Redis and APIs
The application should minimize interaction with `Sidekiq.redis` and the Sidekiq [APIs](https://github.com/mperham/sidekiq/blob/main/lib/sidekiq/api.rb). Such interactions in generic application logic should be abstracted to a [Sidekiq middleware](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gitlab/sidekiq_middleware) for re-use across teams. Decoupling application logic from the Sidekiq datastore allows for greater freedom when horizontally scaling the GitLab background processing setup.
Some exceptions to this rule would be migration-related logic or administration operations.
## Job duration limit
In general, it is best practice for Sidekiq jobs to run for short durations.
Although there is no specific hard limit for job duration, there are two special considerations for long running jobs:
1. Job durations above our [`urgency` attribute](worker_attributes.md#job-urgency) thresholds contribute negatively to
[Sidekiq Apdex](../application_slis/sidekiq_execution.md) and can impact error budgets.
1. Deploys interrupt long-running jobs. On GitLab.com, deploys can happen several times a day, which can [effectively limit the length a job can run](#effect-of-deploys-on-job-duration).
### Effect of deploys on job duration
During a deploy, Sidekiq is given a `TERM` signal. Jobs are given 25 seconds to finish, after which they are
interrupted and forced to stop. The 25 second grace period is the
[Sidekiq default](https://github.com/sidekiq/sidekiq/blob/ba51d286d821777fbe87ea0eff8b04f212aeadf5/lib/sidekiq/config.rb#L18) but can be
[configured through the charts](https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com/blob/d2bb7cca2130cd9859e5d40e5bd90f5ef061d422/vendor/charts/gitlab/gprd/charts/gitlab/charts/sidekiq/values.yaml#L291).
If a job is forced to stop a certain number of times (3 times by default, configurable
through `max_retries_after_interruption`), it is permanently killed. This happens through
our [`sidekiq-reliable-fetch` gem](https://gitlab.com/gitlab-org/gitlab/-/blob/master/vendor/gems/sidekiq-reliable-fetch/README.md).
This effectively limits the length of time a job can run
to a span of `max_retries_after_interruption` deploys, or 3 deploys by default.
### Tips for handling jobs with long durations
Instead of having one big job, it's better to have many small jobs.
To decide if a worker needs to be split up and parallelized we can look at the runtime of jobs in the logs.
If the 99th percentile of the job duration is lower than the target for that shard based on the configured
[urgency](worker_attributes.md#job-urgency), there is no need to break up the job.
When breaking up long running jobs into many smaller jobs, do take into account downstream dependencies.
For example, if we schedule thousands of jobs that all need to write to the primary database, this
could create contention on connections to the primary database causing other Sidekiq jobs on the shard to
have to wait to obtain a connection. To circumvent this, we can consider specifying a
[concurrency limit](worker_attributes.md#concurrency-limit).
|
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Sidekiq development guidelines
breadcrumbs:
- doc
- development
- sidekiq
---
We use [Sidekiq](https://github.com/mperham/sidekiq) as our background
job processor. These guides are for writing jobs that work well on
GitLab.com and are consistent with our existing worker classes. For
information on administering GitLab, see [configuring Sidekiq](../../administration/sidekiq/_index.md).
There are pages with additional detail on the following topics:
1. [Compatibility across updates](compatibility_across_updates.md)
1. [Job idempotence and job deduplication](idempotent_jobs.md)
1. [Limited capacity worker: continuously performing work with a specified concurrency](limited_capacity_worker.md)
1. [Logging](logging.md)
1. [Worker attributes](worker_attributes.md)
1. **Job urgency** specifies queuing and execution SLOs
1. **Resource boundaries** and **external dependencies** for describing the workload
1. **Feature categorization**
1. **Database load balancing**
## ApplicationWorker
All workers should include `ApplicationWorker` instead of `Sidekiq::Worker`,
which adds some convenience methods and automatically sets the queue based on
the [routing rules](../../administration/sidekiq/processing_specific_job_classes.md#routing-rules).
## Sharding
All calls to Sidekiq APIs must account for sharding. To achieve this,
utilize the Sidekiq API within the `Sidekiq::Client.via` block to guarantee the correct `Sidekiq.redis` pool is utilized.
Obtain the suitable Redis pool by invoking the `Gitlab::SidekiqSharding::Router.get_shard_instance` method.
```ruby
pool_name, pool = Gitlab::SidekiqSharding::Router.get_shard_instance(worker_class.sidekiq_options['store'])
Sidekiq::Client.via(pool) do
...
end
```
Unrouted Sidekiq calls are caught by the validator in all API requests, Sidekiq jobs on the server-side and in tests.
We recommend writing application logic with the use of the `Gitlab::SidekiqSharding::Router`. However, since sharding is an
unreleased feature, if the component does not affect GitLab.com, it is acceptable run it within a `.allow_unrouted_sidekiq_calls` scope like so:
```ruby
# Add a comment explaining why it is safe to allow unrouted Sidekiq calls in this case
Gitlab::SidekiqSharding::Validator.allow_unrouted_sidekiq_calls do
# your unrouted logic
end
```
A past example is the use of `allow_unrouted_sidekiq_calls` in [Geo Rake tasks](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/149958#note_1906072228)
as it does not affect GitLab.com. However, developer should write shard-aware code where possible since
that is a pre-requisite for sharding to be [released as a feature to users on GitLab Self-Managed](https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/3430).
## Retries
Sidekiq defaults to using [25 retries](https://github.com/mperham/sidekiq/wiki/Error-Handling#automatic-job-retry),
with back-off between each retry. 25 retries means that the last retry
would happen around three weeks after the first attempt (assuming all 24
prior retries failed).
This means that a lot can happen in between the job being scheduled
and its execution. Therefore, we must guard workers so they don't
fail 25 times when the state changes after they are scheduled. For
example, a job should not fail when the project it was scheduled for
is deleted.
Instead of:
```ruby
def perform(project_id)
project = Project.find(project_id)
# ...
end
```
Do this:
```ruby
def perform(project_id)
project = Project.find_by_id(project_id)
return unless project
# ...
end
```
For most workers - especially [idempotent workers](idempotent_jobs.md) -
the default of 25 retries is more than sufficient. Many of our older
workers declare 3 retries, which used to be the default within the
GitLab application. 3 retries happen over the course of a couple of
minutes, so the jobs are prone to failing completely.
A lower retry count may be applicable if any of the below apply:
1. The worker contacts an external service and we do not provide
guarantees on delivery. For example, webhooks.
1. The worker is not idempotent and running it multiple times could
leave the system in an inconsistent state. For example, a worker that
posts a system note and then performs an action: if the second step
fails and the worker retries, the system note is posted again.
1. The worker is a cronjob that runs frequently. For example, if a cron
job runs every hour, then we don't need to retry beyond an hour
because we don't need two of the same job running at once.
Each retry for a worker is counted as a failure in our metrics. A worker
which always fails 9 times and succeeds on the 10th would have a 90%
error rate.
If you want to manually retry the worker without tracking the exception in Sentry,
use an exception class inherited from `Gitlab::SidekiqMiddleware::RetryError`.
```ruby
ServiceUnavailable = Class.new(::Gitlab::SidekiqMiddleware::RetryError)
def perform
...
raise ServiceUnavailable if external_service_unavailable?
end
```
## Failure handling
Failures are typically handled by Sidekiq itself, which takes advantage of the inbuilt retry mechanism mentioned above. You should allow exceptions to be raised so that Sidekiq can reschedule the job.
If you need to perform an action when a job fails after all of its retry attempts, add it to the `sidekiq_retries_exhausted` method.
```ruby
sidekiq_retries_exhausted do |msg, ex|
project = Project.find_by_id(msg['args'].first)
return unless project
project.perform_a_rollback # handle the permanent failure
end
def perform(project_id)
project = Project.find_by_id(project_id)
return unless project
project.some_action # throws an exception
end
```
## Concurrency Limit
To prevent system overload and ensure reliable operations, we strongly recommend setting a
[concurrency limit](worker_attributes.md#concurrency-limit) for all workers. Limiting the number of jobs each worker
can schedule helps mitigate the risk of overwhelming the system, which could lead to severe incidents.
This guidance applies both to .com and self-managed customers. A single worker scheduling thousands of jobs can easily disrupt the normal functioning of an SM instance.
{{< alert type="note" >}}
If Sidekiq only has 20 threads and the limit for a specific job is 200 then it will never be able to hit this 200 concurrency so it will not be limited.
{{< /alert >}}
### Static Concurrency Limit
For a static limit, consider the following example:
```ruby
class LimitedWorker
include ApplicationWorker
concurrency_limit -> { 100 if Feature.enabled?(:concurrency_limit_some_worker, Feature.current_request) }
# ...
end
```
{{< alert type="warning" >}}
Use only boolean feature flags (fully on/off) when rolling out the concurrency limit.
Percentage-based rollouts with `Feature.current_request` can cause inconsistent behavior.
{{< /alert >}}
Alternatively, you can set a fixed limit directly:
```ruby
concurrency_limit -> { 250 }
```
{{< alert type="note" >}}
Keep in mind that using a static limit means any updates or changes require merging an MR and waiting for the next deployment to take effect.
{{< /alert >}}
### Instance-Configurable Concurrency Limit
If you want to allow instance administrators to control the concurrency limit:
```ruby
concurrency_limit -> { ApplicationSetting.current.some_feature_concurrent_sidekiq_jobs }
```
This approach also allows having separate limits for .com and GitLab Self-Managed instances. To achieve this, you can:
1. Create a migration to add the configuration option with a default set to the self-managed limit.
1. In the same MR, ship a migration to update the limit for .com only.
### How to pick the limit
To determine an appropriate limit, you can use the `sidekiq: Worker Concurrency Detail` dashboard as a guide in [Grafana](https://dashboards.gitlab.net/goto/z244H0YNR?orgId=1).
{{< alert type="note" >}}
The [concurrency limit may be momentarily exceeded](https://gitlab.com/gitlab-org/gitlab/-/issues/490936#note_2172737349) and should not be relied on as a strict limit.
{{< /alert >}}
## Deferring Sidekiq workers
Sidekiq workers are deferred by two ways,
1. Manual: Feature flags can be used to explicitly defer a particular worker, more details can be found [here](../feature_flags/_index.md#deferring-sidekiq-jobs).
1. Automatic: Similar to the [throttling mechanism](../database/batched_background_migrations.md#throttling-batched-migrations) in batched migrations, database health indicators are used to defer a Sidekiq worker.
To use the automatic deferring mechanism, worker has to opt-in by calling `defer_on_database_health_signal` with `gitlab_schema`, `delay_by` (time to delay) and tables (which is used by autovacuum db indicator) as it's parameters.
**Example**:
```ruby
module Chaos
class SleepWorker # rubocop:disable Scalability/IdempotentWorker
include ApplicationWorker
data_consistency :always
sidekiq_options retry: 3
include ChaosQueue
defer_on_database_health_signal :gitlab_main, [:users], 1.minute
def perform(duration_s)
Gitlab::Chaos.sleep(duration_s)
end
end
end
```
For deferred jobs, logs contain the following to indicate the source:
- `job_status`: `deferred`
- `job_deferred_by`: `feature_flag` or `database_health_check`
## Sidekiq Queues
Previously, each worker had its own queue, which was automatically set based on the
worker class name. For a worker named `ProcessSomethingWorker`, the queue name
would be `process_something`. You can now route workers to a specific queue using
[queue routing rules](../../administration/sidekiq/processing_specific_job_classes.md#routing-rules).
In GDK, new workers are routed to a queue named `default`.
If you're not sure what queue a worker uses,
you can find it using `SomeWorker.queue`. There is almost never a reason to
manually override the queue name using `sidekiq_options queue: :some_queue`.
After adding a new worker, run `bin/rake gitlab:sidekiq:all_queues_yml:generate`
to regenerate `app/workers/all_queues.yml` or `ee/app/workers/all_queues.yml` so that
it can be picked up by
[`sidekiq-cluster`](../../administration/sidekiq/extra_sidekiq_processes.md)
in installations that don't use routing rules. For more information about potential changes,
see [epic 596](https://gitlab.com/groups/gitlab-com/gl-infra/-/epics/596).
Additionally, run
`bin/rake gitlab:sidekiq:sidekiq_queues_yml:generate` to regenerate
`config/sidekiq_queues.yml`.
## Queue Namespaces
While different workers cannot share a queue, they can share a queue namespace.
Defining a queue namespace for a worker makes it possible to start a Sidekiq
process that automatically handles jobs for all workers in that namespace,
without needing to explicitly list all their queue names. If, for example, all
workers that are managed by `sidekiq-cron` use the `cronjob` queue namespace, we
can spin up a Sidekiq process specifically for these kinds of scheduled jobs.
If a new worker using the `cronjob` namespace is added later on, the Sidekiq
process also picks up jobs for that worker (after having been restarted),
without the need to change any configuration.
A queue namespace can be set using the `queue_namespace` DSL class method:
```ruby
class SomeScheduledTaskWorker
include ApplicationWorker
queue_namespace :cronjob
# ...
end
```
Behind the scenes, this sets `SomeScheduledTaskWorker.queue` to
`cronjob:some_scheduled_task`. Commonly used namespaces have their own
concern module that can easily be included into the worker class, and that may
set other Sidekiq options besides the queue namespace. `CronjobQueue`, for
example, sets the namespace, but also disables retries.
`bundle exec sidekiq` is namespace-aware, and listens on all
queues in a namespace (technically: all queues prefixed with the namespace name)
when a namespace is provided instead of a simple queue name in the `--queue`
(`-q`) option, or in the `:queues:` section in `config/sidekiq_queues.yml`.
Adding a worker to an existing namespace should be done with care, as
the extra jobs take resources away from jobs from workers that were already
there, if the resources available to the Sidekiq process handling the namespace
are not adjusted appropriately.
## Versioning
Version can be specified on each Sidekiq worker class.
This is then sent along when the job is created.
```ruby
class FooWorker
include ApplicationWorker
version 2
def perform(*args)
if job_version == 2
foo = args.first['foo']
else
foo = args.first
end
end
end
```
Under this schema, any worker is expected to be able to handle any job that was
enqueued by an older version of that worker. This means that when changing the
arguments a worker takes, you must increment the `version` (or set `version 1`
if this is the first time a worker's arguments are changing), but also make sure
that the worker is still able to handle jobs that were queued with any earlier
version of the arguments. From the worker's `perform` method, you can read
`self.job_version` if you want to specifically branch on job version, or you
can read the number or type of provided arguments.
## Job size
GitLab stores Sidekiq jobs and their arguments in Redis. To avoid
excessive memory usage, we compress the arguments of Sidekiq jobs
if their original size is bigger than 100 KB.
After compression, if the size still exceeds 5 MB, an
[`ExceedLimitError`](https://gitlab.com/gitlab-org/gitlab/-/blob/f3dd89e5e510ea04b43ffdcb58587d8f78a8d77c/lib/gitlab/sidekiq_middleware/size_limiter/exceed_limit_error.rb#L8)
is raised when scheduling the job.
If this happens, rely on other means of making the data
available in Sidekiq. Possible workarounds include:

- Rebuild the data in Sidekiq with data loaded from the database or
  elsewhere (see the sketch after this list).
- Store the data in [object storage](../file_storage.md#object-storage)
before scheduling the job, and retrieve it inside the job.
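The first workaround might look like the following minimal sketch; the `Notification` model and the worker are hypothetical:

```ruby
# Bad: a large serialized argument is stored in Redis with the job.
NotifyWorker.perform_async(notification.to_json)

# Good: pass only the record ID and rebuild the data inside the job.
NotifyWorker.perform_async(notification.id)

class NotifyWorker
  include ApplicationWorker

  def perform(notification_id)
    notification = Notification.find_by_id(notification_id)
    return unless notification # the record may have been deleted since enqueueing

    # ... do the work with the reloaded record ...
  end
end
```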
## Job weights
Some jobs have a weight declared. This is only used when running Sidekiq
in the default execution mode - using
[`sidekiq-cluster`](../../administration/sidekiq/extra_sidekiq_processes.md)
does not account for weights.
As we are [moving towards using `sidekiq-cluster` in Free](https://gitlab.com/gitlab-org/gitlab/-/issues/34396), newly-added
workers do not need to have weights specified. They can use the
default weight, which is 1.
## Job parameters
Based on [Sidekiq's recommended best practices](https://github.com/sidekiq/sidekiq/wiki/Best-Practices#1-make-your-job-parameters-small-and-simple), parameters should be small and simple.
For a hash passed as a worker parameter, the keys should be strings and the values
should be of native JSON types. If these expectations are not met in Sidekiq versions 7.0 and later,
[exceptions are raised](https://github.com/sidekiq/sidekiq/blob/main/docs/7.0-Upgrade.md#strict-arguments).
We have disabled these exceptions
[and only display warnings](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/161262)
in development and test mode, to enable us to upgrade to this version.
Going forward, developers should ensure that the keys and values in worker parameters are of native JSON types.
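For example, assuming a hypothetical `MyWorker`:

```ruby
# Good: string keys and JSON-native values (strings, numbers, booleans,
# arrays, hashes, and nil).
MyWorker.perform_async('user_id' => 42, 'labels' => %w[bug backend])

# Bad: symbol keys and non-JSON values such as Symbols or Time objects.
MyWorker.perform_async(user_id: 42, fetched_at: Time.current)
```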
You are encouraged to add a test for code generating worker parameters. For example, this custom
RSpec matcher `param_containing_valid_native_json_types` (defined in `SidekiqJSONMatcher`)
tests the parameter expected to be an array of hashes:
```ruby
it 'passes a valid JSON parameter to MyWorker#perform_async' do
  expect(MyWorker).to receive(:perform_async).with(param_containing_valid_native_json_types)

  method_calling_worker_perform_async
end
```
## Tests
Each Sidekiq worker must be tested using RSpec, just like any other class. These
tests should be placed in `spec/workers`.
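A minimal spec for the scheduled worker above might look like this sketch; the `feature_category` value is a placeholder:

```ruby
# spec/workers/some_scheduled_task_worker_spec.rb
require 'spec_helper'

RSpec.describe SomeScheduledTaskWorker, feature_category: :shared do
  describe '#perform' do
    it 'completes without raising an error' do
      expect { described_class.new.perform }.not_to raise_error
    end
  end
end
```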
## Interacting with Sidekiq Redis and APIs
The application should minimise direct interaction with `Sidekiq.redis` and the Sidekiq [APIs](https://github.com/mperham/sidekiq/blob/main/lib/sidekiq/api.rb). Such interactions in generic application logic should be abstracted into a [Sidekiq middleware](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gitlab/sidekiq_middleware) for reuse across teams. Decoupling application logic from the Sidekiq datastore allows greater freedom when horizontally scaling the GitLab background processing setup.
Some exceptions to this rule would be migration-related logic or administration operations.
## Job duration limit
In general, it is best practice for Sidekiq jobs to run for short durations.
Although there is no specific hard limit on job duration, there are two special considerations for long-running jobs:
1. Job durations above our [`urgency` attribute](worker_attributes.md#job-urgency) thresholds contribute negatively to
[Sidekiq Apdex](../application_slis/sidekiq_execution.md) and can impact error budgets.
1. Deploys interrupt long-running jobs. On GitLab.com, deploys can happen several times a day, which can [effectively limit the length a job can run](#effect-of-deploys-on-job-duration).
### Effect of deploys on job duration
During a deploy, Sidekiq is given a `TERM` signal. Jobs are given 25 seconds to finish, after which they are
interrupted and forced to stop. The 25 second grace period is the
[Sidekiq default](https://github.com/sidekiq/sidekiq/blob/ba51d286d821777fbe87ea0eff8b04f212aeadf5/lib/sidekiq/config.rb#L18) but can be
[configured through the charts](https://gitlab.com/gitlab-com/gl-infra/k8s-workloads/gitlab-com/blob/d2bb7cca2130cd9859e5d40e5bd90f5ef061d422/vendor/charts/gitlab/gprd/charts/gitlab/charts/sidekiq/values.yaml#L291).
If a job is forced to stop a certain number of times (3 times by default, configurable
through `max_retries_after_interruption`), it is permanently killed. This happens through
our [`sidekiq-reliable-fetch` gem](https://gitlab.com/gitlab-org/gitlab/-/blob/master/vendor/gems/sidekiq-reliable-fetch/README.md).
This effectively limits the length of time a job can run
to a span of `max_retries_after_interruption` deploys, or 3 deploys by default.
### Tips for handling jobs with long durations
Instead of having one big job, it's better to have many small jobs.
To decide whether a worker needs to be split up and parallelized, we can look at the runtime of jobs in the logs.
If the 99th percentile of the job duration is lower than the target for that shard based on the configured
[urgency](worker_attributes.md#job-urgency), there is no need to break up the job.
When breaking up long-running jobs into many smaller jobs, take downstream dependencies into account.
For example, if we schedule thousands of jobs that all need to write to the primary database, this
could create contention on connections to the primary database, causing other Sidekiq jobs on the shard
to wait to obtain a connection. To circumvent this, we can consider specifying a
[concurrency limit](worker_attributes.md#concurrency-limit) and fanning out the work in batches, as in the sketch below.
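A minimal fan-out sketch, assuming GitLab's `each_batch` helper and hypothetical `BigBackfillWorker` and `SmallBatchWorker` classes:

```ruby
class BigBackfillWorker
  include ApplicationWorker

  BATCH_SIZE = 1_000

  def perform
    # Enqueue one small job per batch of IDs instead of processing
    # every record in a single long-running job.
    Project.each_batch(of: BATCH_SIZE) do |relation|
      ids = relation.pluck(:id)
      SmallBatchWorker.perform_async(ids.first, ids.last)
    end
  end
end
```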
# Principles of Importer Design
## Security
- Uploaded files must be validated. Examples:
- [`BulkImports::FileDownloadService`](https://gitlab.com/gitlab-org/gitlab/-/blob/cd4a880cbb2bc56b3a55f14c1d8370f4385319db/app/services/bulk_imports/file_download_service.rb#L38-46)
- [`ImportExport::CommandLineUtil`](https://gitlab.com/gitlab-org/gitlab/blob/139690b3aeac69675119ce70f17f70bc1753de48/lib/gitlab/import_export/command_line_util.rb#L134)
- Importers must not add third-party Ruby gems that make HTTP calls.
  Importers follow the same
  [Ruby gem policy as integrations](../integrations/_index.md#no-ruby-gems-that-make-http-calls). For more information about Ruby gem use for importers, see that page.
- All HTTP calls must use `Import::Clients::HTTP`, which:
- Ensures that [network settings](../../security/webhooks.md) are enforced for HTTP calls.
- Has additional [security hardening](../../security/webhooks.md#enforce-dns-rebinding-attack-protection) features.
- Is our single source of truth for making secure HTTP calls.
- Ensure all response sizes are validated.
## Logging
- Logs should contain the importer type such as `github`, `bitbucket`, `bitbucket_server`. You can find a full list of import sources in [`Gitlab::ImportSources`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/import_sources.rb#L12).
- Logs should include any information likely to aid in debugging:
- Object identifiers such as `id`, `iid`, and type of object
- Error or status messages
- Logs should not include sensitive or private information, including but not limited to:
- Usernames
- Email addresses
- Where applicable, we should track the error in `Gitlab::Import::ImportFailureService` to aid in displaying errors in the UI.
- Logging should raise an error in development if key identifiers are missing, as demonstrated in [this MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/139469).
- A log line should be created before and after each record is imported, containing that record's identifier.
## Performance
- A cache with a default TTL of 24 hours should be used to prevent duplicate database queries and API calls.
- Workers that loop over collections should be equipped with a progress pointer that allows them to pick up where they left off if interrupted (see the sketch after this list).
- [Example using ID tracking](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/134229)
- [Example using page counter](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/139775)
- Write-heavy workers should implement [`defer_on_database_health_signal`](../sidekiq/_index.md#deferring-sidekiq-workers) to avoid saturating the database. However, at the time of writing, a [known issue](https://gitlab.com/gitlab-org/gitlab/-/issues/429871#note_1738917399) prevents us from using this.
- We should enforce limits on worker concurrency to avoid saturating resources. You can find an example of this in the Bitbucket [`ParallelScheduling` class](https://gitlab.com/gitlab-org/gitlab/blob/3254590fd2105fcd995f0ccb5e0b3e214c9a59c6/lib/gitlab/bitbucket_import/parallel_scheduling.rb#L76).
- Importers should be tested at scale on a staging environment, especially when implementing new functionality or enabling a feature flag.
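A minimal sketch of a progress pointer, assuming the `Gitlab::Cache::Import::Caching` helper and a hypothetical worker and client:

```ruby
class ImportIssuesWorker # hypothetical worker, for illustration only
  include ApplicationWorker

  CACHE_KEY = 'importers/%{project_id}/issues/last_processed_id'

  def perform(project_id)
    key = format(CACHE_KEY, project_id: project_id)
    last_id = Gitlab::Cache::Import::Caching.read(key).to_i

    # `client` and `import` are hypothetical; the point is to advance the
    # pointer after each record so a retried worker resumes where it stopped.
    client.each_issue(since_id: last_id) do |issue|
      import(issue)
      Gitlab::Cache::Import::Caching.write(key, issue.id)
    end
  end
end
```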
## Resilience
- Workers should be idempotent so they can be retried safely in the case of failure.
- Workers should be re-enqueued with a delay that respects concurrent batch limits.
- Individual workers should not run for a long time. Workers that run for a long time can be [interrupted by Sidekiq due to a deploy](../github_importer.md#increasing-sidekiq-interrupts), or be misidentified by `StuckProjectImportJobsWorker` as being part of an import that is stuck and should be failed.
- If a worker must run for a long time it must [refresh its JID](https://gitlab.com/gitlab-org/gitlab/-/issues/431936) using `Gitlab::Import::RefreshImportJidWorker` to avoid being terminated by `StuckProjectImportJobsWorker`. It may also need to raise its Sidekiq `max_retries_after_interruption`. Refer to the [GitHub importer implementation](../github_importer.md#increasing-sidekiq-interrupts).
- Workers that rely on cached values must implement fall-back mechanisms to fetch data in the event of a cache miss.
- Re-fetch data if possible and performant.
- Gracefully handle missing values.
- Long-running workers should be annotated with `worker_resource_boundary :memory` to place them on a shard with a two hour termination grace period. A long termination grace period is not a replacement for writing fast workers. Apdex SLO compliance can be monitored on the [I&I team Grafana dashboard](https://dashboards.gitlab.net/d/stage-groups-detail-import_and_integrate/b57e3a54-0277-50ff-a67e-4b69c1349274?from=now-7d&orgId=1).
- Workers that create data should not fail an entire import if a single record fails to import. They must log the appropriate error and decide whether to retry based on the nature of the error (see the sketch after this list).
- Import _Stage_ workers (which include `StageMethods`) and _Advance Stage_ workers (which include `Gitlab::Import::AdvanceStage`) should have `retries: 6` to make them more resilient to system interruptions. With exponential back-off, six retries span approximately 20 minutes. A higher retry count holds up an import for too long.
- It should be possible to retry a portion of an import, for example re-importing missing issues without overwriting the entire destination project.
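A sketch of per-record error handling; the method and the argument list passed to `Gitlab::Import::ImportFailureService` are illustrative:

```ruby
def import_issue(project, attributes)
  project.issues.create!(attributes)
rescue ActiveRecord::RecordInvalid => e
  # Track the failure for the UI instead of failing the whole import.
  # The exact arguments accepted by the service may differ.
  Gitlab::Import::ImportFailureService.track(
    project_id: project.id,
    error_source: self.class.name,
    exception: e
  )
end
```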
## Consistency
- Importers should fire callbacks after saving records. Problematic callbacks can be disabled for imports on an individual basis:
- Include the [`Importable`](https://gitlab.com/gitlab-org/gitlab/blob/15b878e27e8188e9d22755fd648f75de313f012f/app/models/concerns/importable.rb) module.
- Configure the callback to skip if `importing?` (see the sketch after this list).
- Set the `importing` value on the object under import.
- If records must be inserted in bulk, consider manually running callbacks.
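A minimal sketch, assuming a model with a hypothetical `notify_subscribers` callback that should not run during imports:

```ruby
class Issue < ApplicationRecord
  include Importable

  # Skipped when the importer flags the record as being imported.
  after_save :notify_subscribers, unless: :importing?
end

# The importer sets the flag on the object before saving:
issue = project.issues.build(attributes)
issue.importing = true
issue.save!
```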
# Development guide for GitLab official CI/CD components
This document explains how to develop [CI/CD components](../../ci/components/_index.md) that are maintained by GitLab, either the official public ones or those for internal use.
The location for all official GitLab component projects is the [`gitlab.com/components`](https://gitlab.com/components) group.
This group contains all components that are designed to be generic, served to all GitLab users, and maintained by GitLab.
For example: SAST, Secret Detection and Code Quality components.
A component project can initially be created under a different group (for example, `gitlab-org`),
but it must be moved into the `components` group before the first version is published to the catalog. All projects under the [`gitlab.com/components`](https://gitlab.com/components) group must be public.
Components that are for GitLab internal use only, for example those specific to the `gitlab-org/gitlab` project, should be
implemented under the `gitlab-org` group.
Component projects that are expected to be published in the [CI/CD catalog](../../ci/components/_index.md#cicd-catalog)
should first be dogfooded to ensure we stay on top of the project quality and have first-hand
experience with it.
## Define ownership
Official GitLab components are trusted by the community and require a high degree of quality and timely maintenance.
Components must be kept up to date, monitored for security vulnerabilities, and bugs fixed.
Each component project must have a set of owners and maintainers that are also domain experts.
Experts can be from any department in GitLab, from Engineering to Support, Customer Success, and Developer Relations.
If a component is related to a GitLab feature (for example Secret Detection), the team that owns the
feature category or is most closely related to it should maintain the project.
In this case, the Engineering Manager for the feature category is assigned as the project owner.
Members with the `owner` role for the project are the DRIs responsible for triaging open issues and merge requests to ensure they get addressed promptly.
The component project can be created by a separate team or individual initially but it must be transitioned
to a set of owners before the first version gets published to the catalog.
The `README.md` file in the project repository must indicate the main owners of the project so that
they can be contacted by the wider community if needed.
{{< alert type="note" >}}
If a set of project owners cannot be guaranteed or the components cannot be dogfooded, we strongly recommend
not creating an official GitLab component project and instead letting the wider community fulfill the demand
in the catalog.
{{< /alert >}}
## Development process
1. Create a project under [`gitlab.com/components`](https://gitlab.com/components)
or ask one of the group owners to create an empty project for you.
1. Follow the [standard guide for creating components](../../ci/components/_index.md).
1. Add a concise project description that clearly describes the capabilities offered by the component project.
1. Make sure to follow the general guidance given to [write a component](../../ci/components/_index.md#write-a-component) as well as
the guidance [for official components](#best-practices-for-official-components).
1. Add a `LICENSE.md` file with the MIT license ([example](https://gitlab.com/components/ruby/-/blob/d8db5288b01947e8a931d8d1a410befed69325a7/LICENSE.md)).
1. The project must have a `.gitlab-ci.yml` file that:
- Validates all the components in the project correctly
([example](https://gitlab.com/components/secret-detection/-/blob/646d0fcbbf3c2a3e4b576f1884543c874041c633/.gitlab-ci.yml#L11-23)).
- Contains a `release` job to publish newly released tags to the catalog
([example](https://gitlab.com/components/secret-detection/-/blob/646d0fcbbf3c2a3e4b576f1884543c874041c633/.gitlab-ci.yml#L50-58)).
1. For official component projects, upload the [official avatar image](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/development/cicd/img/avatar_component_project_v16_8.png) to the component project.
### Best practices for official components
- Ensure that the `README.md` contains at least the sections below (for example, see the [Code quality component](https://gitlab.com/components/code-quality)):
- **Overview**: The capabilities offered by the component project.
- **Components**: Sub-sections for each component, each with:
- **Usage**: Examples with and without inputs (when optional).
- **Inputs**: A table showing the input names, types, default values (if any) and descriptions.
- **Variables** (when applicable): The variable names, supported values, and descriptions.
- **Contribute**: Notes and how to get in touch with the maintainers.
Usually the contribution process should follow the [official guide](../../ci/components/_index.md).
- When naming `inputs`, use underscores `_` for composite names and hyphens `-` as separators, if necessary. For example: `service_x-project_name`.
- Use `inputs` if you want to allow users to configure `rules`. See an [example here](https://gitlab.com/components/opentofu/-/blob/5e86fd6c5f524785fd3dbd6cdb09f03d19a0cced/templates/fmt.yml#L82-88).
To preserve the default behavior when `rules` is not defined you should use `default: [{when: on_success}]` for the input, until [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/440468) is resolved.
## Review and contribution process for official components
It's possible that components in the project have a related [CI/CD template](templates.md) in the GitLab codebase.
In that case, we need to cross-link the component project and the CI/CD template:
- Add a comment in the CI/CD template with the location of the related component project.
- Add a section in the `README.md` of the component project with the location of the existing CI/CD template.
When changes are applied to these components, check whether we can integrate the changes in the CI/CD template too.
This might not be possible due to the rigidity of versioning in CI/CD templates.
Ping any of the [maintainers](#default-maintainers-of-gitlab-official-components)
for reviews to ensure that the components are written in consistent style and follow the best practices.
## Default maintainers of GitLab official components
Each component project under the [`gitlab.com/components`](https://gitlab.com/components) group should
have specific DRIs and maintainers. However, the [`@gitlab-org/maintainers/ci-components`](https://gitlab.com/groups/gitlab-org/maintainers/ci-components/-/group_members?with_inherited_permissions=exclude)
group of maintainers is responsible for managing the `components` group in general.
The responsibilities for this group of maintainers:
- Manage any development and helper resources, such as toolkit components and project templates, to provide the best development experience.
- Manage any component projects that are missing a clear DRI or are in the process of being developed, and work to find the right owners long term.
- Guide and mentor the maintainers of individual component projects, including during code reviews and when troubleshooting issues.
- Ensure best practices are applied and improved over time.
Requirements for becoming a maintainer:
- Have an in-depth understanding of the [CI/CD YAML syntax](../../ci/yaml/_index.md) and features.
- Understand how CI components work and demonstrate experience developing them.
- Have a solid understanding of how to [write a component](../../ci/components/_index.md#write-a-component).
How to join the `gitlab-components` group of general maintainers:
- Review the [process for becoming a `gitlab-components` maintainer](https://handbook.gitlab.com/handbook/engineering/workflow/code-review/#project-maintainer-process-for-gitlab-components).
# Contribute to the CI/CD configuration
## Glossary
- **CI/CD configuration**: The YAML file that defines the CI/CD configuration for a project.
- **keyword**: Each keyword in the CI/CD configuration.
- **entry**: An `Entry` class that represents a keyword in the CI/CD configuration.
Not every keyword in the CI/CD configuration is represented by an `Entry` class.
We create `Entry` classes for keywords that have a complex structure or reusable parts.
For example:
- The `image` keyword is represented by the [`Entry::Image`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/config/entry/image.rb) class.
- The `name` subkeyword of the `image` keyword is not represented by an `Entry` class.
- The `pull_policy` subkeyword of the `image` keyword is represented by the [`Entry::PullPolicy`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/config/entry/pull_policy.rb) class.
## Adding New Keywords
CI config keywords are added in the [`lib/gitlab/ci/config/entry`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gitlab/ci/config/entry) directory.
For EE-specific changes, use the [`ee/lib/gitlab/ci/config/entry`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/ee/lib/gitlab/ci/config/entry)
or [`ee/lib/ee/gitlab/ci/config/entry`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/ee/lib/ee/gitlab/ci/config/entry) directory.
### Inheritance
An entry is represented by a class that inherits from:
- `Entry::Node`: for simple keywords.
(For example, [`Entry::Stage`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/config/entry/stage.rb))
- `Entry::Simplifiable`: for keywords that have multiple structures.
For example, [`Entry::Retry`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/config/entry/retry.rb) can be a simple number or a hash configuration.
- `Entry::ComposableArray`: for keywords that have a list of single-type sub-elements.
For example, [`Entry::Includes`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/config/entry/includes.rb) has a list of `Entry::Include` elements.
- `Entry::ComposableHash`: for keywords that have single-type sub-elements with user-defined keys.
For example, [`Entry::Variables`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/config/entry/variables.rb) has a list of `Entry::Variable` elements with user-defined keys.
### Helper Classes
The following helper classes are available for use in entries (a combined sketch follows this list):
- `Entry::Validatable`: Enables the `validations` block in an entry class and provides validations.
- `Entry::Attributable`: Enables the `attributes` method in an entry class. It creates these methods for each attribute: `xxx`, `has_xxx?`, `has_xxx_value?`.
- `Entry::Configurable`: Enables the `entry` method in an entry class. It creates these methods for each entry: `xxx_defined?`, `xxx_entry`, `xxx_value`.
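A sketch of a hypothetical entry combining these helpers; the keyword, its keys, and the exact validator set are illustrative:

```ruby
module Entry
  class Notify < ::Gitlab::Config::Entry::Node
    include ::Gitlab::Config::Entry::Validatable
    include ::Gitlab::Config::Entry::Attributable

    ALLOWED_KEYS = %i[channel message].freeze

    attributes ALLOWED_KEYS

    validations do
      validates :config, type: Hash
      validates :config, allowed_keys: ALLOWED_KEYS
    end
  end
end
```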
### The `value` Method
The `value` method is the main method of an entry class. It returns the actual value of the entry.
By default, from the `Entry::Node` class, the `value` method returns the hash configuration of the entry unless it has nested entries.
This can be useful for simple entries. For example, `Entry::Paths` has an array of strings as its value, so it can return the array of strings directly.
In some entries, we override the `value` method to control exactly what the entry returns and in which form.
The usage of `Entry::Attributable` and `Entry::Configurable` can play a significant role here. For example,
in `Entry::Secret`, we have this:
```ruby
attributes %i[vault file token].freeze

entry :vault, Entry::Vault::Secret
entry :file, ::Gitlab::Config::Entry::Boolean

def value
  {
    vault: vault_value,
    file: file_value,
    token: token
  }.compact
end
```
- `vault_value` is the value of the nested `vault` entry.
- `file_value` is the value of the nested `file` entry.
- `token` is the value of the basic `token` attribute.
**Important**: always use the `xxx_value` method to get the value of a nested entry.
## Feature Flag Usage
When adding new CI/CD configuration keywords, it is important to use feature flags to control the rollout of the change.
This allows us to test the change in production without affecting all users. For more information, see the [feature flags documentation](../feature_flags/_index.md).
A common place to check for a feature flag is in the `Gitlab::Config::Entry::Node#value` method. For example:
```ruby
def value
  {
    vault: vault_value,
    file: file_available? ? file_value : nil,
    token: token
  }.compact
end

private

def file_available?
  ::Gitlab::Ci::Config::FeatureFlags.enabled?(:secret_file_available, type: :beta)
end
```
### Feature Flag Actor
In entry classes, we have no access to the current project or user. However, it's discouraged to use feature flags without [an actor](../feature_flags/_index.md#feature-actors).
To solve this problem, we have three options:
1. Use `Feature.enabled?(:feature_flag, Feature.current_request)` (see the sketch after this list).
1. Use `Config::FeatureFlags.enabled?(:feature_flag)`.
1. Do not use feature flags in entry classes and use them in other parts of the code.
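A sketch of the first option; the flag name and the transformation behind it are hypothetical:

```ruby
def value
  return config unless Feature.enabled?(:my_new_keyword, Feature.current_request)

  transformed_config # hypothetical behavior behind the flag
end
```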
## Testing and Validation
When adding or modifying an entry, the corresponding spec file must be either added or updated.
In addition, to have fully integrated coverage, it's also important to add or modify tests in the `spec/lib/gitlab/ci/yaml_processor_spec.rb` file or
in the files in the `spec/lib/gitlab/ci/yaml_processor/test_cases/*` directory.
# CI/CD development guidelines
CI/CD pipelines are a fundamental part of GitLab development and deployment processes, automating tasks like building,
testing, and deploying code changes.
When developing features that interact with or trigger pipelines, it's essential to consider the broader implications
these actions have on the system's security and operational integrity.
This document provides guidelines to help you develop features that use CI/CD pipelines securely and effectively.
It emphasizes the importance of understanding the implications of running pipelines, managing authentication tokens
responsibly, and integrating security considerations from the beginning of the development process.
## General guidelines
- **Recognize pipelines as write operations**: Triggering a pipeline is a write operation that changes the system's
state. The write operation can initiate deployments, run tests, or alter configurations. Treat pipeline triggers with the same caution
as other critical write operations to prevent unauthorized changes or misuse of the system.
- **Running a pipeline should be an explicit action**: Actions that create a pipeline in the user's context
should be designed so that it is clear to the user that a pipeline (or single job) is started when performing the action.
The user should be aware of the changes executed in the pipeline **before** they are executed.
- **Remote execution and isolation**: The CI/CD pipeline functions as a remote execution environment where jobs can
execute scripts performing a wide range of actions. Ensure that jobs are adequately isolated and do not unintentionally
expose sensitive data or systems.
- **Collaborate with AppSec and Verify teams**: Include [Application Security (AppSec)](https://handbook.gitlab.com/handbook/security/product-security/application-security/)
and [Verify](https://handbook.gitlab.com/handbook/engineering/development/ops/verify/) team members early in
the design process and when drafting proposals. Their expertise can help identify potential security risks and ensure
that security considerations are integrated into the feature from the outset. Additionally, involve them in the code
review process to benefit from their specialized knowledge in identifying vulnerabilities and ensuring compliance with
security standards.
- **Determine the pipeline actor**: When building features that trigger pipelines, it's crucial to consider which user
initiates the pipeline. You need to determine who should be the actor of the event. Is it an intentional pipeline
run where a user directly triggers the pipeline (for example by pushing changes to the repository or clicking the "Run pipeline"
button), or is it a pipeline run initiated by the GitLab system or a policy?
Avoid scenarios in which the user creating the pipeline is not the author of the changes. If the users are not the same,
there is a risk that the author of the changes can run code in the context of the pipeline user.
Understanding the actor helps in managing permissions and ensuring that the pipeline runs in the correct execution context.
- **Variability of job execution users**: The user running a specific job might not be the same user who created the pipeline.
While in the majority of cases the user is the same, there are scenarios where the user of the job changes, for example when
running a manual job or retrying a job. This variability can affect permissions and access levels in the job's execution
context. Always account for this possibility when developing features that use the CI/CD job token (`CI_JOB_TOKEN`). Consider whether the job
user should change and who the actor of the action is.
- **Restrict scope of the operation**: When enabling a new endpoint for use with the CI/CD job token, strongly consider limiting
operations to the same job, pipeline, or project to enhance security. Strongly prefer the smaller scope (job) over larger
scope (project). For example, if allowing access to pipeline data, restrict it to the current pipeline to prevent
cross-project or cross-pipeline data exposure. Evaluate whether cross-project or cross-pipeline access is truly necessary
for the feature; limiting the scope reduces security risks.
- **Monitor and audit activities**: Ensure that the feature is auditable and monitorable. Introduce detailed logs of events
that would trigger a pipeline, including the pipeline user, the actor initiating the action, and event details.
## Other guides
Development guides that are specific to CI/CD are listed here:
- If you are creating new CI/CD templates, read [the development guide for GitLab CI/CD templates](templates.md).
- If you are adding a new keyword or changing the CI schema, refer to the following guides:
- [The CI configuration guide](configuration.md)
- [The CI schema guide](schema.md)
- If you are making a change to core CI/CD process such as linting or pipeline creation, refer to the
[CI/CD testing guide](testing.md)
See the [CI/CD YAML reference documentation guide](cicd_reference_documentation_guide.md)
to learn how to update the [CI/CD YAML syntax reference page](../../ci/yaml/_index.md).
## Metrics
This section describes the dashboards and metrics that can be used by engineers during development, change validation and incident investigation.
- Dashboards for all GitLab teams are available in the [stage groups dashboards folder](https://dashboards.gitlab.net/dashboards/f/stage-groups/stage-groups).
You can search for the team that owns the feature category you are interested in.
- The [Pipeline Execution error budget dashboard](https://dashboards.gitlab.net/d/stage-groups-pipeline_execution) contains other useful metrics about pipeline
creation and job execution.
- [Production logs](https://log.gprd.gitlab.net/app/discover) also offer a lot of useful information that can be searched and aggregated in Kibana.
- The [Pipeline creation dashboard](https://log.gprd.gitlab.net/app/r/s/r5Owf) provides useful breakdowns
of the steps involved in the pipeline creation.
This dashboard only contains data on "slow pipelines": those that take longer to be created or that contain many jobs.
It's similar to a SQL slow query log.
- The [CI partitioning dashboard](https://dashboards.gitlab.net/d/ci-partitioning-main/ci-partitioning3a-ci-data-partitions-tracking) contains information about the current partition number, partition sizes, vacuuming, and other database metrics.
## Examples of CI/CD usage
We maintain a [`ci-sample-projects`](https://gitlab.com/gitlab-org/ci-sample-projects) group, with projects that showcase
examples of `.gitlab-ci.yml` for different use cases of GitLab CI/CD. They also cover specific syntax that could
be used for different scenarios.
## CI Architecture overview
The following is a simplified diagram of the CI architecture. Some details are left out to focus on
the main components.

<!-- Editable diagram available at https://app.diagrams.net/#G1LFl-KW4fgpBPzz8VIH9rsOlAH4t0xwKj -->
On the left side we have the various events that can trigger a pipeline (initiated by a user or automation):
- A `git push` is the most common event that triggers a pipeline.
- The [Web API](../../api/pipelines.md#create-a-new-pipeline).
- A user selecting the "Run pipeline" button in the UI.
- When a [merge request is created or updated](../../ci/pipelines/merge_request_pipelines.md).
- When an MR is added to a [Merge Train](../../ci/pipelines/merge_trains.md).
- A [scheduled pipeline](../../ci/pipelines/schedules.md).
- When a project is [subscribed to an upstream project](../../ci/pipelines/_index.md#trigger-a-pipeline-when-an-upstream-project-is-rebuilt-deprecated).
- When [Auto DevOps](../../topics/autodevops/_index.md) is enabled.
- When GitHub integration is used with [external pull requests](../../ci/ci_cd_for_external_repos/_index.md#pipelines-for-external-pull-requests).
- When an upstream pipeline contains a [bridge job](../../ci/yaml/_index.md#trigger) which triggers a downstream pipeline.
Triggering any of these events invokes the [`CreatePipelineService`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/ci/create_pipeline_service.rb)
which takes as input event data and the user triggering it, then attempts to create a pipeline.
The `CreatePipelineService` relies heavily on the [`YAML Processor`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/yaml_processor.rb)
component, which is responsible for taking a YAML blob as input and returning the abstract data structure of a
pipeline (including stages and all jobs). This component also validates the structure of the YAML while
processing it, and returns any syntax or semantic errors. The `YAML Processor` component is where we define
[all the keywords](../../ci/yaml/_index.md) available to structure a pipeline.
The `CreatePipelineService` receives the abstract data structure returned by the `YAML Processor`
and converts it into persisted models (like pipeline, stages, and jobs). After that, the pipeline is ready
to be processed. Processing a pipeline means running the jobs in order of execution (stage or `needs`)
until one of the following occurs:
- All expected jobs have been executed.
- Failures interrupt the pipeline execution.
The component that processes a pipeline is [`ProcessPipelineService`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/ci/process_pipeline_service.rb),
which is responsible for moving all the pipeline's jobs to a completed state. When a pipeline is created, all its
jobs are initially in the `created` state. This service looks at which jobs in the `created` state are eligible
to be processed based on the pipeline structure. Then it moves them into the `pending` state, which means
they can now [be picked up by a runner](#job-scheduling). After a job has been executed it can complete
successfully or fail. Each status transition for a job within a pipeline triggers this service again, which
looks for the next jobs to be transitioned towards completion. While doing that, `ProcessPipelineService`
updates the status of jobs, stages and the overall pipeline.
On the right side of the diagram we have a list of [runners](../../ci/runners/_index.md)
connected to the GitLab instance. These can be instance runners, group runners, or project runners.
The communication between runners and the Rails server occurs through a set of API endpoints, grouped as
the `Runner API Gateway`.
We can register, delete, and verify runners, which also causes read/write queries to the database. After a runner is connected,
it keeps asking for the next job to execute. This invokes the [`RegisterJobService`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/ci/register_job_service.rb),
which picks the next job and assigns it to the runner. At this point the job transitions to the
`running` state, which again triggers `ProcessPipelineService` due to the status change.
For more details, read [Job scheduling](#job-scheduling).
While a job is being executed, the runner sends logs back to the server, as well as any artifacts
that must be stored. A job may also depend on artifacts from previous jobs in order to run. In this
case the runner downloads them using a dedicated API endpoint.
Artifacts are stored in object storage, while metadata is kept in the database. An important example of artifacts
are reports (like JUnit, SAST, and DAST) which are parsed and rendered in the merge request.
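For example, a sketch of a pipeline where one job passes artifacts downstream and another uploads a JUnit report that is parsed for the merge request (the commands are placeholders):

```yaml
build:
  stage: build
  script:
    - make build  # Placeholder build command
  artifacts:
    paths:
      - dist/  # Stored in object storage and downloadable by later jobs

rspec:
  stage: test
  script:
    - make test  # Placeholder test command that writes rspec.xml
  artifacts:
    when: always
    reports:
      junit: rspec.xml  # Parsed and rendered in the merge request
```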
Job status transitions are not all automated. A user may run [manual jobs](../../ci/jobs/job_control.md#create-a-job-that-must-be-run-manually), cancel a pipeline, retry
specific failed jobs or the entire pipeline. Anything that
causes a job to change status triggers `ProcessPipelineService`, as it's responsible for
tracking the status of the entire pipeline.
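For example, a job that stays in the `manual` status and transitions only when a user triggers it:

```yaml
deploy-production:
  stage: deploy
  script:
    - echo "Deploying..."  # Placeholder deployment command
  when: manual
```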
A special type of job is the [bridge job](../../ci/yaml/_index.md#trigger) which is executed server-side
when transitioning to the `pending` state. This job is responsible for creating a downstream pipeline, such as
a multi-project or child pipeline. The workflow loop starts again
from the `CreatePipelineService` every time a downstream pipeline is triggered.
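For example, a sketch of bridge jobs that trigger a child pipeline and a multi-project pipeline (the file path and project path are hypothetical):

```yaml
trigger-child:
  stage: deploy
  trigger:
    include: path/to/child-pipeline.yml  # Hypothetical child pipeline definition in this repository

trigger-downstream:
  stage: deploy
  trigger:
    project: my-group/my-deployment-project  # Hypothetical downstream project
```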
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
You can watch a walkthrough of the architecture in [CI Backend Architectural Walkthrough](https://www.youtube.com/watch?v=ew4BwohS5OY).
## Job scheduling
When a pipeline is created, all its jobs are created at once for all stages, with an initial state of `created`. This makes it possible to visualize the full content of a pipeline.
A job with the `created` state isn't seen by the runner yet. To make it possible to assign a job to a runner, the job must transition first into the `pending` state, which can happen if:
1. The job is created in the very first stage of the pipeline.
1. The job requires a manual start and it has been triggered.
1. All jobs from the previous stage have completed successfully. In this case we transition all jobs from the next stage to `pending`.
1. The job specifies dependencies with `needs:` and all of the dependent jobs are completed.
1. The job has not been [dropped](#dropping-stuck-builds) because of its not-runnable state by [`Ci::PipelineCreation::DropNotRunnableBuildsService`](https://gitlab.com/gitlab-org/gitlab/-/blob/v16.0.4-ee/ee/app/services/ci/pipeline_creation/drop_not_runnable_builds_service.rb).
When the runner is connected, it requests the next `pending` job to run by polling the server continuously.
{{< alert type="note" >}}
API endpoints used by the runner to interact with GitLab are defined in [`lib/api/ci/runner.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/ci/runner.rb)
{{< /alert >}}
After the server receives the request it selects a `pending` job based on the [`Ci::RegisterJobService` algorithm](#ciregisterjobservice), then assigns and sends the job to the runner.
Once all jobs are completed for the current stage, the server "unlocks" all the jobs from the next stage by changing their state to `pending`. These can now be picked up by the scheduling algorithm when the runner requests new jobs. This continues until all stages are completed.
### Communication between runner and GitLab server
After the runner is [registered](https://docs.gitlab.com/runner/register/) using the registration token, the server knows what type of jobs it can execute. This depends on:
- The type of runner it is registered as:
  - an instance runner
  - a group runner
  - a project runner
- Any associated tags.
The runner initiates the communication by requesting jobs to execute with `POST /api/v4/jobs/request`. Although polling happens every few seconds, we leverage caching through HTTP headers to reduce the server-side workload if the job queue doesn't change.
This API endpoint runs [`Ci::RegisterJobService`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/services/ci/register_job_service.rb), which:
1. Picks the next job to run from the pool of `pending` jobs
1. Assigns it to the runner
1. Presents it to the runner via the API response
### `Ci::RegisterJobService`
This service uses three top-level queries to gather the majority of the jobs, and the query is selected based on the level at which the runner is registered:
- Select jobs for instance runner (instance-wide)
  - Uses a fair scheduling algorithm which prioritizes projects with fewer running builds
- Select jobs for group runner
- Select jobs for project runner
This list of jobs is then filtered further by matching the jobs' tags against the runner's tags.
{{< alert type="note" >}}
If a job has tags, the runner picks the job only if it matches **all** of the job's tags.
The runner may have more tags than defined for the job, but not vice-versa.
{{< /alert >}}
Finally, if the runner can only pick jobs that are tagged, all untagged jobs are filtered out.
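For example, a job defined like this is picked up only by a runner whose tags include at least `linux` and `docker`:

```yaml
build-image:
  tags:
    - linux
    - docker
  script:
    - echo "Building..."  # Placeholder command
```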
At this point we loop through remaining `pending` jobs and we try to assign the first job that the runner "can pick" based on additional policies. For example, runners marked as `protected` can only pick jobs that run against protected branches (such as production deployments).
As we increase the number of runners in the pool we also increase the chances of conflicts which would arise if assigning the same job to different runners. To prevent that we gracefully rescue conflict errors and assign the next job in the list.
### Dropping stuck builds
There are two ways of marking builds as "stuck" and dropping them:

1. When a build is created, [`Ci::PipelineCreation::DropNotRunnableBuildsService`](https://gitlab.com/gitlab-org/gitlab/-/blob/v16.0.4-ee/ee/app/services/ci/pipeline_creation/drop_not_runnable_builds_service.rb) checks for upfront known conditions that would make jobs not executable:
   - If there are not enough [compute minutes](#compute-quota) to run the build, the build is immediately dropped with `ci_quota_exceeded`.
   - [In the future](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/121761), if the project is not on the plan that available runners for the build require via `allowed_plans`, the build is immediately dropped with `no_matching_runner`.
1. If no runner is available to pick up a build, it is dropped by [`Ci::StuckBuilds::DropPendingService`](https://gitlab.com/gitlab-org/gitlab/-/blob/v16.0.4-ee/app/services/ci/stuck_builds/drop_pending_service.rb):
   - If a job is not picked up by a runner in 24 hours, it is automatically removed from
     the processing queue after that time.
   - If a pending job is **stuck** because there is no runner available that can process it, it is removed from the queue after 1 hour.
   - In both cases the job's status is changed to `failed` with an appropriate failure reason.
#### The reason behind this difference
The compute minutes quota mechanism is handled early, when the job is created, because the decision is constant most of the time:
once a project exceeds its limit, every subsequent job is affected until the next month starts.
The project owner can buy additional minutes, but that is a manual action the project needs to take.
The same mechanism will be used for `allowed_plans` [soon](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/121761).
If the project is not on the required plan and a job targets such a runner,
the job fails constantly until the project owner changes the configuration or upgrades the namespace to the required plan.
These two mechanisms are also very specific to GitLab SaaS and, at that scale, quite expensive to evaluate.
Checking before the job even transitions to `pending`, and failing early, makes sense here.
Why don't we handle other cases early and drop pending jobs immediately?
In some cases, a job is pending only because the runner is slow to pick up jobs.
This is not something that GitLab can know.
Depending on the runner's configuration and capacity, and the size of the queue in GitLab, a job may be picked up immediately or may need to wait.
There may also be other reasons:

- You are handling runner maintenance and the runner is not available for a while.
- You are updating the configuration and, by mistake, you've messed up the tagging or the protected flag (or, in the case of our SaaS instance runners, you've assigned a wrong cost factor or `allowed_plans` configuration).

All of these are problems that may be temporary, are mostly not expected to happen, and are expected to be detected and fixed early.
We definitely don't want to drop jobs immediately when one of these conditions occurs.
Dropping a job only because a runner is at capacity, or because of a temporary unavailability or configuration mistake, would be very harmful to users.
## The definition of "Job" in GitLab CI/CD
"Job" in GitLab CI context refers a task to drive Continuous Integration, Delivery and Deployment.
Typically, a pipeline contains multiple stages, and a stage contains multiple jobs.
In Active Record modeling, Job is defined as `CommitStatus` class.
On top of that, we have the following types of jobs:
- `Ci::Build` ... The job to be executed by runners.
- `Ci::Bridge` ... The job to trigger a downstream pipeline.
- `GenericCommitStatus` ... The job to be executed in an external CI/CD system, for example Jenkins.
When you use the "Job" terminology in the codebase, readers
assume that the class or object could be any of the types above.
If you specifically mean the `Ci::Build` class, do not name the object or class
"job", as this could cause confusion. In documentation,
we should use "Job" in general, instead of "Build".
We have a few inconsistencies in our codebase that should be refactored.
For example, `CommitStatus` should be `Ci::Job` and `Ci::JobArtifact` should be `Ci::BuildArtifact`.
See [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/16111) for the full refactoring plan.
## Compute quota
{{< history >}}
- [Renamed](https://gitlab.com/groups/gitlab-com/-/epics/2150) from "CI/CD minutes" to "compute quota" and "compute minutes" in GitLab 16.1.
{{< /history >}}
This diagram shows how the [Compute quota](../../ci/pipelines/compute_minutes.md)
feature and its components work.

<!-- Editable diagram available at https://app.diagrams.net/?libs=general;flowchart#G1XjLPvJXbzMofrC3eKRyDEk95clV6ypOb -->
Watch a walkthrough of this feature in detail in the video below.
<div class="video-fallback">
See the video: <a href="https://www.youtube.com/watch?v=NmdWRGT8kZg">CI/CD minutes - architectural overview</a>.
</div>
<figure class="video-container">
<iframe src="https://www.youtube-nocookie.com/embed/NmdWRGT8kZg" frameborder="0" allowfullscreen> </iframe>
</figure>
# Development guide for GitLab CI/CD templates (Deprecated)
{{< alert type="note" >}}
With the introduction of the [CI/CD Catalog](../../ci/components/_index.md#cicd-catalog),
GitLab is no longer accepting contributions of new CI/CD templates to the codebase. Instead,
we encourage team members to create [CI/CD components](../../ci/components/_index.md)
for the catalog. This transition enhances the modularity and maintainability of our
shared CI/CD resources, and avoids the complexities of contributing new CI/CD templates.
If you need to update an existing template, you must also update the matching CI/CD component.
If no component exists that matches the CI/CD template yet, consider [creating the matching component](components.md).
This ensures that template and component functionality remain in sync, aligning with
our new development practices.
{{< /alert >}}
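For context, a CI/CD component from the catalog is consumed with the `include:component` keyword, for example (the component path and version here are hypothetical):

```yaml
include:
  - component: gitlab.com/my-org/my-components/sast@1.0.0
```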
This document explains how to develop [GitLab CI/CD templates](../../ci/examples/_index.md#cicd-templates).
## Requirements for CI/CD templates
Before submitting a merge request with a new or updated CI/CD template, you must:
- Place the template in the correct [directory](#template-directories).
- Follow the [CI/CD template authoring guidelines](#template-authoring-guidelines).
- Name the template following the `*.gitlab-ci.yml` format.
- Use valid [`.gitlab-ci.yml` syntax](../../ci/yaml/_index.md). Verify it's valid
with the [CI/CD lint tool](../../ci/yaml/lint.md).
- [Add template metrics](#add-metrics).
- Include [a changelog](../changelog.md) if the merge request introduces a user-facing change.
- Follow the [template review process](#contribute-cicd-template-merge-requests).
- (Optional but highly recommended) Test the template in an example GitLab project
that reviewers can access. Reviewers might not be able to create the data or configuration
that the template requires, so an example project helps the reviewers ensure the
template is correct. The example project pipeline should succeed before submitting
the merge request for review.
## Template directories
All template files are saved in `lib/gitlab/ci/templates`. Save general templates
in this directory, but certain template types have a specific directory reserved for
them. The ability to [select a template in the new file UI](#make-sure-the-new-template-can-be-selected-in-ui)
is determined by the directory it is in:
| Subdirectory | Selectable in UI | Template type |
|----------------|------------------|---------------|
| `/*` (root) | Yes | General templates. |
| `/AWS/*` | No | Templates related to Cloud Deployment (AWS). |
| `/Jobs/*` | No | Templates related to Auto DevOps. |
| `/Pages/*` | Yes | Sample templates for using Static site generators with GitLab Pages. |
| `/Security/*` | Yes | Templates related to Security scanners. |
| `/Terraform/*` | No | Templates related to infrastructure as Code (Terraform). |
| `/Verify/*` | Yes | Templates related to Testing features. |
| `/Workflows/*` | No | Sample templates for using the `workflow:` keyword. |
## Template authoring guidelines
Use the following guidelines to ensure your template submission follows standards:
### Template types
There are two types of templates, and the type impacts how the template should be written
and used. The style of a template should match one of these two types:
A **pipeline template** provides an end-to-end CI/CD workflow that matches a project's
structure, language, and so on. It usually should be used by itself in projects that
don't have any other `.gitlab-ci.yml` files.
When authoring pipeline templates:
- Place any [global keywords](../../ci/yaml/_index.md#global-keywords) like `image`
or `before_script` in a [`default`](../../ci/yaml/_index.md#default)
section at the top of the template.
- Note clearly in the [code comments](#explain-the-template-with-comments) whether or not the
template is designed to be used with the `include` keyword in an existing
`.gitlab-ci.yml` file.
A **job template** provides specific jobs that can be added to an existing CI/CD
workflow to accomplish specific tasks. It should usually be added to
an existing `.gitlab-ci.yml` file by using the [`include`](../../ci/yaml/_index.md#include)
keyword. You can also copy and paste the contents into an existing `.gitlab-ci.yml` file.
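For example, a job template is typically consumed like this (using an existing security template for illustration):

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml
```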
Configure job templates so that users can add them to their current pipeline with very
few or no modifications. They must be configured to reduce the risk of conflicting with
other pipeline configuration.
When authoring job templates:
- Do not use [global](../../ci/yaml/_index.md#global-keywords) or [`default`](../../ci/yaml/_index.md#default)
keywords. When a root `.gitlab-ci.yml` includes a template, global or default keywords
might be overridden and cause unexpected behavior. If a job template requires a
specific stage, explain in the code comments that users must manually add the stage
to the main `.gitlab-ci.yml` configuration.
- Note clearly in [code comments](#explain-the-template-with-comments) that the template
is designed to be used with the `include` keyword or copied into an existing configuration.
- Consider [versioning](#versioning) the template with latest and stable versions
to avoid [backward compatibility](#backward-compatibility) problems.
Maintenance of this type of template is more complex, because changes to templates
imported with `include` can break pipelines for all projects using the template.
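A minimal sketch of a job template that follows these guidelines (the template and job names are hypothetical):

```yaml
# This template adds a `my-scan` job to your pipeline.
# Add it to an existing `.gitlab-ci.yml` file with the `include:` keyword.
# Requirement: your configuration must define a `test` stage.
my-scan:
  stage: test
  script:
    - echo "Scanning..."  # Placeholder command
```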
Additional points to keep in mind when authoring templates:
| Template design points | Pipeline templates | Job templates |
|------------------------------------------------------|--------------------|---------------|
| Can use global keywords, including `stages`. | Yes | No |
| Can define jobs. | Yes | Yes |
| Can be selected in the new file UI | Yes | No |
| Can include other job templates with `include` | Yes | No |
| Can include other pipeline templates with `include`. | No | No |
### Syntax guidelines
To make templates easier to follow, templates should all use clear syntax styles,
with a consistent format.
The `before_script`, `script`, and `after_script` keywords of every job are linted
using [ShellCheck](https://www.shellcheck.net/) and should follow the
[Shell scripting standards and style guidelines](../shell_scripting_guide/_index.md)
as much as possible.
ShellCheck assumes that the script is designed to run using [Bash](https://www.gnu.org/software/bash/).
Templates which use scripts for shells that aren't compatible with the Bash ShellCheck
rules can be excluded from ShellCheck linting. To exclude a script, add it to the
`EXCLUDED_TEMPLATES` list in [`scripts/lint_templates_bash.rb`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/scripts/lint_templates_bash.rb).
#### Do not hardcode the default branch
Use [`$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH`](../../ci/variables/predefined_variables.md)
instead of a hardcoded `main` branch, and never use `master`:
```yaml
job:
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  script:
    - echo "example job"
```
#### Use `rules` instead of `only` or `except`
Avoid using [`only` or `except`](../../ci/yaml/deprecated_keywords.md#only--except) if possible.
`only` and `except` are no longer being developed, and [`rules`](../../ci/yaml/_index.md#rules)
is now the preferred syntax:
```yaml
job2:
  script:
    - echo
  rules:
    - if: $CI_COMMIT_BRANCH
```
#### Break up long commands
If a command is very long, or has many command line flags, like `-o` or `--option`:
- Split these up into a multi-line command to make it easier to see every part of the command.
- Use the long name for the flags, when available.
For example, with a long command with short CLI flags like
`docker run -e SOURCE_CODE="$PWD" -v "$PWD":/code -v /var/run/docker.sock:/var/run/docker.sock "$CODE_QUALITY_IMAGE" /code`:
```yaml
job1:
  script:
    - docker run
        --env SOURCE_CODE="$PWD"
        --volume "$PWD":/code
        --volume /var/run/docker.sock:/var/run/docker.sock
        "$CODE_QUALITY_IMAGE" /code
```
You can also use the `|` and `>` YAML operators to [split up multi-line commands](../../ci/yaml/script.md#split-long-commands).
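For example, a sketch using the `|` operator to keep a multi-line shell snippet readable:

```yaml
job1:
  script:
    - |
      if [ "$CI_COMMIT_BRANCH" = "$CI_DEFAULT_BRANCH" ]; then
        echo "Running on the default branch"
      fi
```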
### Explain the template with comments
You can access template contents from the new file menu, and this might be the only
place users see information about the template. It's important to clearly document
the behavior of the template directly in the template itself.
The following guidelines cover the basic comments expected in all template submissions.
Add additional comments as needed if you think the comments can help users or
[template reviewers](#contribute-cicd-template-merge-requests).
#### Explain requirements and expectations
Give the details on how to use the template in `#` comments at the top of the file.
This includes:
- Repository/project requirements.
- Expected behavior.
- Any places that must be edited by users before using the template.
- If the template should be used by copy pasting it into a configuration file, or
by using it with the `include` keyword in an existing pipeline.
- If any variables must be saved in the project's CI/CD settings.
```yaml
# Use this template to publish an application that uses the ABC server.
# You can copy and paste this template into a new `.gitlab-ci.yml` file.
# You should not add this template to an existing `.gitlab-ci.yml` file by using the `include:` keyword.
#
# Requirements:
# - An ABC project with content saved in /content and tests in /test
# - A CI/CD variable named ABC-PASSWORD saved in the project CI/CD settings. The value
# should be the password used to deploy to your ABC server.
# - An ABC server configured to listen on port 12345.
#
# You must change the URL on line 123 to point to your ABC server and port.
#
# For more information, see https://gitlab.com/example/abcserver/README.md
job1:
  ...
```
#### Explain how variables affect template behavior
If the template uses variables, explain them in `#` comments where they are first
defined. You can skip the comment when the variable is trivially clear:
```yaml
variables: # Good to have a comment here, for example:
  TEST_CODE_PATH: <path/to/code> # Update this variable with the relative path to your Ruby specs

job1:
  variables:
    ERROR_MESSAGE: "The $TEST_CODE_PATH path is invalid" # (No need for a comment here, it's already clear)
  script:
    - echo ${ERROR_MESSAGE}
```
#### Use all-caps naming for non-local variables
If you are expecting a variable to be provided via the CI/CD settings, or via the
`variables` keyword, that variable must use all-caps naming with underscores (`_`)
separating words.
```yaml
.with_login:
  before_script:
    # SECRET_TOKEN should be provided via the project settings
    - echo "$SECRET_TOKEN" | docker login -u my-user --password-stdin my-registry
```
Lowercase naming can optionally be used for variables which are defined locally in
one of the `script` keywords:
```yaml
job1:
  script:
    - response="$(curl "https://example.com/json")"
    - message="$(echo "$response" | jq -r .message)"
    - 'echo "Server responded with: $message"'
```
### Backward compatibility
A template might be dynamically included with the `include:template:` keyword. If
you make a change to an existing template, you **must** make sure that it doesn't break
CI/CD in existing projects.
For example, changing a job name in a template could break pipelines in an existing project.
In this example, a template named `Performance.gitlab-ci.yml` has the following content:
```yaml
performance:
  image: registry.gitlab.com/gitlab-org/verify-tools/performance:v0.1.0
  script: ./performance-test $TARGET_URL
```
and users include this template, passing an argument to the `performance` job.
This can be done by specifying the CI/CD variable `TARGET_URL` in _their_ `.gitlab-ci.yml`:
```yaml
include:
  template: Performance.gitlab-ci.yml

performance:
  variables:
    TARGET_URL: https://awesome-app.com
```
If the job name `performance` in the template is renamed to `browser-performance`,
the user's `.gitlab-ci.yml` immediately causes a lint error, because there
is no longer a job named `performance` in the included template. Therefore,
users have to fix their `.gitlab-ci.yml`, which could disrupt their workflow.
Read the [versioning](#versioning) section to learn how to introduce breaking changes safely.
## Versioning
To introduce a breaking change without affecting the existing projects that depend on
the current template, use [stable](#stable-version) and [latest](#latest-version) versioning.
Stable templates usually only receive breaking changes in major version releases, while
latest templates can receive breaking changes in any release. In major release milestones,
the latest template is made the new stable template (and the latest template might be deleted).
Adding a latest template is safe, but comes with a maintenance burden:
- GitLab has to choose a DRI to overwrite the stable template with the contents of the
latest template at the next major release of GitLab. The DRI is responsible for
supporting users who have trouble with the change.
- When we make a new non-breaking change, both the stable and latest templates must be updated
to match, as much as possible.
- A latest template could remain for longer than planned because many users could
directly depend on it continuing to exist.
Before adding a new latest template, see if the change can be made to the stable
template instead, even if it's a breaking change. If the template is intended for copy-paste
usage only, it might be possible to directly change the stable version. Before changing
the stable template with a breaking change in a minor milestone, make sure:
- It's a [pipeline template](#template-types) and it has a [code comment](#explain-requirements-and-expectations)
explaining that it's not designed to be used with the `include` keyword.
- The [CI/CD template usage metrics](#add-metrics) don't show any usage. If the metrics
show zero usage for the template, the template is not actively being used with `include`.
### Stable version
A stable CI/CD template is a template that only introduces breaking changes in major
release milestones. Name the stable version of a template as `<template-name>.gitlab-ci.yml`,
for example `Jobs/Deploy.gitlab-ci.yml`.
You can make a new stable template by copying [the latest template](#latest-version)
available in a major milestone release of GitLab like `15.0`. All breaking changes must be announced
on the [Deprecations and removals by version](../../update/deprecations.md) page.
You can change a stable template version in a minor GitLab release like `15.1` if:
- The change is not a [breaking change](#backward-compatibility).
- The change is ported to [the latest template](#latest-version), if one exists.
### Latest version
Templates marked as `latest` can be updated in any release, even with
[breaking changes](#backward-compatibility). Add `.latest` to the template name if
it's considered the latest version, for example `Jobs/Deploy.latest.gitlab-ci.yml`.
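For example, users who explicitly want the newest behavior opt in by including the `.latest` file:

```yaml
include:
  - template: Jobs/Deploy.latest.gitlab-ci.yml
```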
When you introduce [a breaking change](#backward-compatibility),
you **must** test and document [the upgrade path](#verify-breaking-changes).
In general, we should not promote the latest template as the best option, as it could surprise users with unexpected problems.
If the `latest` template does not exist yet, you can copy [the stable template](#stable-version).
### How to include an older stable template
Users may want to use an older [stable template](#stable-version) that is not bundled
in the current GitLab package. For example, the stable templates in GitLab 15.0 and
GitLab 16.0 could be so different that a user wants to continue using the GitLab 15.0
template even after upgrading to GitLab 16.0.
You can add a note in the template or in documentation explaining how to use `include:remote`
to include older template versions. If other templates are included with `include: template`,
they can be combined with the `include: remote`:
```yaml
# To use the v13 stable template, which is not included in v14, fetch the specific
# template from the remote template repository with the `include:remote:` keyword.
# If you fetch from the GitLab canonical project, use the following URL format:
# https://gitlab.com/gitlab-org/gitlab/-/raw/<version>/lib/gitlab/ci/templates/<template-name>
include:
  - template: Auto-DevOps.gitlab-ci.yml
  - remote: https://gitlab.com/gitlab-org/gitlab/-/raw/v13.0.1-ee/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml
```
### Further reading
There is an [open issue](https://gitlab.com/gitlab-org/gitlab/-/issues/17716) about
introducing versioning concepts in GitLab CI/CD templates. You can check that issue to
follow the progress.
## Testing
Each CI/CD template must be tested to make sure that it's safe to be published.
### Manual QA
It's always good practice to test the template in a minimal demo project.
To do so, follow these steps:
1. Create a public sample project on <https://gitlab.com>.
1. Add a `.gitlab-ci.yml` to the project with the proposed template.
1. Run pipelines and make sure that everything runs properly, in all possible cases
(merge request pipelines, schedules, and so on).
1. Link to the project in the description of the merge request that is adding a new template.
This is useful information for reviewers to make sure the template is safe to be merged.
### Make sure the new template can be selected in UI
Templates located under some directories are also [selectable in the **New file** UI](#template-directories).
When you add a template into one of those directories, make sure that it correctly appears in the dropdown list:

### Write an RSpec test
You should write an RSpec test to make sure that pipeline jobs are generated correctly:
1. Add a test file at `spec/lib/gitlab/ci/templates/<template-category>/<template-name>_spec.rb`
1. Test that pipeline jobs are properly created via `Ci::CreatePipelineService`.
### Verify breaking changes
When you introduce a breaking change to [a `latest` template](#latest-version),
you must:
1. Test the upgrade path from [the stable template](#stable-version).
1. Verify what kind of errors users encounter.
1. Document it as a troubleshooting guide.
This information is important for users when [a stable template](#stable-version)
is updated in a major version GitLab release.
### Add metrics
Every CI/CD template must also have metrics defined to track their use. The CI/CD template monthly usage report
can be found in [Sisense (GitLab team members only)](https://app.periscopedata.com/app/gitlab/785953/Pipeline-Authoring-Dashboard?widget=13440051&udv=0).
Select a template to see the graph for that single template.
To add a metric definition for a new template:
1. Install and start the [GitLab GDK](https://gitlab.com/gitlab-org/gitlab-development-kit#installation).
1. In the `gitlab` directory in your GDK, check out the branch that contains the new template.
1. Add the new template event name to the weekly and monthly CI/CD template total count metrics:
   - [`config/metrics/counts_7d/20210216184557_ci_templates_total_unique_counts_weekly.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/metrics/counts_7d/20210216184557_ci_templates_total_unique_counts_weekly.yml)
   - [`config/metrics/counts_28d/20210216184559_ci_templates_total_unique_counts_monthly.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/metrics/counts_28d/20210216184559_ci_templates_total_unique_counts_monthly.yml)
1. Use the same event name as above as the last argument in the following command to
   [add new metric definitions](../internal_analytics/metrics/metrics_instrumentation.md#create-a-new-metric-instrumentation-class):

   ```shell
   bundle exec rails generate gitlab:usage_metric_definition:redis_hll ci_templates <template_metric_event_name>
   ```

   The output should look like:

   ```shell
   $ bundle exec rails generate gitlab:usage_metric_definition:redis_hll ci_templates p_ci_templates_my_template_name
         create  config/metrics/counts_7d/20220120073740_p_ci_templates_my_template_name_weekly.yml
         create  config/metrics/counts_28d/20220120073746_p_ci_templates_my_template_name_monthly.yml
   ```

1. Edit both newly generated files as follows:
   - `name:` and `performance_indicator_type:`: Delete (not needed).
   - `introduced_by_url:`: The URL of the MR adding the template.
   - `data_source:`: Set to `redis_hll`.
   - `description`: Add a short description of what this metric counts, for example: `Count of pipelines using the latest Auto Deploy template`.
   - `product_*`: Set to [section, stage, group, and feature category](https://handbook.gitlab.com/handbook/product/categories/#devops-stages)
     as per the [metrics dictionary guide](../internal_analytics/metrics/metrics_dictionary.md#metrics-definition-and-validation).
     If you are unsure what to use for these keywords, you can ask for help in the merge request.
   - Add the following to the end of each file:

     ```yaml
     options:
       events:
         - p_ci_templates_my_template_name
     ```
1. Commit and push the changes.
For example, these are the metrics configuration files for the
[5 Minute Production App template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/5-Minute-Production-App.gitlab-ci.yml):
- The weekly and monthly metrics definitions:
- [`config/metrics/counts_7d/20210901223501_p_ci_templates_5_minute_production_app_weekly.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/1a6eceff3914f240864b2ca15ae2dc076ea67bf6/config/metrics/counts_7d/20210216184515_p_ci_templates_5_min_production_app_weekly.yml)
- [`config/metrics/counts_28d/20210901223505_p_ci_templates_5_minute_production_app_monthly.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/metrics/counts_28d/20210216184517_p_ci_templates_5_min_production_app_monthly.yml)
- The metrics count totals:
- [`config/metrics/counts_7d/20210216184557_ci_templates_total_unique_counts_weekly.yml#L19`](https://gitlab.com/gitlab-org/gitlab/-/blob/4e01ef2b094763943348655ef77008aba7a052ae/config/metrics/counts_7d/20210216184557_ci_templates_total_unique_counts_weekly.yml#L19)
- [`config/metrics/counts_28d/20210216184559_ci_templates_total_unique_counts_monthly.yml#L19`](https://gitlab.com/gitlab-org/gitlab/-/blob/4e01ef2b094763943348655ef77008aba7a052ae/config/metrics/counts_28d/20210216184559_ci_templates_total_unique_counts_monthly.yml#L19)
## Security
A template could contain malicious code. For example, a template that contains the `export` shell command in a job
might accidentally expose secret project CI/CD variables in a job log.
If you're unsure whether a template is secure, ask security experts for cross-validation.
## Contribute CI/CD template merge requests
After your CI/CD template MR is created and labeled with `ci::templates`, DangerBot
suggests one reviewer and one maintainer that can review your code. When your merge
request is ready for review, [mention](../../user/discussions/_index.md#mentions)
the reviewer and ask them to review your CI/CD template changes. See details in the merge request that added
[a DangerBot task for CI/CD template MRs](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/44688).
---
stage: Verify
group: Pipeline Authoring
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Testing guide for CI/CD Rails application code
breadcrumbs:
- doc
- development
- cicd
---
This document contains details for testing CI/CD application code.
## Backend
### Integration specs
The CI/CD specs include informal integration specs for the core CI/CD processes.
#### Linting
Integration specs for linting are kept in `spec/lib/gitlab/ci/yaml_processor_spec.rb` and
`spec/lib/gitlab/ci/yaml_processor/test_cases/`. Add any new specs to the
`test_cases/` directory.
#### Pipeline creation
Integration specs for pipeline creation are kept in `spec/services/ci/create_pipeline_service_spec.rb` and
`spec/services/ci/create_pipeline_service/`. Add new specs to the
`create_pipeline_service/` directory.
#### Pipeline processing
`spec/services/ci/pipeline_processing/atomic_processing_service_spec.rb` runs integration specs for pipeline processing.
To add a new integration spec, add a YAML CI/CD configuration file to `spec/services/ci/pipeline_processing/test_cases`.
The new file is run automatically with `atomic_processing_service_spec.rb`.
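As a rough sketch, a test case is a plain YAML file that describes a pipeline configuration together with the statuses you expect as the pipeline progresses. The key names below are assumptions for illustration; copy an existing file in `test_cases/` for the exact schema:
```yaml
# spec/services/ci/pipeline_processing/test_cases/my_case.yml (hypothetical)
# Key names are assumptions based on the files in this directory;
# mirror an existing test case for the exact schema.
config:
  test_job:
    script: exit 0

init:
  expect:
    pipeline: pending
    stages:
      test: pending
    jobs:
      test_job: pending

transitions:
  - event: success
    jobs: [test_job]
    expect:
      pipeline: success
      stages:
        test: success
      jobs:
        test_job: success
```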
## Frontend
### Fixtures
The following files contain frontend fixtures for CI/CD endpoints used in frontend unit tests:
- `spec/frontend/fixtures/pipelines.rb` - General pipeline fixtures
- `spec/frontend/fixtures/pipeline_create.rb` - Pipeline creation fixtures
- `spec/frontend/fixtures/pipeline_details.rb` - Pipeline details fixtures
- `spec/frontend/fixtures/pipeline_header.rb` - Pipeline header fixtures
- `spec/frontend/fixtures/pipeline_schedules.rb` - Pipeline schedule fixtures
These fixtures provide mock API responses for consistent testing of CI/CD frontend components.
### Unit tests
Frontend unit tests for CI/CD components are located in `spec/frontend/ci`. These tests verify proper rendering, interactions, and state management for pipeline visualization, job execution, scheduling, and status reporting components.
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Pipeline Wizard
breadcrumbs:
- doc
- development
- cicd
---
The Pipeline Wizard is a Vue frontend component that helps users create a
pipeline by using input fields. The types of input fields and the form of the final
pipeline are configured by a YAML template.
The Pipeline Wizard expects a single template file that configures the user
flow. The wizard is agnostic about the contents of the file,
so you can use it to display a range of different flows. For example, there
could be one template file for static sites,
one for Docker images, one for mobile apps, and so on. As a first iteration,
these templates are part of the GitLab source code.
The template file defines multiple steps. The last step shown to the user is always
the commit, and is not part of the template definition. An ideal user experience
consists of 2-3 steps, for a total of 3-4 steps visible to the user.
## Usage Example
### Vue Component
```vue
<!-- ~/my_feature/my_component.vue -->
<script>
import PipelineWizard from '~/pipeline_wizard/pipeline_wizard.vue';
import template from '~/pipeline_wizard/templates/my_template.yml';
export default {
name: "MyComponent",
components: { PipelineWizard },
data() {
return { template }
},
methods: {
onDone() {
// redirect
}
}
}
</script>
<template>
<pipeline-wizard :template="template"
project-path="foo/bar"
default-branch="main"
@done="onDone" />
</template>
```
### Template
```yaml
# ~/pipeline_wizard/templates/my_template.yml
id: gitlab/my-template
title: Set up my specific tech pipeline
description: Here are two or three introductory sentences that help the user understand what this wizard is going to set up.
steps:
# Step 1
- inputs:
# First input widget
- label: Select your build image
description: A Docker image that we can use to build your image
placeholder: node:lts
widget: text
target: $BUILD_IMAGE
required: true
pattern: '^(?:(?=[^:\/]{1,253})(?!-)[a-zA-Z0-9-]{1,63}(?<!-)(?:\.(?!-)[a-zA-Z0-9-]{1,63}(?<!-))*(?::[0-9]{1,5})?\/)?((?![._-])(?:[a-z0-9._-]*)(?<![._-])(?:\/(?![._-])[a-z0-9._-]*(?<![._-]))*)(?::(?![.-])[a-zA-Z0-9_.-]{1,128})?$'
invalid-feedback: Please enter a valid docker image
# Second input widget
- label: Installation Steps
description: "Enter the steps that need to run to set up a local build
environment, for example installing dependencies."
placeholder: npm ci
widget: list
target: $INSTALLATION_STEPS
# This is the template that is copied to the final pipeline file and updated
# with the values entered by the user. Comments are copied as-is.
template:
my-job:
# The Docker image that will be used to build your app
image: $BUILD_IMAGE
before_script: $INSTALLATION_STEPS
artifacts:
paths:
- foo
# Step 2
- inputs:
# This is the only input widget for this step
- label: Installation Steps
description: "Enter the steps that need to run to set up a local build
environment, for example installing dependencies."
placeholder: npm ci
widget: list
target: $INSTALLATION_STEPS
template:
# Functions that should be executed before the build script runs
before_script: $INSTALLATION_STEPS
```
### The result
1. 
1. 
1. 
### The commit step
The last step of the wizard is always the commit step. Users can commit the
newly created file to the repository defined by the [wizard's props](#props).
Users can change the branch to commit to. A future iteration
is planned to add the ability to create an MR from here.
## Component API Reference
### Props
- `template` (required): The template content as an un-parsed string. See
[Template file location](#template-file-location) for more information.
- `project-path` (required): The full path of the project that the final file
should be committed to.
- `default-branch` (required): The branch that will be pre-selected during
the commit step. This can be changed by the user.
- `default-filename` (optional, default: `.gitlab-ci.yml`): The filename to be used for the file. This can be overridden in the template file.
### Events
- `done` - Emitted after the file has been committed. Use this to redirect the
user to the pipeline, for example.
### Template file location
Template files are usually stored as YAML files in `~/pipeline_wizard/templates/`.
The `PipelineWizard` component expects the `template` property as an un-parsed `String`,
and Webpack is configured to load `.yml` files from the above folder as strings.
If you must load the file from a different place, make sure
Webpack does not parse it as an Object.
## Template Reference
### Template
In the root element of the template file, you can define the following properties:
| Name | Required | Type | Description |
|---------------|--------------------------------------|--------|-------------|
| `id` | {{< icon name="check-circle" >}} Yes | string | A unique template ID. This ID should follow a namespacing pattern, with a forward slash `/` as separator. Templates committed to GitLab source code should always begin with `gitlab`. For example: `gitlab/my-template` |
| `title` | {{< icon name="check-circle" >}} Yes | string | The page title as displayed to the user. It becomes an `h1` heading above the wizard. |
| `description` | {{< icon name="check-circle" >}} Yes | string | The page description as displayed to the user. |
| `filename` | {{< icon name="dotted-circle" >}} No | string | The name of the file that is being generated. Defaults to `.gitlab-ci.yml`. |
| `steps` | {{< icon name="check-circle" >}} Yes | list | A list of [step definitions](#step-reference). |
### `step` Reference
A step makes up one page in a multi-step (or page) process. It consists of one or more
related input fields that build a part of the final `.gitlab-ci.yml`.
Steps include two properties:
| Name | Required | Type | Description |
|------------|--------------------------------------|------|-------------|
| `template` | {{< icon name="check-circle" >}} Yes | map | The raw YAML to deep-merge into the final `.gitlab-ci.yml`. This template section can contain variables denoted by a `$` sign that is replaced with the values from the input fields. |
| `inputs` | {{< icon name="check-circle" >}} Yes | list | A list of [input definitions](#input-reference). |
### `input` Reference
Each step can contain one or more `inputs`. For an ideal user experience, it should not
contain more than three.
The look and feel of the input, as well as the YAML type it produces (string, list, and so on)
depends on the [`widget`](#widgets) used. [`widget: text`](#text) displays a
text input
and inserts the user's input as a string into the template. [`widget: list`](#list)
displays one or more input fields and inserts a list.
All `inputs` must have a `label` and a `widget`, and can optionally have a `target`. Most
other properties depend on the widget being used:
| Name | Required | Type | Description |
|----------|--------------------------------------|--------|-------------|
| `label` | {{< icon name="check-circle" >}} Yes | string | The label for the input field. |
| `widget` | {{< icon name="check-circle" >}} Yes | string | The [widget](#widgets) type to use for this input. |
| `target` | {{< icon name="dotted-circle" >}} No | string | The variable name inside the step's template that should be replaced with the value of the input field, for example `$FOO`. |
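To make the relationship concrete, this minimal (illustrative) step shows how `target`
connects an input to a variable in the step's `template`:
```yaml
- inputs:
    - label: Build image
      widget: text
      target: $BUILD_IMAGE  # The user's value replaces $BUILD_IMAGE below
  template:
    build-job:
      image: $BUILD_IMAGE  # Becomes, for example, node:lts
      script:
        - echo "building"
```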
### Widgets
#### Text
Use as `widget: text`. This inserts a `string` in the YAML file.
| Name | Required | Type | Description |
|-------------------|--------------------------------------|---------|-------------|
| `label` | {{< icon name="check-circle" >}} Yes | string | The label for the input field. |
| `description` | {{< icon name="dotted-circle" >}} No | string | Help text related to the input field. |
| `required` | {{< icon name="dotted-circle" >}} No | boolean | Whether or not the user must provide a value before proceeding to the next step. `false` if not defined. |
| `placeholder` | {{< icon name="dotted-circle" >}} No | string | A placeholder for the input field. |
| `pattern` | {{< icon name="dotted-circle" >}} No | string | A regular expression that the user's input must match before they can proceed to the next step. |
| `invalidFeedback` | {{< icon name="dotted-circle" >}} No | string | Help text displayed when the pattern validation fails. |
| `default` | {{< icon name="dotted-circle" >}} No | string | The default value for the field. |
| `id` | {{< icon name="dotted-circle" >}} No | string | The input field ID is usually autogenerated but can be overridden by providing this property. |
| `monospace` | {{< icon name="dotted-circle" >}} No | boolean | Sets the font of the input to monospace. Useful when users are entering code snippets or shell commands. |
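For example, a text input that validates the user's entry might look like this sketch (all values are illustrative):
```yaml
- label: Server port
  description: The port your ABC server listens on
  widget: text
  target: $PORT
  required: true
  pattern: '^[0-9]{1,5}$'
  invalidFeedback: Please enter a valid port number
  default: "8080"
  monospace: true
```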
#### List
Use as `widget: list`. This inserts a `list` in the YAML file.
| Name | Required | Type | Description |
|-------------------|--------------------------------------|---------|-------------|
| `label` | {{< icon name="check-circle" >}} Yes | string | The label for the input field. |
| `description` | {{< icon name="dotted-circle" >}} No | string | Help text related to the input field. |
| `required` | {{< icon name="dotted-circle" >}} No | boolean | Whether or not the user must provide a value before proceeding to the next step. `false` if not defined. |
| `placeholder` | {{< icon name="dotted-circle" >}} No | string | A placeholder for the input field. |
| `pattern` | {{< icon name="dotted-circle" >}} No | string | A regular expression that the user's input must match before they can proceed to the next step. |
| `invalidFeedback` | {{< icon name="dotted-circle" >}} No | string | Help text displayed when the pattern validation fails. |
| `default` | {{< icon name="dotted-circle" >}} No | list | The default value for the list. |
| `id` | {{< icon name="dotted-circle" >}} No | string | The input field ID is usually autogenerated but can be overridden by providing this property. |
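A list input accepts the same validation properties, but its `default` is a YAML list (again, values are illustrative):
```yaml
- label: Test steps
  description: Commands that run your test suite
  widget: list
  target: $TEST_STEPS
  default:
    - npm ci
    - npm test
```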
#### Checklist
Use as `widget: checklist`. This inserts a list of checkboxes that need to
be checked before proceeding to the next step.
| Name | Required | Type | Description |
|---------|--------------------------------------|--------|-------------|
| `title` | {{< icon name="dotted-circle" >}} No | string | A title above the checklist items. |
| `items` | {{< icon name="dotted-circle" >}} No | list | A list of items that need to be checked. Each item corresponds to one checkbox, and can be a string or [checklist item](#checklist-item). |
##### Checklist Item
| Name | Required | Type | Description |
|--------|--------------------------------------|--------|-------------|
| `text` | {{< icon name="check-circle" >}} Yes | string | The text of the checklist item. |
| `help` | {{< icon name="dotted-circle" >}} No | string | Help text explaining the item. |
| `id` | {{< icon name="dotted-circle" >}} No | string | The input field ID is usually autogenerated but can be overridden by providing this property. |
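The main example above does not use a checklist, so here is a hypothetical sketch based on the two tables above, mixing a plain-string item with a full checklist item:
```yaml
- widget: checklist
  title: Before you commit this pipeline
  items:
    - Confirm that your project builds locally
    - text: A runner is available for this project
      help: Shared runners are enabled by default on GitLab.com.
```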
---
stage: Verify
group: Pipeline Authoring
info: Any user with at least the Maintainer role can merge updates to this content. For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Documenting pipeline configuration keywords
breadcrumbs:
- doc
- development
- cicd
---
The [CI/CD YAML syntax reference](../../ci/yaml/_index.md) uses a standard style to make it easier to use and update.
The reference information should be kept as simple as possible, and expanded details
and examples should be documented on other pages.
## YAML reference structure
Every YAML keyword must have its own section in the reference. The sections should
be nested so that the keywords follow a logical tree structure. For example:
```markdown
### `artifacts`
#### `artifacts:name`
#### `artifacts:paths`
#### `artifacts:reports`
##### `artifacts:reports:dast`
##### `artifacts:reports:sast`
```
## YAML reference style
Each keyword entry in the reference:
- Must have a simple introductory section. The introduction should give the fundamental
information needed to use the keyword. Advanced details and tasks should be in
feature pages, not the reference page.
- Must use the keyword name as the title, for example:
```markdown
### `artifacts`
```
- Should include the following sections:
- [Keyword type](#keyword-type)
- [Supported values](#supported-values)
- [Example of `keyword-name`](#example-of-keyword-name)
- (Optional) Can also include the following sections when needed:
- [Additional details](#additional-details)
- [Related topics](#related-topics)
- Must use a horizontal divider (`---`) to separate keyword entries.
The keyword name must always be in backticks without a final `:`, like `artifacts`, not `artifacts:`.
If it is a subkey of another keyword, write out all the subkeys to the "parent" key the first time it
is used, like `artifacts:reports:dast`. Afterwards, you can use just the subkey alone, like `dast`.
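Putting these rules together, a complete keyword entry that follows this structure might look like the following skeleton (the keyword name and all values are placeholders):
````markdown
### `my-keyword`

A short introduction that gives the fundamental information needed to use `my-keyword`.

**Keyword type**: Job keyword. You can use it only as part of a job.

**Supported values**:

- `true` (default if not defined) or `false`.

**Example of `my-keyword`**:

```yaml
job:
  my-keyword: true
```

**Additional details**:

- Any extra information that is useful to know, but not important enough for the introduction.

---
````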
### Keyword type
The keyword can be either a job or global keyword. If it can be used in a `default`
section, make note of that as well. For example:
- `**Keyword type**: Global keyword.`
- `**Keyword type**: Job keyword. You can use it only as part of a job.`
- ``**Keyword type**: Job keyword. You can use it only as part of a job or in the [`default:` section](#default).``
### Supported values
List all the supported values, and any extra details about the values, such as defaults
or changes due to different GitLab versions. For example:
```markdown
**Supported values**:
- `true` (default if not defined) or `false`.
```
```markdown
**Supported values**:
- A single exit code.
- An array of exit codes.
```
```markdown
**Supported values**:
- A string with the long description.
- The path to a file that contains the description. Introduced in [GitLab 13.7](https://gitlab.com/gitlab-org/release-cli/-/merge_requests/67).
- The file location must be relative to the project directory (`$CI_PROJECT_DIR`).
- If the file is a symbolic link, it must be in the `$CI_PROJECT_DIR`.
- The `./path/to/file` and filename can't contain spaces.
```
#### CI/CD variables with keywords
If CI/CD variables can be used with the keyword, add a line to the **Supported values**
section. For example:
```markdown
**Supported values**:
- A string with the long description.
- [CI/CD variables](../variables/where_variables_can_be_used.md#gitlab-ciyml-file).
```
### Example of `keyword-name`
An example of the keyword. Use the minimum number of other keywords necessary
to make the example valid. If the example needs explanation, add it after the example,
for example:
````markdown
**Example of `dast`**:
```yaml
stages:
- build
- dast
include:
- template: DAST.gitlab-ci.yml
dast:
dast_configuration:
site_profile: "Example Co"
scanner_profile: "Quick Passive Test"
```
In this example, the `dast` job extends the `dast` configuration added with the `include:` keyword
to select a specific site profile and scanner profile.
````
If the example uses a CI/CD variable, like `new_keyword: "Description of $CI_COMMIT_BRANCH"`,
the **Supported values** section must explain that CI/CD variables are supported.
If this entry is missing from the supported values, check with the author to see if
variables are supported, then:
- [Add CI/CD variables to the **Supported values** section](#cicd-variables-with-keywords)
if variables are supported.
- Remove the CI/CD variable from the example if variables are not supported.
### Additional details
The additional details should be an unordered list of extra information that is
useful to know, but not important enough to put in the introduction. This information
can include changes introduced in different GitLab versions. For example:
```markdown
**Additional details**:
- The expiration time period begins when the artifact is uploaded and stored on GitLab.
If the expiry time is not defined, it defaults to the [instance wide setting](../../administration/settings/continuous_integration.md#default-artifacts-expiration).
- To override the expiration date and protect artifacts from being automatically deleted:
- Select **Keep** on the job page.
- [In GitLab 13.3 and later](https://gitlab.com/gitlab-org/gitlab/-/issues/22761), set the value of
`expire_in` to `never`.
```
### Related topics
The related topics should be an unordered list of crosslinks to related pages, including:
- Specific tasks that you can accomplish with the keyword.
- Advanced examples of the keyword.
- Other related keywords that can be used together with this keyword.
For example:
```markdown
**Related topics**:
- You can specify a [fallback cache key](../caching/_index.md#use-a-fallback-cache-key)
to use if the specified `cache:key` is not found.
- You can [use multiple cache keys](../caching/_index.md#use-multiple-caches) in a single job.
- See the [common `cache` use cases](../caching/_index.md#common-use-cases-for-caches) for more
`cache:key` examples.
```
---
stage: Verify
group: Pipeline Authoring
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Contribute to the CI/CD Schema
breadcrumbs:
- doc
- development
- cicd
---
The [pipeline editor](../../ci/pipeline_editor/_index.md) uses a CI/CD schema to enhance
the authoring experience of our CI/CD configuration files. With the CI/CD schema, the editor can:
- Validate the content of the CI/CD configuration file as it is being written in the editor.
- Provide autocomplete functionality and suggest available keywords.
- Provide definitions of keywords through annotations.
As the rules and keywords for configuring our CI/CD configuration files change, so too
should our CI/CD schema.
## JSON Schemas
The CI/CD schema follows the [JSON Schema Draft-07](https://json-schema.org/draft-07/json-schema-release-notes)
specification. Although the CI/CD configuration file is written in YAML, it is converted
into JSON by using `monaco-yaml` before it is validated by the CI/CD schema.
If you're new to JSON schemas, consider checking out
[this guide](https://json-schema.org/learn/getting-started-step-by-step) for
a step-by-step introduction on how to work with JSON schemas.
## Update Keywords
The CI/CD schema is at [`app/assets/javascripts/editor/schema/ci.json`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/editor/schema/ci.json).
It contains all the keywords available for authoring CI/CD configuration files.
Check the [CI/CD YAML syntax reference](../../ci/yaml/_index.md) for a comprehensive list of
all available keywords.
All keywords are defined under `definitions`. We use these definitions as
[references](https://json-schema.org/learn/getting-started-step-by-step#references)
to share common data structures across the schema.
For example, this defines the `retry` keyword:
```json
{
  "definitions": {
    "retry": {
      "description": "Retry a job if it fails. Can be a simple integer or object definition.",
      "oneOf": [
        {
          "$ref": "#/definitions/retry_max"
        },
        {
          "type": "object",
          "additionalProperties": false,
          "properties": {
            "max": {
              "$ref": "#/definitions/retry_max"
            },
            "when": {
              "description": "Either a single or array of error types to trigger job retry.",
              "oneOf": [
                {
                  "$ref": "#/definitions/retry_errors"
                },
                {
                  "type": "array",
                  "items": {
                    "$ref": "#/definitions/retry_errors"
                  }
                }
              ]
            }
          }
        }
      ]
    }
  }
}
```
With this definition, the `retry` keyword is both a property of
the `job_template` definition and the `default` global keyword. Global keywords
that configure pipeline behavior (such as `workflow` and `stages`) are defined
under the topmost **properties** key.
```json
{
  "properties": {
    "default": {
      "type": "object",
      "properties": {
        "retry": {
          "$ref": "#/definitions/retry"
        }
      }
    }
  },
  "definitions": {
    "job_template": {
      "properties": {
        "retry": {
          "$ref": "#/definitions/retry"
        }
      }
    }
  }
}
```
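Taken together, these definitions accept both forms of the keyword. The following is a minimal sketch; the job names are illustrative, and `runner_system_failure` is one of the error types covered by the `retry_errors` definition:
```yaml
job_simple:
  script: echo "integer form"
  retry: 2

job_object:
  script: echo "object form"
  retry:
    max: 2
    when: runner_system_failure
```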
## Guidelines for updating the schema
- Keep definitions atomic when possible, to be flexible when
  referencing keywords. For example, `workflow:rules` uses only a subset of
  properties in the `rules` definition. The `rules` properties have their
  own definitions, so we can reference them individually (see the sketch after this list).
- When adding new keywords, consider adding a `description` with a link to the
  keyword definition in the documentation. This information shows up in the annotations
  when the user hovers over the keyword.
- For each property, consider whether `minimum`, `maximum`, or
  `default` values are required. Some values might be required, while others can be left
  blank. For the blank case, we can add the following to the definition:
```json
{
  "keyword": {
    "oneOf": [
      {
        "type": "null"
      },
      ...
    ]
  }
}
```
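As an example of the first guideline, the referencing pattern looks roughly like this sketch. The definition names are illustrative, not the exact names used in `ci.json`:
```json
{
  "definitions": {
    "rules_if": {
      "type": "string",
      "description": "Condition that must be met to include the rule."
    },
    "workflow": {
      "properties": {
        "rules": {
          "type": "array",
          "items": {
            "properties": {
              "if": {
                "$ref": "#/definitions/rules_if"
              }
            }
          }
        }
      }
    }
  }
}
```
Because `rules_if` is defined on its own, `workflow:rules` can reference only the rule properties it supports, instead of the full job-level `rules` definition.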
## Test the schema
### Verify changes
1. Go to **CI/CD** > **Editor**.
1. Write your CI/CD configuration in the editor and verify that the schema validates
it correctly.
### Write specs
All of the CI/CD schema specs are in [`spec/frontend/editor/schema/ci`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/spec/frontend/editor/schema/ci).
Legacy tests are in JSON, but we recommend writing all new tests in YAML.
You can write them as if you're adding a new `.gitlab-ci.yml` configuration file.
Tests are separated into **positive** tests and **negative** tests. Positive tests
are snippets of CI/CD configuration code that use the schema keywords as intended.
Conversely, negative tests give examples of the schema keywords being used incorrectly.
These tests ensure that the schema validates different examples of input as expected.
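For example, a positive and a negative test for the `retry` keyword might look like the following sketches (the file contents are illustrative):
```yaml
# yaml_tests/positive_tests/retry.yml
job_with_retry:
  script: echo "retry accepts an integer or an object"
  retry:
    max: 2
    when: runner_system_failure
```
```yaml
# yaml_tests/negative_tests/retry.yml
job_with_retry:
  script: echo "this should fail validation"
  retry:
    max: "two"
```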
`ci_schema_spec.js` is responsible for running all of the tests against the schema.
A detailed explanation of how the tests are set up can be found in this
[merge request](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/83047).
#### Update schema specs
If a YAML test does not exist for the specified keyword, create new files in
`yaml_tests/positive_tests` and `yaml_tests/negative_tests`. Otherwise, you can update
the existing tests:
1. Write both positive and negative tests to validate different kinds of input.
1. If you created new files, import them in `ci_schema_spec.js` and add each file to their
corresponding object entries. For example:
```javascript
import CacheYaml from './yaml_tests/positive_tests/cache.yml';
import CacheNegativeYaml from './yaml_tests/negative_tests/cache.yml';

// import your new test files
import NewKeywordTestYaml from './yaml_tests/positive_tests/new_keyword.yml';
import NewKeywordTestNegativeYaml from './yaml_tests/negative_tests/new_keyword.yml';

describe('positive tests', () => {
  it.each(
    Object.entries({
      CacheYaml,
      NewKeywordTestYaml, // add positive test here
    }),
  )('schema validates %s', (_, input) => {
    expect(input).toValidateJsonSchema(schema);
  });
});

describe('negative tests', () => {
  it.each(
    Object.entries({
      CacheNegativeYaml,
      NewKeywordTestNegativeYaml, // add negative test here
    }),
  )('schema validates %s', (_, input) => {
    expect(input).not.toValidateJsonSchema(schema);
  });
});
```
1. Run the command `yarn jest spec/frontend/editor/schema/ci/ci_schema_spec.js`
and verify that all the tests successfully pass.
If the spec covers a change to an existing keyword and it affects the legacy JSON
tests, update them as well.
---
stage: Verify
group: Pipeline Execution
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Add new tables to the CI database
breadcrumbs:
- doc
- development
- cicd
---
The [pipeline data partitioning](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/ci_data_decay/pipeline_partitioning/)
design document describes how to partition existing tables in the CI domain. However,
you still need to add tables for new features. Sometimes these tables hold
references to larger tables that need to be partitioned. To reduce future
work, all tables that use a `belongs_to` association to partitionable tables
should be partitioned from the start.
## Create a new routing table
Here is an example of how to use database helpers to create a new table and its foreign keys:
```ruby
include Gitlab::Database::PartitioningMigrationHelpers

disable_ddl_transaction!

def up
  create_table(:p_ci_examples, primary_key: [:id, :partition_id], options: 'PARTITION BY LIST (partition_id)', if_not_exists: true) do |t|
    t.bigserial :id, null: false
    t.bigint :partition_id, null: false
    t.bigint :build_id, null: false
  end

  add_concurrent_partitioned_foreign_key(
    :p_ci_examples, :p_ci_builds,
    column: [:partition_id, :build_id],
    target_column: [:partition_id, :id],
    on_update: :cascade,
    on_delete: :cascade,
    reverse_lock_order: true
  )
end

def down
  drop_table :p_ci_examples
end
```
This table is called a routing table and it does not hold any data. The
data is stored in partitions.
When creating the routing table:
- The table name must start with the `p_` prefix. There are analyzers in place to ensure that all queries go
  through the routing tables and do not access the partitions directly.
- Each new table needs a `partition_id` column and its value must equal
  the value from the related association. In this example, that is `p_ci_builds`. All resources
  belonging to a pipeline share the same `partition_id` value.
- The primary key must have the columns ordered this way to allow efficient
  searches by `id` alone.
- The foreign key constraint must include the `ON UPDATE CASCADE` option because
  the `partition_id` value must be updatable to re-balance the
  partitions.
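For reference, the migration above produces approximately the following DDL. This is a sketch reconstructed from the helper's arguments, not the exact generated statements:
```sql
CREATE TABLE p_ci_examples (
  id bigserial NOT NULL,
  partition_id bigint NOT NULL,
  build_id bigint NOT NULL,
  PRIMARY KEY (id, partition_id)
) PARTITION BY LIST (partition_id);

ALTER TABLE p_ci_examples
  ADD FOREIGN KEY (partition_id, build_id)
  REFERENCES p_ci_builds (partition_id, id)
  ON UPDATE CASCADE ON DELETE CASCADE;
```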
## Create the first partition
Usually, you rely on the application to create the initial partition at boot time.
However, due to the high traffic on the CI tables and the large number of nodes,
it can be difficult to acquire a lock on the referenced table.
Consequently, during deployment, a node may fail to start.
To prevent this failure, you must ensure that the partition is already in place before
the application runs:
```ruby
disable_ddl_transaction!

def up
  with_lock_retries do
    connection.execute(<<~SQL)
      LOCK TABLE p_ci_builds IN SHARE ROW EXCLUSIVE MODE;
      LOCK TABLE ONLY p_ci_examples IN ACCESS EXCLUSIVE MODE;
    SQL

    connection.execute(<<~SQL)
      CREATE TABLE IF NOT EXISTS gitlab_partitions_dynamic.ci_examples_100
      PARTITION OF p_ci_examples
      FOR VALUES IN (100);
    SQL
  end
end
```
Partitions are created in the `gitlab_partitions_dynamic` schema.
When creating a partition, remember:
- Partition names do not use the `p_` prefix.
- The starting value for `partition_id` is `100`.
## Cascade the partition value
To cascade the partition value, the model should include the `Ci::Partitionable` module:
```ruby
class Ci::Example < Ci::ApplicationRecord
  include Ci::Partitionable

  self.table_name = :p_ci_examples
  self.primary_key = :id

  belongs_to :build, class_name: 'Ci::Build'

  partitionable scope: :build, partitioned: true
end
```
## Manage partitions
The model must be included in the [`PARTITIONABLE_MODELS`](https://gitlab.com/gitlab-org/gitlab/-/blob/920147293ae304639915f66b260dc14e4f629850/app/models/concerns/ci/partitionable.rb#L25-44)
list because it is used to test that the `partition_id` is
propagated correctly.
If the first partition is missing, specifying `partitioned: true` creates it. The model also needs to be registered in the
[`postgres_partitioning.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/920147293ae304639915f66b260dc14e4f629850/config/initializers/postgres_partitioning.rb)
initializer.
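For illustration, registering the model amounts to adding it to the list of models whose partitions the application manages. A minimal sketch, assuming the current layout of the initializer and using the hypothetical `Ci::Example` model from the examples above:
```ruby
# config/initializers/postgres_partitioning.rb
Gitlab::Database::Partitioning.register_models(
  [
    # ... existing partitioned models ...
    Ci::Example
  ]
)
```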
---
stage: Shared ownership
group: Shared ownership
info: Any user with at least the Maintainer role can merge updates to this content.
  For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Cloud Connector
breadcrumbs:
- doc
- development
- cloud_connector
---
GitLab Cloud Connector is a way to access services common to
multiple GitLab deployments, instances, and cells. Currently, Cloud Connector is not a
dedicated service itself, but rather a collection of APIs and code that standardizes the approach to authentication and
other concerns when integrating cloud-based services with the GitLab instance. This page explains how to use
Cloud Connector to link GitLab Rails to a service.
See the [architecture page](architecture.md) for more information about Cloud Connector. See [terms](architecture.md#terms)
for a list of terms used throughout the document. Also see [configuration](configuration.md) for the information
on how paid features are bundled into GitLab tiers and add-ons.
## Tutorial: Connect a new feature using Cloud Connector
The following sections cover these use cases:
- [The new feature is introduced through the existing backend service](#the-new-feature-is-introduced-through-the-existing-backend-service) that is already connected to Cloud Connector (that is, the **AI gateway**).
- [The new feature is introduced through a new backend service](#the-new-feature-is-introduced-via-new-backend-service) that needs to be connected to Cloud Connector.
### The new feature is introduced through the existing backend service
The **AI gateway** is currently the only backend service connected to Cloud Connector.
To add a new feature to the existing backend service (**AI gateway**):
1. [Register the new feature in the JWT issuer](#register-the-new-feature-in-the-jwt-issuer).
1. [Implement permission checks in GitLab Rails](#implement-permission-checks-in-gitlab-rails).
1. [Implement authorization checks in backend service](#implement-authorization-checks-in-backend-service).
**Optional:** If the backend service the token is used for requires additional claims to be embedded in the
service access token, contact [#f_cloud_connector](https://gitlab.enterprise.slack.com/archives/CGN8BUCKC) (Slack, internal only)
because we do not currently have interfaces in place to self-service this.
#### Register the new feature in the JWT issuer
- For GitLab Dedicated and GitLab Self-Managed, the CustomersDot is the **JWT issuer**.
- For GitLab.com deployment, GitLab.com is the **JWT issuer**, because it's able to [self-sign and create JWTs](architecture.md#gitlabcom) for every request to a Cloud Connector feature.
#### Register new feature for GitLab Self-Managed, Dedicated and GitLab.com customers
You must register the new feature as a unit primitive in the [`gitlab-cloud-connector`](https://gitlab.com/gitlab-org/cloud-connector/gitlab-cloud-connector) repository.
This repository serves as the Single Source of Truth (SSoT) for all Cloud Connector configurations.
To register a new feature:
1. Create a new YAML file in the `config/unit_primitives/` directory of the `gitlab-cloud-connector` repository.
1. Define the unit primitive configuration, and ensure you follow the [schema](configuration.md#unit-primitive-configuration).
For example, to add a new feature called `new_feature`:
```yaml
# config/unit_primitives/new_feature.yml
---
name: new_feature
description: Description of the new feature
cut_off_date: 2024-10-17T00:00:00+00:00 # Optional, set if not free
min_gitlab_version: '16.9'
min_gitlab_version_for_free_access: '16.8' # Optional
group: group::your_group
feature_category: your_category
documentation_url: https://docs.gitlab.com/ee/path/to/docs
backend_services:
  - ai_gateway
add_ons:
  - duo_pro
  - duo_enterprise
license_types:
  - premium
  - ultimate
```
##### Backward Compatibility
For backward compatibility where instances are still using the old [legacy structure](configuration.md#legacy-structure), consider adding your unit primitive to the [service configuration](configuration.md#service-configuration) as well.
- If the unit primitive is a stand-alone feature, no further changes are needed, and the service with the same name is generated automatically.
- If the unit primitive is delivered as part of existing service like `duo_chat`, `self_hosted_models` or `vertex_ai_proxy`, add the unit primitive to the desired service in the [`config/services`](https://gitlab.com/gitlab-org/cloud-connector/gitlab-cloud-connector/-/tree/main/config/services) directory.
##### Deployment process
Follow our [release checklist](https://gitlab.com/gitlab-org/cloud-connector/gitlab-cloud-connector/-/blob/main/.gitlab/merge_request_templates/Release.md#checklist) for publishing the new version of the library and using it in GitLab project.
#### Implement permission checks in GitLab Rails
##### New feature is delivered as a stand-alone service
###### Access Token
As an example, assume the feature is delivered as a stand-alone service called `new_feature`.
1. Call `CloudConnector::AvailableServices.find_by_name(:new_feature).access_token(user_or_namespace)`
   and include this token in the `Authorization` HTTP header field.
   - On GitLab.com, it self-issues a token with scopes that depend on the provided resource:
     - For a user: scopes are based on the user's seat assignment.
     - For a namespace: scopes are based on purchased add-ons for this namespace.
     - If a service can be accessed for free, the token includes all available scopes for that service.
       For Duo Chat, the **JWT** would include the `documentation_search` and `duo_chat` scopes.
   - On GitLab Self-Managed, it always returns a `::CloudConnector::ServiceAccessToken` **JWT**.
     Provided parameters such as user, namespace, or extra claims are ignored for GitLab Self-Managed instances.
     Refer to [this section](#the-new-feature-is-introduced-through-the-existing-backend-service) to see how custom claims are handled for GitLab Self-Managed instances.
   The **backend service** (AI gateway) must validate this token and any scopes it carries when receiving the request.
1. If you need to embed additional claims in the token specific to your use case, you can pass these
   in the `extra_claims` argument.
1. Ensure your request sends the required headers to the [backend service](#implement-authorization-checks-in-backend-service).
   These headers can be found in the `gitlab-cloud-connector` [README](https://gitlab.com/gitlab-org/cloud-connector/gitlab-cloud-connector/-/tree/main/src/python#authentication).
   Some of these headers can be injected by merging the result of the `::CloudConnector#headers` method into your payload.
   For AI use cases and requests targeting the AI gateway, use `::CloudConnector#ai_headers` instead.
###### Permission checks
To decide if the service is available or visible to the end user, we need to:
- Optional. On GitLab Self-Managed, if the new feature is introduced as a new [enterprise feature](../ee_features.md#implement-a-new-ee-feature),
  check whether the user has access to the feature by following the [EE feature guideline](../ee_features.md#guard-your-ee-feature).

  ```ruby
  next true if ::Gitlab::Saas.feature_available?(:new_feature_on_saas)

  ::License.feature_available?(:new_feature)
  ```

- On GitLab Self-Managed, check if the customer is using an [online cloud license](https://about.gitlab.com/pricing/licensing-faq/cloud-licensing/#what-is-cloud-licensing):
  - Cloud Connector currently supports only online cloud licenses for GitLab Self-Managed customers.
  - Trials or legacy licenses are not supported.
  - GitLab.com is using a legacy license.

  ```ruby
  ::License.current&.online_cloud_license?
  ```

- Optional. If the service has free access, this usually means that the experimental features are subject to the [Testing Agreement](https://handbook.gitlab.com/handbook/legal/testing-agreement/).
  - For GitLab Duo features, the customer needs to enable the [experimental toggle](../../user/gitlab_duo/turn_on_off.md#turn-on-beta-and-experimental-features) in order to use experimental features for free.
- On GitLab.com and GitLab Self-Managed, check if the customer's end user has been assigned the proper seat.

  ```ruby
  # Returns true if the service is allowed to be used.
  #
  # For the provided user, it checks if the user is assigned a proper seat.
  current_user.allowed_to_use?(:new_feature)
  ```
###### Example
The following example is for a request to the service called `:new_feature`.
Here we assume your backend service is called `foo` and is already reachable at `https://cloud.gitlab.com/foo`.
We also assume that the backend service exposes the service using a `/new_feature_endpoint` endpoint.
This allows clients to access the service at `https://cloud.gitlab.com/foo/new_feature_endpoint`.
Add a new policy rule in [ee/global_policy.rb](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/policies/ee/global_policy.rb):
```ruby
condition(:new_feature_licensed) do
  next true if ::Gitlab::Saas.feature_available?(:new_feature_on_saas)
  next false unless ::License.current&.online_cloud_license?

  ::License.feature_available?(:new_feature)
end

condition(:user_allowed_to_use_new_feature) do
  @user.allowed_to_use?(:new_feature)
end

rule { new_feature_licensed & user_allowed_to_use_new_feature }.enable :access_new_feature
```
The request:
```ruby
include API::Helpers::CloudConnector

# Check if the service is available for the given user based on seat assignment and add-on purchases
return unauthorized! unless current_user.can?(:access_new_feature)

# For GitLab.com, it will self-issue a token with scopes based on the provided resource:
# - For a provided user, it will self-issue a token with scopes based on user assignment permissions
# - For a provided namespace, it will self-issue a token with scopes based on add-on purchase permissions
#
# For GitLab Self-Managed, it will return a ::CloudConnector::ServiceAccessToken instance token, ignoring the provided user, namespace, and extra claims
token = ::CloudConnector::AvailableServices.find_by_name(:new_feature).access_token(current_user)

Gitlab::HTTP.post(
  "https://cloud.gitlab.com/foo/new_feature_endpoint",
  headers: {
    'Authorization' => "Bearer #{token}",
  }.merge(cloud_connector_headers(current_user))
)
```
The introduced policy can be used to control whether the frontend is visible. Add a `new_feature_helper.rb`:
```ruby
def show_new_feature?
  current_user.can?(:access_new_feature)
end
```
##### New feature is delivered as part of an existing service (Duo Chat)
###### Access Token
If the feature is delivered as part of an existing service, like `Duo Chat`,
calling `CloudConnector::AvailableServices.find_by_name(:duo_chat).access_token(user_or_namespace)` returns a **JWT** with
access scopes covering all authorized features (**unit primitives**).
The **backend service** (AI gateway) prevents access to a specific feature (**unit primitive**) if the corresponding scope is not included in the **JWT**.
###### Permission checks
If the feature is delivered as part of an existing service, like `Duo Chat`, no additional permission checks are needed.
We can rely on the existing global policy rule `user.can?(:access_duo_chat)`.
If the end user has access to at least one feature (**unit primitive**), they can access the service.
Access to each individual feature (**unit primitive**) is governed by the **JWT** scopes, which are validated by the **backend service** (AI gateway).
See [access token](#access-token-1).
#### Implement authorization checks in backend service
GitLab Rails calls a backend service to deliver functionality that would otherwise be unavailable to GitLab Self-Managed and
Dedicated instances. For GitLab Rails to be able to call this, the backend service must expose an endpoint.
The backend service must verify each JWT sent by GitLab Rails in the `Authorization` header.
For more information and examples on the AI gateway authorization process, check the [Authorization in AI gateway documentation](https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/-/blob/main/docs/auth.md?ref_type=heads#authorization-in-ai-gateway).
### The new feature is introduced via new backend service
To integrate a new backend service that isn't already accessible by Cloud Connector features:
1. [Set up JWT validation](#set-up-jwt-validation).
1. [Make it available at `cloud.gitlab.com`](#add-a-new-cloud-connector-route).
#### Set up JWT validation
As mentioned in the [Implement authorization checks in backend service](#implement-authorization-checks-in-backend-service) for services
that already use Cloud Connector, each service must verify that the JWT sent by a GitLab instance is legitimate.
To accomplish this, a backend service must:
1. [Maintain a JSON Web Key Set (JWKS)](#maintain-jwks-for-token-validation).
1. [Validate JWTs with keys in this set](#validate-jwts-with-jwks).
For a detailed explanation of the mechanism behind this, refer to
[Architecture: Access control](architecture.md#access-control).
We strongly suggest using existing software libraries to handle JWKS and JWT authentication.
Examples include:
- [`go-jwt`](https://github.com/golang-jwt/)
- [`ruby-jwt`](https://github.com/jwt/ruby-jwt)
- [`python-jose`](https://github.com/mpdavis/python-jose)
##### Maintain JWKS for token validation
JWTs are cryptographically signed by the token authority when first issued.
GitLab instances then attach the JWTs to requests made to backend services.
To validate JWT service access tokens, the backend service must first obtain the JWKS
containing the public validation key that corresponds to the private signing key used
to sign the token. Because both GitLab.com and CustomersDot issue tokens,
the backend service must fetch the JWKS from both.
To fetch the JWKS, use the OIDC discovery endpoints exposed by GitLab.com and CustomersDot.
For each of these token authorities:
1. `GET /.well-known/openid-configuration`
Example response:
```json
{
"issuer": "https://customers.gitlab.com/",
"jwks_uri": "https://customers.gitlab.com/oauth/discovery/keys",
"id_token_signing_alg_values_supported": [
"RS256"
]
}
```
1. `GET <jwks_uri>`
Example response:
```json
{
"keys": [
{
"kty": "RSA",
"n": "sGy_cbsSmZ_Y4XV80eK_ICmz46XkyWVf6O667-mhDcN5FcSfPW7gqhyn7s052fWrZYmJJZ4PPyh6ZzZ_gZAaQM7Oe2VrpbFdCeJW0duR51MZj52FwShLfi-NOBz2GH9XuUsRBKnXt7wwKQTabH4WW7XL23Hi0eDjc9dyQmsr2-AbH05yVsrgvEYSsWiCGEgobPgNc51DwBoIcsJ-kFN591aO_qAkbpf1j7yAuAVG7TUxaditQhyZKkourPXXyx1R-u0Lx9UJyAV8ySqFxq3XDE_pg6ZuJ7M0zS0XnGI82g3Js5zAughrQyJMhKd8j5c8UfSGxhRBQh58QNl3UwoMjQ",
"e": "AQAB",
"kid": "ZoObkdsnUfqW_C_EfXp9DM6LUdzl0R-eXj6Hrb2lrNU",
"use": "sig",
"alg": "RS256"
}
]
}
```
1. Cache the response. We suggest letting the cache expire once a day.
The keys obtained this way can be used to validate JWTs issued by the respective token authority.
Exactly how this works depends on the programming language and libraries used. General instructions
can be found in [Locate JSON Web Key Sets](https://auth0.com/docs/secure/tokens/json-web-tokens/locate-json-web-key-sets).
Backend services may merge responses from both token authorities into a single cached result set.
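The following is a minimal Ruby sketch of the steps above, using only the standard library. The token authority URLs and the daily cache expiry come from this section; the class and method names are illustrative, and a production implementation would also need error handling:
```ruby
require 'json'
require 'net/http'

# Minimal JWKS cache for a Cloud Connector backend service (illustrative).
class JwksCache
  TOKEN_AUTHORITIES = %w[https://gitlab.com https://customers.gitlab.com].freeze
  CACHE_TTL = 24 * 60 * 60 # refresh once a day, as suggested above

  def initialize
    @keys = nil
    @fetched_at = nil
  end

  # Returns the merged key set from both token authorities, refetching
  # once the cached copy is older than CACHE_TTL.
  def keys
    return @keys if @keys && (Time.now - @fetched_at) < CACHE_TTL

    @keys = TOKEN_AUTHORITIES.flat_map { |authority| fetch_keys(authority) }
    @fetched_at = Time.now
    @keys
  end

  private

  def fetch_keys(authority)
    # 1. Discover the JWKS URI through the OIDC configuration endpoint.
    oidc_config = get_json("#{authority}/.well-known/openid-configuration")

    # 2. Fetch the key set itself.
    get_json(oidc_config.fetch('jwks_uri')).fetch('keys')
  end

  def get_json(url)
    JSON.parse(Net::HTTP.get(URI(url)))
  end
end
```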
##### Validate JWTs with JWKS
To validate a JWT:
1. Read the token string from the HTTP `Authorization` header.
1. Validate it using a JWT library object and the JWKS [obtained previously](#maintain-jwks-for-token-validation).
When validating a token, ensure the following (see the sketch after this list):
1. The token signature is correct.
1. The `aud` claim equals or contains the backend service (this field can be a string or an array).
1. The `iss` claim matches the issuer URL of the key used to validate it.
1. The `scopes` claim covers the functionality exposed by the requested endpoint (see [Implement authorization checks in backend service](#implement-authorization-checks-in-backend-service)).
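A minimal sketch of these checks in Ruby, using the [`ruby-jwt`](https://github.com/jwt/ruby-jwt) library and a JWKS cache like the one sketched in the previous section. The audience name, required scope, and method names are illustrative:
```ruby
require 'jwt' # https://github.com/jwt/ruby-jwt

TOKEN_ISSUERS = %w[https://gitlab.com https://customers.gitlab.com].freeze

# Validates a Cloud Connector JWT taken from the HTTP Authorization header.
# `jwks_cache` is an object like the JwksCache sketched in the previous section.
def validate_token!(authorization_header, jwks_cache, required_scope:)
  token = authorization_header.to_s.delete_prefix('Bearer ')

  # Verifies the signature against the JWKS and checks the `aud` and `iss` claims.
  payload, _header = JWT.decode(
    token, nil, true,
    algorithms: ['RS256'],
    jwks: { keys: jwks_cache.keys },
    aud: 'gitlab-foo-service', verify_aud: true,
    iss: TOKEN_ISSUERS, verify_iss: true
  )

  # Check that the token covers the functionality exposed by this endpoint.
  unless Array(payload['scopes']).include?(required_scope)
    raise JWT::DecodeError, 'token not valid for requested scope'
  end

  payload
end
```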
#### Add a new Cloud Connector route
All Cloud Connector features must be accessed through `cloud.gitlab.com`, a global load-balancer that
routes requests to backend services based on path prefixes. For example, AI features must be requested
from `cloud.gitlab.com/ai/<AI-specific-path>`. The load-balancer then routes `<AI-specific-path>` to the AI gateway.
To connect a new backend service to Cloud Connector, you must claim a new path-prefix to route requests to your
service. For example, if you connect `foo-service`, a new route must be added that routes `cloud.gitlab.com/foo`
to `foo-service`.
Adding new routes requires access to production infrastructure configuration. If you require a new route to be
added, open an issue in the [`gitlab-org/gitlab` issue tracker](https://gitlab.com/gitlab-org/gitlab/-/issues/new)
and assign it to the Runway group.
## Testing
For an example of how to set up an end-to-end integration with the AI gateway as the backend service, see [the AI features documentation](../ai_features/_index.md#required-install-ai-gateway).
# Cloud Connector: Configuration
A GitLab Rails instance accesses backend services using a [Cloud Connector Service Access Token](architecture.md#access-control):
- This token is synced to a GitLab instance from CustomersDot daily and stored in the instance's local database.
- For GitLab.com, we do not require this step; instead, we issue short-lived tokens for each request.
The Cloud Connector **JWT** contains a custom claim, which represents the list of access scopes that define which features, or unit primitives, this token is valid for.
## Unit Primitives and Configuration
According to the [Architecture Decision Record (ADR) 003](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/cloud_connector/decisions/003_unit_primitives/),
this configuration of unit primitives is maintained in the [`gitlab-cloud-connector`](https://gitlab.com/gitlab-org/cloud-connector/gitlab-cloud-connector) library.
This library serves as the Single Source of Truth (SSoT) for all Cloud Connector configurations and is available as both a Ruby gem and a Python package.
### Configuration format and structure
The configuration in `gitlab-cloud-connector` follows this structure:
```shell
config
├─ unit_primitives/
│ ├─ duo_chat.yml
│ └─ ...
├─ backend_services/
│ ├─ ai_gateway.yml
│ └─ ...
├─ add_ons/
│ ├─ duo_pro.yml
│ └─ ...
├─ services/
│ ├─ duo_chat.yml
│ └─ ...
└─ license_types/
├─ premium.yml
└─ ...
```
#### Unit primitive configuration
We have a YAML file per unit primitive. It contains information on how this unit primitive is bundled with add-ons and license types, and other metadata.
The configuration for each unit primitive adheres to the following schema.
##### Required Fields
| Field | Type | Description |
|-------|------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `name` | string | Unit primitive name in `snake_case` format (lowercase letters, numbers, underscores). Should follow `$VERB_$NOUN` pattern (for example, `explain_vulnerability`). |
| `description` | string | Description of the unit primitive's purpose and functionality. |
| `group` | string | Engineering group that owns the unit primitive (for example, "group::duo chat"). |
| `feature_category` | string | Feature category classification (see [categories](https://gitlab.com/gitlab-com/www-gitlab-com/-/blob/master/data/categories.yml)). |
| `documentation_url` | string | URL to the unit primitive's documentation. |
##### Optional Fields
| Field | Type | Description |
|-------|------|------------------------------------------------------|
| `milestone` | string | GitLab milestone that introduced the unit primitive. |
| `introduced_by_url` | string | Merge request URL that introduced the unit primitive. |
| `unit_primitive_issue_url` | string | Issue URL proposing the unit primitive introduction. |
| `deprecated_by_url` | string | Merge request URL that deprecated the unit primitive. |
| `deprecation_message` | string | Explanation of deprecation context and reasons. |
| `cut_off_date` | datetime | UTC timestamp when free access ends (if applicable). **NOTE:** If you do not define a cut-off date, the `add_ons` element is not enforced and the feature remains in free access. |
| `min_gitlab_version` | string | Minimum required GitLab version (for example, `17.8`). |
| `min_gitlab_version_for_free_access` | string | Minimum version for free access period (for example, `17.8`). |
##### Access Control Fields
| Field | Type | Description |
|-------|------|-------------------------------------------------------------------------|
| `license_types` | array[string] | GitLab license types that can access this primitive. Possible values must match the name field in corresponding files under `config/license_types` (for example, `premium`).|
| `backend_services` | array[string] | Backend services hosting this primitive. Possible values must match the name field in corresponding files under `config/backend_services` (for example, `ai_gateway`).|
| `add_ons` | array[string] | Add-on products including this primitive. To have access to this feature, you must have all listed add-ons. Possible values must match the name field in corresponding files under `config/add_ons` (for example, `duo_pro`). **NOTE:** This field is enforced only with a defined `cut_off_date` beyond which a feature moves out of free access or beta status. |
Example unit primitive configuration:
```yaml
# config/unit_primitives/new_feature.yml
---
name: new_feature
description: Description of the new feature
cut_off_date: 2024-10-17T00:00:00+00:00 # Optional; always set for paid features
min_gitlab_version: '16.9'
min_gitlab_version_for_free_access: '16.8'
group: group::your_group
feature_category: your_category
documentation_url: https://docs.gitlab.com/ee/path/to/docs
backend_services:
- ai_gateway
add_ons:
- duo_pro
- duo_enterprise
license_types:
- premium
- ultimate
```
According to this definition, the feature:
- Is named "New Feature" and owned by "Your Group".
- Is available in beta (free of charge) starting with GitLab 16.8.
- Is available only via paid add-ons on GitLab versions 16.9 or later.
- Transitions from free access to paid access on October 17, 2024 at midnight UTC. Beyond this point, you must have either Duo Pro or Duo Enterprise, and a Premium or Ultimate subscription.
- Is carried in the `scopes` claim of a Cloud Connector token if all of the above conditions hold, allowing backend services to verify access accordingly.
- Is only relevant for requests to the AI gateway. The corresponding entry does not need to be present in `scopes` when a token is attached to requests sent to other backend services.
{{< alert type="note" >}}
Not setting any `cut_off_date` implies a feature remains freely available, regardless of what
`add_ons` are defined.
{{< /alert >}}
#### Related Configurations
##### Backend Services
Each backend service must have its own YAML configuration under `config/backend_services`. For example:
```yaml
# config/backend_services/ai_gateway.yml
---
name: ai_gateway
project_url: https://gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist
group: group::ai framework
jwt_aud: gitlab-ai-gateway
```
##### Add-ons
Each add-on must have its own YAML configuration under `config/add_ons`. For example:
```yaml
# config/add_ons/duo_pro.yml
---
name: duo_pro
```
##### License Types
Each license type must have its own YAML configuration under `config/license_types`. For example:
```yaml
# config/license_types/premium.yml
---
name: premium
```
### Backward compatibility
To support backward compatibility for customers running older GitLab versions that still use the old [legacy structure](#legacy-structure), we provide a mapping from the new format to the old format and its soon-to-be-deprecated "service" abstraction.
#### Service configuration
| Field | Type | Description |
|-------|------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `name` | string | The unique name of the service, consisting of lowercase alphanumeric characters and underscores. |
| `basic_unit_primitive` | string | The most fundamental unit primitive representing key configuration values like `cut_off_date` and `min_gitlab_version`. If not set, the first unit primitive in the `unit_primitives` list is used. Used to derive these shared properties across the service. |
| `gitlab_realm` | array[string] | An array of environments where the service is available. Possible values: `gitlab-com`, `self-managed`. |
| `description` | string | A brief description of the service. |
| `unit_primitives` | array[string] | An array of unit primitives associated with the service. |
Example of a new service mapping configuration:
```yaml
# config/services/duo_chat.yml
---
name: duo_chat
basic_unit_primitive: duo_chat
gitlab_realm:
- gitlab-com
- self-managed
unit_primitives:
- ask_build
- ask_commit
- ask_epic
- ask_issue
- ask_merge_request
- documentation_search
- duo_chat
- explain_code
- fix_code
- include_dependency_context
- include_file_context
- include_issue_context
- include_local_git_context
- include_merge_request_context
- include_snippet_context
- include_terminal_context
- include_repository_context
- refactor_code
- write_tests
```
### Legacy structure
The information about how paid features are bundled into GitLab tiers and add-ons is configured and stored in a YAML file:
```yaml
services:
code_suggestions:
backend: 'gitlab-ai-gateway'
cut_off_date: 2024-02-15 00:00:00 UTC
min_gitlab_version: '16.8'
bundled_with:
duo_pro:
unit_primitives:
- code_suggestions
duo_chat:
backend: 'gitlab-ai-gateway'
min_gitlab_version_for_beta: '16.8'
min_gitlab_version: '16.9'
bundled_with:
duo_pro:
unit_primitives:
- duo_chat
- documentation_search
```
| Field | Type | Description |
|-------|------|----------------------------------------------------------------------------------------------------------------------------------------------------|
| `unit_primitives` | array[string] | The smallest logical features that a permission or access scope can govern. Should follow `$VERB_$NOUN` naming pattern (for example, `explain_vulnerability`). |
| `service` | string | The service name that delivers the feature. Can be standalone or part of an existing service (for example, `duo_chat`). |
| `bundled_with` | object | Map of add-ons that include this feature. A feature can be bundled with multiple add-ons (for example, `duo_pro`, `duo_enterprise`). |
| `cut_off_date` | datetime | UTC timestamp when free access ends. If not set, feature remains free. |
| `min_gitlab_version` | string | Minimum required GitLab version (for example, `17.8`). If not set, available for all versions. |
| `min_gitlab_version_for_free_access` | string | Minimum version for free access period (for example, `17.8`). If not set, available for all versions. |
| `backend` | string | Name of the backend service hosting this feature, used as token audience claim (for example, `gitlab-ai-gateway`). |
# Cloud Connector: Architecture
[GitLab Cloud Connector](https://about.gitlab.com/direction/cloud-connector/) is a way to access services common to
multiple GitLab deployments, instances, and cells. As of now, Cloud Connector is not a
dedicated service itself, but rather a collection of APIs and code that standardizes the approach to authentication and
other items when integrating cloud-based services with a GitLab instance.
This page covers the general architecture of Cloud Connector and is meant to be read as a supplemental
resource to the main developer documentation.
## Terms
When talking about Cloud Connector's constituents and mechanics, we use the following
terms:
- **GitLab Rails**: The main GitLab application.
- **GitLab.com**: The multi-tenant GitLab SaaS deployment operated by GitLab Inc.
- **GitLab Dedicated**: A single-tenant GitLab SaaS deployment operated by GitLab Inc.
- **GitLab Self-Managed**: Any GitLab instance operated by a customer, potentially deployed to a private cloud.
- **GitLab instance**: Any of the above.
- **Backend service**: A GitLab-operated web service invoked by a GitLab instance to deliver functionality
that's part of the Cloud Connector set of features. The AI gateway is one example.
- **CustomersDot**: The [GitLab Customers Portal](https://gitlab.com/gitlab-org/customers-gitlab-com),
used by customers to manage their GitLab subscriptions.
- **OIDC**: [OpenID Connect](https://auth0.com/docs/authenticate/protocols/openid-connect-protocol),
an open standard for implementing identity providers and authN/authZ. JWT
issuers provide OIDC compliant discovery endpoints to publish keys for JWT validators.
- **JWT**: [JSON Web Token](https://auth0.com/docs/secure/tokens/json-web-tokens), an open standard to encode and transmit identity data in the form of a
cryptographically signed token. This token is used to authorize requests between a GitLab instance or user and a
backend service. It can be scoped to either a GitLab instance or a user.
- **JWT issuer**: A GitLab-operated web service providing endpoints to issue JWTs and/or endpoints to provide the public keys necessary to validate such a token. The OAuth specification refers to this as an `Authorization Server`. GitLab.com, CustomersDot, and the AI gateway are all JWT issuers.
- **JWT validator**: A backend service that validates GitLab instance requests carrying a JWT, using public keys obtained from a JWT issuer. The OAuth specification refers to this as a `Resource Server`. The AI gateway is one example of a JWT validator.
- **IJWT**: An Instance JSON Web Token, a JWT created for a GitLab instance.
- **UJWT**: A User JSON Web Token, a JWT created for a GitLab user with a shorter lifespan and fewer permissions than an IJWT.
- **JWKS**: [JSON Web Key Set](https://auth0.com/docs/secure/tokens/json-web-tokens/json-web-key-sets),
an open standard to encode cryptographic keys to validate JWTs.
- **Unit primitives**: The logical feature that a permission/access scope can govern.
- **Add-on**: The group of unit primitives that are bundled and sold together.
For example, `code_suggestions` and `duo_chat` are two unit primitives sold together under the `DUO_PRO` add-on.
## Problem to solve
Most GitLab features can be delivered directly from a GitLab instance, regardless of where it is deployed.
Some features, however, require third-party vendor integration or are difficult to operate outside of
GitLab.com. This presents GitLab Self-Managed and GitLab Dedicated customers with a problem, since they are not easily
able to access those features.
Cloud Connector solves this problem by:
- Moving functionality out of GitLab Rails and into GitLab-operated services to save customers
the hassle of manually configuring and operating them.
- Providing a single global entry point into Cloud Connector features at `cloud.gitlab.com` to
access backend services.
- Connecting instance license and billing data to access grants,
enabling a GitLab instance to consume features hosted in backend services operated by GitLab Inc.
## Cloud Connector components
Technically, Cloud Connector consists of the following pieces:
1. **A global load-balancer.** Hosted at `cloud.gitlab.com` through Cloudflare, all traffic inbound
to Cloud Connector features such as AI must go through this host. The load balancer makes
routing decisions based on path prefixes. For example:
1. Load balancer maps `/prefix` to a backend service.
1. Client requests `cloud.gitlab.com/prefix/path`.
1. Load balancer strips out `/prefix` and routes `/path` to the backend service.
1. **Electing GitLab.com and CustomersDot as IJWT issuers.** We configure these
deployments with private keys only GitLab Inc. has access to. We use these keys to issue cryptographically
signed IJWTs that a GitLab Rails instance can use to make requests upstream to a connected service
backend. The public validation keys are published using OIDC discovery API endpoints.
1. **Electing AI gateway as a UJWT issuer and validator.** Similar to the above mentioned IJWT issuers,
except with the purpose of issuing tokens for users only. The AI gateway is its own validator, so the validation
keys are not published on OIDC discovery API endpoints.
1. **Electing backend services as IJWT validators.** Backend services synchronize regularly
with GitLab.com or CustomersDot to obtain the public keys used to validate the signature of a service
token attached to a request. The backend service can then decide whether to accept or reject the
request, based on both signature validity and any claims the token may carry in its body.
1. **Programming APIs to integrate with the above.** We aim to provide the necessary interfaces in
Ruby to make it easier to implement communication between the GitLab Rails application
and a backend service. This is a moving target and we file issues into the
[Cloud Connector abstractions epic](https://gitlab.com/groups/gitlab-org/-/epics/12376) to improve this.
The following diagram outlines how these components interact:
```plantuml
@startuml
node "Cloudflare" {
[cloud.gitlab.com] as LB #yellow
}
node "GitLab SaaS" {
package "Backend service deployments" as BACK {
[Backend 1] as BE1
[Backend 2] as BE2
}
package "OIDC providers" as OIDC {
[GitLab.com] as DOTCOM
[Customers Portal] as CDOT
}
package "GitLab Dedicated" as DED {
[GitLab instance 1]
[GitLab instance 2]
}
}
node "Customer deployments" {
[GitLab instance] as SM
}
BACK -down-> OIDC : "OIDC discovery"
DOTCOM -right-> LB : "request /prefix"
LB -left-> BACK: " route /prefix to backend"
SM -up-> LB : " request /prefix"
SM <-up-> CDOT : "sync access data"
DED <-up-> CDOT : "sync access data"
@enduml
```
## Access control
There are two levels of access control when making requests into backend services:
1. **Instance access.** Granting a particular SM/Dedicated instance access is done by issuing an IJWT bound
to a customer's cloud license billing status. This token is synced to a GitLab instance from CustomersDot
daily and stored in the instance's local database. For GitLab.com, we do not require this step; instead,
we issue short-lived tokens for each request. These tokens are implemented as JWTs and are
cryptographically signed by the issuer.
1. **User access.** We currently expect all end-user requests to go through the respective GitLab instance
first at least once. For certain requests (for example, code completions) we allow users to make requests to
a backend service directly using a backend-scoped UJWT.
This token has a more limited lifespan and access than an instance token. To get a user token
the user will first have to go through the respective GitLab instance to request the token.
Therefore, user-level authentication and authorization are handled as with any REST or GraphQL API request, that is,
either using OAuth or personal access tokens.
The JWT issued for instance access carries the following claims (not exhaustive, subject to change):
- `aud`: The audience. This is the name of the backend service (for example, `gitlab-ai-gateway`).
- `sub`: The subject. This is the UUID of the GitLab instance the token was issued for (for example: `8f6e4253-58ce-42b9-869c-97f5c2287ad2`).
- `iss`: The issuer URL. Either `https://gitlab.com` or `https://customers.gitlab.com`.
- `exp`: The expiration time of the token (UNIX timestamp). Currently 1 hour for GitLab.com and 3 days
for SM/Dedicated.
- `nbf`: The time before which this token cannot be used (UNIX timestamp). This is set to 5 seconds before the time the token was issued.
- `iat`: The time this token was issued at (UNIX timestamp).
- `jti`: The JWT ID, set to a randomly created UUID (for example: `0099dd6c-b66e-4787-8ae2-c451d86025ae`).
- `gitlab_realm`: A string to differentiate between requests from GitLab Self-Managed and GitLab.com.
This is `self-managed` when issued by the Customers Portal and `saas` when issued by GitLab.com.
- `scopes`: A list of access scopes that define which features this token is valid for. We obtain these
based on decisions such as how paid features are bundled into GitLab tiers and add-ons.
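For illustration, the decoded payload of an instance token for a GitLab Self-Managed deployment might look like the following. All values are illustrative; note that `nbf` is 5 seconds before `iat`, and `exp` is 3 days after `iat`:
```json
{
  "aud": "gitlab-ai-gateway",
  "sub": "8f6e4253-58ce-42b9-869c-97f5c2287ad2",
  "iss": "https://customers.gitlab.com",
  "exp": 1718323200,
  "nbf": 1718063995,
  "iat": 1718064000,
  "jti": "0099dd6c-b66e-4787-8ae2-c451d86025ae",
  "gitlab_realm": "self-managed",
  "scopes": ["duo_chat", "code_suggestions"]
}
```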
The JWT issued for user access carries the following claims (not exhaustive, subject to change):
- `aud`: The audience. This is the name of the backend service (`gitlab-ai-gateway`).
- `sub`: The subject. This is a globally unique anonymous user ID hash of the GitLab user the token was issued for (for example: `W2HPShrOch8RMah8ZWsjrXtAXo+stqKsNX0exQ1rsQQ=`).
- `iss`: The issuer (`gitlab-ai-gateway`).
- `exp`: The expiration time of the token (UNIX timestamp). Currently 1 hour after the issued at time.
- `nbf`: The time before which this token cannot be used (UNIX timestamp). This is set to the time the token was issued.
- `iat`: The time this token was issued at (UNIX timestamp).
- `jti`: The JWT ID, set to a randomly created UUID (for example: `0099dd6c-b66e-4787-8ae2-c451d86025ae`).
- `gitlab_realm`: A string to differentiate between requests from GitLab Self-Managed and GitLab.com. Either `self-managed` or `saas`.
- `scopes`: A list of access scopes that define which features this token is valid for. We obtain these
based on decisions such as how paid features are bundled into GitLab tiers and add-ons as well as what features
are allowed to be accessed with a user token.
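A decoded user token payload looks similar, but is issued by the AI gateway, carries an anonymous user hash as the subject, and expires 1 hour after it was issued (again, all values are illustrative):
```json
{
  "aud": "gitlab-ai-gateway",
  "sub": "W2HPShrOch8RMah8ZWsjrXtAXo+stqKsNX0exQ1rsQQ=",
  "iss": "gitlab-ai-gateway",
  "exp": 1718067600,
  "nbf": 1718064000,
  "iat": 1718064000,
  "jti": "0099dd6c-b66e-4787-8ae2-c451d86025ae",
  "gitlab_realm": "saas",
  "scopes": ["code_suggestions"]
}
```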
A JWKS contains the public keys used by token validators to verify a token's signature. All backend
services are currently required to:
- Regularly refresh the JWKS from GitLab.com and CustomersDot so key rotation can happen easily and regularly
without service disruption.
- Perform signature verification of JWTs and access scope checks for each request.
The following flow charts should help you understand what happens when a user consumes a Cloud Connector feature,
such as talking to an AI chat bot, for both GitLab.com and GitLab Dedicated/GitLab Self-Managed deployments.
### GitLab.com
Because the GitLab.com deployment enjoys special trust, it has the advantage of being able to self-sign
and create IJWTs for every request to a Cloud Connector feature, which greatly simplifies the
flow:
```mermaid
sequenceDiagram
autonumber
participant U as User
participant GL as GitLab.com
participant SB as Backend service
Note over U,SB: End-user flow
U->>GL: Authorize with GitLab instance
GL-->>U: PAT or Cookie
U->>GL: Use Cloud Connector feature
GL->>GL: Perform authN/authZ with Cookie or PAT
GL->>GL: Verify user allowed to use feature
GL->>GL: Create signed IJWT
GL->>SB: Request feature with IJWT
SB->>GL: Fetch public signing keys, if needed
GL-->>SB: JWKS
SB->>SB: Validate IJWT with keys
SB-->>GL: Feature payload
```
### GitLab Dedicated/Self-Managed
For Dedicated and GitLab Self-Managed instances the key problem is one of trust delegation:
we cannot trust any individual GitLab Self-Managed instance and let them issue tokens, but
we can delegate trust by letting an instance regularly authorize itself with CustomersDot,
which is controlled by GitLab Inc. While we do control GitLab Dedicated instances, for simplicity
we currently consider them "self-managed" from a Cloud Connector standpoint.
The main difference to GitLab.com is the addition of the CustomersDot actor, with which customer instances
synchronize regularly to fetch and persist data necessary to access GitLab backend services.
```mermaid
sequenceDiagram
autonumber
participant U as User
participant GL as SM/Dedicated GitLab
participant CD as CustomersDot
participant SB as Backend service
Note over GL,CD: Background: synchronize access data
loop cron job
GL->>CD: Send license key
CD->>CD: Verify customer subscription with license key
CD->>CD: Create and sign IJWT
CD-->>GL: Cloud Connector access data + IJWT
GL->>GL: Store access data + IJWT in DB
end
Note over U,SB: End-user flow
U->>GL: Authorize with GitLab instance
GL-->>U: PAT or Cookie
U->>GL: Use Cloud Connector feature
GL->>GL: Perform authN/authZ with Cookie or PAT
GL->>GL: Verify user allowed to use feature
GL->>GL: Load IJWT from DB
GL->>SB: Request feature with IJWT
SB->>CD: Fetch public signing keys, if needed
CD-->>SB: JWKS
SB->>SB: Validate IJWT with keys
SB-->>GL: Feature payload
```
Cloud Connector access data is structured JSON data that is stored in the instance's local database.
On top of the IJWT, it contains additional information about the services made available
such as whether the service is considered fully launched or in beta stage. This information is particularly
useful for GitLab Self-Managed instances whose upgrade cadence we do not control, because it allows us to
sync in data that are subject to change and control access to some GitLab features remotely.
### AI gateway
The AI gateway is able to issue UJWTs, which let users communicate with the AI gateway directly,
that is, without having to call a GitLab instance first. This is in addition to using an IJWT.
Only GitLab instances can request a UJWT, which is done by making a request with the IJWT.
The AI gateway then returns a short-lived UJWT that the instance can pass on to the user.
The client can use this UJWT to communicate with the AI gateway directly.
```mermaid
sequenceDiagram
autonumber
participant U as User
participant GL as SM/Dedicated GitLab or GitLab.com
participant AIGW as AI gateway
U->>GL: Authorize with GitLab instance
GL-->>U: PAT or Cookie
loop Initial request, this will be done hourly, only when the UJWT is expired.
U->>GL: Request UJWT
GL->>GL: Perform authN/authZ with Cookie or PAT
GL->>GL: Verify user allowed to use feature
Note over GL: Step 6 differs between SM/Dedicated GitLab and GitLab.com
GL->>GL: SM/Dedicated GitLab: Load IJWT from DB<br/>GitLab.com: Create signed IJWT
GL->>AIGW: Request UJWT with IJWT
AIGW->>AIGW: Validate IJWT with keys
AIGW->>AIGW: Create UJWT
AIGW-->>GL: UJWT
GL-->>U: UJWT
end
U->>AIGW: Request feature with UJWT
AIGW->>U: Feature payload
```
## References
- [Cloud Connector design documents and ADRs](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/cloud_connector/)
# Ruby style guide
This is a GitLab-specific style guide for Ruby code. Everything documented in this page can be [reopened for discussion](https://handbook.gitlab.com/handbook/values/#disagree-commit-and-disagree).
We use [RuboCop](../rubocop_development_guide.md) to enforce Ruby style guide rules.
Where a RuboCop rule is absent, refer to the following style guides as general guidelines to write idiomatic Ruby:
- [Ruby Style Guide](https://github.com/rubocop/ruby-style-guide).
- [Rails Style Guide](https://github.com/rubocop/rails-style-guide).
- [RSpec Style Guide](https://github.com/rubocop/rspec-style-guide).
Generally, if a style is not covered by existing RuboCop rules or the above style guides, it shouldn't be a blocker.
Some styles we have decided [no one should have a strong opinion on](#styles-we-have-no-opinion-on).
See also:
- [Guidelines for reusing abstractions](../reusing_abstractions.md).
- [Test-specific style guides and best practices](../testing_guide/_index.md).
## Styles we have no rule for
These styles are not backed by a RuboCop rule.
For every style added to this section, link the discussion from the section's [history note](../documentation/styleguide/availability_details.md#history) to provide context and serve as a reference.
### Instance variable access using `attr_reader`
Instance variables can be accessed in a variety of ways in a class:
```ruby
# public
class Foo
  attr_reader :my_var

  def initialize(my_var)
    @my_var = my_var
  end

  def do_stuff
    puts my_var
  end
end

# private
class Foo
  def initialize(my_var)
    @my_var = my_var
  end

  private

  attr_reader :my_var

  def do_stuff
    puts my_var
  end
end

# direct
class Foo
  def initialize(my_var)
    @my_var = my_var
  end

  private

  def do_stuff
    puts @my_var
  end
end
```
Public attributes should only be used if they are accessed outside of the class.
There is not a strong opinion on what strategy is used when attributes are only
accessed internally, as long as there is consistency in related code.
### Newlines style guide
In addition to RuboCop's `Layout/EmptyLinesAroundMethodBody` and `Cop/LineBreakAroundConditionalBlock` cops, which enforce some newline styles, we have the following guidelines that are not backed by RuboCop.
#### Rule: separate code with newlines only to group together related logic
```ruby
# bad
def method
  issue = Issue.new

  issue.save

  render json: issue
end
```

```ruby
# good
def method
  issue = Issue.new
  issue.save

  render json: issue
end
```
#### Rule: newline before block
```ruby
# bad
def method
  issue = Issue.new
  if issue.save
    render json: issue
  end
end
```

```ruby
# good
def method
  issue = Issue.new

  if issue.save
    render json: issue
  end
end
```
##### Exception: no need for a newline when code block starts or ends right inside another code block
```ruby
# bad
def method
  if issue

    if issue.valid?

      issue.save
    end

  end
end
```

```ruby
# good
def method
  if issue
    if issue.valid?
      issue.save
    end
  end
end
```
## Rails / ActiveRecord
This section contains GitLab-specific guidelines for Rails and ActiveRecord usage.
### Avoid ActiveRecord callbacks
[ActiveRecord callbacks](https://guides.rubyonrails.org/active_record_callbacks.html) allow
you to "trigger logic before or after an alteration of an object's state."
Use callbacks when no superior alternative exists, but employ them only if you
thoroughly understand the reasons for doing so.
When adding new lifecycle events for ActiveRecord objects, it is preferable to
add the logic to a service class instead of a callback.
#### Why callbacks should be avoided
In general, callbacks should be avoided because:
- Callbacks are hard to reason about because invocation order is not obvious and
they break code narrative.
- Callbacks are harder to locate and navigate because they rely on reflection to
trigger rather than being ordinary method calls.
- Callbacks make it difficult to apply changes selectively to an object's state
because changes always trigger the entire callback chain.
- Callbacks trap logic in the ActiveRecord class. This tight coupling encourages
fat models that contain too much business logic, which could instead live in
service objects that are more reusable, composable, and are easier to test.
- Illegal state transitions of an object can be better enforced through
attribute validations.
- Heavy use of callbacks affects factory creation speed. With some classes
having hundreds of callbacks, creating an instance of that object for
an automated test can be a very slow operation, resulting in slow specs.
Some of these examples are discussed in this [video from thoughtbot](https://www.youtube.com/watch?v=GLBMfB8N1G8).
The GitLab codebase relies heavily on callbacks and it is hard to refactor them
once added due to invisible dependencies. As a result, this guideline does not
call for removing all existing callbacks.
#### When to use callbacks
Callbacks can be used in special cases. Some examples of cases where adding a
callback makes sense:
- A dependency uses callbacks and we would like to override the callback
behavior.
- Incrementing cache counts.
- Data normalization that only relates to data on the current model.
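As an illustration of the last case, a minimal sketch (the class and attribute are hypothetical) of a callback that only normalizes data on the current model:

```ruby
class ContactDetail < ApplicationRecord
  before_validation :normalize_email

  private

  # Only touches this model's own attributes, so a callback is acceptable here.
  def normalize_email
    self.email = email.to_s.strip.downcase
  end
end
```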
#### Example of moving from a callback to a service
There is a project with the following basic data model:
```ruby
class Project
  has_one :repository
end

class Repository
  belongs_to :project
end
```
Say we want to create a repository after a project is created and use the
project name as the repository name. A developer familiar with Rails might
immediately think: sounds like a job for an ActiveRecord callback! And add this
code:
```ruby
class Project
  has_one :repository

  after_initialize :create_random_name
  after_create :create_repository

  def create_random_name
    self.name = SecureRandom.alphanumeric
  end

  def create_repository
    Repository.create!(project: self)
  end
end

class Repository
  after_initialize :set_name

  def set_name
    self.name = project.name
  end
end

class ProjectsController
  def create
    Project.create! # also creates a repository and names it
  end
end
```
While this seems pretty harmless for a baby Rails app, adding this type of logic
via callbacks has many downsides once your Rails app becomes large and complex
(all of which are listed in this documentation). Instead, we can add this
logic to a service class:
```ruby
class Project
  has_one :repository
end

class Repository
  belongs_to :project
end

class ProjectCreator
  def self.execute
    ApplicationRecord.transaction do
      name = SecureRandom.alphanumeric
      project = Project.create!(name: name)
      Repository.create!(project: project, name: name)
    end
  end
end

class ProjectsController
  def create
    ProjectCreator.execute
  end
end
```
With an application this simple, it can be hard to see the benefits of the second
approach. But we already see some benefits:

- We can test `Repository` creation logic separately from `Project` creation logic. The code
  no longer violates the Law of Demeter (the `Repository` class doesn't need to know
  `project.name`).
- Clarity of invocation order.
- Open to change: if we decide there are some scenarios where we do not want a
  repository created for a project, we can create a new service class rather
  than needing to refactor the `Project` and `Repository` classes.
- Each instance of a `Project` factory does not create a second (`Repository`) object.
### ApplicationRecord / ActiveRecord model scopes
When creating a new scope, consider the following prefixes.
#### `for_`
For scopes which filter `where(belongs_to: record)`.
For example:
```ruby
scope :for_project, ->(project) { where(project: project) }

Timelogs.for_project(project)
```
#### `with_`
For scopes which use `joins` or `includes`, or which filter with `where(has_one: record)`, `where(has_many: record)`, or a boolean condition.
For example:
```ruby
scope :with_labels, -> { includes(:labels) }
AbuseReport.with_labels

scope :with_status, ->(status) { where(status: status) }
Clusters::AgentToken.with_status(:active)

scope :with_due_date, -> { where.not(due_date: nil) }
Issue.with_due_date
```
It is also fine to use custom scope names, for example:
```ruby
scope :undeleted, -> { where('policy_index >= 0') }

Security::Policy.undeleted
```
#### `order_by_`
For scopes which `order`.
For example:
```ruby
scope :order_by_name, -> { order(:name) }
Namespace.order_by_name

scope :order_by_updated_at, ->(direction = :asc) { order(updated_at: direction) }
Project.order_by_updated_at(:desc)
```
## Styles we have no opinion on
If a RuboCop rule is proposed and we choose not to add it, we should document that decision in this guide so it is more discoverable and link the relevant discussion as a reference.
### Quoting string literals
Due to the sheer amount of work to rectify, we do not care whether string
literals are single or double-quoted.
Previous discussions include:
- <https://gitlab.com/gitlab-org/gitlab-foss/-/issues/44234>
- <https://gitlab.com/gitlab-org/gitlab-foss/-/issues/36076>
- <https://gitlab.com/gitlab-org/gitlab/-/issues/198046>
Individual groups may [choose to have an opinion](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/remote_development/README.md#coding-standards-for-remote-development-domain) on consistency of quoting styles within the [bounded contexts](../software_design.md#bounded-contexts) they own, but these decisions only apply to code within that context.
### Type safety
Now that we've upgraded to Ruby 3, we have more options available
to enforce [type safety](https://en.wikipedia.org/wiki/Type_safety).
Some of these options are supported as part of the Ruby syntax and do not require the use of specific type safety tools like [Sorbet](https://sorbet.org/) or [RBS](https://github.com/ruby/rbs). However, we might consider these tools in the future as well.
For now, we can use [YARD annotations](../code_comments.md#class-and-method-documentation) to define types.
IDEs such as RubyMine provide support for YARD when showing type-based inspection errors.
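For example, a minimal sketch of a YARD-annotated method; the method and its parameters are made up for illustration:

```ruby
# Finds a project by its namespace-qualified path.
#
# @param full_path [String] the full path, for example "gitlab-org/gitlab"
# @param follow_redirects [Boolean] whether to resolve renamed paths
# @return [Project, nil] the matching project, or nil when none exists
def find_project(full_path, follow_redirects: false)
  # ... (hypothetical lookup logic)
end
```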
For more information, see [Type safety](https://gitlab.com/gitlab-org/gitlab/-/tree/master/ee/lib/remote_development#type-safety) in the `remote_development` domain README.
### Functional patterns
Although Ruby and especially Rails are primarily based on [object-oriented programming](https://en.wikipedia.org/wiki/Object-oriented_programming) patterns, Ruby is a very flexible language and supports [functional programming](https://en.wikipedia.org/wiki/Functional_programming) patterns as well.
Functional programming patterns, especially in domain logic, can often result in more readable, maintainable, and bug-resistant code while still using idiomatic and familiar Ruby patterns.
However, functional programming patterns should be used carefully because some patterns would cause confusion and should be avoided even if they're directly supported by Ruby. The [`curry` method](https://www.rubydoc.info/stdlib/core/Method:curry) is one such example.
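To illustrate, here is what `curry` looks like in plain Ruby; the indirection it introduces is rarely worth the confusion:

```ruby
add = ->(a, b) { a + b }

add_one = add.curry[1] # a Proc that still waits for its second argument
add_one[2]             # => 3

# The direct, idiomatic equivalent:
add.call(1, 2)         # => 3
```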
For more information, see:
- [Functional patterns](https://gitlab.com/gitlab-org/gitlab/-/tree/master/ee/lib/remote_development#functional-patterns)
- [Railway-oriented programming and the `Result` class](https://gitlab.com/gitlab-org/gitlab/-/tree/master/ee/lib/remote_development#railway-oriented-programming-and-the-result-class)
# Source Code - Gitaly Touch Points
## RPCs
Gitaly is a wrapper around the `git` binary. It provides managed access to the file system housing the `git` repositories, using Go Remote Procedure Calls (RPCs). Other functions are access optimization, caching, and a form of pagination against the file system.
The [Beginner's guide to Gitaly contributions](https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/beginners_guide.md) is focused on making updates to Gitaly, and offers many insights into how to understand the Gitaly code.
All access to Gitaly from other parts of GitLab is through Create: Source Code endpoints.
## The `Commit` model
After a call is made to Gitaly, Git `commit` information is stored in memory. This information is wrapped by the [Ruby `Commit` Model](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/models/commit.rb), which is a wrapper around [`Gitlab::Git::Commit`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/git/commit.rb).
The `Commit` model acts like an ActiveRecord object, but it does not have a PostgreSQL backend. Instead, it maps back to Gitaly RPCs.
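For illustration, a short sketch (the project path is arbitrary) of how this reads in application code; both lookups below resolve through Gitaly RPCs rather than SQL:

```ruby
project = Project.find_by_full_path('gitlab-org/gitlab')

commit = project.repository.commit('master') # wraps Gitlab::Git::Commit
commit.author_name                           # reads from the in-memory commit data
```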
# Source Code Management
The Source Code Management team is responsible for all backend aspects of the product categories
that fall under the [Source Code group](https://handbook.gitlab.com/handbook/product/categories/#source-code-group)
of the [Create stage](https://handbook.gitlab.com/handbook/product/categories/#create-stage)
of the [DevOps lifecycle](https://handbook.gitlab.com/handbook/product/categories/#devops-stages).
The Source Code Management team interfaces with the Gitaly and Code Review teams, and works across three codebases: Workhorse, GitLab Shell, and GitLab Rails.
## Source Code Features Reference
Features owned by the Source Code Management group are listed on the
[Features by Group Page](https://handbook.gitlab.com/handbook/product/categories/features/#create-source-code-group).
### Code Owners
Source Code Management shares ownership of Code Owners with the Code Review group.
- [Feature homepage](../../../user/project/codeowners/_index.md)
- [Developer Reference](../../code_owners/_index.md)
### Approval Rules
- [Approval Rules](../../merge_request_concepts/approval_rules.md)
### Push rules
- [Push rules development guidelines](../../push_rules/_index.md)
### Protected Branches
Details about Protected Branches models can be found in the [Code Owners](../../code_owners/_index.md#related-models) technical reference page.
### Repositories
- [Project Repository Storage Moves](../../repository_storage_moves/_index.md)
### Project Templates
- [Custom group-level project templates development guidelines](../../project_templates/_index.md)
### Git LFS
- [Git LFS Development guidelines](../../lfs.md)
## Technical Stack
## GitLab Rails
### Gitaly touch points
[Gitaly](../../../administration/gitaly/_index.md) provides high-level RPC access to Git repositories.
It is present in every GitLab installation and coordinates Git repository storage and retrieval.
Gitaly implements a client-server architecture with Gitaly as the server and Gitaly clients, also
known as _Gitaly consumers_, including:
- GitLab Rails
- GitLab Shell
- GitLab Workhorse
GitLab Rails provides API endpoints that are counterparts of Gitaly RPCs. For more information, read [Gitaly touch points](gitaly_touch_points.md).
### Annotated Rails Source Code
The `:source_code_management` annotation indicates which code belongs to the Source Code Management
group in the Rails codebase. The annotated objects are presented on the [Source Code owned objects](https://gitlab-com.gitlab.io/gl-infra/platform/stage-groups-index/source-code.html) page, along
with the [Error Budgets dashboards](https://dashboards.gitlab.net/d/stage-groups-source_code/stage-groups3a-source-code3a-group-dashboard?orgId=1).
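As a sketch of what this annotation looks like in the Rails codebase (the controller shown is hypothetical):

```ruby
class RepositoriesController < ApplicationController
  # Attributes the endpoints in this controller to the Source Code Management group.
  feature_category :source_code_management
end
```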
## GitLab Workhorse
[GitLab Workhorse](../../workhorse/_index.md) is a smart reverse proxy for GitLab. It handles "large" HTTP
requests such as file downloads, file uploads, `git push`, `git pull`, and `git` archive downloads.
Workhorse itself is not a feature, but there are several features in GitLab
that would not work efficiently without Workhorse.
## GitLab Shell
GitLab Shell handles Git SSH sessions for GitLab and modifies the list of authorized keys.
For more information, refer to the [GitLab Shell documentation](../../gitlab_shell/_index.md).
To learn about the reasoning behind our creation of `gitlab-sshd`, read the blog post
[Why we implemented our own SSHD solution](https://about.gitlab.com/blog/2022/08/17/why-we-have-implemented-our-own-sshd-solution-on-gitlab-sass/).
# Organization
The [Organization initiative](../../user/organization/_index.md) focuses on reaching feature parity between
GitLab.com and GitLab Self-Managed.
## Consolidate groups and projects
- [Architecture design document](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/consolidating_groups_and_projects/)
One facet of the Organization initiative is to consolidate groups and projects,
addressing the feature disparity between them. Some features, such as epics, are
only available at the group level. Some features, such as issues, are only available
at the project level. Other features, such as milestones, are available to both groups
and projects.
We receive many requests to add features either to the group or project level.
Moving features around to different levels is problematic on multiple levels:
- It requires engineering time to move the features.
- It requires UX overhead to maintain mental models of feature availability.
- It creates redundant code.
When features are copied from one level (project, group, or instance) to another,
the copies often have small, nuanced differences between them. These nuances cause
extra engineering time when fixes are needed, because the fix must be copied to
several locations. These nuances also create different user experiences when the
feature is used in different places.
A solution for this problem is to consolidate groups and projects into a single
entity, `namespace`. The work on this solution is split into several phases and
is tracked in [epic 6473](https://gitlab.com/groups/gitlab-org/-/epics/6473).
## How to plan features that interact with Group and ProjectNamespace
As of now, every Project in the system has a record in the `namespaces` table. This makes it possible to
use a common interface to create features that are shared between Groups and Projects. Shared behavior can be added using
a concerns mechanism, as sketched below. Because the `Namespace` model is responsible for `UserNamespace` methods as well, it is discouraged
to use the `Namespace` model for shared behavior between Projects and Groups.
### Resource-based features
To migrate resource-based features, existing functionality will need to be supported. This can be achieved in two Phases.
**Phase 1 - Setup**
- Link into the namespaces table
- Add a column to the table
- For example, in issues a `project_id` points to the projects table. We need to establish a link to the `namespaces` table.
- Modify code so that any new record already has the correct data in it
- Backfill
**Phase 2 - Prerequisite work**
- Investigate the permission model as well as any performance concerns related to that.
- Permissions need to be checked and kept in place.
- Investigate what other models need to support namespaces for functionality dependent on features you migrate in Phase 1.
- Adjust CRUD services and APIs (REST and GraphQL) to point to the new column you added in Phase 1.
- Consider performance when fetching resources.
Introducing new functionality is very much dependent on every single team and feature.
### Settings-related features
Right now, cascading settings are available for `NamespaceSettings`. By creating `ProjectNamespace`,
we can use this framework to make sure that some settings are applicable on the project level as well.
When working on settings, we need to make sure that:
- They are not used in `join` queries or modify those queries.
- Updating settings is taken into consideration.
- If we want to move from project to project namespace, we follow a similar database process to the one described in Phase 1.
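As a sketch of the cascading settings framework mentioned above, assuming the `cascading_attr` helper from the cascading settings development guidelines (the attribute shown is only an example):

```ruby
class NamespaceSetting < ApplicationRecord
  include CascadingNamespaceSettingAttribute

  # Reads fall back to the nearest ancestor (or instance-level) value
  # when the attribute has not been set locally.
  cascading_attr :delayed_project_removal
end
```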
## Organizations & cells
For the [Cells](../cells) project, GitLab will rely on organizations. A cell will host one or more organizations. When a request is made, the [HTTP Router Service](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/cells/http_routing_service/) will route it to the correct cell.
### Defining a sharding key for all organizational tables
All tables with the following [`gitlab_schema`](../cells/_index.md#available-cells--organization-schemas) are considered organization level:
- `gitlab_main_cell`
- `gitlab_ci`
- `gitlab_sec`
- `gitlab_main_user`
All newly created organization-level tables are required to have a `sharding_key`
defined in the corresponding `db/docs/` file for that table.
The purpose of the sharding key is documented in the
[Organization isolation blueprint](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/organization/isolation/),
but in short this column is used to provide a standard way of determining which
Organization owns a particular row in the database. The column will be used in
the future to enforce constraints that prevent data from crossing Organization boundaries. It
will also be used in the future to provide a uniform way to migrate data
between Cells.
The actual name of the foreign key can be anything, but it must reference a row
in `projects` or `groups`. The chosen `sharding_key` column must be non-nullable.
Setting multiple `sharding_key` columns that are individually nullable is also allowed, provided that
the table has a check constraint which ensures exactly one of the keys is non-null for any given row
(a migration sketch follows the reasoning list below).
See [`NOT NULL` constraints for multiple columns](../database/not_null_constraints.md#not-null-constraints-for-multiple-columns)
for instructions on creating these constraints. The reasoning for adding sharding keys, and which keys to add to a table/row, goes like this:
- In order to move organizations across cells, we want `organization_id` on all rows of all tables
- But `organization_id` on rows that are actually owned by a top-level group (or its subgroups or projects) makes top-level group
transfer inefficient (due to `organization_id` rewrites) to the point of being impractical
- Compromise: Add `organization_id` or `namespace_id` to all rows of all tables
- But `namespace_id` on rows of tables that are actually owned by projects makes project transfer (and certain subgroup transfers) inefficient
(due to `namespace_id` rewrites) to the point of being impractical
- Compromise: Add `organization_id` or `namespace_id` or `project_id` to all rows of all tables, which ever is the most specific
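For the multiple-nullable-keys case mentioned above, a migration sketch (table and constraint names are hypothetical) that enforces exactly one non-null key per row:

```ruby
class AddShardingKeyCheckToExamples < Gitlab::Database::Migration[2.2]
  def up
    # num_nonnulls is a PostgreSQL function that counts non-NULL arguments.
    add_check_constraint :examples,
      'num_nonnulls(project_id, namespace_id) = 1',
      'check_examples_sharding_key'
  end

  def down
    remove_check_constraint :examples, 'check_examples_sharding_key'
  end
end
```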
#### Conclusions
There is no benefit to filling `namespace_id` if a row is also owned by `project_id`.
There is a performance impact on group/project transfer from filling `namespace_id` if a row is also owned by `project_id`,
though if your table is small, the performance impact is small.
It can also be confusing to have two sharding key values on some rows.
#### Guideline
Every row must have exactly one sharding key, and it should be as specific as possible. Exceptions cannot be made on large tables.
The following are examples of valid sharding keys:
- The table entries belong to a project only:

  ```yaml
  sharding_key:
    project_id: projects
  ```

- The table entries belong to a project and the foreign key is `target_project_id`:

  ```yaml
  sharding_key:
    target_project_id: projects
  ```

- The table entries belong to a namespace/group only:

  ```yaml
  sharding_key:
    namespace_id: namespaces
  ```

- The table entries belong to a namespace/group only and the foreign key is `group_id`:

  ```yaml
  sharding_key:
    group_id: namespaces
  ```

- The table entries belong to a namespace or a project:

  ```yaml
  sharding_key:
    project_id: projects
    namespace_id: namespaces
  ```

- (Only for `gitlab_main_user`) The table entries belong to a user only:

  ```yaml
  sharding_key:
    user_id: users
  ```
#### The sharding key must be immutable
The choice of a `sharding_key` should always be immutable. This is because the
sharding key column will be used as an index for the planned
[Org Mover](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/cells/migration/),
and also the
[enforcement of isolation](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/organization/isolation/)
of Organization data.
Any mutation of the `sharding_key` could result in inconsistent data being read.
Therefore, if your feature requires a user experience which allows data to be
moved between projects or groups/namespaces, then you may need to redesign the
move feature to create new rows.
An example of this can be seen in the
[move an issue feature](../../user/project/issues/managing_issues.md#move-an-issue).
This feature does not actually change the `project_id` column for an existing
`issues` row but instead creates a new `issues` row and creates a link in the
database from the original `issues` row.
If there is a particularly challenging
existing feature that needs to allow moving data, you will need to reach out to
the Tenant Scale team early on to discuss options for how to manage the
sharding key.
#### Using `namespace_id` as sharding key
The `namespaces` table has rows that can refer to a `Group`, a `ProjectNamespace`,
or a `UserNamespace`. The `UserNamespace` type is also known as a personal namespace.
Using a `namespace_id` as a sharding key is a good option, except when `namespace_id`
refers to a `UserNamespace`. Because a user does not necessarily have a related
`namespace` record, this sharding key can be `NULL`. A sharding key should not
have `NULL` values.
#### Using the same sharding key for projects and namespaces
Developers may also choose to use `namespace_id` only for tables that can
belong to a project where the feature used by the table is being developed
following the
[Consolidating Groups and Projects blueprint](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/consolidating_groups_and_projects/).
In that case the `namespace_id` would need to be the ID of the
`ProjectNamespace` and not the group that the namespace belongs to.
#### Using `organization_id` as sharding key
Usually, `project_id` or `namespace_id` are the most common sharding keys.
However, there are cases where a table does not belong to a project or a namespace.
In such cases, `organization_id` is an option for the sharding key, provided the below guidelines are followed:
- The `sharding_key` column still needs to be [immutable](#the-sharding-key-must-be-immutable).
- Only add `organization_id` for root level models (for example, `namespaces`), and not leaf-level models (for example, `issues`).
- Ensure such tables do not contain data related to groups or projects (or records that belong to groups or projects).
  Instead, use `project_id` or `namespace_id`.
- Tables with lots of rows are not good candidates because we would need to rewrite every row if we move the entity to a different organization, which can be expensive.
- When there are other tables referencing this table, the application should continue to work if the referencing table records are moved to a different organization.
If you believe that the `organization_id` is the best option for the sharding key, seek approval from the Tenant Scale group.
This is crucial because it has implications for data migration and may require reconsideration of the choice of sharding key.
As an example, see [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/462758), which added `organization_id` as a sharding key to an existing table.
For more information about development with organizations, see [Organization](../organization).
#### Add a sharding key to a pre-existing table
See the following [guidance](sharding/_index.md).
#### Define a `desired_sharding_key` to automatically backfill a `sharding_key`
We need to backfill a `sharding_key` to hundreds of tables that do not have one.
This process will involve creating a merge request like
<https://gitlab.com/gitlab-org/gitlab/-/merge_requests/136800> to add the new
column, backfill the data from a related table in the database, and then create
subsequent merge requests to add indexes, foreign keys and not-null
constraints.
To minimize the amount of repetitive effort for developers, we've
introduced a concise, declarative way to describe how to backfill the
`sharding_key` for a given table. This content will later be used in
automation to create all the necessary merge requests.
An example of the `desired_sharding_key` was added in
<https://gitlab.com/gitlab-org/gitlab/-/merge_requests/139336> and it looks like:
```yaml
--- # db/docs/security_findings.yml
table_name: security_findings
classes:
- Security::Finding
# ...
desired_sharding_key:
  project_id:
    references: projects
    backfill_via:
      parent:
        foreign_key: scanner_id
        table: vulnerability_scanners
        table_primary_key: id # Optional. Defaults to 'id'
        sharding_key: project_id
        belongs_to: scanner
```
To understand how this YAML data will be used, you can map it onto
the merge request we created manually:
<https://gitlab.com/gitlab-org/gitlab/-/merge_requests/136800>. The idea
is to create such merge requests automatically. The content of the YAML specifies
the parent table and its `sharding_key` to backfill from in the batched
background migration. It also specifies a `belongs_to` relation, which
will be added to the model to automatically populate the `sharding_key` in
a `before_save`.
##### Define a `desired_sharding_key` when the parent table also has one
By default, a `desired_sharding_key` configuration will validate that the chosen `sharding_key`
exists on the parent table. However, if the parent table also has a `desired_sharding_key` configuration
and is itself waiting to be backfilled, you need to include the `awaiting_backfill_on_parent` field.
For example:
```yaml
desired_sharding_key:
  project_id:
    references: projects
    backfill_via:
      parent:
        foreign_key: package_file_id
        table: packages_package_files
        table_primary_key: id # Optional. Defaults to 'id'
        sharding_key: project_id
        belongs_to: package_file
    awaiting_backfill_on_parent: true
```
There are likely edge cases where this `desired_sharding_key` structure is not
suitable for backfilling a `sharding_key`. In such cases the team owning the
table will need to create the necessary merge requests to add the
`sharding_key` manually.
#### Exempting certain tables from having sharding keys
Certain tables can be exempted from having sharding keys by adding
```yaml
exempt_from_sharding: true
```
to the table's database dictionary file. This can be used for:
- JiHu specific tables, since they do not have any data on the .com database. [!145905](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/145905)
- tables that are marked to be dropped soon, like `operations_feature_flag_scopes`. [!147541](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/147541).
These tables should be dropped as soon as practical.
Do not use `exempt_from_sharding` for any other purposes.
Tables which are exempt break our isolation efforts and will introduce issues later in the Organizations and Cells projects.
When tables are exempted from sharding key requirements, they also do not show up in our
[progress dashboard](https://cells-progress-tracker-gitlab-org-tenant-scale-g-f4ad96bf01d25f.gitlab.io/sharding_keys).
Exempted tables must not have foreign key or loose foreign key references, as
this may cause the target cell's database to have foreign key violations when data is
moved.
See [#471182](https://gitlab.com/gitlab-org/gitlab/-/issues/471182) for examples and possible solutions.
### Ensure sharding key presence on application level
When you define your sharding key, you must make sure it's populated at the application level.
Every `ApplicationRecord` model includes a helper, `populate_sharding_key`, which
provides a convenient way of defining sharding key logic,
and a corresponding matcher to test that logic. For example:
```ruby
# in model.rb
populate_sharding_key :project_id, source: :merge_request, field: :target_project_id

# in model_spec.rb
it { is_expected.to populate_sharding_key(:project_id).from(:merge_request, :target_project_id) }
```
See more [helper examples](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/models/concerns/populates_sharding_key.rb)
and [RSpec matcher examples](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/support/matchers/populate_sharding_key_matcher.rb).
### Map a request to an organization with `Current.organization`
The application needs to know how to map incoming requests to an organization. The mapping logic is encapsulated in [`Gitlab::Current::Organization`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/current/organization.rb). The outcome of this mapping is stored in an [`ActiveSupport::CurrentAttributes`](https://api.rubyonrails.org/classes/ActiveSupport/CurrentAttributes.html) instance called `Current`. You can then access the current organization using the `Current.organization` method.
### Availability of `Current.organization`
Since this mapping depends on HTTP requests, `Current.organization` is available only in the request layer. You can use it in:
- Rails controllers that inherit from `ApplicationController`
- GraphQL queries and mutations
- Grape API endpoints (requires [usage of a helper](#usage-in-grape-api))
In these request layers, it is safe to assume that `Current.organization` is not `nil`.
You cannot use `Current.organization` in:
- Rake tasks
- Cron jobs
- Sidekiq workers
This restriction is enforced by a RuboCop rule. For these cases, derive the organization ID from related data or pass it as an argument.
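A sketch of the Sidekiq case (the worker is hypothetical): the caller resolves the organization while still in the request layer and passes only the ID:

```ruby
# In the request layer, where Current.organization is available:
SomeOrganizationWorker.perform_async(Current.organization.id)

# In the worker, where Current.organization is not available:
class SomeOrganizationWorker
  include ApplicationWorker

  def perform(organization_id)
    organization = Organizations::Organization.find(organization_id)
    # ... do work scoped to the organization ...
  end
end
```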
### Writing tests for code that depends on `Current.organization`
If you need a `current_organization` in RSpec, you can use the [`with_current_organization`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/support/shared_contexts/current_organization_context.rb) shared context. This creates a `current_organization` method and makes the `Gitlab::Current::Organization` class return it:
```ruby
# frozen_string_literal: true

require 'spec_helper'

RSpec.describe MyController, :with_current_organization do
  let(:project) { create(:project, organization: current_organization) }

  subject { project.organization }

  it { is_expected.to eq(current_organization) }
end
```
### Usage in Grape API
`Current.organization` is not available in all Grape API endpoints. Use the `set_current_organization` helper to set `Current.organization`:
```ruby
module API
  class SomeAPIEndpoint < ::API::Base
    before do
      set_current_organization # This will set Current.organization
    end

    # ... api logic ...
  end
end
```
### The default organization
Do not rely on a default organization. Only one cell can access the default organization, and other cells cannot access it.
Default organizations were initially used to assign existing data when introducing the Organization data structure. However, the application no longer depends on default organizations. Do not create or assign default organization objects.
The default organization remains available on GitLab.com only until all data is assigned to new organizations. Hard-coded dependencies on the default organization do not work in cells. All cells should be treated the same.
### Organization data sources
An organization serves two purposes:
- A logical grouping of data (for example, a User belongs to one or more Organizations)
- [Sharding key](../cells) for Cells
For data modeling purposes, there is no need to have redundant `organization_id` attributes. For example, the projects table has an `organization_id` column. From a normalization point of view, this is not needed because a project belongs to a namespace and a namespace belongs to an organization.
However, for sharding purposes, we violate this normalization rule. Tables that have a parent-child relationship still define `organization_id` on both the parent table and the child.
To populate the `organization_id` column, use these methods in order of preference:
1. Derive from related data. For example, a subgroup can use the organization that is assigned to the parent group.
1. `Current.organization`. This is available in the request layer and can be passed into Sidekiq workers.
1. Ask the user. In some cases, the UI needs to be updated and should include a way of selecting an organization.
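A sketch combining the first two options (the model and association names are hypothetical): derive from related data when possible, and fall back to `Current.organization` in the request layer:

```ruby
class Widget < ApplicationRecord
  belongs_to :group, optional: true

  before_validation :set_organization_id, on: :create

  private

  def set_organization_id
    # Prefer deriving from related data; fall back to the request's mapped organization.
    self.organization_id ||= group&.organization_id || Current.organization&.id
  end
end
```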
## Related topics
- [Consolidating groups and projects](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/consolidating_groups_and_projects/)
architecture documentation
- [Organization user documentation](../../user/organization/_index.md)
|
---
stage: Tenant Scale
group: Organizations
info: 'See the Technical Writers assigned to Development Guidelines: https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments-to-development-guidelines'
description: 'Development Guidelines: learn about organization when developing GitLab.'
title: Organization
breadcrumbs:
- doc
- development
- organization
---
The [Organization initiative](../../user/organization/_index.md) focuses on reaching feature parity between
GitLab.com and GitLab Self-Managed.
## Consolidate groups and projects
- [Architecture design document](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/consolidating_groups_and_projects/)
One facet of the Organization initiative is to consolidate groups and projects,
addressing the feature disparity between them. Some features, such as epics, are
only available at the group level. Some features, such as issues, are only available
at the project level. Other features, such as milestones, are available to both groups
and projects.
We receive many requests to add features either to the group or project level.
Moving features around to different levels is problematic on multiple levels:
- It requires engineering time to move the features.
- It requires UX overhead to maintain mental models of feature availability.
- It creates redundant code.
When features are copied from one level (project, group, or instance) to another,
the copies often have small, nuanced differences between them. These nuances cause
extra engineering time when fixes are needed, because the fix must be copied to
several locations. These nuances also create different user experiences when the
feature is used in different places.
A solution for this problem is to consolidate groups and projects into a single
entity, `namespace`. The work on this solution is split into several phases and
is tracked in [epic 6473](https://gitlab.com/groups/gitlab-org/-/epics/6473).
## How to plan features that interact with Group and ProjectNamespace
As of now, every Project in the system has a record in the `namespaces` table. This makes it possible to
use common interface to create features that are shared between Groups and Projects. Shared behavior can be added using
a concerns mechanism. Because the `Namespace` model is responsible for `UserNamespace` methods as well, it is discouraged
to use the `Namespace` model for shared behavior for Projects and Groups.
### Resource-based features
To migrate resource-based features, existing functionality will need to be supported. This can be achieved in two Phases.
**Phase 1 - Setup**
- Link into the namespaces table
- Add a column to the table
- For example, in issues a `project id` points to the projects table. We need to establish a link to the `namespaces` table.
- Modify code so that any new record already has the correct data in it
- Backfill
**Phase 2 - Prerequisite work**
- Investigate the permission model as well as any performance concerns related to that.
- Permissions need to be checked and kept in place.
- Investigate what other models need to support namespaces for functionality dependent on features you migrate in Phase 1.
- Adjust CRUD services and APIs (REST and GraphQL) to point to the new column you added in Phase 1.
- Consider performance when fetching resources.
Introducing new functionality is very much dependent on every single team and feature.
### Settings-related features
Right now, cascading settings are available for `NamespaceSettings`. By creating `ProjectNamespace`,
we can use this framework to make sure that some settings are applicable at the project level as well.
When working on settings, we need to make sure that:
- They are not used in `join` queries and do not modify those queries.
- Updating settings is taken into consideration.
- If we want to move from project to project namespace, we follow a similar database process to the one described in Phase 1.
## Organizations & cells
For the [Cells](../cells) project, GitLab will rely on organizations. A cell will host one or more organizations. When a request is made, the [HTTP Router Service](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/cells/http_routing_service/) will route it to the correct cell.
### Defining a sharding key for all organizational tables
All tables with the following [`gitlab_schema`](../cells/_index.md#available-cells--organization-schemas) are considered organization level:
- `gitlab_main_cell`
- `gitlab_ci`
- `gitlab_sec`
- `gitlab_main_user`
All newly created organization-level tables are required to have a `sharding_key`
defined in the corresponding `db/docs/` file for that table.
The purpose of the sharding key is documented in the
[Organization isolation blueprint](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/organization/isolation/),
but in short this column is used to provide a standard way of determining which
Organization owns a particular row in the database. The column will be used in
the future to enforce constraints that prevent data from crossing Organization
boundaries. It will also be used in the future to provide a uniform way to
migrate data between Cells.
The actual name of the foreign key can be anything, but it must reference a row
in `projects` or `groups`. The chosen `sharding_key` column must be non-nullable.
Defining multiple `sharding_key` columns, each nullable, is also allowed, provided that
the table has a check constraint that ensures exactly one of the keys is non-null for each row
(a minimal migration sketch follows the list below).
See [`NOT NULL` constraints for multiple columns](../database/not_null_constraints.md#not-null-constraints-for-multiple-columns)
for instructions on creating these constraints. The reasoning for adding sharding keys, and which keys to add to a table or row, goes like this:
- To move organizations across cells, we want `organization_id` on all rows of all tables.
- But `organization_id` on rows that are actually owned by a top-level group (or its subgroups or projects) makes top-level group
  transfer inefficient (due to `organization_id` rewrites) to the point of being impractical.
- Compromise: add `organization_id` or `namespace_id` to all rows of all tables.
- But `namespace_id` on rows of tables that are actually owned by projects makes project transfer (and certain subgroup transfers) inefficient
  (due to `namespace_id` rewrites) to the point of being impractical.
- Compromise: add `organization_id`, `namespace_id`, or `project_id` to all rows of all tables, whichever is the most specific.
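For illustration, a migration enforcing the "exactly one key set" rule on a table with multiple nullable sharding keys might look like the following minimal sketch. The table name `example_items` is an assumption chosen for the example, and the helper is the one described in the linked `NOT NULL` constraints guidance:
```ruby
# Hypothetical migration for an imaginary `example_items` table with two
# nullable sharding key columns. Enforces that exactly one of them is set.
class AddShardingKeyConstraintToExampleItems < Gitlab::Database::Migration[2.2]
  disable_ddl_transaction!

  def up
    add_multi_column_not_null_constraint(:example_items, :project_id, :namespace_id)
  end

  def down
    remove_multi_column_not_null_constraint(:example_items, :project_id, :namespace_id)
  end
end
```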
#### Conclusions
There is no benefit to filling `namespace_id` if a row is also owned by `project_id`.
There is, however, a performance impact on group and project transfer when filling `namespace_id` for a row that is also owned by `project_id`,
though if your table is small the performance impact is small.
It can also be confusing to have two sharding key values on some rows.
#### Guideline
Every row must have exactly one sharding key, and it should be as specific as possible. Exceptions cannot be made for large tables.
The following are examples of valid sharding keys:
- The table entries belong to a project only:
```yaml
sharding_key:
project_id: projects
```
- The table entries belong to a project and the foreign key is `target_project_id`:
```yaml
sharding_key:
target_project_id: projects
```
- The table entries belong to a namespace/group only:
```yaml
sharding_key:
namespace_id: namespaces
```
- The table entries belong to a namespace/group only and the foreign key is `group_id`:
```yaml
sharding_key:
group_id: namespaces
```
- The table entries belong to a namespace or a project:
```yaml
sharding_key:
project_id: projects
namespace_id: namespaces
```
- (Only for `gitlab_main_user`) The table entries belong to a user only:
```yaml
sharding_key:
  user_id: users
```
#### The sharding key must be immutable
The choice of a `sharding_key` should always be immutable. This is because the
sharding key column will be used as an index for the planned
[Org Mover](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/cells/migration/),
and also the
[enforcement of isolation](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/organization/isolation/)
of Organization data.
Any mutation of the `sharding_key` could result in inconsistent data being read.
Therefore, if your feature requires a user experience which allows data to be
moved between projects or groups/namespaces, then you may need to redesign the
move feature to create new rows.
An example of this can be seen in the
[move an issue feature](../../user/project/issues/managing_issues.md#move-an-issue).
This feature does not actually change the `project_id` column for an existing
`issues` row but instead creates a new `issues` row and creates a link in the
database from the original `issues` row.
If there is a particularly challenging
existing feature that needs to allow moving data, reach out to
the Tenant Scale team early on to discuss options for how to manage the
sharding key.
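As a rough sketch only, the create-and-link pattern looks like this. The real implementation lives in the issue move services and is considerably more involved:
```ruby
# Simplified sketch: "move" an issue without mutating its sharding key.
# Instead of rewriting issue.project_id, create a new row in the target
# project and link the original row to it.
def move_issue(issue, target_project)
  new_issue = target_project.issues.create!(
    title: issue.title,
    description: issue.description
  )

  # Keep the original row immutable with respect to its sharding key,
  # and record where it moved to.
  issue.update!(moved_to: new_issue)

  new_issue
end
```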
#### Using `namespace_id` as sharding key
The `namespaces` table has rows that can refer to a `Group`, a `ProjectNamespace`,
or a `UserNamespace`. The `UserNamespace` type is also known as a personal namespace.
Using a `namespace_id` as a sharding key is a good option, except when `namespace_id`
refers to a `UserNamespace`. Because a user does not necessarily have a related
`namespace` record, this sharding key can be `NULL`. A sharding key should not
have `NULL` values.
#### Using the same sharding key for projects and namespaces
Developers may also choose to use `namespace_id` only for tables that can
belong to a project where the feature used by the table is being developed
following the
[Consolidating Groups and Projects blueprint](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/consolidating_groups_and_projects/).
In that case the `namespace_id` would need to be the ID of the
`ProjectNamespace` and not the group that the namespace belongs to.
#### Using `organization_id` as sharding key
Usually, `project_id` or `namespace_id` are the most common sharding keys.
However, there are cases where a table does not belong to a project or a namespace.
In such cases, `organization_id` is an option for the sharding key, provided the below guidelines are followed:
- The `sharding_key` column still needs to be [immutable](#the-sharding-key-must-be-immutable).
- Only add `organization_id` for root level models (for example, `namespaces`), and not leaf-level models (for example, `issues`).
- Ensure such tables do not contain data related to groups or projects (or records that belong to groups or projects).
  For those, use `project_id` or `namespace_id` instead.
- Tables with lots of rows are not good candidates, because we would need to rewrite every row if we move the entity to a different organization, which can be expensive.
- When there are other tables referencing this table, the application should continue to work if the referencing table records are moved to a different organization.
If you believe that the `organization_id` is the best option for the sharding key, seek approval from the Tenant Scale group.
This is crucial because it has implications for data migration and may require reconsideration of the choice of sharding key.
As an example, see [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/462758), which added `organization_id` as a sharding key to an existing table.
For more information about development with organizations, see [Organization](../organization).
#### Add a sharding key to a pre-existing table
See the following [guidance](sharding/_index.md).
#### Define a `desired_sharding_key` to automatically backfill a `sharding_key`
We need to backfill a `sharding_key` to hundreds of tables that do not have one.
This process will involve creating a merge request like
<https://gitlab.com/gitlab-org/gitlab/-/merge_requests/136800> to add the new
column, backfill the data from a related table in the database, and then create
subsequent merge requests to add indexes, foreign keys and not-null
constraints.
To minimize the amount of repetitive effort for developers, we've
introduced a concise, declarative way to describe how to backfill the
`sharding_key` for a specific table. This content will later be used in
automation to create all the necessary merge requests.
An example of the `desired_sharding_key` was added in
<https://gitlab.com/gitlab-org/gitlab/-/merge_requests/139336> and it looks like:
```yaml
--- # db/docs/security_findings.yml
table_name: security_findings
classes:
- Security::Finding
# ...
desired_sharding_key:
project_id:
references: projects
backfill_via:
parent:
foreign_key: scanner_id
table: vulnerability_scanners
table_primary_key: id # Optional. Defaults to 'id'
sharding_key: project_id
belongs_to: scanner
```
To understand how this YAML data will be used, you can map it onto
the merge request we created manually:
<https://gitlab.com/gitlab-org/gitlab/-/merge_requests/136800>. The idea
is to create such merge requests automatically. The content of the YAML specifies
the parent table and its `sharding_key` to backfill from in the batched
background migration. It also specifies a `belongs_to` relation which
will be added to the model to automatically populate the `sharding_key` in
a `before_save` callback.
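As a rough illustration, the model-level change implied by the YAML above might look like the following sketch. The automation determines the exact generated code; this is not its actual output:
```ruby
# Sketch of the model change implied by the `security_findings` YAML above:
# a `belongs_to :scanner` association plus a callback that copies the
# parent's sharding key onto the row before it is saved.
module Security
  class Finding < ApplicationRecord
    belongs_to :scanner, class_name: 'Vulnerabilities::Scanner'

    before_save :populate_sharding_key

    private

    def populate_sharding_key
      self.project_id ||= scanner&.project_id
    end
  end
end
```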
##### Define a `desired_sharding_key` when the parent table also has one
By default, a `desired_sharding_key` configuration will validate that the chosen `sharding_key`
exists on the parent table. However, if the parent table also has a `desired_sharding_key` configuration
and is itself waiting to be backfilled, you need to include the `awaiting_backfill_on_parent` field.
For example:
```yaml
desired_sharding_key:
project_id:
references: projects
backfill_via:
parent:
foreign_key: package_file_id
table: packages_package_files
table_primary_key: id # Optional. Defaults to 'id'
sharding_key: project_id
belongs_to: package_file
awaiting_backfill_on_parent: true
```
There are likely edge cases where this `desired_sharding_key` structure is not
suitable for backfilling a `sharding_key`. In such cases the team owning the
table will need to create the necessary merge requests to add the
`sharding_key` manually.
#### Exempting certain tables from having sharding keys
Certain tables can be exempted from having sharding keys by adding
```yaml
exempt_from_sharding: true
```
to the table's database dictionary file. This can be used for:
- JiHu-specific tables, since they do not have any data on the .com database. [!145905](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/145905)
- Tables that are marked to be dropped soon, like `operations_feature_flag_scopes`. [!147541](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/147541).
  These tables should be dropped as soon as practical.
Do not use `exempt_from_sharding` for any other purposes.
Exempt tables break our isolation efforts and will introduce issues later in the Organizations and Cells projects.
When tables are exempted from sharding key requirements, they also do not show up in our
[progress dashboard](https://cells-progress-tracker-gitlab-org-tenant-scale-g-f4ad96bf01d25f.gitlab.io/sharding_keys).
Exempted tables must not have foreign key or loose foreign key references, as
these may cause foreign key violations in the target cell's database when data is
moved.
See [#471182](https://gitlab.com/gitlab-org/gitlab/-/issues/471182) for examples and possible solutions.
### Ensure sharding key presence on application level
When you define your sharding key, you must make sure it's populated at the application level.
Every `ApplicationRecord` model includes a `populate_sharding_key` helper, which
provides a convenient way of defining sharding key logic,
along with a corresponding RSpec matcher to test that logic. For example:
```ruby
# in model.rb
populate_sharding_key :project_id, source: :merge_request, field: :target_project_id
# in model_spec.rb
it { is_expected.to populate_sharding_key(:project_id).from(:merge_request, :target_project_id) }
```
See more [helper examples](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/models/concerns/populates_sharding_key.rb)
and [RSpec matcher examples](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/support/matchers/populate_sharding_key_matcher.rb).
### Map a request to an organization with `Current.organization`
The application needs to know how to map incoming requests to an organization. The mapping logic is encapsulated in [`Gitlab::Current::Organization`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/current/organization.rb). The outcome of this mapping is stored in a [`ActiveSupport::CurrentAttributes`](https://api.rubyonrails.org/classes/ActiveSupport/CurrentAttributes.html) instance called `Current`. You can then access the current organization using the `Current.organization` method.
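For example, a controller in the request layer can read the mapped organization directly. This is an illustrative sketch only; the controller name and scoping are assumptions, not existing code:
```ruby
# Hypothetical controller: scope records to the organization that
# Gitlab::Current::Organization resolved for the current request.
class ExampleProjectsController < ApplicationController
  def index
    # Current.organization is populated by the request-mapping logic.
    @projects = Current.organization.projects.order(:name)
  end
end
```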
### Availability of `Current.organization`
Since this mapping depends on HTTP requests, `Current.organization` is available only in the request layer. You can use it in:
- Rails controllers that inherit from `ApplicationController`
- GraphQL queries and mutations
- Grape API endpoints (requires [usage of a helper](#usage-in-grape-api))
In these request layers, it is safe to assume that `Current.organization` is not `nil`.
You cannot use `Current.organization` in:
- Rake tasks
- Cron jobs
- Sidekiq workers
This restriction is enforced by a RuboCop rule. For these cases, derive the organization ID from related data or pass it as an argument.
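For example, instead of reading `Current.organization` inside a worker, capture the ID in the request layer and pass it as an argument. A hypothetical sketch follows; the worker name and body are illustrative, and real workers declare additional attributes such as a feature category:
```ruby
# Hypothetical worker: Current.organization is not available here, so the
# organization ID is passed in explicitly from the request layer.
class ExampleCleanupWorker
  include ApplicationWorker

  idempotent!

  def perform(organization_id)
    organization = Organizations::Organization.find_by_id(organization_id)
    return unless organization

    # ... perform work scoped to `organization` ...
  end
end
```
The request layer would then enqueue it with something like `ExampleCleanupWorker.perform_async(Current.organization.id)`.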
### Writing tests for code that depends on `Current.organization`
If you need a `current_organization` in RSpec, you can use the [`with_current_organization`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/spec/support/shared_contexts/current_organization_context.rb) shared context. It defines a `current_organization` method, and the `Gitlab::Current::Organization` class is set up to return that organization:
```ruby
# frozen_string_literal: true
require 'spec_helper'
RSpec.describe MyController, :with_current_organization do
let(:project) { create(:project, organization: current_organization) }
subject { project.organization }
  it { is_expected.to eq(current_organization) }
end
```
### Usage in Grape API
`Current.organization` is not available in all Grape API endpoints. Use the `set_current_organization` helper to set `Current.organization`:
```ruby
module API
class SomeAPIEndpoint < ::API::Base
before do
set_current_organization # This will set Current.organization
end
# ... api logic ...
end
end
```
### The default organization
Do not rely on a default organization. Only one cell can access the default organization, and other cells cannot access it.
Default organizations were initially used to assign existing data when introducing the Organization data structure. However, the application no longer depends on default organizations. Do not create or assign default organization objects.
The default organization remains available on GitLab.com only until all data is assigned to new organizations. Hard-coded dependencies on the default organization do not work in cells. All cells should be treated the same.
### Organization data sources
An organization serves two purposes:
- A logical grouping of data (for example, a User belongs to one or more Organizations)
- [Sharding key](../cells) for Cells
For data modeling purposes, there is no need to have redundant `organization_id` attributes. For example, the projects table has an `organization_id` column. From a normalization point of view, this is not needed because a project belongs to a namespace and a namespace belongs to an organization.
However, for sharding purposes, we violate this normalization rule. Tables that have a parent-child relationship still define `organization_id` on both the parent table and the child.
To populate the `organization_id` column, use these methods in order of preference:
1. Derive it from related data. For example, a subgroup can use the organization that is assigned to the parent group (see the sketch after this list).
1. `Current.organization`. This is available in the request layer and can be passed into Sidekiq workers.
1. Ask the user. In some cases, the UI needs to be updated and should include a way of selecting an organization.
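As an illustrative sketch of the first option, a model whose parent already carries an `organization_id` can derive its own value from it, falling back to `Current.organization` in the request layer. The model and association names are assumptions for the example:
```ruby
# Hypothetical model: derive organization_id from the parent group when
# possible, falling back to the organization mapped for the request.
class ExampleGroupResource < ApplicationRecord
  belongs_to :group

  before_validation :set_organization_id

  private

  def set_organization_id
    self.organization_id ||= group&.organization_id || Current.organization&.id
  end
end
```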
## Related topics
- [Consolidating groups and projects](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/consolidating_groups_and_projects/)
architecture documentation
- [Organization user documentation](../../user/organization/_index.md)
---
stage: Tenant Scale
group: Organizations
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
description: Guidance and principles for sharding database tables to support organization
isolation
title: Sharding guidelines
breadcrumbs:
- doc
- development
- organization
- sharding
---
The sharding initiative is a long-running project to ensure that most GitLab database tables can be related to an `Organization`, either directly or indirectly. This involves adding an `organization_id`, `namespace_id` or `project_id` column to tables, and backfilling their `NOT NULL` fallback data. This work is important for the delivery of Cells and Organizations. For more information, see the [design goals of Organizations](https://handbook.gitlab.com/handbook/engineering/architecture/design-documents/organization/#organization-sharding).
## Sharding principles
Follow this guidance to complete the remaining sharding key work and resolve outstanding issues.
## Use unique issues for each table
We have a number of tables which share an issue. For example, [eight tables point to the same issue here](https://gitlab.com/search?search=sharding_key_issue_url%3A%20https%3A%2F%2Fgitlab.com%2Fgitlab-org%2Fgitlab%2F-%2Fissues%2F493768&nav_source=navbar&project_id=278964&group_id=9970&search_code=true&repository_ref=master). This makes tracking progress and resolving blockers difficult.
You should break these shared issues out into a separate issue per table, and update the YAML files to match.
## Update unresolved, closed issues
Some of the issues linked in the database YAML docs have been closed, sometimes in favor of new issues, but the YAML files still point to the original URL.
You should update these to point to the correct items to ensure we're accurately measuring progress.
## Add more information to sharding issues
Every sharding issue should have an assignee, an associated milestone, and should link to blockers, if applicable.
This helps us plan the work and estimate completion dates. It also ensures each issue names someone to contact in case of problems or concerns, and helps us visualize the project work by highlighting blocker issues so we can help resolve them.
Note that a blocker can be a dependency. For example, the `notes` table needs to be fully migrated before other tables can proceed. Any downstream issues should mark the related item as a blocker to help us understand these relationships.
## Tables marked `exempt_from_sharding` should be sharded
This section was moved to [another location](../_index.md#exempting-certain-tables-from-having-sharding-keys).
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Rails Endpoints
breadcrumbs:
- doc
- development
- rails_endpoints
---
Rails Endpoints are used by different GitLab components; they cannot be
used by other consumers. This documentation is intended for people
working on the GitLab codebase.
These Rails Endpoints:
- May not have extensive documentation or follow the same conventions as our public or private APIs.
- May not adhere to standardized rules or guidelines.
- Are designed to serve specific internal purposes in the codebase.
- Are subject to change at any time.
## Proof of concept period: Feedback Request
We are evaluating a new approach for documenting Rails endpoints. [Check out the Feedback Issue](https://gitlab.com/gitlab-org/gitlab/-/issues/411605) and feel free to share your thoughts, suggestions, or concerns. We appreciate your participation in helping us improve the documentation!
## SAST Scanners
Static Application Security Testing (SAST) checks your source code for known vulnerabilities. When SAST is enabled
on a project, these endpoints are available.
### List existing merge request code quality findings sorted by files
Get a list of existing code quality Findings, if any, sorted by files.
```plaintext
GET /projects/:id/merge_requests/:merge_request_iid/codequality_mr_diff_reports.json
```
Response:
```json
{
"files": {
"index.js": [
{
"line": 1,
"description": "Unexpected 'debugger' statement.",
"severity": "major"
}
]
}
}
```
### List new, resolved and existing merge request code quality findings
Get a list of new, resolved, and existing code quality Findings, if any.
```plaintext
GET /projects/:id/merge_requests/:merge_request_iid/codequality_reports.json
```
```json
{
"status": "failed",
"new_errors": [
{
"description": "Unexpected 'debugger' statement.",
"severity": "major",
"file_path": "index.js",
"line": 1,
"web_url": "https://gitlab.com/jannik_lehmann/code-quality-test/-/blob/ed1c1b3052fe6963beda0e416d5e2ba3378eb715/noise.rb#L12",
"engine_name": "eslint"
}
],
"resolved_errors": [],
"existing_errors": [],
"summary": { "total": 1, "resolved": 0, "errored": 1 }
}
```
---
stage: Tenant Scale
group: Geo
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: How GitLab backups work
breadcrumbs:
- doc
- development
- backup_and_restore
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed
{{< /details >}}
GitLab provides recommendations for how to create application backups across different installation types and different hosting architectures. We provide simple tools to create a point-in-time application backup, as well as specialized documentation for how to handle complex Cloud-based installation backups.
Backup and Restore relies primarily on the `gitlab-backup` tool that is shipped with the Linux Package and Docker installation methods.
There is an additional tool shipped only for Kubernetes installations, `backup-utility`, which has a different implementation.
## The `gitlab-backup` Tool
Current GitLab documentation on performing backup [creation](../../administration/backup_restore/backup_gitlab.md#backup-command) and [restoration](../../administration/backup_restore/restore_gitlab.md#restore-for-linux-package-installations) points to using a special command we ship inside the system packages created with Omnibus: `gitlab-backup`. This command has a very simple interface with two subcommand options:
```shell
# To create a backup
sudo gitlab-backup create
# This corresponds to gitlab-rake gitlab:backup:create
# To restore a previously-captured backup
sudo gitlab-backup restore BACKUP=<backup_id>
# This corresponds to gitlab-rake gitlab:backup:restore BACKUP=<backup_id>
```
This command is actually a shell script that serves to wrap the core backup and restore Rake tasks defined in the GitLab Rails application. Rake tasks are generally invoked with environmental variables to define parameters and runtime configuration, and these commands will pass any significant environmental settings to the Rake process when invoked. However, the main backup creation and restoration work is defined inside the Rake tasks, which we will discuss in greater depth in the next section.
## Rake Tasks
Today, the GitLab Rails application provides several Rake tasks that are the primary means for administrators to capture a backup of application data and then to subsequently restore it.
### Creating a Point-In-Time Backup Archive
```shell
sudo gitlab-rake gitlab:backup:create [env-overrides]
```
The backup creation Rake task has the goal of capturing the state of all families of GitLab application data at the time of execution. In general, when successfully invoked, the creation task will build a backup archive tarball file.
The content and format of the archive tarball may be significantly altered by both system-wide configuration settings in `/etc/gitlab/gitlab.rb` or through environmental variables set at the time of invocation. Furthermore, these different settings can determine where backups are stored after creation, or where they can be discovered upon restoration. There are options to tweak performance while doing these operations on installations that have a much larger data burden than a typical 1K install.
#### Default Backup Creation Procedure
When a user executes the backup creation Rake task, the following sequence of high-level steps is executed:
1. Create a temporary directory to store all application backup data and metadata.
1. Dump each PostgreSQL database used by the application in a SQL file in the `db` subdirectory of the archive. This is generally done by invoking `pg_dump` on each significant database. Each `.sql` file created is further compressed with `gzip`.
1. Request a bundle export of each Git repository in the application through Gitaly. All of this data is retained in the `repositories` directory of the archive. This includes any "wiki" or "design" data associated with projects, as those features are stored as associated Git repositories.
1. For each remaining "blob"-oriented data feature, each blob corresponds to a file in a directory. So, for each binary data feature, copy each of its blob entries to a named file in a temporary directory in the archive. Once all data has been copied over, compress and serialize the directory into a `.tar.gz` file that is itself embedded in the archive. This is done for each of the following features:
- `artifacts`
- `builds`
- `ci_secure_files`
- `external_diffs`
- `lfs`
- `packages`
- `pages`
- `registry`
- `uploads`
- `terraform_state`
1. Record the parameters and status of the backup operation in a YAML file named `backup_information.yml` in the top level of the archive directory.
1. Serialize the temporary archive directory into a single `.tar` tarball file.
1. Move the tarball file to its final storage place. Depending on system configuration and parameters, this may be a directory on the machine running the creation task, or this may be a storage bucket on a cloud storage service like S3 or Google Storage.
A number of configuration and environmental parameters may alter this general procedure. These parameters are covered in the next sections.
#### Customizing Backup Creation
The system-wide GitLab configuration file, typically located at `/etc/gitlab/gitlab.rb`, allows setting a number of standard parameters for any backup creation or restoration invocation. In particular, the following table shows keys that may be set on the `gitlab_rails` configuration object which impact the execution of backup operations:
| Configuration Key |
|----------------------------------|
| `backup_archive_permissions` |
| `backup_encryption` |
| `backup_encryption_key` |
| `backup_gitaly_backup_path` |
| `backup_keep_time` |
| `backup_path` |
| `backup_upload_connection` |
| `backup_upload_remote_directory` |
| `backup_upload_storage_class` |
| `backup_upload_storage_options` |
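For instance, a hypothetical `/etc/gitlab/gitlab.rb` excerpt using a few of these keys might look like this; the values shown are illustrative only:
```ruby
# Illustrative /etc/gitlab/gitlab.rb excerpt; values are examples only.
gitlab_rails['backup_path'] = '/var/opt/gitlab/backups'
gitlab_rails['backup_keep_time'] = 604800          # retain archives for 7 days (seconds)
gitlab_rails['backup_archive_permissions'] = 0644  # file mode for created archives
```
As with all `gitlab.rb` changes, running `gitlab-ctl reconfigure` is required for new values to take effect.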
Furthermore, each execution of a backup creation or restoration operation may set environmental variables to modify the backup algorithm, data access locations, or archive storage formatting. For the act of creating a backup archive, the Rake task supports the following environmental variable settings:
| Environmental Variable |
|-----------------------------------------|
| `BACKUP` |
| `COMPRESS_CMD` |
| `CRON` |
| `GITLAB_BACKUP_ENCRYPTION_KEY` |
| `GITLAB_BACKUP_MAX_CONCURRENCY` |
| `GITLAB_BACKUP_MAX_STORAGE_CONCURRENCY` |
| `GZIP_RSYNCABLE` |
| `INCREMENTAL` |
| `PREVIOUS_BACKUP` |
| `REPOSITORIES_PATHS` |
| `REPOSITORIES_SERVER_SIDE` |
| `REPOSITORIES_STORAGES` |
| `SKIP_REPOSITORIES_PATHS` |
| `STRATEGY` |
##### Restoring a Point-In-Time Backup Archive
```shell
sudo gitlab-rake gitlab:backup:restore BACKUP=<backup_id> [env-overrides]
```
Once a user has run the backup creation task successfully at some prior time, they will have access to an archive tarball file that may be used to restore the application data state to roughly that point in time. These archive files will be stored either on the local system in a specific directory, or in a cloud object storage bucket, depending on the system configuration. But once an administrator is sure that backups have been captured, they can request restoration of a particular backup using the `gitlab-rake` command shown above, where `<backup_id>` indicates the base file name of the backup tarball.
Running a restore operation will obliterate the current state of application data. Thus, the Rake task will pause to confirm the destructive action with the user before proceeding.
#### Default Backup Restoration Procedure
When a user executes the backup restoration Rake task, a sequence of steps are carried out that mirror the steps performed during the creation process. This sequence is outlined as follows:
1. Create a temporary directory to serve as a working directory during restoration.
1. Fetch a copy of the target archive tarball and unpack its content inside the work directory.
1. Validate that the archive data is able to be restored:
   1. Read its backup metadata from the `backup_information.yml` file, if it exists.
   1. Verify that the GitLab application version at the time of backup matches the current application version.
   1. Fail out of the whole restore process if any of these files do not exist, are malformed, or if the version does not match.
1. Confirm the user wishes to destroy all current GitLab data before proceeding with restoration.
1. Read and decompress each `.sql.gz` file corresponding to a known application database. Run the SQL content to overwrite the full state of each database.
1. Fetch all repository bundle data stored in the `repositories` archive directory. Work with the Gitaly service to restore each repository to its expected storage location using the saved bundle data.
1. For each blob-oriented data feature, find a `.tar.gz` file in the top archive directory that corresponds to the target feature. Decompress each feature tarball and read its binary file contents, copying them to the appropriate blob storage configured for the system. This action is performed for each of the following features:
- `uploads`
- `builds`
- `artifacts`
- `pages`
- `lfs`
- `terraform_state`
- `registry`
- `packages`
- `ci_secure_files`
1. Reconfigure SSH access and rebuild an `authorized_keys` file by running the GitLab Shell setup task.
1. Clear any cache data.
As with the backup creation operation, there are numerous configuration file values and environmental variables that may alter how the restoration task is performed.
---
stage: none
group: Localization
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Translating GitLab
breadcrumbs:
- doc
- development
- i18n
---
For managing the translation process, we use [Crowdin](https://crowdin.com).
To contribute translations at [`translate.gitlab.com`](https://translate.gitlab.com),
you must create a Crowdin account. You may create a new account or use any of their supported
sign-in services.
## Language selections
GitLab is being translated into many languages. To select a language to contribute to:
1. Find the language that you want to contribute to, in the
[GitLab Crowdin project](https://crowdin.com/project/gitlab-ee).
- If the language you want is available, proceed to the next step.
- If the language you want is not available:
1. Check the [Localization issues](https://gitlab.com/gitlab-org/gitlab/-/issues?scope=all&utf8=✓&state=all&label_name[]=group%3A%3Alocalization)
to see if there is already an open request for that language. If an issue exists,
you can add your support for the language in a comment.
1. If there is no request for the language, create a new issue for the language.
Notify our Crowdin administrators by including `@gitlab-com/localization/maintainers`
in a comment or in the description of your issue.
- After the issue and any merge requests are complete, restart this procedure.
1. View the list of files and folders. Select `gitlab.pot` to open the translation editor.
## Translation editor
The online translation editor is the easiest way to contribute translations.

- Strings for translation are listed in the left panel.
- Translations are entered into the central panel. Multiple translations are required for strings
that contain plurals. The string to translate is shown in the above image with glossary terms
highlighted. If the string to translate isn't clear, you can request context.
A glossary of common terms is available in the **Terms** tab in the right panel. In the **Comments**
tab, you can add comments to discuss a translation with the community.
Remember to **Save** each translation.
### Context
In Crowdin, each string contains a link that shows all instances of the string in the entire GitLab codebase.
When you translate a string, you can go to the relevant commit or merge request to get more context.

When you select the link, code search results appear for that string.
You can [view Git blame from code search](../../user/search/_index.md#view-git-blame-from-code-search)
to see the commits that added the string.
For a list of relevant merge requests, select a commit.

## General Translation Guidelines
Be sure to check the following guidelines before you translate any strings.
### Namespaced strings
A namespace precedes the string and is separated from it by a `|` (`namespace|string`). When you see
a namespace before an externalized string, you should remove the namespace from the final
translation. For example, in `OpenedNDaysAgo|Opened`, remove `OpenedNDaysAgo|`. If translating to
French, translate `OpenedNDaysAgo|Opened` to `Ouvert•e`, not `OpenedNDaysAgo|Ouvert•e`.
### Technical terms
You should treat some technical terms like proper nouns and not translate them. Technical terms that
should always be in English are noted in the glossary when using
[`translate.gitlab.com`](https://translate.gitlab.com).
This helps maintain a logical connection and consistency between tools (for example, a Git client)
and GitLab.
To find the list of technical terms:
1. Go to [`translate.gitlab.com`](https://translate.gitlab.com).
1. Select the language to translate.
1. Select **Glossary**.
### Formality
The level of formality used in software varies by language:
| Language | Formality | Example |
| -------- | --------- | ------- |
| French | formal | `vous` for `you` |
| German | informal | `du` for `you` |
| Spanish | informal | `tú` for `you` |
Refer to other translated strings and notes in the glossary to assist you in determining a suitable
level of formality.
### Inclusive language
[Diversity, inclusion, and belonging](https://handbook.gitlab.com/handbook/values/#diversity-inclusion)
are GitLab values. We ask you to avoid translations that exclude people based on their gender or
ethnicity. In languages that distinguish between a male and female form, use both or choose a
neutral formulation.
<!-- vale gitlab_base.Spelling = NO -->
For example, in German, the word _user_ can be translated into _Benutzer_ (male) or _Benutzerin_
(female). Therefore, _create a new user_ translates to _Benutzer(in) anlegen_.
<!-- vale gitlab_base.Spelling = YES -->
### Updating the glossary
To propose additions to the glossary,
[open an issue](https://gitlab.com/gitlab-org/gitlab/-/issues?scope=all&utf8=✓&state=all&label_name[]=Category%3AInternationalization).
## French translation guidelines
<!-- vale gitlab_base.Spelling = NO -->
In French, _écriture inclusive_ with interpuncts is no longer used in official texts (see [Legifrance](https://www.legifrance.gouv.fr/jorf/id/JORFTEXT000036068906/)).
To include both genders, write _Utilisateurs et utilisatrices_ instead of _Utilisateur·rice·s_. If
there is not enough space, use the male gender alone.
<!-- vale gitlab_base.Spelling = YES -->
# Proofread Translations
Most translations are contributed, reviewed, and accepted by the community. We
are very appreciative of the work done by translators and proofreaders!
## Proofreaders
<!-- vale gitlab_base.Spelling = NO -->
- Albanian
- Proofreaders needed.
- Amharic
- Proofreaders needed.
- Arabic
- Proofreaders needed.
- Basque
- Unai Tolosa - [GitLab](https://gitlab.com/utolosa002), [Crowdin](https://crowdin.com/profile/utolosa002)
- Belarusian
- Anton Katsuba - [GitLab](https://gitlab.com/coinvariant), [Crowdin](https://crowdin.com/profile/aerialfiddle)
- Bosnian
- Proofreaders needed.
- Bulgarian
- Proofreaders needed.
- Catalan
- Proofreaders needed.
- Chinese Simplified 简体中文
- Zhiyuan Lu - [GitLab](https://gitlab.com/luzhiyuan.deer), [Crowdin](https://crowdin.com/profile/luzhiyuan.deer)
- Chinese Traditional 繁體中文
- Hansel Wang - [GitLab](https://gitlab.com/airness), [Crowdin](https://crowdin.com/profile/airness)
- Chinese Traditional, Hong Kong 繁體中文 (香港)
- Proofreaders needed.
- Croatian
- Proofreaders needed.
- Czech
- Proofreaders needed.
- Danish
- scootergrisen - [GitLab](https://gitlab.com/scootergrisen), [Crowdin](https://crowdin.com/profile/scootergrisen)
- Dutch
- Proofreaders needed.
- English (UK)
- Proofreaders needed.
- Esperanto
- Proofreaders needed.
- Estonian
- Proofreaders needed.
- Farsi
- Iman manati - [GitLab](https://gitlab.com/baratiiman3), [Crowdin](https://crowdin.com/profile/iman31)
- Filipino
- Proofreaders needed.
- French
- Xavier Delatour - [GitLab](https://gitlab.com/xdelatour), [Crowdin](https://crowdin.com/profile/xdelatour)
- Galician
- Proofreaders needed.
- German
- Vladislav Wanner - [GitLab](https://gitlab.com/RumBugen), [Crowdin](https://crowdin.com/profile/RumBugen)
- Daniel Ziegenberg - [GitLab](https://gitlab.com/ziegenberg), [Crowdin](https://crowdin.com/profile/ziegenberg)
- Greek
- Proofreaders needed.
- Hebrew
- Yaron Shahrabani - [GitLab](https://gitlab.com/yarons), [Crowdin](https://crowdin.com/profile/YaronSh)
- Hindi
- Proofreaders needed.
- Hungarian
- Proofreaders needed.
- Indonesian
- Rahayu Rafika - [GitLab](https://gitlab.com/Vkfikaa), [Crowdin](https://crowdin.com/profile/rahayurafika_12)
- Irish
- Aindriú Mac Giolla Eoin - [GitLab](https://gitlab.com/aindriu80), [Crowdin](https://crowdin.com/profile/aindriu80)
- Italian
- Proofreaders needed.
- Japanese
- Tomo Dote - [GitLab](https://gitlab.com/fu7mu4), [Crowdin](https://crowdin.com/profile/fu7mu4)
- Tsukasa Komatsubara - [GitLab](https://gitlab.com/tkomatsubara), [Crowdin](https://crowdin.com/profile/tkomatsubara)
- Noriko Akiyama - [GitLab](https://gitlab.com/nakiyama-ext), [Crowdin](https://crowdin.com/profile/norikoakiyama)
- Naoko Shirakuni - [GitLab](https://gitlab.com/SNaoko), [Crowdin](https://crowdin.com/profile/tamongen)
- Megumi Uchikawa - [GitLab](https://gitlab.com/muchikawa), [Crowdin](https://crowdin.com/profile/muchikawa)
- Korean
- Sunjung Park - [GitLab](https://gitlab.com/sunjungp), [Crowdin](https://crowdin.com/profile/sunjungp)
- Hwanyong Lee - [GitLab](https://gitlab.com/hwan_ajou), [Crowdin](https://crowdin.com/profile/grbear)
- Latvian
- ℂ𝕠𝕠𝕠𝕝 - [GitLab](https://gitlab.com/Coool), [Crowdin](https://crowdin.com/profile/Coool)
- Mongolian
- Proofreaders needed.
- Norwegian Bokmal
- Imre Kristoffer Eilertsen - [GitLab](https://gitlab.com/DandelionSprout), [Crowdin](https://crowdin.com/profile/DandelionSprout)
- Polish
- Proofreaders needed.
- Portuguese
- Proofreaders needed.
- Portuguese, Brazilian
- Eduardo Addad de Oliveira - [GitLab](https://gitlab.com/eduardoaddad), [Crowdin](https://crowdin.com/profile/eduardoaddad)
- Romanian
- Proofreaders needed.
- Russian
- Alexey Butkeev - [GitLab](https://gitlab.com/abutkeev), [Crowdin](https://crowdin.com/profile/abutkeev)
- Dmitry Fedoroff - [GitLab](https://gitlab.com/DmitryFedoroff), [Crowdin](https://crowdin.com/profile/DmitryFedoroff)
- Mark Minakou - [GitLab](https://gitlab.com/sandzhaj), [Crowdin](https://crowdin.com/profile/sandzhaj)
- Andrey Komarov - [GitLab](https://gitlab.com/elkamarado), [Crowdin](https://crowdin.com/profile/kamarado)
- Serbian (Latin and Cyrillic)
- Proofreaders needed.
- Sinhalese/Sinhala සිංහල
- හෙළබස (HelaBasa) - [GitLab](https://gitlab.com/helabasa), [Crowdin](https://crowdin.com/profile/helabasa)
- Slovak
- Proofreaders needed.
- Spanish
- David Elizondo - [GitLab](https://gitlab.com/daelmo), [Crowdin](https://crowdin.com/profile/daelmo)
- Pablo Reyes - [GitLab](https://gitlab.com/pabloryst9n), [Crowdin](https://crowdin.com/profile/pabloryst9n)
- Gustavo Román - [GitLab](https://gitlab.com/GustavoStark), [Crowdin](https://crowdin.com/profile/gustavonewton)
- Swedish
- Johannes Nilsson - [GitLab](https://gitlab.com/pixelregn), [Crowdin](https://crowdin.com/profile/pixelregn)
- Turkish
- Proofreaders needed.
- Ukrainian
- Andrew Vityuk - [GitLab](https://gitlab.com/3_1_3_u), [Crowdin](https://crowdin.com/profile/andruwa13)
- Welsh
- Proofreaders needed.
<!-- vale gitlab_base.Spelling = YES -->
## Become a proofreader
Before requesting proofreader permissions in Crowdin, be sure you have a history of contributing
translations to the GitLab project.
1. Contribute translations to GitLab. See instructions for
[translating GitLab](translation.md).
Translating GitLab is a community effort that requires teamwork and attention to detail.
Proofreaders play an important role helping new contributors, and ensuring the consistency and
quality of translations. Your conduct and contributions as a translator should reflect this
before requesting to be a proofreader.
1. Request proofreader permissions by opening a merge request to add yourself to the list of
proofreaders.
Open the [`proofreader.md` source file](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/development/i18n/proofreader.md) and select **Edit**.
Add your language in alphabetical order and add yourself to the list, including:
- Name
- Link to your GitLab profile
- Link to your Crowdin profile
In the merge request description, include links to any projects you have previously translated.
1. [GitLab team members](https://about.gitlab.com/company/team/),
[core team members](https://about.gitlab.com/community/core-team/),
[globalization and localization team members](https://handbook.gitlab.com/handbook/marketing/localization/),
or current proofreaders fluent in the language consider your request to become a proofreader
based on the merits of your previous translations.
- If you request to become the first proofreader for a language and there are no GitLab or Core
team members who speak that language, we request links to previous translation work in other
communities or projects.
# Merging translations from Crowdin
Crowdin automatically syncs the `gitlab.pot` file with the Crowdin service, presenting
newly added externalized strings to the community of translators.
The [GitLab Crowdin Bot](https://gitlab.com/gitlab-crowdin-bot) also creates merge requests
to take newly approved translation submissions and merge them into the `locale/<language>/gitlab.po`
files. Check the [merge requests created by `gitlab-crowdin-bot`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests?scope=all&state=opened&author_username=gitlab-crowdin-bot)
to see new and merged merge requests.
## Validation
By default, Crowdin commits translations with `[skip ci]` in the commit
message. This prevents an excessive number of pipelines from running.
Before merging translations, make sure to trigger a pipeline to validate
translations. Static analysis validates things Crowdin doesn't do. Create
a new pipeline at [`https://gitlab.com/gitlab-org/gitlab/pipelines/new`](https://gitlab.com/gitlab-org/gitlab/pipelines/new)
(requires the Developer role) for the `master-i18n` branch.
The pipeline job validates translations with the [`PoLinter` class](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/i18n/po_linter.rb).
If the linter finds any errors, they appear in the job log.
For an example of a failed pipeline, see [these error messages](https://gitlab.com/gitlab-org/gitlab/-/jobs/6771832489#L873).
If validation errors occur, you must manually disapprove the offending string
in Crowdin and leave a comment about how to fix the errors:
1. Sign in to Crowdin with the `gitlab-crowdin-bot` account.
1. Find the offending string.
1. Select **Current translation is wrong** to disapprove the translation for the specific target language.
1. Include the error message from the job log as a comment.
The invalid translation is then excluded, and the merge request is updated.
Automating this process is proposed in [issue 23256](https://gitlab.com/gitlab-org/gitlab/-/issues/23256).
If the translation fails validation due to angle brackets (`<` or `>`),
it should be disapproved in Crowdin. Our strings must use [variables](externalization.md#html)
for HTML instead.
It might be useful to pause the integration on the Crowdin side for a
moment so translations don't keep coming. You can do this by selecting
**Pause sync** on the [Crowdin integration settings page](https://translate.gitlab.com/project/gitlab-ee/settings#integration).
## Merging translations
After all translations are determined to be appropriate and the pipelines pass,
you can merge the translations into the default branch. When merging translations,
be sure to select the **Remove source branch** checkbox. This causes Crowdin
to recreate the `master-i18n` branch from the default branch after merging the new
translation.
We are discussing [automating this entire process](https://gitlab.com/gitlab-org/gitlab/-/issues/19896).
## Recreate the merge request
Crowdin creates a new merge request as soon as the old one is closed
or merged. But it does not recreate the `master-i18n` branch every
time. To force Crowdin to recreate the branch, close any [open merge requests](https://gitlab.com/gitlab-org/gitlab/-/merge_requests?scope=all&state=opened&author_username=gitlab-crowdin-bot)
and delete the [`master-i18n`](https://gitlab.com/gitlab-org/gitlab/-/branches/all?utf8=✓&search=master-i18n) branch.
This might be needed when the merge request contains failures that
have been fixed on the default branch.
## Recreate the GitLab integration in Crowdin
{{< alert type="note" >}}
These instructions work only for GitLab Team Members.
{{< /alert >}}
If for some reason the GitLab integration in Crowdin doesn't exist, you can
recreate it with the following steps:
1. Sign in to GitLab as `gitlab-crowdin-bot`. (If you're a GitLab Team Member,
find credentials in the GitLab shared
[1Password account](https://handbook.gitlab.com/handbook/security/password-guidelines/#1password-for-teams).)
1. Sign in to Crowdin with the GitLab integration.
1. Go to **Settings > Integrations > GitLab > Set Up Integration**.
1. Select the `gitlab-org/gitlab` repository.
1. In **Select Branches for Translation**, select `master`.
1. Ensure the **Service Branch Name** is `master-i18n`.
## Manually update the translation levels
There's no automated way to pull the translation levels from Crowdin to display
this information in the language selection dropdown list. Therefore, the translation
levels are hard-coded in the `TRANSLATION_LEVELS` constant in [`i18n.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/i18n.rb),
and must be regularly updated.
To update the translation levels:
1. Get the translation levels (percentage of approved words) from [Crowdin](https://crowdin.com/project/gitlab-ee/settings#translations).
1. Update the hard-coded translation levels in [`i18n.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/i18n.rb#L40).
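For orientation, the constant is a plain hash that maps locale codes to integer percentages. A minimal sketch, with made-up values rather than current Crowdin levels:
```ruby
# lib/gitlab/i18n.rb (excerpt; the percentages here are illustrative only)
TRANSLATION_LEVELS = {
  'bg' => 0,
  'de' => 14,
  'fr' => 98
}.freeze
```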
|
---
stage: Create
group: Import
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Merging translations from Crowdin
breadcrumbs:
- doc
- development
- i18n
---
Crowdin automatically syncs the `gitlab.pot` file with the Crowdin service, presenting
newly added externalized strings to the community of translators.
The [GitLab Crowdin Bot](https://gitlab.com/gitlab-crowdin-bot) also creates merge requests
to take newly approved translation submissions and merge them into the `locale/<language>/gitlab.po`
files. Check the [merge requests created by `gitlab-crowdin-bot`](https://gitlab.com/gitlab-org/gitlab/-/merge_requests?scope=all&state=opened&author_username=gitlab-crowdin-bot)
to see new and merged merge requests.
## Validation
By default Crowdin commits translations with `[skip ci]` in the commit
message. This avoids an excessive number of pipelines from running.
Before merging translations, make sure to trigger a pipeline to validate
translations. Static analysis validates things Crowdin doesn't do. Create
a new pipeline at [`https://gitlab.com/gitlab-org/gitlab/pipelines/new`](https://gitlab.com/gitlab-org/gitlab/pipelines/new)
(requires the Developer role) for the `master-i18n` branch.
The pipeline job validates translations with the [`PoLinter` class](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/i18n/po_linter.rb).
If the linter finds any errors, they appear in the job log.
For an example of a failed pipeline, see [these error messages](https://gitlab.com/gitlab-org/gitlab/-/jobs/6771832489#L873).
If validation errors occur, you must manually disapprove the offending string
in Crowdin and leave a comment about how to fix the errors:
1. Sign in to Crowdin with the `gitlab-crowdin-bot` account.
1. Find the offending string.
1. Select **Current translation is wrong** to disapprove the translation for the specific target language.
1. Include the error message from the job log as a comment.
The invalid translation is then excluded, and the merge request is updated.
Automating this process is proposed in [issue 23256](https://gitlab.com/gitlab-org/gitlab/-/issues/23256).
If the translation fails validation due to angle brackets (`<` or `>`),
it should be disapproved in Crowdin. Our strings must use [variables](externalization.md#html)
for HTML instead.
It might be useful to pause the integration on the Crowdin side for a
moment so translations don't keep coming. You can do this by selecting
**Pause sync** on the [Crowdin integration settings page](https://translate.gitlab.com/project/gitlab-ee/settings#integration).
## Merging translations
After all translations are determined to be appropriate and the pipelines pass,
you can merge the translations into the default branch. When merging translations,
be sure to select the **Remove source branch** checkbox. This causes Crowdin
to recreate the `master-i18n` branch from the default branch after merging the new
translation.
We are discussing [automating this entire process](https://gitlab.com/gitlab-org/gitlab/-/issues/19896).
## Recreate the merge request
Crowdin creates a new merge request as soon as the old one is closed
or merged. But it does not recreate the `master-i18n` branch every
time. To force Crowdin to recreate the branch, close any [open merge requests](https://gitlab.com/gitlab-org/gitlab/-/merge_requests?scope=all&state=opened&author_username=gitlab-crowdin-bot)
and delete the [`master-18n`](https://gitlab.com/gitlab-org/gitlab/-/branches/all?utf8=✓&search=master-i18n) branch.
This might be needed when the merge request contains failures that
have been fixed on the default branch.
## Recreate the GitLab integration in Crowdin
{{< alert type="note" >}}
These instructions work only for GitLab Team Members.
{{< /alert >}}
If for some reason the GitLab integration in Crowdin doesn't exist, you can
recreate it with the following steps:
1. Sign in to GitLab as `gitlab-crowdin-bot`. (If you're a GitLab Team Member,
find credentials in the GitLab shared
[1Password account](https://handbook.gitlab.com/handbook/security/password-guidelines/#1password-for-teams).)
1. Sign in to Crowdin with the GitLab integration.
1. Go to **Settings > Integrations > GitLab > Set Up Integration**.
1. Select the `gitlab-org/gitlab` repository.
1. In **Select Branches for Translation**, select `master`.
1. Ensure the **Service Branch Name** is `master-i18n`.
## Manually update the translation levels
There's no automated way to pull the translation levels from Crowdin, to display
this information in the language selection dropdown list. Therefore, the translation
levels are hard-coded in the `TRANSLATION_LEVELS` constant in [`i18n.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/i18n.rb),
and must be regularly updated.
To update the translation levels:
1. Get the translation levels (percentage of approved words) from [Crowdin](https://crowdin.com/project/gitlab-ee/settings#translations).
1. Update the hard-coded translation levels in [`i18n.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/i18n.rb#L40).
# Internationalization for GitLab
For working with internationalization (i18n),
[GNU gettext](https://www.gnu.org/software/gettext/) is used because it's the most
widely used tool for this task, and many applications help us work with it.
{{< alert type="note" >}}
All `rake` commands described on this page must be run on a GitLab instance. This instance is
usually the GitLab Development Kit (GDK).
{{< /alert >}}
## Setting up the GitLab Development Kit (GDK)
To work on the [GitLab Community Edition](https://gitlab.com/gitlab-org/gitlab-foss)
project, you must download and configure it through the [GDK](https://gitlab.com/gitlab-org/gitlab-development-kit/blob/main/doc/set-up-gdk.md).
After you have the GitLab project ready, you can start working on the translation.
## Tools
The following tools are used:
- Custom written tools to aid day-to-day development work with translations:
- `tooling/bin/gettext_extractor locale/gitlab.pot`: scan all source files for [new content to translate](#updating-the-po-files-with-the-new-content)
- `rake gettext:compile`: reads the contents of the PO files and generates JS files which
contain all the available translations for the Frontend.
- `rake gettext:lint`: [validate PO files](#validating-po-files)
- [`gettext_i18n_rails`](https://github.com/grosser/gettext_i18n_rails):
this gem allows us to translate content from models, views, and controllers.
It uses [`fast_gettext`](https://github.com/grosser/fast_gettext) under the hood.
It also provides access to the following Rake tasks, which are rarely needed in day-to-day:
- `rake gettext:add_language[language]`: [adding a new language](#adding-a-new-language)
- `rake gettext:find`: parses almost all the files from the Rails application looking for content
marked for translation. It then updates the PO files with this content.
- `rake gettext:pack`: processes the PO files and generates the binary MO files that the
application uses.
- PO editor: there are multiple applications that can help us work with PO files.
A good option is [Poedit](https://poedit.net/download),
which is available for macOS, GNU/Linux, and Windows.
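Putting these tools together, the typical day-to-day sequence after externalizing new strings is roughly the following:
```shell
# Regenerate the POT file with newly externalized strings
tooling/bin/gettext_extractor locale/gitlab.pot
# Validate the PO files
rake gettext:lint
# Regenerate the frontend translation files
rake gettext:compile
```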
## Preparing a page for translation
You must mark strings as translatable with the following available helpers. Keep in mind that
strings are translated in tools where their context of use might not be obvious. Consider
[namespacing](#namespaces) domain-specific strings to provide more context to the translators.
There are four file types:
- Ruby files: models and controllers.
- HAML files: view files.
- ERB files: used for email templates.
- JavaScript files: we mostly work with Vue templates.
### Ruby files
If there is a method or variable that works with a raw string, for instance:
```ruby
def hello
"Hello world!"
end
```
Or:
```ruby
hello = "Hello world!"
```
You can mark that content for translation with:
```ruby
def hello
_("Hello world!")
end
```
Or:
```ruby
hello = _("Hello world!")
```
Be careful when translating strings at the class or module level because these are only evaluated once
at class load time. For example:
```ruby
validates :group_id, uniqueness: { scope: [:project_id], message: _("already shared with this group") }
```
This is translated when the class loads and results in the error message always being in the default
locale. Active Record's `:message` option accepts a `Proc`, so do this instead:
```ruby
validates :group_id, uniqueness: { scope: [:project_id], message: -> (object, data) { _("already shared with this group") } }
```
Messages in the API (`lib/api/` or `app/graphql`) do not need to be externalized.
### HAML files
Given the following content in HAML:
```haml
%h1 Hello world!
```
You can mark that content for translation with:
```haml
%h1= _("Hello world!")
```
### ERB files
Given the following content in ERB:
```erb
<h1>Hello world!</h1>
```
You can mark that content for translation with:
```erb
<h1><%= _("Hello world!") %></h1>
```
### JavaScript files
The `~/locale` module exports the following key functions for externalization:
- `__()` Mark content for translation (double underscore parenthesis).
- `s__()` Mark namespaced content for translation (s double underscore parenthesis).
- `n__()` Mark pluralized content for translation (n double underscore parenthesis).
```javascript
import { __, s__, n__ } from '~/locale';
const defaultErrorMessage = s__('Branches|Create branch failed.');
const label = __('Subscribe');
const message = n__('Apple', 'Apples', 3)
```
To test JavaScript translations, learn about [manually testing translations from the UI](#manually-test-translations-from-the-ui).
### Vue files
In Vue files, we make the following functions available to Vue templates using the `translate` mixin:
- `__()`
- `s__()`
- `n__()`
- `sprintf`
This means you can externalize strings in Vue templates without having to import these functions from the `~/locale` file:
```html
<template>
  <h1>{{ s__('Branches|Create a new branch') }}</h1>
  <gl-button>{{ __('Create branch') }}</gl-button>
</template>
```
If you need to translate strings in the Vue component's JavaScript, you can import the necessary externalization function from the `~/locale` file as described in the [JavaScript files](#javascript-files) section.
To test Vue translations, learn about [manually testing translations from the UI](#manually-test-translations-from-the-ui).
### Test files (RSpec)
For RSpec tests, expectations against externalized contents should not be hard-coded,
because we may need to run the tests with a non-default locale, and tests with
hard-coded contents would fail.
This means any expectations against externalized contents should call the
same externalizing method to match the translation.
Bad:
```ruby
click_button 'Submit review'
expect(rendered).to have_content('Thank you for your feedback!')
```
Good:
```ruby
click_button _('Submit review')
expect(rendered).to have_content(_('Thank you for your feedback!'))
```
### Test files (Jest)
For Frontend Jest tests, expectations do not need to reference externalization methods. Externalization is mocked
in the Frontend test environment, so the expectations are deterministic across locales
([see relevant MR](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/128531)).
Example:
```javascript
// Bad. Not necessary in Frontend environment.
expect(findText()).toBe(__('Lorem ipsum dolor sit'));
// Good.
expect(findText()).toBe('Lorem ipsum dolor sit');
```
#### Recommendations
Put translations as close as possible to where they are used.
Preferably, use inline translations over variables with translations.
The best description for a translation is its key.
This improves code readability and helps with the cognitive load of preserving code context.
Also, it makes refactoring easier as we do not have to maintain variables in addition to the translations.
```javascript
// Bad. A variable is defined far from where it is used
const TITLE = __('Organisations');

function transform() {
  return TITLE;
}

// Good.
function transform() {
  return __('Organisations');
}
```
##### Shared translations
Sometimes a translation can be used in several places in a file or a module. In this case, we can use variables that share translations, but with the following considerations:
- Inline translations have better code clarity. Do not use the DRY principle as the only driver for putting translations into variables.
- Be cautious when inserting or joining translations. For more information, see
[using variables to insert text dynamically](#using-variables-to-insert-text-dynamically).
- If two translations share the same English key, it doesn't mean those two places have the same translation in other languages. Consider using [namespaces](#namespaces) where appropriate.
If using variables with translations is preferred in a particular case, follow these guidelines on how to declare and place them.
In JavaScript files, declare a constant with the translation:
```javascript
const ORGANISATIONS_TITLE = __('Organisations');
```
In Vue Single-File Components, you can define an `i18n` property in the component's `$options` object.
```javascript
<script>
import { s__ } from '~/locale';

export default {
  i18n: {
    buttonLabel: s__('Plan|Button Label'),
  },
};
</script>

<template>
  <gl-button :aria-label="$options.i18n.buttonLabel">
    {{ $options.i18n.buttonLabel }}
  </gl-button>
</template>
```
In modules, if we reuse the same translation in multiple files, we can add them to a `constants.js` or an `i18n.js` file and import those translations across the module. However, this adds yet another level of complexity to our codebase and thus should be used with caution.
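If a shared file is warranted, a minimal sketch (the path, file name, and constant name here are illustrative, not an established convention):
```javascript
// app/assets/javascripts/my_feature/i18n.js (illustrative path)
import { s__ } from '~/locale';

export const CREATE_BUTTON_LABEL = s__('MyFeature|Create');
```
Components in the module can then import `CREATE_BUTTON_LABEL` instead of repeating the string.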
Another practice to avoid when exporting copy strings is to import them in specs. While it might seem like a much more efficient test (if we change the copy, the test will still pass!) it creates additional problems:
- There is a risk that the value we import is `undefined` and we might get a false-positive in our tests (even more so if we import an `i18n` object, see [export constants as primitives](../fe_guide/style/javascript.md#export-constants-as-primitives)).
- It is harder to know what we are testing (which copy to expect).
- There is a higher risk of typos being missed because we are not re-writing the assertion, but assuming that the value of our constant is the correct one.
- The benefit of this approach is minor. Updating the copy in our component and not updating specs is not a big enough benefit to outweigh the potential issues.
As an example:
```javascript
import { MSG_ALERT_SETTINGS_FORM_ERROR } from 'path/to/constants.js';
// Bad. What is the actual text for `MSG_ALERT_SETTINGS_FORM_ERROR`? If `wrapper.text()` returns undefined, the test may still pass with the wrong values!
expect(wrapper.text()).toBe(MSG_ALERT_SETTINGS_FORM_ERROR);
// Very bad. Same problem as above and we are going through the vm property!
expect(wrapper.text()).toBe(MyComponent.vm.i18n.buttonLabel);
// Good. What we are expecting is very clear and there can be no surprises.
expect(wrapper.text()).toBe('There was an error: Please refresh and hope for the best!');
```
### Dynamic translations
For more details you can see how we [keep translations dynamic](#keep-translations-dynamic).
## Making changes to translated strings
If you change the source strings in GitLab, you must [update the `pot` file](#updating-the-po-files-with-the-new-content) before pushing your changes.
If the `pot` file is out of date, pre-push checks and a pipeline job for `gettext` fail.
## Working with special content
### Interpolation
Placeholders in translated text should match the respective source file's code style. For example
use `%{created_at}` in Ruby but `%{createdAt}` in JavaScript. Make sure to
[avoid splitting sentences when adding links](#avoid-splitting-sentences-when-adding-links).
- In Ruby/HAML:
```ruby
format(_("Hello %{name}"), name: 'Joe') => 'Hello Joe'
```
- In Vue:
Use the [`GlSprintf`](https://gitlab-org.gitlab.io/gitlab-ui/?path=/docs/utilities-sprintf--sentence-with-link) component if:
- You are including child components in the translation string.
- You are including HTML in your translation string.
- You are using `sprintf` and are passing `false` as the third argument to
prevent it from escaping placeholder values.
For example:
```html
<gl-sprintf :message="s__('ClusterIntegration|Learn more about %{linkStart}zones%{linkEnd}')">
<template #link="{ content }">
<gl-link :href="somePath">{{ content }}</gl-link>
</template>
</gl-sprintf>
```
In other cases, it might be simpler to use `sprintf`, perhaps in a computed
property. For example:
```html
<script>
import { __, sprintf } from '~/locale';

export default {
  ...
  computed: {
    userWelcome() {
      return sprintf(__('Hello %{username}'), { username: this.user.name });
    },
  },
  ...
};
</script>

<template>
  <span>{{ userWelcome }}</span>
</template>
```
- In JavaScript (when Vue cannot be used):
```javascript
import { __, sprintf } from '~/locale';
sprintf(__('Hello %{username}'), { username: 'Joe' }); // => 'Hello Joe'
```
If you need to use markup within the translation, use `sprintf` and stop it
from escaping placeholder values by passing `false` as its third argument.
You **must** escape any interpolated dynamic values yourself, for instance
using `escape` from `lodash`.
```javascript
import { escape } from 'lodash';
import { __, sprintf } from '~/locale';

let someDynamicValue = '<script>alert("evil")</script>';

// Dangerous: escaping is disabled and the dynamic value is injected as-is (XSS!):
sprintf(__('This is %{value}'), { value: `<strong>${someDynamicValue}</strong>` }, false);
// => 'This is <strong><script>alert("evil")</script></strong>'

// Incorrect: everything is escaped, including the intended <strong> markup:
sprintf(__('This is %{value}'), { value: `<strong>${someDynamicValue}</strong>` });
// => 'This is &lt;strong&gt;&lt;script&gt;alert(&quot;evil&quot;)&lt;/script&gt;&lt;/strong&gt;'

// OK: the markup is preserved and the dynamic value is escaped:
sprintf(__('This is %{value}'), { value: `<strong>${escape(someDynamicValue)}</strong>` }, false);
// => 'This is <strong>&lt;script&gt;alert(&quot;evil&quot;)&lt;/script&gt;</strong>'
```
### Plurals
- In Ruby/HAML:
```ruby
n_('Apple', 'Apples', 3)
# => 'Apples'
```
Using interpolation:
```ruby
n_("There is a mouse.", "There are %d mice.", size) % size
# => When size == 1: 'There is a mouse.'
# => When size == 2: 'There are 2 mice.'
```
Avoid using `%d` or count variables in singular strings. This allows more natural translation in
some languages.
- In JavaScript:
```javascript
n__('Apple', 'Apples', 3)
// => 'Apples'
```
Using interpolation:
```javascript
n__('Last day', 'Last %d days', x)
// => When x == 1: 'Last day'
// => When x == 2: 'Last 2 days'
```
- In Vue:
One of [the recommended ways to organize translated strings for Vue files](#vue-files) is to extract them into a `constants.js` file.
That can be difficult to do when there are pluralized strings because the `count` variable won't be known inside the constants file.
To overcome this, we recommend creating a function which takes a `count` argument:
```javascript
// .../feature/constants.js
import { __, n__ } from '~/locale';

export const I18N = {
  // Strings that are only singular don't need to be a function
  someDaysRemain: __('Some days remain'),
  daysRemaining(count) { return n__('%d day remaining', '%d days remaining', count); },
};
```
Then within a Vue component the function can be used to retrieve the correct pluralization form of the string:
```javascript
// .../feature/components/days_remaining.vue
<script>
import { I18N } from '../constants';

export default {
  props: {
    days: {
      type: Number,
      required: true,
    },
  },
  i18n: I18N,
};
</script>

<template>
  <div>
    <span>
      A singular string:
      {{ $options.i18n.someDaysRemain }}
    </span>
    <span>
      A plural string:
      {{ $options.i18n.daysRemaining(days) }}
    </span>
  </div>
</template>
```
The `n_` and `n__` methods should only be used to fetch pluralized translations of the same
string, not to control the logic of showing different strings for different
quantities. For similar strings, pluralize the entire sentence to provide the most context
when translating. Some languages have different quantities of target plural forms.
For example, Chinese (simplified) has only one target plural form in our
translation tool. This means the translator has to choose to translate only one
of the strings, and the translation doesn't behave as intended in the other case.
Below are some examples:
Example 1: For different strings
Use this:
```ruby
if selected_projects.one?
  selected_projects.first.name
else
  n_("Project selected", "%d projects selected", selected_projects.count)
end
```
Instead of this:
```ruby
# incorrect usage example
format(n_("%{project_name}", "%d projects selected", count), project_name: 'GitLab')
```
Example 2: For similar strings
Use this:
```javascript
n__('Last day', 'Last %d days', days.length)
```
Instead of this:
```javascript
// incorrect usage example
const pluralize = n__('day', 'days', days.length)

if (days.length === 1) {
  return sprintf(s__('Last %{pluralize}'), { pluralize })
}

return sprintf(s__('Last %{dayNumber} %{pluralize}'), { dayNumber: days.length, pluralize })
```
### Namespaces
A namespace is a way to group translations that belong together. They provide context to our
translators by adding a prefix followed by the bar symbol (`|`). For example:
```ruby
'Namespace|Translated string'
```
A namespace:
- Addresses ambiguity in words. For example: `Promotions|Promote` vs `Epic|Promote`.
- Allows translators to focus on translating externalized strings that belong to the same product
area, rather than arbitrary ones.
- Gives a linguistic context to help the translator.
Some languages are more contextual than English.
For example, `cancel` can be translated in different ways depending on how it's used.
To define the context of use, always add a namespace to UI text in English.
Namespaces should be PascalCase.
- In Ruby/HAML:
```ruby
s_('OpenedNDaysAgo|Opened')
```
If the translation isn't found, `Opened` is returned.
- In JavaScript:
```javascript
s__('OpenedNDaysAgo|Opened')
```
The namespace should be removed from the translation. For more details, see the
[translation guidelines](translation.md#namespaced-strings).
### HTML
We no longer include HTML directly in the strings that are submitted for translation. This is
because:
1. The translated string can accidentally include invalid HTML.
1. Translated strings can become an attack vector for XSS, as noted by the
[Open Web Application Security Project (OWASP)](https://owasp.org/www-community/attacks/xss/).
To include formatting in the translated string, you can do the following:
- In Ruby/HAML:
```ruby
safe_format(_('Some %{strongOpen}bold%{strongClose} text.'), tag_pair(tag.strong, :strongOpen, :strongClose))
# => 'Some <strong>bold</strong> text.'
```
- In JavaScript:
```javascript
sprintf(__('Some %{strongOpen}bold%{strongClose} text.'), { strongOpen: '<strong>', strongClose: '</strong>'}, false);
// => 'Some <strong>bold</strong> text.'
```
- In Vue:
See the section on [interpolation](#interpolation).
When [this translation helper issue](https://gitlab.com/gitlab-org/gitlab/-/issues/217935)
is complete, we plan to update the process of including formatting in translated strings.
#### Including Angle Brackets
If a string contains angle brackets (`<`/`>`) that are not used for HTML, the `rake gettext:lint`
linter still flags it. To avoid this error, use the applicable HTML entity code (`&lt;` or `&gt;`)
instead:
- In Ruby/HAML:
```ruby
safe_format(_('In &lt; 1 hour'))
# => 'In < 1 hour'
```
- In JavaScript:
```javascript
import { sanitize } from '~/lib/dompurify';
const i18n = { LESS_THAN_ONE_HOUR: sanitize(__('In &lt; 1 hour'), { ALLOWED_TAGS: [] }) };
// ... using the string
element.innerHTML = i18n.LESS_THAN_ONE_HOUR;
// => 'In < 1 hour'
```
- In Vue:
```vue
<gl-sprintf :message="s__('In < 1 hours')"/>
// => 'In < 1 hour'
```
### Numbers
Different locales may use different number formats. To support localization of numbers, we use
`formatNumber`, which leverages [`toLocaleString()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/toLocaleString).
By default, `formatNumber` formats numbers as strings using the current user locale.
- In JavaScript:
```javascript
import { formatNumber } from '~/locale';
// Assuming "User Preferences > Language" is set to "English":
const tenThousand = formatNumber(10000); // "10,000" (uses comma as the thousands separator in the English locale)
const fiftyPercent = formatNumber(0.5, { style: 'percent' }); // "50%" (other options are passed to toLocaleString)
```
- In Vue templates:
```html
<script>
import { formatNumber } from '~/locale';

export default {
  // ...
  methods: {
    // ...
    formatNumber,
  },
};
</script>

<template>
  <div class="my-number">
    {{ formatNumber(10000) }} <!-- 10,000 -->
  </div>
  <div class="my-percent">
    {{ formatNumber(0.5, { style: 'percent' }) }} <!-- 50% -->
  </div>
</template>
```
### Dates / times
- In JavaScript:
```javascript
import { createDateTimeFormat } from '~/locale';
const dateFormat = createDateTimeFormat({ year: 'numeric', month: 'long', day: 'numeric' });
console.log(dateFormat.format(new Date('2063-04-05'))) // April 5, 2063
```
This makes use of [`Intl.DateTimeFormat`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl/DateTimeFormat).
- In Ruby/HAML, there are two ways of adding format to dates and times:
- **Using the `l` helper**: for example, `l(active_session.created_at, format: :short)`. We have
some predefined formats for [dates](https://gitlab.com/gitlab-org/gitlab/-/blob/4ab54c2233e91f60a80e5b6fa2181e6899fdcc3e/config/locales/en.yml#L54)
and [times](https://gitlab.com/gitlab-org/gitlab/-/blob/4ab54c2233e91f60a80e5b6fa2181e6899fdcc3e/config/locales/en.yml#L262).
If you need to add a new format, because other parts of the code could benefit from it, add it
to the file [`en.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/locales/en.yml).
- **Using `strftime`**: for example, `milestone.start_date.strftime('%b %-d')`. We use `strftime`
in case none of the formats defined in [`en.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/locales/en.yml)
match the date/time specifications we need, and if there's no need to add it as a new format
because it's very particular (for example, it's only used in a single view).
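For example, the two Ruby approaches side by side, using the helpers exactly as described above:
```ruby
# Predefined format from en.yml:
l(active_session.created_at, format: :short)

# One-off format that is only used in a single view:
milestone.start_date.strftime('%b %-d')
```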
## Best practices
### Minimize translation updates
Updating an externalized string can result in the loss of its existing translations. To minimize risks, avoid changes
to strings unless they:
- Add value for the user.
- Include extra context for translators.
For example, avoid changes like this:
```diff
- _('Number of things: %{count}') % { count: 10 }
+ n_('Number of things: %d', 10)
```
### Keep translations dynamic
There are cases when it makes sense to keep translations together within an array or a hash.
Examples:
- Mappings for a dropdown list
- Error messages
To store these kinds of data, using a constant seems like the best choice. However, this doesn't
work for translations.
For example, avoid this:
```ruby
class MyPresenter
  MY_LIST = {
    key_1: _('item 1'),
    key_2: _('item 2'),
    key_3: _('item 3')
  }
end
```
The translation method (`_`) is called when the class loads for the first time and translates the
text to the default locale. Regardless of the user's locale, these values are not translated a
second time.
A similar thing happens when using class methods with memoization.
For example, avoid this:
```ruby
class MyModel
  def self.list
    @list ||= {
      key_1: _('item 1'),
      key_2: _('item 2'),
      key_3: _('item 3')
    }
  end
end
```
This method memoizes the translations using the locale of the user who first called this method.
To avoid these problems, keep the translations dynamic.
Good:
```ruby
class MyPresenter
  def self.my_list
    {
      key_1: _('item 1'),
      key_2: _('item 2'),
      key_3: _('item 3')
    }.freeze
  end
end
```
Sometimes there are dynamic translations that the parser can't find when running
`bin/rake gettext:find`. For these scenarios you can use the [`N_` method](https://github.com/grosser/gettext_i18n_rails/blob/c09e38d481e0899ca7d3fc01786834fa8e7aab97/Readme.md#unfound-translations-with-rake-gettextfind).
There's also an alternative method to [translate messages from validation errors](https://github.com/grosser/gettext_i18n_rails/blob/c09e38d481e0899ca7d3fc01786834fa8e7aab97/Readme.md#option-a).
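For illustration, a minimal sketch of the `N_` approach (the constant, strings, and method name are hypothetical): `N_` only marks a string so the parser can extract it, and `_` performs the actual translation later, when the user's locale is known:
```ruby
# `N_` marks the strings for extraction without translating them at load time.
STATUS_LABELS = [N_('Running'), N_('Finished')].freeze

# Translate at render time, in the locale of the current user.
def localized_status_labels
  STATUS_LABELS.map { |label| _(label) }
end
```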
### Splitting sentences
Never split a sentence, as it assumes the sentence's grammar and structure is the same in all
languages.
For example, this:
```javascript
{{ s__("mrWidget|Set by") }}
{{ author.name }}
{{ s__("mrWidget|to be merged automatically when the pipeline succeeds") }}
```
Should be externalized as follows:
```javascript
{{ sprintf(s__("mrWidget|Set by %{author} to be merged automatically when the pipeline succeeds"), { author: author.name }) }}
```
#### Avoid splitting sentences when adding links
This also applies when using links in between translated sentences. Otherwise, these texts are not
translatable in certain languages.
- In Ruby/HAML, instead of:
```haml
- zones_link = link_to(s_('ClusterIntegration|zones'), 'https://cloud.google.com/compute/docs/regions-zones/regions-zones', target: '_blank', rel: 'noopener noreferrer')
= s_('ClusterIntegration|Learn more about %{zones_link}').html_safe % { zones_link: zones_link }
```
Set the link starting and ending HTML fragments as variables:
```haml
- zones_link_url = 'https://cloud.google.com/compute/docs/regions-zones/regions-zones'
- zones_link = link_to('', zones_link_url, target: '_blank', rel: 'noopener noreferrer')
= safe_format(s_('ClusterIntegration|Learn more about %{zones_link_start}zones%{zones_link_end}'), tag_pair(zones_link, :zones_link_start, :zones_link_end))
```
- In Vue, instead of:
```html
<template>
  <div>
    <gl-sprintf :message="s__('ClusterIntegration|Learn more about %{link}')">
      <template #link>
        <gl-link
          href="https://cloud.google.com/compute/docs/regions-zones/regions-zones"
          target="_blank"
        >zones</gl-link>
      </template>
    </gl-sprintf>
  </div>
</template>
```
Set the link starting and ending HTML fragments as placeholders:
```html
<template>
  <div>
    <gl-sprintf :message="s__('ClusterIntegration|Learn more about %{linkStart}zones%{linkEnd}')">
      <template #link="{ content }">
        <gl-link
          href="https://cloud.google.com/compute/docs/regions-zones/regions-zones"
          target="_blank"
        >{{ content }}</gl-link>
      </template>
    </gl-sprintf>
  </div>
</template>
```
- In JavaScript (when Vue cannot be used), instead of:
```javascript
{{
  sprintf(s__("ClusterIntegration|Learn more about %{link}"), {
    link: '<a href="https://cloud.google.com/compute/docs/regions-zones/regions-zones" target="_blank" rel="noopener noreferrer">zones</a>'
  }, false)
}}
```
Set the link starting and ending HTML fragments as placeholders:
```javascript
{{
  sprintf(s__("ClusterIntegration|Learn more about %{linkStart}zones%{linkEnd}"), {
    linkStart: '<a href="https://cloud.google.com/compute/docs/regions-zones/regions-zones" target="_blank" rel="noopener noreferrer">',
    linkEnd: '</a>',
  }, false)
}}
```
The reasoning behind this is that in some languages words change depending on context. For example,
in Japanese は is added to the subject of a sentence and を to the object. This is impossible to
translate correctly if you extract individual words from the sentence.
When in doubt, try to follow the best practices described in this [Mozilla Developer documentation](https://mozilla-l10n.github.io/documentation/localization/dev_best_practices.html#splitting-and-composing-sentences).
### Always pass string literals to the translation helpers
The `tooling/bin/gettext_extractor locale/gitlab.pot` script parses the codebase and extracts all the strings from the
[translation helpers](#preparing-a-page-for-translation) ready to be translated.
The script cannot resolve the strings if they are passed as variables or function calls. Therefore,
make sure to always pass string literals to the helpers.
```javascript
// Good
__('Some label');
s__('Namespace', 'Label');
s__('Namespace|Label');
n__('%d apple', '%d apples', appleCount);
// Bad
__(LABEL);
s__(getLabel());
s__(NAMESPACE, LABEL);
n__(LABEL_SINGULAR, LABEL_PLURAL, appleCount);
```
### Using variables to insert text dynamically
When text values are used in translatable strings as variables, special care must be taken to ensure grammatical correctness across different languages.
#### Risks and challenges
When using variables to add text into translatable strings, several localization challenges can arise:
- **Gender agreement**: Languages with grammatical gender may require different forms of articles, adjectives or pronouns depending on the gender of the inserted noun. For example, in French, articles, adjectives and some past participles must agree with the noun's gender and position in the sentence.
- **Case and declension**: In languages with cases (like German), the inserted text may need different forms depending on its grammatical role in the sentence.
- **Word order**: Different languages have different word order requirements, and inserted text may need to appear in different positions in the sentence for natural-sounding translations.
#### Best practices
1. **Avoid adding text as variables when possible**:
- Instead of one string with a variable, create unique strings for each case. For example:
```ruby
# Instead of:
s_('WorkItem|Adds this %{workItemType} as related to %{relatedWorkItemType}')
# Create separate strings:
s_('WorkItem|Adds this task as related to incident')
s_('WorkItem|Adds this incident as related to task')
```
1. **Use topic-comment structure over sentence-like arrangement**:
When variable use cannot be avoided, consider restructuring the message to use a topic-comment arrangement rather than a full sentence:
```ruby
# Instead of a sentence with inserted variables:
s_('WorkItem|Adds this %{workItemType} as related to %{relatedWorkItemType}')
# Use topic-comment structure:
s_('WorkItem|Related items: %{workItemType} → %{relatedWorkItemType}')
```
## Case transformation in translatable strings
Different languages have different capitalization rules that may not match English. For example, in German all nouns are capitalized regardless of their position in the sentence. Avoid using `downcase` or `toLocaleLowerCase()` on translatable strings; let translators control casing for their language.
- **Context-dependent cases**
While the `toLocaleLowerCase()` method is locale-aware, it cannot handle context-specific capitalization needs. For example:
```ruby
# This forces lowercase, but it may not work for many languages:
job_type = "CI/CD Pipeline".downcase
s_("Jobs|Starting a new %{job_type}") % { job_type: job_type }

# In German this would incorrectly show:
# "Starting a new ci/cd pipeline"
# When it should be:
# "Starting a new CI/CD Pipeline" (Pipeline is a noun and must be capitalized)

# In French this might incorrectly show:
# "Démarrer un nouveau pipeline ci/cd"
# When it should be:
# "Démarrer un nouveau pipeline CI/CD" (technical terms might keep original case)
```
## Updating the PO files with the new content
Now that the new content is marked for translation, run this command to update the
`locale/gitlab.pot` files:
```shell
tooling/bin/gettext_extractor locale/gitlab.pot
```
This command updates the `locale/gitlab.pot` file with the newly externalized strings and removes
any unused strings. Once the changes are on the default branch, [Crowdin](https://translate.gitlab.com)
picks them up and presents them for translation.
You don't need to check in any changes to the `locale/[language]/gitlab.po` files. They are updated
automatically when [translations from Crowdin are merged](merging_translations.md).
If there are merge conflicts in the `gitlab.pot` file, you can delete the file and regenerate it
using the same command.
### Validating PO files
To make sure we keep our translation files up to date, there's a linter that runs on CI as part of
the `static-analysis` job. To lint the adjustments in PO files locally, you can run
`rake gettext:lint`.
The linter takes the following into account:
- Valid PO-file syntax.
- Variable usage:
  - Only one unnamed (`%d`) variable, since the order of variables might change in different
    languages.
  - All variables used in the message ID are used in the translation.
  - There should be no variables used in a translation that aren't in the message ID.
- Errors during translation.
- Presence of angle brackets (`<` or `>`).
The errors are grouped per file and per message ID:
```plaintext
Errors in `locale/zh_HK/gitlab.po`:
  PO-syntax errors
    SimplePoParser::ParserErrorSyntax error in lines
      Syntax error in msgctxt
      Syntax error in msgid
      Syntax error in msgstr
      Syntax error in message_line
      There should be only whitespace until the end of line after the double quote character of a message text.
      Parsing result before error: '{:msgid=>["", "You are going to delete %{project_name_with_namespace}.\\n", "Deleted projects CANNOT be restored!\\n", "Are you ABSOLUTELY sure?"]}'
      SimplePoParser filtered backtrace: SimplePoParser::ParserError
Errors in `locale/zh_TW/gitlab.po`:
  1 pipeline
    <%d 條流水線> is using unknown variables: [%d]
    Failure translating to zh_TW with []: too few arguments
```
In this output, `locale/zh_HK/gitlab.po` has syntax errors. The file `locale/zh_TW/gitlab.po` has
variables in the translation that aren't in the message with ID `1 pipeline`.
## Adding a new language
A new language should only be added as an option in User Preferences once at least 10% of the
strings have been translated and approved. Even though a larger number of strings may have been
translated, only the approved translations display in the GitLab UI.
{{< alert type="note" >}}
Languages with less than 2% of translations are not available in the UI.
{{< /alert >}}
Suppose you want to add translations for a new language, for example, French:
1. Register the new language in `lib/gitlab/i18n.rb`:
```ruby
...
AVAILABLE_LANGUAGES = {
...,
'fr' => 'Français'
}.freeze
...
```
1. Add the language:
```shell
bin/rake gettext:add_language[fr]
```
If you want to add a new language for a specific region, the command is similar. You must
separate the region with an underscore (`_`) and specify the region in capital letters. For example:
```shell
bin/rake gettext:add_language[en_GB]
```
1. Adding the language also creates a new directory at the path `locale/fr/`. You can now start
using your PO editor to edit the PO file located at `locale/fr/gitlab.edit.po`.
1. After updating the translations, you must process the PO files to generate the binary MO files,
and update the JSON files containing the translations:
```shell
bin/rake gettext:compile
```
1. To see the translated content, you must change your preferred language. You can find this under
the user's **Settings** (`/profile`).
1. After checking that the changes are OK, commit the new files. For example:
```shell
git add locale/fr/ app/assets/javascripts/locale/fr/
git commit -m "Add French translations for Value Stream Analytics page"
```
## Manually test translations from the UI
To manually test Vue translations:
1. Change the GitLab localization to another language than English.
1. Generate JSON files using `bin/rake gettext:compile`.
---
title: Translate GitLab to your language
---
The text in the GitLab user interface is in American English by default. Each string can be
translated to other languages. As each string is translated, it's added to the language's translation
file and made available in future GitLab releases.
Contributions to translations are always needed. Many strings are not yet available for translation
because they have not been externalized. Helping externalize strings benefits all languages. Some
translations are incomplete or inconsistent. Translating strings helps complete and improve each
language.
There are many ways you can contribute to translating GitLab.
## Externalize strings
Before a string can be translated, it must be externalized. This is the process where English
strings in the GitLab source code are wrapped in a function that retrieves the translated string for
the user's language.
As new features are added and existing features are updated, the surrounding strings are
externalized. However, there are many parts of GitLab that still need more work to externalize all
strings.
See [Externalization for GitLab](externalization.md).
### Editing externalized strings
If you edit externalized strings in GitLab, you must [update the `pot` file](externalization.md#updating-the-po-files-with-the-new-content) before pushing your changes.
## Translate strings
The translation process is managed at [https://crowdin.com/project/gitlab-ee](https://crowdin.com/project/gitlab-ee)
using [Crowdin](https://crowdin.com/).
You must create a Crowdin account before you can submit translations. Once you are signed in, select
the language you wish to contribute translations to.
Voting for translations is also valuable, helping to confirm good translations and flag inaccurate
ones.
See [Translation guidelines](translation.md).
## Proofreading
Proofreading helps ensure the accuracy and consistency of translations. All translations are
proofread before being accepted. If a translation requires changes, you are notified with a
comment explaining why.
See [Proofreading Translations](proofreader.md) for more information on who can proofread and
instructions on becoming a proofreader yourself.
## Release
Translations are typically included in the next major or minor release.
See [Merging translations from Crowdin](merging_translations.md).
---
title: The Banzai pipeline and parsing
description: The Banzai pipeline and parsing.
---
<!-- vale gitlab.GitLabFlavoredMarkdown = NO -->
Parsing and rendering [GitLab Flavored Markdown](_index.md) into HTML involves different components:
- Banzai pipeline and its various filters
- Markdown parser
The backend does all the processing of GLFM into HTML. This provides several benefits:
- Security: We run robust sanitization, which removes unknown tags, classes, and IDs.
- References: Our reference syntax requires access to the database to resolve issues and other records, as well as to redact references the user has no access to.
- Consistency: We want to provide users with a consistent experience, which includes full support of the GLFM syntax and styling. Having a single place where the processing is done allows us to provide that.
- Caching: We cache the HTML in our database when possible, such as for issue or MR descriptions, or comments.
- Quick actions: We use a specialized pipeline to process quick actions, so that we can better detect them in Markdown text.
The frontend handles certain aspects when displaying:
- Math blocks
- Mermaid blocks
- Enforcing certain limits, such as an excessive number of math or Mermaid blocks.
## The Banzai pipeline
Named after the [surf reef break](https://en.wikipedia.org/wiki/Banzai_Pipeline) in Hawaii, the Banzai pipeline consists of various filters ([lib/banzai/filter](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/banzai/filter)) through which Markdown and HTML are transformed, one filter at a time, in a pipeline fashion. Various pipelines ([lib/banzai/pipeline](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/banzai/pipeline)) are defined, each with a different sequence of filters, such as `AsciiDocPipeline` or `EmailPipeline`.
The [html-pipeline](https://github.com/gjtorikian/html-pipeline) gem implements the pipeline/filter mechanism.
The primary pipeline is the `FullPipeline`, which is a combination of the `PlainMarkdownPipeline` and the `GfmPipeline`.
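Conceptually, a pipeline is an ordered list of filters in which each filter's output becomes the next filter's input. The following is a minimal, self-contained Ruby sketch of that idea; it is illustrative only and does not reproduce the html-pipeline gem's actual API:

```ruby
# Minimal sketch of the pipeline/filter mechanism (illustrative only).
class Pipeline
  def initialize(filters)
    @filters = filters
  end

  def call(doc, context = {})
    @filters.reduce(doc) { |current, filter| filter.call(current, context) }
  end
end

# Hypothetical stand-ins for real filters such as Filter::MarkdownFilter:
markdown_filter = ->(text, _context) { "<p>#{text}</p>" }
upcase_filter   = ->(html, _context) { html.upcase }

Pipeline.new([markdown_filter, upcase_filter]).call('hello')
# => "<P>HELLO</P>"
```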
### `PlainMarkdownPipeline`
This pipeline contains the filters for transforming raw Markdown into HTML, handled primarily by the `Filter::MarkdownFilter`.
#### `Filter::MarkdownFilter`
This filter interfaces with the actual Markdown parser. The parser uses our [`gitlab-glfm-markdown`](https://gitlab.com/gitlab-org/ruby/gems/gitlab-glfm-markdown) Ruby gem that uses the [`comrak`](https://github.com/kivikakk/comrak) Rust crate.
Text is passed into this filter, and by calling the specified parser engine, generates the corresponding basic HTML.
### `GfmPipeline`
This pipeline contains all the filters that perform the additional transformations on raw HTML into what we consider rendered GLFM.
A Nokogiri document gets passed into each of these filters, and they perform the various transformations.
For example, `EmojiFilter`, `CommitTrailersFilter`, or `SanitizationFilter`.
Anything that can't be handled by the initial Markdown parsing gets handled by these filters.
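To give a feel for what a Nokogiri-based filter does, here is a rough, hypothetical sketch; the real filters live in [lib/banzai/filter](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/banzai/filter) and are structured differently:

```ruby
# Hypothetical Nokogiri-based filter (not an actual GitLab filter).
require 'nokogiri'

class UpcaseCodeFilter
  def self.call(doc, _context = {})
    doc.css('code').each { |node| node.content = node.content.upcase }
    doc
  end
end

doc = Nokogiri::HTML::DocumentFragment.parse('<p>Run <code>rake</code> now</p>')
UpcaseCodeFilter.call(doc).to_html
# => "<p>Run <code>RAKE</code> now</p>"
```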
Of specific note is the `SanitizationFilter`. This is critical for providing safe HTML from possibly malicious input.
### Performance
It's important to not only have the filters run as fast as possible, but to ensure that they don't take too long in general.
For this we use several techniques:
- For certain filters that can take a long time, we use a Ruby timeout with `Gitlab::RenderTimeout.timeout` in [TimeoutFilterHandler](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/banzai/filter/concerns/timeout_filter_handler.rb).
This allows us to interrupt the actual processing if it takes too long.
In general, using Ruby `timeout` is [not considered safe](https://jvns.ca/blog/2015/11/27/why-rubys-timeout-is-dangerous-and-thread-dot-raise-is-terrifying/).
We therefore only use it when absolutely necessary, preferring to fix an actual performance problem rather than relying on a timeout.
- [PipelineTimingCheck](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/banzai/filter/concerns/pipeline_timing_check.rb) allows us to keep track of the cumulative amount of time the pipeline is taking. When we reach a maximum, we can then skip any remaining filters.
For nearly all filters, it's generally OK to skip them in a case like this in order to show the user _something_, rather than nothing.
However, there are a couple of instances where this is not advisable.
For example in the `SanitizationFilter`, if that filter does not complete, then we can't show the HTML to the user since there could still be unsanitized HTML.
In those cases, we have to show an error message.
There is also a `rake` task that can be used for benchmarking. See the [Performance Guidelines](../performance.md#banzai-pipelines-and-filters).
## Markdown parser
We use our [`gitlab-glfm-markdown`](https://gitlab.com/gitlab-org/ruby/gems/gitlab-glfm-markdown) Ruby gem that uses the [`comrak`](https://github.com/kivikakk/comrak) Rust crate.
`comrak` provides 100% compatibility with GFM and CommonMark while allowing additional extensions to be added to it. For example, we were able to implement our multi-line blockquote and wikilink syntax directly in `comrak`. The goal is to move more of the Ruby filters into either `comrak` (if it makes sense) or into `gitlab-glfm-markdown`.
For more information about the various options that get passed into `comrak`, see [glfm_markdown.rb](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/banzai/filter/markdown_engines/glfm_markdown.rb#L12-L34).
## Debugging
Usually the easiest way to debug the various pipelines and filters is to run them from the Rails console. This way you can set a `binding.pry` in a filter and step through the code.
Because of `TimeoutFilterHandler` and `PipelineTimingCheck`, it can be a challenge to debug the filters. There is a special environment variable, `GITLAB_DISABLE_MARKDOWN_TIMEOUT`, that, when set, disables any timeout checking in the filters. This is also available for customers in the rare instance that a [GitLab Self-Managed instance](../../administration/environment_variables.md) wishes to bypass those checks.
```ruby
text = 'Some test **Markdown**'
html = Banzai.render(text, project: nil)
```
This renders the Markdown in relation to no project. Or you can render it in the context of a project:
```ruby
project = Project.first
text = 'Some test **Markdown**'
html = Banzai.render(text, project: project)
```
The `render` method takes the `text` and a `context` hash, which provides various options for rendering. For example you can use `pipeline: :ascii_doc` to run the `AsciiDocPipeline`. The `FullPipeline` is the default.
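For example, still in the Rails console, you can select a different pipeline through the context hash:

```ruby
text = 'Some test *AsciiDoc*'
html = Banzai.render(text, project: nil, pipeline: :ascii_doc)
```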
If you specify `debug_timing: true`, you receive a list of filters and how long each takes.
```ruby
Banzai.render(text, project: nil, debug_timing: true)
D, [2024-12-20T13:35:24.246463 #34584] DEBUG -- : 0.000012_s (0.000012_s): NormalizeSourceFilter [PreProcessPipeline]
D, [2024-12-20T13:35:24.246543 #34584] DEBUG -- : 0.000007_s (0.000019_s): TruncateSourceFilter [PreProcessPipeline]
D, [2024-12-20T13:35:24.246589 #34584] DEBUG -- : 0.000028_s (0.000047_s): FrontMatterFilter [PreProcessPipeline]
D, [2024-12-20T13:35:24.246662 #34584] DEBUG -- : 0.000005_s (0.000005_s): IncludeFilter [FullPipeline]
D, [2024-12-20T13:35:24.246816 #34584] DEBUG -- : 0.000088_s (0.000101_s): MarkdownFilter [FullPipeline]
...
D, [2024-12-20T13:35:24.252338 #34584] DEBUG -- : 0.000013_s (0.004394_s): CustomEmojiFilter [FullPipeline]
D, [2024-12-20T13:35:24.252504 #34584] DEBUG -- : 0.000095_s (0.004489_s): TaskListFilter [FullPipeline]
D, [2024-12-20T13:35:24.252558 #34584] DEBUG -- : 0.000028_s (0.004517_s): SetDirectionFilter [FullPipeline]
D, [2024-12-20T13:35:24.252623 #34584] DEBUG -- : 0.000045_s (0.004562_s): SyntaxHighlightFilter [FullPipeline]
```
Use `debug: true` for even more detail per filter.
|
---
stage: Plan
group: Knowledge
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
description: The Banzai pipeline and parsing.
title: The Banzai pipeline and parsing
breadcrumbs:
- doc
- development
- gitlab_flavored_markdown
---
<!-- vale gitlab.GitLabFlavoredMarkdown = NO -->
Parsing and rendering [GitLab Flavored Markdown](_index.md) into HTML involves different components:
- Banzai pipeline and it's various filters
- Markdown parser
The backend does all the processing for GLFM to HTML. This provides several benefits:
- Security: We run robust sanitization which removes unknown tags, classes and ids.
- References: Our reference syntax requires access to the database to resolve issues, etc, as well as redacting references in which the user has no access.
- Consistency: We want to provide users with a consistent experience, which includes full support of the GLFM syntax and styling. Having a single place where the processing is done allows us to provide that.
- Caching: We cache the HTML in our database when possible, such as for issue or MR descriptions, or comments.
- Quick actions: We use a specialized pipeline to process quick actions, so that we can better detect them in Markdown text.
The frontend handles certain aspects when displaying:
- Math blocks
- Mermaid blocks
- Enforcing certain limits, such as excessive number of math or mermaid blocks.
## The Banzai pipeline
Named after the [surf reef break](https://en.wikipedia.org/wiki/Banzai_Pipeline) in Hawaii, the Banzai pipeline consists of various filters ([lib/banzai/filters](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/banzai/filter)) where Markdown and HTML is transformed in each one, in a pipeline fashion. Various pipelines ([lib/banzai/pipeline](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/banzai/pipeline)) are defined, each with a different sequence of filters, such as `AsciiDocPipeline`, `EmailPipeline`.
The [html-pipeline](https://github.com/gjtorikian/html-pipeline) gem implements the pipeline/filter mechanism.
The primary pipeline is the `FullPipeline`, which is a combination of the `PlainMarkdownPipeline` and the `GfmPipeline`.
### `PlainMarkdownPipeline`
This pipeline contains the filters for transforming raw Markdown into HTML, handled primarily by the `Filter::MarkdownFilter`.
#### `Filter::MarkdownFilter`
This filter interfaces with the actual Markdown parser. The parser uses our [`gitlab-glfm-markdown`](https://gitlab.com/gitlab-org/ruby/gems/gitlab-glfm-markdown) Ruby gem that uses the [`comrak`](https://github.com/kivikakk/comrak) Rust crate.
Text is passed into this filter, and by calling the specified parser engine, generates the corresponding basic HTML.
### `GfmPipeline`
This pipeline contains all the filters that perform the additional transformations on raw HTML into what we consider rendered GLFM.
A Nokogiri document gets passed into each of these filters, and they perform the various transformations.
For example, `EmojiFitler`, `CommitTrailersFilter`, or `SanitizationFilter`.
Anything that can't be handled by the initial Markdown parsing gets handled by these filters.
Of specific note is the `SanitizationFilter`. This is critical for providing safe HTML from possibly malicious input.
### Performance
It's important to not only have the filters run as fast as possible, but to ensure that they don't take too long in general.
For this we use several techniques:
- For certain filters that can take a long time, we use a Ruby timeout with `Gitlab::RenderTimeout.timeout` in [TimeoutFilterHandler](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/banzai/filter/concerns/timeout_filter_handler.rb).
This allows us to interrupt the actual processing if it takes too long.
In general, using Ruby `timeout` is [not considered safe](https://jvns.ca/blog/2015/11/27/why-rubys-timeout-is-dangerous-and-thread-dot-raise-is-terrifying/).
We therefore only use it when absolutely necessary, preferring to fix an actual performance problem rather then using a timeout.
- [PipelineTimingCheck](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/banzai/filter/concerns/pipeline_timing_check.rb) allows us to keep track of the cumulative amount of time the pipeline is taking. When we reach a maximum, we can then skip any remaining filters.
For nearly all filters, it's generally ok to skip them in a case like this in order to show the user _something_, rather than nothing.
However, there are a couple instances where this is not advisable.
For example in the `SanitizationFilter`, if that filter does not complete, then we can't show the HTML to the user since there could still be unsanitized HTML.
In those cases, we have to show an error message.
There is also a `rake` task that can be used for benchmarking. See the [Performance Guidelines](../performance.md#banzai-pipelines-and-filters)
## Markdown parser
We use our [`gitlab-glfm-markdown`](https://gitlab.com/gitlab-org/ruby/gems/gitlab-glfm-markdown) Ruby gem that uses the [`comrak`](https://github.com/kivikakk/comrak) Rust crate.
`comrak` provides 100% compatibility with GFM and CommonMark while allowing additional extensions to be added to it. For example, we were able to implement our multi-line blockquote and wikilink syntax directly in `comrak`. The goal is to move more of the Ruby filters into either `comrak` (if it makes sense) or into `gitlab-glfm-markdown`.
For more information about the various options that get passed into `comrak`, see [glfm_markdown.rb](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/banzai/filter/markdown_engines/glfm_markdown.rb#L12-L34).
## Debugging
Usually the easiest way to debug the various pipelines and filters is to run them from the Rails console. This way you can set a `binding.pry` in a filter and step through the code.
Because of `TimeoutFilterHandler` and `PipelineTimingCheck`, it can be a challenge to debug the filters. There is a special environment variable, `GITLAB_DISABLE_MARKDOWN_TIMEOUT`, that when set disables any timeout checking in the filters. This is also available in the rare case that a [GitLab Self-Managed instance](../../administration/environment_variables.md) needs to bypass those checks. For example, to render some Markdown from the console:
```ruby
text = 'Some test **Markdown**'
html = Banzai.render(text, project: nil)
```
This renders the Markdown in relation to no project. Or you can render it in the context of a project:
```ruby
project = Project.first
text = 'Some test **Markdown**'
html = Banzai.render(text, project: project)
```
The `render` method takes the `text` and a `context` hash, which provides various options for rendering. For example, you can use `pipeline: :ascii_doc` to run the `AsciiDocPipeline`. The `FullPipeline` is the default.
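For example, a sketch of selecting a different pipeline:

```ruby
# Render AsciiDoc source through the AsciiDocPipeline instead of the
# default FullPipeline.
html = Banzai.render(text, project: nil, pipeline: :ascii_doc)
```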
If you specify `debug_timing: true`, you receive a list of filters and how long each takes:
```ruby
Banzai.render(text, project: nil, debug_timing: true)
D, [2024-12-20T13:35:24.246463 #34584] DEBUG -- : 0.000012_s (0.000012_s): NormalizeSourceFilter [PreProcessPipeline]
D, [2024-12-20T13:35:24.246543 #34584] DEBUG -- : 0.000007_s (0.000019_s): TruncateSourceFilter [PreProcessPipeline]
D, [2024-12-20T13:35:24.246589 #34584] DEBUG -- : 0.000028_s (0.000047_s): FrontMatterFilter [PreProcessPipeline]
D, [2024-12-20T13:35:24.246662 #34584] DEBUG -- : 0.000005_s (0.000005_s): IncludeFilter [FullPipeline]
D, [2024-12-20T13:35:24.246816 #34584] DEBUG -- : 0.000088_s (0.000101_s): MarkdownFilter [FullPipeline]
...
D, [2024-12-20T13:35:24.252338 #34584] DEBUG -- : 0.000013_s (0.004394_s): CustomEmojiFilter [FullPipeline]
D, [2024-12-20T13:35:24.252504 #34584] DEBUG -- : 0.000095_s (0.004489_s): TaskListFilter [FullPipeline]
D, [2024-12-20T13:35:24.252558 #34584] DEBUG -- : 0.000028_s (0.004517_s): SetDirectionFilter [FullPipeline]
D, [2024-12-20T13:35:24.252623 #34584] DEBUG -- : 0.000045_s (0.004562_s): SyntaxHighlightFilter [FullPipeline]
```
Use `debug: true` for even more detail per filter.
# GitLab Flavored Markdown (GLFM) development guidelines
<!-- vale gitlab_base.GitLabFlavoredMarkdown = NO -->
This and neighboring pages contain developer guidelines for GitLab Flavored Markdown (GLFM).
For the user documentation about Markdown in GitLab, refer to
[GitLab Flavored Markdown](../../user/markdown.md).
GitLab supports Markdown in various places, such as issue or merge request descriptions, comments, and wikis.
The Markdown implementation we use is called
GitLab Flavored Markdown (GLFM).
[CommonMark](https://spec.commonmark.org/current/) is the core of GLFM.
> ...a standard, unambiguous syntax specification for Markdown, along with a suite of comprehensive tests to validate Markdown implementations against this specification.
Extensions from [GitHub Flavored Markdown (GFM)](https://github.github.com/gfm/), such as tables and task lists, are supported.
Various [extensions](../../user/markdown.md#differences-with-standard-markdown), such as math and multiline
blockquotes are then added, creating GLFM.
{{< alert type="note" >}}
In many places in the code, we use `gfm` or `GFM`. In those cases, we're usually
referring to the Markdown in general, not specifically GLFM.
{{< /alert >}}
## Basic flow
To create the HTML displayed to the user, the Markdown is usually processed as follows:
- Markdown is read from the user or from the database and given to the backend.
- A processing pipeline (the "Banzai" pipeline) is run.
- Some pre-processing happens, then the Markdown is converted into basic HTML using the
[`gitlab-glfm-markdown`](https://gitlab.com/gitlab-org/ruby/gems/gitlab-glfm-markdown) gem, which uses [`comrak`](https://github.com/kivikakk/comrak).
- Various filters are run that further transform the HTML, for example by handling
references or custom emoji.
- The HTML is then handed to the frontend, which displays it in various ways, or is cached in the database.
- For example, the rich text editor converts the HTML into a format used by [`tiptap`](https://tiptap.dev/product/editor) to be displayed and edited.
## Goal
We aim for GLFM to always be 100% compliant with CommonMark.
Great pains are taken not to add new syntax unless truly necessary.
When new syntax is needed, research should be done to find the most
acceptable "Markdown" syntax, closely adhering to a common implementation if available.
The [CommonMark forum](https://talk.commonmark.org) is a good place to research discussions on different topics.
## Additional resources
- [GitLab Flavored Markdown](../../user/markdown.md)
- [Rich text editor development guidelines](../fe_guide/content_editor.md)
- [Emojis](../fe_guide/emojis.md)
- [How to render GitLab-flavored Markdown on the frontend?](../fe_guide/frontend_faq.md#10-how-to-render-gitlab-flavored-markdown)
- [Diagrams.net integration](../fe_guide/diagrams_net_integration.md)
Contact the [Plan:Knowledge team](https://handbook.gitlab.com/handbook/engineering/development/dev/plan/knowledge/) if you have any questions.
# Reference processing
[GitLab Flavored Markdown](../../user/markdown.md) includes the ability to process
references to a range of GitLab domain objects. This is implemented by two
abstractions in the `Banzai` pipeline: `ReferenceFilter` and `ReferenceParser`.
This page explains what these are, how they are used, and how you would
implement a new filter/parser pair.
Each `ReferenceFilter` must have a corresponding `ReferenceParser`.
It is possible to share reference parsers between filters: if two filters find
and link the same type of objects (as specified by the `data-reference-type`
attribute), then we only need one reference parser for that type of domain
object.
## Banzai pipeline
The `Banzai` pipeline returns a `result` Hash after the content has been processed by each filter in the pipeline.
The `result` Hash is passed to each filter for modification. This is where filters store information extracted from the content.
It contains:
- An `:output` key with the DocumentFragment or String HTML markup based on the output of the last filter in the pipeline.
- A `:reference_filter_nodes` key with the list of DocumentFragment `nodes` that are ready for processing, updated by each filter in the pipeline.
## Reference filters
The first way that references are handled is by reference filters. These are
the tools that identify short-code and URI references from markup documents and
transform them into structured links to the resources they represent.
For example, the class
[`Banzai::Filter::References::IssueReferenceFilter`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/banzai/filter/references/issue_reference_filter.rb)
is responsible for handling references to issues, such as
`gitlab-org/gitlab#123` and `https://gitlab.com/gitlab-org/gitlab/-/issues/200048`.
All reference filters are instances of [`HTML::Pipeline::Filter`](https://www.rubydoc.info/gems/html-pipeline),
and inherit (often indirectly) from [`Banzai::Filter::References::ReferenceFilter`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/banzai/filter/references/reference_filter.rb).
`HTML::Pipeline::Filter` has a simple interface consisting of `#call`, a void
method that mutates the current document. `ReferenceFilter` provides methods
that make defining suitable `#call` methods easier. Most reference filters
however do not inherit from either of these classes directly, but from
[`AbstractReferenceFilter`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/banzai/filter/references/abstract_reference_filter.rb),
which provides a higher-level interface.
Subclasses of `AbstractReferenceFilter` generally do not override `#call`; instead,
a minimum implementation of `AbstractReferenceFilter` should define:
- `.reference_type`: The type of domain object.
This is usually a keyword, and is used to set the `data-reference-type` attribute
on the generated link, and is an important part of the interaction with the
corresponding `ReferenceParser` (see below).
- `.object_class`: a reference to the class of the objects a filter refers to.
This is used to:
- Find the regular expressions used to find references. The class should
include [`Referable`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/models/concerns/referable.rb)
and thus define two regular expressions: `.link_reference_pattern` and
`.reference_pattern`, both of which should contain a named capture group
named the value of `ReferenceFilter.object_sym`.
- Compute the `.object_name`.
- Compute the `.object_sym` (the group name in the reference patterns).
- `.parse_symbol(string)`: parse the text value to an object identifier (`#to_i` by default).
- `#record_identifier(record)`: the inverse of `.parse_symbol`, that is, transform a domain object to an identifier (`#id` by default).
- `#url_for_object(object, parent_object)`: generate the URL for a domain object.
- `#find_object(parent_object, id)`: given the parent (usually a [`Project`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/models/project.rb))
  and an identifier, find the object. For example, in a reference filter for
  merge requests, this might be `project.merge_requests.where(iid: iid)`.
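Putting this interface together, a minimal filter might look like the following sketch. The `Apple` model and the route helper are hypothetical, for illustration only:

```ruby
# Hypothetical example: `Apple` is not a real GitLab model.
module Banzai
  module Filter
    module References
      class AppleReferenceFilter < AbstractReferenceFilter
        self.reference_type = :apple
        self.object_class = Apple

        # Generate the URL the reference link points to
        # (project_apple_url is a hypothetical route helper).
        def url_for_object(apple, project)
          Gitlab::Routing.url_helpers.project_apple_url(project, apple)
        end

        # Naive lookup: one query per reference. See the Performance
        # section below for the batched records_per_parent approach.
        def find_object(project, id)
          project.apples.find_by(iid: id)
        end
      end
    end
  end
end
```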
### Add a new reference prefix and filter
For reference filters for new objects, use a format following the pattern
`[object_type:identifier]`, because:
1. Varied single-character prefixes are hard for users to track. Especially for
lower-use object types, this can diminish value for the feature.
1. Suitable single-character prefixes are limited and no longer allowed for new references.
1. Following a consistent pattern allows users to infer the existence of new features.
The [Extensible reference filters](https://gitlab.com/groups/gitlab-org/-/epics/7563)
epic discusses the use of this format.
To add a reference prefix for a new object `apple`, which has both a name and ID,
format the reference as:
- `[apple:123]` for identification by ID.
- `[apple:"Granny Smith"]` for identification by name.
### Performance
#### Find object optimization
This default implementation is not very efficient, because we need to call
`#find_object` for each reference, which may require issuing a DB query every
time. For this reason, most reference filter implementations instead use an
optimization included in `AbstractReferenceFilter`:
> `AbstractReferenceFilter` provides a lazily initialized value
> `#records_per_parent`, which is a mapping from parent object to a collection
> of domain objects.
To use this mechanism, the reference filter must implement the
method: `#parent_records(parent, set_of_identifiers)`, which must return an
enumerable of domain objects.
This allows such classes to define `#find_object` (as
[`IssuableReferenceFilter`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/banzai/filter/issuable_reference_filter.rb)
does) as:
```ruby
def find_object(parent, iid)
records_per_parent[parent][iid]
end
```
This makes the number of queries linear in the number of projects. We only need
to implement the `parent_records` method when we call `records_per_parent` in our
reference filter.
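For the hypothetical `AppleReferenceFilter` sketched earlier, that implementation might be:

```ruby
# Batch-load every apple referenced in the document for a given project;
# records_per_parent then serves individual find_object lookups from
# this collection instead of issuing a query per reference.
def parent_records(project, ids)
  project.apples.where(iid: ids.to_a)
end
```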
#### Filtering nodes optimization
By default, each `ReferenceFilter` would iterate over all `<a>` and `text()` nodes in a document.
However, not all nodes need to be processed: the document is filtered down to only the nodes we want to process.
We are skipping:
- Link tags already processed by some previous filter (if they have a `gfm` class).
- Nodes with the ancestor node that we want to ignore (`ignore_ancestor_query`).
- Empty lines.
- Link tags with an empty `href` attribute.
To avoid filtering such nodes for each `ReferenceFilter`, we do it only once and store the result in the result Hash of the pipeline as `result[:reference_filter_nodes]`.
The pipeline `result` is passed to each filter for modification, so every time a `ReferenceFilter` replaces a text or link tag, the filtered list (`reference_filter_nodes`) is updated for the next filter to use.
## Reference parsers
In a number of cases, as a performance optimization, we render Markdown to HTML
once, cache the result and then present it to users from the cached value. For
example this happens for notes, issue descriptions, and merge request
descriptions. A consequence of this is that a rendered document might refer to
a resource that some subsequent readers should not be able to see.
For example, you might create an issue, and refer to a confidential issue `#1234`,
which you have access to. This is rendered in the cached HTML as a link to
that [confidential issue](../../user/project/issues/confidential_issues.md),
with data attributes containing its ID, the ID of the
project and other confidential data. A later reader who has access to your issue
might not have permission to read issue `#1234`, so we need to redact
these sensitive pieces of data. This is what `ReferenceParser` classes do.
A reference parser is linked to the object that it handles by the link
advertising this relationship in the `data-reference-type` attribute (set by the
reference filter). This is used by the
[`ReferenceRedactor`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/banzai/reference_redactor.rb)
to compute which nodes should be visible to users:
```ruby
def nodes_visible_to_user(nodes)
per_type = Hash.new { |h, k| h[k] = [] }
visible = Set.new
nodes.each do |node|
per_type[node.attr('data-reference-type')] << node
end
per_type.each do |type, nodes|
parser = Banzai::ReferenceParser[type].new(context)
visible.merge(parser.nodes_visible_to_user(user, nodes))
end
visible
end
```
The key part here is `Banzai::ReferenceParser[type]`, which is used to look up
the correct reference parser for each type of domain object. This requires that
each reference parser must:
- Be placed in the `Banzai::ReferenceParser` namespace.
- Implement the `.nodes_visible_to_user(user, nodes)` method.
In practice, all reference parsers inherit from [`BaseParser`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/banzai/reference_parser/base_parser.rb), and are implemented by defining:
- `.reference_type`, which should equal `ReferenceFilter.reference_type`.
- And by implementing one or more of:
  - `#nodes_visible_to_user(user, nodes)` for the finest-grained control.
  - `#can_read_reference?`, needed if `nodes_visible_to_user` is not overridden.
  - `#references_relation`, an Active Record relation used to find objects by ID.
- `#nodes_user_can_reference(user, nodes)` to filter nodes directly.
A failure to implement this class for each reference type means that the
application raises exceptions during Markdown processing.
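Putting this together, a minimal parser for the hypothetical `apple` reference type (pairing with the filter sketch above) might look like:

```ruby
# Hypothetical example: pairs with the AppleReferenceFilter sketch above.
# A real parser would also implement one of the visibility methods listed
# earlier (for example, #can_read_reference?).
module Banzai
  module ReferenceParser
    class AppleParser < BaseParser
      self.reference_type = :apple

      # Used by BaseParser to load the referenced objects by ID.
      def references_relation
        Apple
      end
    end
  end
end
```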
# Product Qualified Lead (PQL) development guidelines
The Product Qualified Lead (PQL) funnel connects our users with our team members. Read more about [PQL product principles](https://handbook.gitlab.com/handbook/product/product-principles/#product-qualified-leads-pqls).
A hand-raise PQL is a user who requests to speak to sales from within the product.
## Set up your development environment
1. Set up GDK with a connection to your local CustomersDot instance.
1. Set up CustomersDot to talk to a staging instance of Workato.
1. Set up CustomersDot using the [standard install instructions](https://gitlab.com/gitlab-org/customers-gitlab-com/-/blob/staging/doc/setup/installation_steps.md).
1. Set the `CUSTOMER_PORTAL_URL` environment variable to the local URL of your CustomersDot instance.
1. Place `export CUSTOMER_PORTAL_URL=http://localhost:5000/` in your shell `rc` script (`~/.zshrc` or `~/.bash_profile` or `~/.bashrc`) and restart GDK.
1. Enter the Workato credentials for CustomersDot development in your `/config/secrets.yml` and restart. Credentials for Workato Staging are in the 1Password Subscription portal vault. The URL for staging is `https://apim.workato.com/gitlab-dev/services/marketo/lead`.
```yaml
workato_url: "<%= ENV['WORKATO_URL'] %>"
workato_client_id: "<%= ENV['WORKATO_CLIENT_ID'] %>"
workato_client_secret: "<%= ENV['WORKATO_CLIENT_SECRET'] %>"
```
### Set up lead monitoring
1. Set up access for the Marketo sandbox, similar [to this example request](https://gitlab.com/gitlab-com/team-member-epics/access-requests/-/issues/13162).
### Manually test leads
1. Register a new user with a unique email on your local GitLab instance.
1. Send the PQL lead by submitting your new form, or by creating a new trial or a new hand-raise lead.
1. Use identifiable values that can be easily spotted in Workato staging.
1. Observe the entry in the staging instance of Workato, and paste it into a merge request comment with a mention.
## Troubleshooting
- Check the application and Sidekiq logs on `gitlab.com` and CustomersDot to monitor leads.
- Check the `leads` table in CustomersDot.
- Ask for access to the Marketo Sandbox and validate the leads there, similar [to this example request](https://gitlab.com/gitlab-com/team-member-epics/access-requests/-/issues/13162).
## Embed a hand-raise lead form
[HandRaiseLeadButton](https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/app/assets/javascripts/hand_raise_leads/hand_raise_lead/components/hand_raise_lead_button.vue) is a reusable component that adds a button and a hand-raise modal to any screen.
You can import a hand-raise lead button in the following ways:
For Haml:
```haml
.js-hand-raise-lead-trigger{ data: discover_page_hand_raise_lead_data(group) }
```
For Vue:
```vue
<script>
import HandRaiseLeadButton from 'ee/hand_raise_leads/hand_raise_lead/components/hand_raise_lead_button.vue';
export default {
handRaiseLeadAttributes: {
variant: 'confirm',
category: 'tertiary',
class: 'gl-sm-w-auto gl-w-full gl-sm-ml-3 gl-sm-mt-0 gl-mt-3',
'data-testid': 'some-unique-hand-raise-lead-button',
},
ctaTracking: {
action: 'click_button',
},
components: {
HandRaiseLeadButton,
...
</script>
<template>
<hand-raise-lead-button
:button-attributes="$options.handRaiseLeadAttributes"
glm-content="some-unique-glm-content"
:cta-tracking="$options.ctaTracking"
/>
...
</template>
```
The hand-raise lead form can send unique data on modal submission, and the button can be customized by
providing the following props to the button:
```javascript
props: {
ctaTracking: {
type: Object,
required: false,
default: () => ({}),
},
buttonText: {
type: String,
required: false,
default: PQL_BUTTON_TEXT,
},
buttonAttributes: {
type: Object,
required: true,
},
glmContent: {
type: String,
required: true,
},
productInteraction: {
type: String,
required: false,
default: PQL_PRODUCT_INTERACTION,
},
},
```
The `ctaTracking` parameters follow the `data-track` attributes for implementing Snowplow tracking.
The provided tracking attributes are attached to the button inside the `HandRaiseLeadButton` component,
which triggers the hand-raise lead modal when selected.
### Monitor the lead location
When embedding a new hand-raise form, use a unique `glmContent` or `glm_content` field that is different from any existing values.
## PQL lead flow
The flow of a PQL lead is as follows:
1. A user triggers a [`HandRaiseLeadButton` component](#embed-a-hand-raise-lead-form) on `gitlab.com`.
1. The `HandRaiseLeadButton` submits any information to the following API endpoint: `/-/gitlab_subscriptions/hand_raise_leads`.
1. That endpoint reposts the form to the CustomersDot `trials/create_hand_raise_lead` endpoint.
1. CustomersDot records the form data to the `leads` table and posts the form to [Workato](https://handbook.gitlab.com/handbook/marketing/marketing-operations/workato/).
1. Workato sends the form to Marketo.
1. Marketo scores the lead and sends it to Salesforce.
1. Our Sales team uses Salesforce to connect with the leads.
### Trial lead flow
#### Trial lead flow on GitLab.com
```mermaid
sequenceDiagram
Trial Frontend Forms ->>TrialsController#create_lead: GitLab.com frontend sends [lead] to backend
TrialsController#create->>CreateLeadService: [lead]
TrialsController#create->>ApplyTrialService: [lead] Apply the trial
CreateLeadService->>SubscriptionPortalClient#generate_trial(sync_to_gl#61;false): [lead] Creates customer account on CustomersDot
ApplyTrialService->>SubscriptionPortalClient#generate_trial(sync_to_gl#61;true): [lead] Asks CustomersDot to apply the trial on namespace
SubscriptionPortalClient#generate_trial(sync_to_gl#61;false)->>CustomersDot|TrialsController#create(sync_to_gl#61;false): GitLab.com sends [lead] to CustomersDot
SubscriptionPortalClient#generate_trial(sync_to_gl#61;true)->>CustomersDot|TrialsController#create(sync_to_gl#61;true): GitLab.com asks CustomersDot to apply the trial
```
#### Trial lead flow on CustomersDot (`sync_to_gl`)
```mermaid
sequenceDiagram
CustomersDot|TrialsController#create->>HostedPlans|CreateTrialService#execute: Save [lead] to leads table for monitoring purposes
HostedPlans|CreateTrialService#execute->>BaseTrialService#create_account: Creates a customer record in customers table
    HostedPlans|CreateTrialService#create_lead->>CreateLeadService: Creates a lead record in leads table
HostedPlans|CreateTrialService#create_lead->>Workato|CreateLeadWorker: Async worker to submit [lead] to Workato
Workato|CreateLeadWorker->>Workato|CreateLeadService: [lead]
Workato|CreateLeadService->>WorkatoApp#create_lead: [lead]
WorkatoApp#create_lead->>Workato: [lead] is sent to Workato
```
#### Applying the trial to a namespace on CustomersDot
```mermaid
sequenceDiagram
HostedPlans|CreateTrialService->load_namespace#Gitlab api/namespaces: Load namespace details
HostedPlans|CreateTrialService->create_order#: Creates an order in orders table
HostedPlans|CreateTrialService->create_trial_history#: Creates a record in trial_histories table
```
### Hand raise lead flow
#### Hand raise flow on GitLab.com
```mermaid
sequenceDiagram
HandRaiseForm Vue Component->>HandRaiseLeadsController#create: GitLab.com frontend sends [lead] to backend
HandRaiseLeadsController#create->>CreateHandRaiseLeadService: [lead]
CreateHandRaiseLeadService->>SubscriptionPortalClient: [lead]
SubscriptionPortalClient->>CustomersDot|TrialsController#create_hand_raise_lead: GitLab.com sends [lead] to CustomersDot
```
#### Hand raise flow on CustomersDot
```mermaid
sequenceDiagram
CustomersDot|TrialsController#create_hand_raise_lead->>CreateLeadService: Save [lead] to leads table for monitoring purposes
CustomersDot|TrialsController#create_hand_raise_lead->>Workato|CreateLeadWorker: Async worker to submit [lead] to Workato
Workato|CreateLeadWorker->>Workato|CreateLeadService: [lead]
Workato|CreateLeadService->>WorkatoApp#create_lead: [lead]
WorkatoApp#create_lead->>Workato: [lead] is sent to Workato
```
### PQL flow after Workato for all lead types
```mermaid
sequenceDiagram
Workato->>Marketo: [lead]
Marketo->>Salesforce(SFDC): [lead]
```
# Shell scripting standards and style guidelines
GitLab consists of many services and sub-projects. The majority of
their backend code is written in [Ruby](https://www.ruby-lang.org) and
[Go](https://go.dev/). However, some of them use shell scripts for
automating routine system administration tasks like deployment and
installation. This is done either for historical reasons or as an effort
to minimize the dependencies, for instance, for Docker images.
This page aims to define and organize our shell scripting guidelines,
based on our various experiences. All shell scripts across the GitLab project
should eventually be harmonized with this guide. If there are any per-project
deviations from this guide, they should be described in the
`README.md` or `PROCESS.md` file for such a project.
## Avoid using shell scripts
{{< alert type="warning" >}}
This is a must-read section.
{{< /alert >}}
Having said all of the above, we recommend staying away from shell scripts
as much as possible. A language like Ruby or Python (if required for
consistency with codebases that we leverage) is almost always a better choice.
These high-level interpreted languages have more readable syntax and offer much more
mature capabilities for unit testing, linting, and error reporting.
Use shell scripts only if there's a strong restriction on the project's
dependency size, or any other requirement that is more important
in a particular case.
## Scope of this guide
According to the [GitLab installation requirements](../../install/requirements.md),
this guide covers only those shells that are used by
[supported Linux distributions](../../administration/package_information/supported_os.md),
that is:
- [POSIX Shell](https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html)
- [Bash](https://www.gnu.org/software/bash/)
## Shell language choice
- When you need to reduce the dependencies list, use what's provided by the environment. For example, for Docker images it's `sh` from `alpine`, which is the base image for most of our tool images.
- Everywhere else, use `bash` if possible. It's more powerful than `sh` but still a widespread shell.
## Code style and format
This section describes the tools that should be made a mandatory part of
a project's CI pipeline if it contains shell scripts. These tools
automate shell code formatting, checking for errors or vulnerabilities, etc.
### Linting
We're using the [ShellCheck](https://www.shellcheck.net/) utility in its default configuration to lint our
shell scripts.
All projects with shell scripts should use this GitLab CI/CD job:
```yaml
shell check:
image: koalaman/shellcheck-alpine:stable
stage: test
before_script:
- shellcheck --version
script:
- shellcheck scripts/**/*.sh # path to your shell scripts
```
{{< alert type="note" >}}
By default, ShellCheck uses the [shell detection](https://github.com/koalaman/shellcheck/wiki/SC2148#rationale)
to determine the shell dialect in use. If the shell file is out of your control and ShellCheck cannot
detect the dialect, use the `-s` flag to specify it: `-s sh` or `-s bash`.
{{< /alert >}}
### Formatting
It's recommended to use the [shfmt](https://github.com/mvdan/sh#shfmt) tool to maintain consistent formatting.
We format shell scripts according to the [Google Shell Style Guide](https://google.github.io/styleguide/shell.xml),
so the following `shfmt` invocation should be applied to the project's script files:
```shell
shfmt -i 2 -ci -w scripts/**/*.sh
```
In addition to the [Linting](#linting) GitLab CI/CD job, all projects with shell scripts should also
use this job:
```yaml
shfmt:
image: mvdan/shfmt:v3.2.0-alpine
stage: test
before_script:
- shfmt -version
script:
- shfmt -i 2 -ci -d scripts # path to your shell scripts
```
{{< alert type="note" >}}
By default, shfmt uses [shell detection](https://github.com/mvdan/sh#shfmt) similar to ShellCheck's,
and ignores files starting with a period. To override this, use the `-ln` flag to specify the shell dialect:
`-ln posix` or `-ln bash`.
{{< /alert >}}
## Testing
{{< alert type="note" >}}
This is a work in progress.
{{< /alert >}}
It is an [ongoing effort](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/64016) to evaluate different tools for the
automated testing of shell scripts (like [BATS](https://github.com/bats-core/bats-core)).
## Code Review
The code review should be performed according to:
- [ShellCheck Checks list](https://github.com/koalaman/shellcheck/wiki/Checks)
- [Google Shell Style Guide](https://google.github.io/styleguide/shell.xml)
- [Shfmt formatting caveats](https://github.com/mvdan/sh#caveats)
However, the recommended course of action is to use the aforementioned
tools and address reported offenses. This should eliminate the need
for code review.
---
[Return to Development documentation](../_index.md).
# Development style guides
## Editor/IDE styling standardization
We use [EditorConfig](https://editorconfig.org/) to automatically apply certain styling standards before files are saved
locally. Some editors and IDEs honor the `.editorconfig` settings [automatically by default](https://editorconfig.org/#pre-installed).
If your editor or IDE does not automatically support `.editorconfig`, we suggest checking whether
[a plugin exists](https://editorconfig.org/#download); for example, there is a
[plugin for vim](https://github.com/editorconfig/editorconfig-vim).
## Pre-commit and pre-push static analysis with Lefthook
[Lefthook](https://github.com/evilmartians/lefthook) is a Git hooks manager that allows
custom logic to be executed prior to Git committing or pushing. GitLab ships with
a Lefthook configuration (`lefthook.yml`), but it is ignored until Lefthook itself is installed.
### Uninstall Overcommit
We were using Overcommit prior to Lefthook, so you may want to uninstall it first with `overcommit --uninstall`.
### Install Lefthook
1. You can install lefthook in [different ways](https://github.com/evilmartians/lefthook/blob/master/docs/install.md#install-lefthook).
If you do not choose to install it globally (for example, via Homebrew or package managers), and only want to use it for the GitLab project,
you can install the Ruby gem via:
```shell
bundle install
```
1. Install Lefthook managed Git hooks:
```shell
# If installed globally
lefthook install
# Or if installed via ruby gem
bundle exec lefthook install
```
1. Test that Lefthook is working by running the Lefthook `pre-push` Git hook:
```shell
# If installed globally
lefthook run pre-push
# Or if installed via ruby gem
bundle exec lefthook run pre-push
```
This should return the Lefthook version and the list of executable commands with output.
### Lefthook configuration
Lefthook is configured with a combination of:
- Project configuration in [`lefthook.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lefthook.yml).
- Any [local configuration](https://github.com/evilmartians/lefthook/blob/master/README.md#local-config).
### Lefthook auto-fixing files
We have a custom Lefthook target to run all the linters with auto-fix capabilities,
but only on the files that changed in your branch.
```shell
# If installed globally
lefthook run auto-fix
# Or if installed via ruby gem
bundle exec lefthook run auto-fix
```
### Disable Lefthook temporarily
To disable Lefthook temporarily, you can set the `LEFTHOOK` environment variable to `0`. For instance:
```shell
LEFTHOOK=0 git push ...
```
### Run Lefthook hooks manually
You can run the `pre-commit`, `pre-push`, and `auto-fix` hooks manually. For example:
```shell
bundle exec lefthook run pre-push
```
For more information, check out [Lefthook documentation](https://github.com/evilmartians/lefthook/blob/master/README.md#direct-control).
### Skip Lefthook checks per tag
To skip some checks based on tags when pushing, you can set the `LEFTHOOK_EXCLUDE` environment variable. For instance:
```shell
LEFTHOOK_EXCLUDE=frontend,documentation git push ...
```
As an alternative, you can create `lefthook-local.yml` with this structure:
```yaml
pre-push:
exclude_tags:
- frontend
- documentation
```
For more information, check out [Lefthook documentation](https://github.com/evilmartians/lefthook/blob/master/docs/configuration.md#exclude_tags).
### Skip or enable a specific Lefthook check
To skip or enable a check based on its name when pushing, you can add `skip: true`
or `skip: false` to the `lefthook-local.yml` section for that hook. For instance,
you might want to enable the gettext check to detect issues with `locale/gitlab.pot`:
```yaml
pre-push:
commands:
gettext:
skip: false
```
For more information, check out [Lefthook documentation Skipping commands section](https://github.com/evilmartians/lefthook/blob/master/docs/configuration.md#skip).
## Database migrations
See the dedicated [Database Migrations Style Guide](../migration_style_guide.md).
## JavaScript
See the dedicated [JS Style Guide](../fe_guide/style/javascript.md).
## SCSS
See the dedicated [SCSS Style Guide](../fe_guide/style/scss.md).
## Ruby
See the dedicated [Ruby Style Guide](../backend/ruby_style_guide.md).
## Go
See the dedicated [Go standards and style guidelines](../go_guide/_index.md).
## Shell commands (Ruby)
See the dedicated [Guidelines for shell commands in the GitLab codebase](../shell_commands.md).
## Shell scripting
See the dedicated [Shell scripting standards and style guidelines](../shell_scripting_guide/_index.md).
## Publishing NPM packages to npmjs.com
See the dedicated [npmjs package publishing guide](../npmjs.md).
## Markdown
<!-- vale gitlab_base.Spelling = NO -->
We're following [Ciro Santilli's Markdown Style Guide](https://cirosantilli.com/markdown-style-guide/).
<!-- vale gitlab_base.Spelling = YES -->
## Documentation
See the dedicated [Documentation Style Guide](../documentation/styleguide/_index.md).
### Guidelines for good practices
Good practice examples demonstrate encouraged ways of writing code while
comparing with examples of practices to avoid. These examples are labeled as
"Bad" or "Good". In GitLab development guidelines, when presenting the cases,
it's recommended to follow a "first-bad-then-good" strategy. First demonstrate
the "Bad" practice (how things *could* be done, which is often still working
code), and then how things *should* be done better, using a "Good" example. This
is typically an improved example of the same code.
Consider the following guidelines when offering examples:
- First, offer the "Bad" example, and then the "Good" one.
- When only one bad case and one good case are given, use the same code block.
- When more than one bad case or one good case is offered, use separated code
blocks for each. With many examples being presented, a clear separation helps
the reader to go directly to the good part. Consider offering an explanation
(for example, a comment, or a link to a resource) on why something is bad
practice.
- Better and best cases can be considered part of the good cases' code block.
In the same code block, precede each with comments: `# Better` and `# Best`.
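For instance, a single code block following this convention could look like the following shell snippet; the example itself is only illustrative:

```shell
# Bad
for f in $(ls scripts); do echo $f; done

# Good
for f in scripts/*; do echo "$f"; done
```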
Although the bad-then-good approach is acceptable for the GitLab development
guidelines, do not use it for user documentation. For user documentation, use
*Do* and *Don't*. For examples, see the [Pajamas Design System](https://design.gitlab.com/content/punctuation/).
## Python
See the dedicated [Python Development Guidelines](../python_guide/_index.md).
## Misc
Code should be written in [US English](https://en.wikipedia.org/wiki/American_English).
# Merge requests workflow
We welcome merge requests from everyone, with fixes and improvements
to GitLab code, tests, and documentation. The issues that are specifically suitable
for community contributions have the
[`Seeking community contributions`](../labels/_index.md#label-for-community-contributors)
label, but you are free to contribute to any issue you want.
## Working from issues
If you find an issue, submit a merge request with a fix or improvement,
if you can, and include tests.
If you want to add a new feature that is not labeled, it is best to first create
an issue (if there isn't one already) and leave a comment asking for it
to be labeled as `Seeking community contributions`. See the [feature proposals](issue_workflow.md#feature-proposals)
section.
If you don't know how to fix the issue but can write a test that exposes the
issue, we will accept that as well. In general, bug fixes that include a
regression test are merged quickly. New features without proper tests
might be slower to receive feedback.
If you are new to GitLab development (or web development in general), see the
[how to contribute](_index.md#how-to-contribute) section to get started with
some potentially easy issues.
## Merge request ownership
If an issue is marked for the current milestone at any time, even
when you are working on it, a GitLab team member may take over the merge request to ensure the work is finished before the release date.
If a contributor is no longer actively working on a submitted merge request,
we can:
- Decide that the merge request will be finished by one of our
[Merge request coaches](https://about.gitlab.com/company/team/).
- Close the merge request.
We make this decision based on how important the change is for our product vision. If a merge
request coach is going to finish the merge request, we assign the
`~coach will finish` label.
When a team member picks up a community contribution,
we credit the original author by adding a changelog entry crediting the author
and optionally include the original author on at least one of the commits
within the MR.
## Merge request guidelines for contributors
For a walkthrough of the contribution process, see [Tutorial: Make a GitLab contribution](first_contribution/_index.md).
### Best practices
- If the change is non-trivial, we encourage you to start a discussion with
[a product manager or a member of the team](https://handbook.gitlab.com/handbook/product/categories/).
You can do this by tagging them in an MR before submitting the code for review. Talking
to team members can be helpful when making design decisions. Communicating the
intent behind your changes can also help expedite merge request reviews.
- Consider placing your code behind a feature flag if you think it might affect production availability.
Not sure? Read [When to use feature flags](https://handbook.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/#when-to-use-feature-flags).
- If you would like quick feedback on your merge request, feel free to mention someone
from the [core team](https://about.gitlab.com/community/core-team/) or one of the
[merge request coaches](https://about.gitlab.com/company/team/). When having your code reviewed
and when reviewing merge requests, keep the [code review guidelines](../code_review.md)
in mind. And if your code also makes changes to the database, or does expensive queries,
check the [database review guidelines](../database_review.md).
### Keep it simple
*Live by smaller iterations.* Keep the amount of changes in a single MR **as small as possible**.
If you want to contribute a large feature, think very carefully about what the
[minimum valuable change](https://handbook.gitlab.com/handbook/product/product-principles/#the-minimal-valuable-change-mvc)
is. Can you split the functionality into two smaller MRs? Can you submit only the
backend/API code? Can you start with a very simple UI? Can you do just a part of the
refactor?
Small MRs are easier to review and lead to higher code quality, which is
more important to GitLab than a minimal commit log. The smaller an MR is,
the more likely it is to be merged quickly. After that, you can send more MRs to
enhance and expand the feature. The [How to get faster PR reviews](https://github.com/kubernetes/kubernetes/blob/release-1.5/docs/devel/faster_reviews.md)
document from the Kubernetes team also has some great points regarding this.
### Commit messages guidelines
Commit messages should follow the guidelines below, for reasons explained by Chris Beams in [How to Write a Git Commit Message](https://cbea.ms/git-commit/):
- The commit subject and body must be separated by a blank line.
- The commit subject must start with a capital letter.
- The commit subject must not be longer than 72 characters.
- The commit subject must not end with a period.
- The commit body must not contain more than 72 characters per line.
- The commit subject or body must not contain Emojis.
- Commits that change 30 or more lines across at least 3 files should
describe these changes in the commit body.
- Use issues, milestones, and merge requests' full URLs instead of short references,
as they are displayed as plain text outside of GitLab.
- The merge request should not contain more than 10 commit messages.
- The commit subject should contain at least 3 words.
**Important notes**:
- If the guidelines are not met, the MR may not pass the [Danger checks](https://gitlab.com/gitlab-org/ruby/gems/gitlab-dangerfiles/-/blob/master/lib/danger/rules/commit_messages/Dangerfile).
- Consider enabling [Squash and merge](../../user/project/merge_requests/squash_and_merge.md)
if your merge request includes "Applied suggestion to X files" commits, so that Danger can ignore those.
- The prefixes in the form of `[prefix]` and `prefix:` are allowed (they can be all lowercase, as long
as the message itself is capitalized). For instance, `danger: Improve Danger behavior` and
`[API] Improve the labels endpoint` are valid commit messages.
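Putting these rules together, a commit created from the command line might look like this; the subject and body are invented for illustration:

```shell
# Hypothetical commit that satisfies the guidelines above. Each -m
# option becomes a separate paragraph, keeping subject and body apart.
git commit \
  -m "Add retry to the pipeline status poller" \
  -m "Polling the status endpoint can time out under load, which
marked healthy pipelines as failed. Retry up to three times with
exponential backoff before reporting a failure."
```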
#### Why these standards matter
1. Consistent commit messages that follow these guidelines make the history more readable.
1. Concise, standard commit messages make it quicker to identify [breaking changes](../deprecation_guidelines/_index.md) for a deployment or ~"master:broken" when
   reviewing commits between two points in time.
#### Commit message template
The following commit message template embodies the guidelines above and can be used on your machine (see [how to apply a template](https://codeinthehole.com/tips/a-useful-template-for-commit-messages/)):
```plaintext
# (If applied, this commit will...) <subject> (Max 72 characters)
# |<---- Using a Maximum Of 72 Characters ---->|
# Explain why this change is being made
# |<---- Try To Limit Each Line to a Maximum Of 72 Characters ---->|
# Provide links or keys to any relevant tickets, articles or other resources
# Use issues and merge requests' full URLs instead of short references,
# as they are displayed as plain text outside of GitLab
# --- COMMIT END ---
# --------------------
# Remember to
# Capitalize the subject line
# Use the imperative mood in the subject line
# Do not end the subject line with a period
# Subject must contain at least 3 words
# Separate subject from body with a blank line
# Commits that change 30 or more lines across at least 3 files should
# describe these changes in the commit body
# Do not use Emojis
# Use the body to explain what and why vs. how
# Can use multiple lines with "-" for bullet points in body
# For more information: https://cbea.ms/git-commit/
# --------------------
```
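To use a template like this for every commit, save it to a file and point Git at it. The path below is only an example:

```shell
# Save the template above as ~/.gitmessage, then tell Git to use it:
git config --global commit.template ~/.gitmessage
```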
## Contribution acceptance criteria
To make sure that your merge request can be approved, ensure that it meets
the contribution acceptance criteria below:
1. The change is as small as possible.
1. If the merge request contains more than 500 changes:
- Explain the reason
- Mention a maintainer
1. Mention any major [breaking changes](../deprecation_guidelines/_index.md).
1. Include proper tests and make all tests pass (unless it contains a test
exposing a bug in existing code). Every new class should have corresponding
unit tests, even if the class is exercised at a higher level, such as a feature test.
- If a failing CI build seems to be unrelated to your contribution, you can try
restarting the failing CI job, rebasing on top of target branch to bring in updates that
may resolve the failure, or if it has not been fixed yet, ask a developer to
help you fix the test.
1. The MR contains a few logically organized commits, or has [squashing commits enabled](../../user/project/merge_requests/squash_and_merge.md).
1. The changes can merge without problems. If not, rebase if you're the
   only one working on your feature branch; otherwise, merge the default branch into the MR branch (see the sketch after this list).
1. Only one specific issue is fixed or one specific feature is implemented. Do not
combine things; send separate merge requests for each issue or feature.
1. Migrations should do only one thing (for example, create a table, move data to a new
table, or remove an old table) to aid retrying on failure.
1. Contains functionality that other users will benefit from.
1. Doesn't add configuration options or settings, because they complicate making
   and testing future changes.
1. Changes do not degrade performance:
- Avoid repeated polling of endpoints that require a significant amount of overhead.
- Check for N+1 queries via the SQL log or [`QueryRecorder`](../merge_request_concepts/performance.md).
- Avoid repeated access of the file system.
- Use [polling with ETag caching](../polling.md) if needed to support real-time features.
1. If the merge request adds any new libraries (like gems or JavaScript libraries),
they should conform to our [Licensing guidelines](../licensing.md). See those
instructions for help if the "license-finder" test fails with a
`Dependencies that need approval` error. Also, make the reviewer aware of the new
library and explain why you need it.
1. The merge request meets the GitLab [definition of done](#definition-of-done), below.
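As mentioned in the list above, bringing a conflicting branch up to date typically looks like this, assuming the default branch is `master` and the remote is `origin`:

```shell
git fetch origin

# If you're the only one working on the branch, rebase:
git rebase origin/master

# Otherwise, merge the default branch into your branch:
git merge origin/master
```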
## Definition of done
If you contribute to GitLab, know that changes involve more than just
code. We use the following [definition of done](https://www.agilealliance.org/glossary/definition-of-done/).
To reach the definition of done, the merge request must create no regressions and meet all these criteria:
- Verified as working in production on GitLab.com.
- Verified as working for GitLab Self-Managed instances.
- Verified as supporting [Geo](../../administration/geo/_index.md) through the [self-service framework](../geo/framework.md). For more information, see [Geo is a requirement in the definition of done](../geo/framework.md#geo-is-a-requirement-in-the-definition-of-done).
If a regression occurs, we prefer you revert the change.
Your contribution is incomplete until you have made sure it meets all of these
requirements.
### Functionality
1. Working and clean code that is commented where needed.
1. The change is evaluated to [limit the impact of far-reaching work](https://handbook.gitlab.com/handbook/engineering/core-development/#reducing-the-impact-of-far-reaching-work).
1. [Performance guidelines](../merge_request_concepts/performance.md) have been followed.
1. [Secure coding guidelines](../secure_coding_guidelines.md) have been followed.
1. [Application and rate limit guidelines](../merge_request_concepts/rate_limits.md) have been followed.
1. [Documented](../documentation/_index.md) in the `/doc` directory.
1. If your MR touches code that executes shell commands, reads or opens files, or
handles paths to files on disk, make sure it adheres to the
   [shell command guidelines](../shell_commands.md).
1. [Code changes should include observability instrumentation](../code_review.md#observability-instrumentation).
1. If your code needs to handle file storage, see the [uploads documentation](../uploads/_index.md).
1. If your merge request adds one or more migrations, make sure to execute all migrations on a fresh database
before the MR is reviewed.
If the review leads to large changes in the MR, execute the migrations again
after the review is complete.
1. If your merge request adds new validations to existing models, to make sure the
data processing is backwards compatible:
- Ask in the [`#database`](https://gitlab.slack.com/archives/CNZ8E900G) Slack channel
for assistance to execute the database query that checks the existing rows to
ensure existing rows aren't impacted by the change.
- Add the necessary validation with a feature flag to be gradually rolled out
following [the rollout steps](https://handbook.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/#rollout).
If this merge request is urgent, the code owners should make the final call on
whether reviewing existing rows should be included as an immediate follow-up task
to the merge request.
{{< alert type="note" >}}
There isn't a way to know anything about our customers' data on their
[GitLab Self-Managed instances](../../subscriptions/self_managed/_index.md), so keep
that in mind for any data implications with your merge request.
{{< /alert >}}
1. Consider self-managed functionality and upgrade paths. The change should consider both:
   - If additional work needs to be done for self-managed availability, and
   - If the change requires a [required stop](../database/required_stops.md) when upgrading GitLab versions.
Upgrade stops are sometimes requested when a GitLab code change depends
on a background migration already being complete. Ideally, changes causing required
upgrade stops should be held for the next major release, or, if unavoidable,
[announced at least 3 milestones in advance](../../update/upgrade_paths.md).
### Testing
1. [Unit, integration, and system tests](../testing_guide/_index.md) that all pass
on the CI server.
1. Peer member testing is optional but recommended when the risk of a change is high.
This includes when the changes are [far-reaching](https://handbook.gitlab.com/handbook/engineering/core-development/#reducing-the-impact-of-far-reaching-work)
or are for [components critical for security](../code_review.md#security).
1. Regressions and bugs are covered with tests that reduce the risk of the issue happening
again.
1. For tests that use Capybara, read
[how to write reliable, asynchronous integration tests](https://thoughtbot.com/blog/write-reliable-asynchronous-integration-tests-with-capybara).
1. [Black-box tests/end-to-end tests](../testing_guide/testing_levels.md#black-box-tests-at-the-system-level-aka-end-to-end-tests)
added if required. Contact [the quality team](https://handbook.gitlab.com/handbook/engineering/quality/)
with any questions.
1. The change is tested in a review app where possible and if appropriate.
1. Code affected by a feature flag is covered by [automated tests with the feature flag enabled and disabled](../feature_flags/_index.md#feature-flags-in-tests), or both
states are tested as part of peer member testing or as part of the rollout plan.
1. If your merge request adds one or more migrations, write tests for more complex migrations.
### UI changes
1. Use available components from the GitLab Design System,
[Pajamas](https://design.gitlab.com/).
1. The MR must include "Before" and "After" screenshots if UI changes are made.
1. If the MR changes CSS classes, include the list of affected pages, which
can be found by running `grep css-class ./app -R`.
### Description of changes
1. Clear title and description explaining the relevancy of the contribution.
1. Description includes any steps or setup required to ensure reviewers can view the changes you've made (for example, include any information about feature flags).
1. [Changelog entry added](../changelog.md), if necessary.
1. If your merge request introduces changes that require additional steps when
self-compiling GitLab, add them to `doc/install/self_compiled/_index.md` in
the same merge request.
1. If your merge request introduces changes that require additional steps when
upgrading GitLab from source, add them to
`doc/update/upgrading_from_source.md` in the same merge request. If these
instructions are specific to a version, add them to the "Version specific
upgrading instructions" section.
### Approval
1. The MR was evaluated against the [MR acceptance checklist](../code_review.md#acceptance-checklist).
1. Create an issue in the [infrastructure issue tracker](https://gitlab.com/gitlab-com/gl-infra/reliability/-/issues) to inform the Infrastructure department when your contribution is changing default settings or introduces a new setting, if relevant.
1. An agreed-upon [rollout plan](https://handbook.gitlab.com/handbook/engineering/development/processes/rollout-plans/).
1. Reviewed by relevant reviewers, and all concerns are addressed for Availability, Regressions, and Security. Documentation reviews should take place as soon as possible, but they should not block a merge request.
1. Your merge request has at least 1 approval, but depending on your changes
you might need additional approvals. Refer to the [Approval guidelines](../code_review.md#approval-guidelines).
- You don't have to select any specific approvers, but you can if you really want
specific people to approve your merge request.
1. Merged by a project maintainer.
### Production use
The following items are checked after the merge request has been merged:
1. Confirmed to be working in staging before implementing the change in production, where possible.
1. Confirmed to be working in production with no new [Sentry](https://handbook.gitlab.com/handbook/engineering/monitoring/#sentry) errors after the contribution is deployed.
1. Confirmed that the [rollout plan](https://handbook.gitlab.com/handbook/engineering/development/processes/rollout-plans/) has been completed.
1. If there is a performance risk in the change, you have analyzed the performance of the system before and after the change.
1. *If the merge request uses feature flags, per-project or per-group enablement, and a staged rollout:*
- Confirmed to be working on GitLab projects.
- Confirmed to be working at each stage for all projects added.
1. Added to the [release post](https://handbook.gitlab.com/handbook/marketing/blog/release-posts/),
if relevant.
1. Added to [the website](https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/features.yml), if relevant.
Contributions do not require approval from the [Product team](https://handbook.gitlab.com/handbook/product/product-processes/#community-considerations).
## Dependencies
If you add a dependency in GitLab (such as an operating system package),
consider updating the following, and note the applicability of each in your merge
request:
1. Note the addition in the [release blog post](https://handbook.gitlab.com/handbook/marketing/blog/release-posts/)
(create one if it doesn't exist yet).
1. [The upgrade guide](../../update/upgrading_from_source.md).
1. The [GitLab Installation Guide](../../install/self_compiled/_index.md#1-packages-and-dependencies).
1. The [GitLab Development Kit](https://gitlab.com/gitlab-org/gitlab-development-kit).
1. The [CI environment preparation](https://gitlab.com/gitlab-org/gitlab/-/blob/master/scripts/prepare_build.sh).
1. The [Linux package creator](https://gitlab.com/gitlab-org/omnibus-gitlab).
1. The [Cloud Native GitLab Dockerfiles](https://gitlab.com/gitlab-org/build/CNG).
## Incremental improvements
We allow engineering time to fix small problems (with or without an
issue) that are incremental improvements, such as:
1. Unprioritized bug fixes (for example,
[Banner alerting of project move is showing up everywhere](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/18985))
1. Documentation improvements
1. RuboCop or Code Quality improvements
Tag a merge request with ~"Stuff that should Just Work" to track work in
this area.
## Related topics
- [The responsibility of the merge request author](../code_review.md#the-responsibility-of-the-merge-request-author)
- [Having your merge request reviewed](../code_review.md#having-your-merge-request-reviewed)
# Contribute to GitLab development
Thank you for your interest in contributing to GitLab.
You can contribute new features, changes to code or processes, typo fixes,
or updates to language in the interface.
This guide details how to contribute to the development of GitLab.
For a step-by-step guide for first-time contributors, see [Tutorial: Make a GitLab contribution](first_contribution/_index.md).
## How to contribute
1. Read the [Code of Conduct](https://about.gitlab.com/community/contribute/code-of-conduct/).
1. [Request access to the community forks](https://gitlab.com/groups/gitlab-community/community-members/-/group_members/request_access).
1. [Choose or create an issue to work on](#choose-or-create-an-issue).
1. [Choose a development environment](#choose-a-development-environment).
1. Make changes and open a merge request.
1. Your merge request is triaged, reviewed, and can then be incorporated into the product.
{{< alert type="note" >}}
All contributions must be submitted in English. GitLab engineering work is done in English,
and merge requests and issues in other languages cannot be reviewed or accepted.
{{< /alert >}}
## GitLab technologies
[GitLab](https://gitlab.com/gitlab-org/gitlab) is a [Ruby on Rails](https://rubyonrails.org/) application.
It uses [Haml](https://haml.info/) and a JavaScript-based frontend with [Vue.js](https://vuejs.org/).
Some satellite projects use [Go](https://go.dev/).
For example:
- [GitLab Runner](https://gitlab.com/gitlab-org/gitlab-runner)
- [Gitaly](https://gitlab.com/gitlab-org/gitaly)
- [GLab](https://gitlab.com/gitlab-org/cli)
- [GitLab Terraform Provider](https://gitlab.com/gitlab-org/terraform-provider-gitlab)
We have [development style guides for each technology](style_guides.md) to help you align with our coding standards.
If you want to contribute to the [website](https://about.gitlab.com/) or the [handbook](https://handbook.gitlab.com/handbook/),
go to the footer of any page and select **View page source** to open the page in the repository.
## Choose or create an issue
If you know what you're going to work on, see if an issue exists.
If it doesn't, open a [new issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new).
Select the appropriate template and add all the necessary information about the work you plan to do.
That way you can get more guidance and support.
If you're not sure what to work on, you can
[view issues with the `~quick win` label](https://gitlab.com/groups/gitlab-org/-/issues/?sort=created_asc&state=opened&label_name%5B%5D=quick%20win&first_page_size=100),
and filter specifically for [documentation `~quick win`](https://gitlab.com/groups/gitlab-org/-/issues/?sort=created_asc&state=opened&label_name%5B%5D=quick%20win&label_name%5B%5D=documentation&first_page_size=100),
[backend `~quick win`](https://gitlab.com/groups/gitlab-org/-/issues/?sort=created_asc&state=opened&label_name%5B%5D=quick%20win&label_name%5B%5D=backend&first_page_size=100),
or [frontend `~quick win`](https://gitlab.com/groups/gitlab-org/-/issues/?sort=created_asc&state=opened&label_name%5B%5D=quick%20win&label_name%5B%5D=frontend&first_page_size=100).
When you find an issue you want to work on, leave a comment on it.
This helps the GitLab team and members of the wider GitLab community know that you are working on that issue.
This is a good opportunity to [validate the issue](issue_workflow.md#clarifyingvalidating-an-issue).
Confirm that the issue is still valid, clarify your intended approach, and ask if a feature or change is likely to be accepted.
You do not need to be assigned to the issue to get started.
If the issue already has an assignee, ask if they are still working on the issue or if they would like to collaborate.
For details, see [the issues workflow](issue_workflow.md).
## Join the community
[Request access to the community forks](https://gitlab.com/groups/gitlab-community/community-members/-/group_members/request_access),
a set of forks mirrored from GitLab repositories to improve the contributor experience.
When you request access to the community forks, you will receive an onboarding issue in the
[community onboarding project](https://gitlab.com/gitlab-community/community-members/onboarding/-/issues).
For more information, read about the community forks in the [Meta repository README](https://gitlab.com/gitlab-community/meta#why).
Additionally, we recommend you join the [GitLab Discord server](https://discord.com/invite/gitlab),
where GitLab team members and the wider community are ready and waiting to answer your questions
and offer support for making contributions.
## Choose a development environment
To write and test your code locally, choose a local development environment.
- [GitLab Development Kit (GDK)](https://gitlab.com/gitlab-org/gitlab-development-kit) is a local
development environment that includes an installation of GitLab Self-Managed, sample projects,
and administrator access with which you can test functionality.
- [GDK-in-a-box](first_contribution/configure-dev-env-gdk-in-a-box.md)
packages GDK into a pre-configured virtual machine image that you can connect to with VS Code.
Follow [Configure GDK-in-a-box](first_contribution/configure-dev-env-gdk-in-a-box.md) to set up GDK-in-a-box.
To install GDK and its dependencies, follow the steps in [Install the GDK development environment](first_contribution/configure-dev-env-gdk.md).
- Use [Gitpod](first_contribution/configure-dev-env-gitpod.md) for an in-browser remote development
environment that runs regardless of your local hardware, operating system, or software.
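For instance, getting GDK running typically comes down to a few commands. This is a minimal sketch based on the GDK documentation; follow the install guide linked above for the current, authoritative steps:

```shell
# Install the GDK command-line tool.
gem install gitlab-development-kit

# Bootstrap a GitLab checkout together with its services (PostgreSQL, Redis, and so on).
gdk install

# Start all services and serve the local GitLab instance.
gdk start
```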
## Open a merge request
1. Go to [the community fork on GitLab.com](https://gitlab.com/gitlab-community/gitlab).
If GitLab doesn't prompt you to create a merge request for your branch, on the left sidebar, select **Code > Merge requests > New merge request**.
1. Take a look at the branch names. You should be merging from your branch
in the community fork to the `master` branch in the GitLab repository.
1. Fill out the information and then select **Save changes**.
Don't worry if your merge request is not complete.
If you don't want anyone from GitLab to review it, you can select the **Mark as draft** checkbox.
If you're not happy with the merge request after you create it, you can close it, no harm done.
1. If you're happy with this merge request and want to start the review process, type
`@gitlab-bot ready` in a comment and then select **Comment**.
Someone from GitLab will look at your request and let you know what the next steps are.
For details, see the [merge request workflow](merge_request_workflow.md).
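A minimal command-line sketch of this flow, assuming you already have access to the community fork (the branch name and commit message are illustrative):

```shell
# Clone the community fork, not gitlab-org/gitlab directly.
git clone git@gitlab.com:gitlab-community/gitlab.git
cd gitlab

# Create a topic branch for your change.
git checkout -b fix-banner-alert-scope

# Commit your work and push the branch to the community fork.
git commit -am "Fix banner alert showing on unrelated pages"
git push -u origin fix-banner-alert-scope
# The push output includes a link to create a merge request
# targeting the master branch in the GitLab repository.
```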
Have questions?
Use `@gitlab-bot help` to ping a GitLab Merge Request coach. For more information on MR coaches, visit [How GitLab Merge Request Coaches Can Help You](merge_request_coaches.md).
### How community merge requests are triaged
When you create a merge request, a merge request coach will assign relevant reviewers or
guide you through the review themselves if possible.
The goal is to have a merge request reviewed within a week after a reviewer is assigned.
At times this may take longer due to high workload, holidays, or other reasons.
If you need to, find a
[merge request coach](https://handbook.gitlab.com/handbook/marketing/developer-relations/contributor-success/merge-request-coach-lifecycle/#current-merge-request-coaches)
who specializes in the type of code you have written and mention them in the merge request.
For example, if you have written some frontend code, you should mention the frontend merge request coach.
If your code has multiple disciplines, you can mention multiple merge request coaches.
For details about timelines and how you can request help or escalate a merge request,
see the [Wider Community Merge Request guide](https://handbook.gitlab.com/handbook/engineering/infrastructure/engineering-productivity/merge-request-triage/).
After your merge request is reviewed and merged, your changes will be deployed to GitLab.com and included in the next release!
#### Review process
When you submit code to GitLab, we really want it to get merged!
However, we review submissions carefully, and this takes time.
Code submissions are usually reviewed by two
[domain experts](../code_review.md#domain-experts) before being merged:
- A [reviewer](../code_review.md#the-responsibility-of-the-reviewer).
- A [maintainer](../code_review.md#the-responsibility-of-the-maintainer).
After review, the reviewer could ask the author to update the merge request.
In that case, the reviewer will set the `~"workflow::in dev"` label.
Once you have updated the merge request with the requested changes, comment on it with `@gitlab-bot ready` to signal that it is ready for review again.
This process may repeat several times before merge.
Read our [merge request guidelines for contributors](merge_request_workflow.md#merge-request-guidelines-for-contributors) before you start for the first time.
- Make sure to follow our [commit message guidelines](merge_request_workflow.md#commit-messages-guidelines) (see the example after this list).
- Write a great description that includes steps to test your implementation.
- Automated testing is required. Take your time to understand the different
[testing levels](../testing_guide/testing_levels.md#how-to-test-at-the-correct-level) and apply them accordingly.
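For example, a commit message that follows these guidelines might look like the following. This is a hypothetical sketch; the trailer conventions are described in the commit message guidelines linked above:

```plaintext
Fix project-move banner appearing on unrelated pages

The banner's dismissal state was never persisted, so the alert was
rendered on every page. Persist the dismissal per user and scope the
alert to the project context.

Changelog: fixed
```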
## Contributing to Premium/Ultimate features with an Enterprise Edition license
If you would like to work on GitLab features that are within a paid tier (the code that lives in the
[EE directory](https://gitlab.com/gitlab-org/gitlab/-/tree/master/ee)), you need a GitLab Enterprise Edition license.
Request an Enterprise Edition Developers License according to the [documented process](https://handbook.gitlab.com/handbook/marketing/developer-relations/contributor-success/community-contributors-workflows/#contributing-to-the-gitlab-enterprise-edition-ee).
## Get help
How to find help contributing to GitLab:
- Type `@gitlab-bot help` in a comment on a merge request or issue to tag an MR coach.
- See [How GitLab Merge Request Coaches Can Help You](merge_request_coaches.md) for more information.
- Join the [GitLab Community Discord](https://discord.gg/gitlab) and ask for help in the `#contribute` channel.
- Email the Contributor Success team at `contributors@gitlab.com`.
# Design and user interface changes
Follow these guidelines when contributing or reviewing design and user interface
(UI) changes. Refer to our [code review guide](../code_review.md) for broader
advice and best practices for code review in general.
The basis for most of these guidelines is [Pajamas](https://design.gitlab.com/),
the GitLab design system. We encourage you to [contribute to Pajamas](https://design.gitlab.com/get-started/contributing/)
with additions and improvements.
## Merge request reviews
As a merge request (MR) author, you must:
- Include _Before_ and _After_
screenshots (or videos) of your changes in the description, as explained in our
[MR workflow](merge_request_workflow.md). These screenshots/videos are very helpful
for all reviewers and can speed up the review process, especially if the changes
are small.
- Attach the ~UX label to any merge request that has user-facing changes. This triggers our
Reviewer Roulette to suggest a UX [reviewer](https://handbook.gitlab.com/handbook/product/ux/product-designer/mr-reviews/#stage-group-mrs).
If you are a **team member**: We recommend assigning the Product Designer suggested by the
[Reviewer Roulette](../code_review.md#reviewer-roulette) as reviewer. [This helps us](https://handbook.gitlab.com/handbook/product/ux/product-designer/mr-reviews/#benefits) spread work evenly, improve communication, and make our UI more
consistent. If you have a reason to choose a different reviewer, add a comment to mention you assigned
it to a Product Designer of your choice.
If you are a **community contributor**: We favor choosing the Product Designer who is a
[domain expert](../code_review.md#domain-experts) in the area you are contributing to, regardless
of the Reviewer Roulette.
## Checklist
Check these aspects both when _designing_ and _reviewing_ UI changes.
### Writing
- Follow [Pajamas](https://design.gitlab.com/content/ui-text/) as the primary
guidelines for UI text and the [documentation style guide](../documentation/styleguide/_index.md)
as the secondary.
- Use clear and consistent terminology.
- Check grammar and spelling.
- Consider help content and follow its [guidelines](https://design.gitlab.com/patterns/contextual-help).
- Request review from the [appropriate Technical Writer](https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments),
indicating any specific files or lines they should review, and how to preview
or understand the location/context of the text from the user's perspective.
### Patterns
- Consider similar patterns used in the product and justify in the issue when diverging
from them.
- Use appropriate [components](https://design.gitlab.com/components/overview/)
and [data visualizations](https://design.gitlab.com/data-visualization/overview/).
### Visual design
Check visual design properties using your browser's elements inspector ([Chrome](https://developer.chrome.com/docs/devtools/css/),
[Firefox](https://firefox-source-docs.mozilla.org/devtools-user/page_inspector/how_to/open_the_inspector/index.html)).
- Use recommended [colors based on semantic meaning](https://design.gitlab.com/product-foundations/design-tokens#semantic-design-tokens) as part of designing a unified product experience. These color combinations are supported in all modes.
- Follow [typography guidelines](https://design.gitlab.com/product-foundations/type-fundamentals/).
- Follow [layout guidelines](https://design.gitlab.com/product-foundations/layout#grid).
- Use existing [icons](https://gitlab-org.gitlab.io/gitlab-svgs/) and [illustrations](https://gitlab-org.gitlab.io/gitlab-svgs/illustrations/)
or propose new ones according to [iconography](https://design.gitlab.com/product-foundations/iconography/)
and [illustration](https://design.gitlab.com/product-foundations/illustration-creation-guide/)
guidelines.
- Account for all [supported modes](../../user/profile/preferences.md#change-the-mode).
- The design system provides [design tokens](https://design.gitlab.com/product-foundations/design-tokens/) and [components](https://design.gitlab.com/components) that work in supported modes.
- Take extra care when mode is a primary factor in customer outcomes.
- Dark mode design must align with the [dark mode principles](https://handbook.gitlab.com/handbook/product/ux/product-designer/#designing-with-modes).
### States
Check states using your browser's _styles inspector_ to toggle CSS pseudo-classes
like `:hover` and others ([Chrome](https://developer.chrome.com/docs/devtools/css/reference/#pseudo-class),
[Firefox](https://firefox-source-docs.mozilla.org/devtools-user/page_inspector/how_to/examine_and_edit_css/index.html#viewing-common-pseudo-classes)).
- Account for all applicable states (error, rest, loading, focus, hover, selected, disabled).
- Account for states dependent on data size ([empty](https://design.gitlab.com/patterns/empty-states),
some data, and lots of data).
- Account for states dependent on user role, user preferences, and subscription.
- Consider animations and transitions, and follow their [guidelines](https://design.gitlab.com/brand-design/motion).
### Responsive
Check responsive behavior using your browser's _responsive mode_ ([Chrome](https://developer.chrome.com/docs/devtools/device-mode/#viewport),
[Firefox](https://firefox-source-docs.mozilla.org/devtools-user/responsive_design_mode/index.html)).
- Account for resizing, collapsing, moving, or wrapping of elements across
all breakpoints (even if larger viewports are prioritized).
- Provide the same information and actions in all breakpoints.
### Accessibility
Check accessibility using your browser's _accessibility inspector_ ([Chrome](https://developer.chrome.com/docs/devtools/accessibility/reference/),
[Firefox](https://firefox-source-docs.mozilla.org/devtools-user/accessibility_inspector/index.html#accessing-the-accessibility-inspector)).
- Conform to level AA of the World Wide Web Consortium (W3C) [Web Content Accessibility Guidelines 2.1](https://www.w3.org/TR/WCAG21/),
according to our [statement of compliance](https://design.gitlab.com/accessibility/a11y/).
- Follow accessibility [Pajamas' best practices](https://design.gitlab.com/accessibility/best-practices/)
and read the accessibility developer documentation's [checklist](../fe_guide/accessibility/best_practices.md#quick-checklist).
### Handoff
When the design is ready, _before_ starting its implementation:
- Share design specifications in the related issue, preferably through a [Figma link](https://help.figma.com/hc/en-us/articles/360040531773-Share-files-and-prototypes)
or [GitLab Designs feature](../../user/project/issues/design_management.md).
See [when you should use each tool](https://handbook.gitlab.com/handbook/product/ux/product-designer/#deliver).
- Document user flow and states (for example, using [Mermaid flowcharts in Markdown](../../user/markdown.md#mermaid); see the sketch after this list).
- Document [design tokens](https://design.gitlab.com/product-foundations/design-tokens) (for example using the [design token annotation](https://www.figma.com/file/dWP1ldkBU4jeUqx5rO3jrn/Annotations-and-utilities?type=design&node-id=2002-34) in Figma).
- Document animations and transitions.
- Document responsive behaviors.
- Document non-evident behaviors (for example, field is auto-focused).
- Document accessibility behaviors (for example, using [accessibility annotations in Figma](https://www.figma.com/file/g7QtDbfxF3pCdWiyskIr0X/Accessibility-bluelines)).
- Contribute new icons or illustrations to the [GitLab SVGs](https://gitlab.com/gitlab-org/gitlab-svgs)
project.
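As an example, a minimal Mermaid sketch documenting a user flow and its states might look like this (the state names are illustrative):

```mermaid
flowchart LR
    empty[Empty state] -->|Select Create| form[Form]
    form -->|Valid input| success[Success message]
    form -->|Invalid input| error[Inline error]
    error -->|Correct input| form
```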
### Follow-ups
At any moment, but usually _during_ or _after_ the design's implementation:
- Contribute [issues to Pajamas](https://design.gitlab.com/get-started/contributing#contribute-an-issue)
for additions or enhancements to the design system.
- Create issues with the [`~Deferred UX`](../labels/_index.md#technical-debt-and-deferred-ux)
label for intentional deviations from the agreed-upon UX requirements due to
time or feasibility challenges, linking back to the corresponding issues or
merge requests.
- Create issues for [feature additions or enhancements](issue_workflow.md#feature-proposals)
outside the agreed-upon UX requirements to avoid scope creep.
# Issues workflow
## Creating an issue
**Before you submit an issue, [search the issue tracker](https://gitlab.com/gitlab-org/gitlab/-/issues)**
for similar entries. Someone else might have already had the same bug or feature proposal.
If you find an existing issue, show your support with an emoji reaction and add your notes to the discussion.
### Bugs
To submit a bug:
- Use the ['Bug' issue template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/issue_templates/Bug.md).
The text in the comments (`<!-- ... -->`) should help you with which information to include.
- To report a suspected security vulnerability, follow the
[disclosure process on the GitLab.com website](https://about.gitlab.com/security/disclosure/).
{{< alert type="warning" >}}
Do **not** create publicly viewable issues for suspected security vulnerabilities.
{{< /alert >}}
### Feature proposals
To create a feature proposal, open an issue in the issue tracker using the
[**Feature Proposal - detailed** issue template](https://gitlab.com/gitlab-org/gitlab/-/issues/new?issuable_template=Feature%20proposal%20-%20detailed).
To help track feature proposals, we use the
[`~"type::feature"`](https://gitlab.com/gitlab-org/gitlab/-/issues?label_name=type::feature) label.
Users who are not members of the project cannot add labels in the UI.
Instead, use [reactive label commands](https://handbook.gitlab.com/handbook/engineering/infrastructure/engineering-productivity/triage-operations/#reactive-workflow-automation); see the example at the end of this section.
Keep feature proposals as small and simple as possible; complex ones
might be edited to make them smaller and simpler.
For changes to the user interface (UI), follow our [design and UI guidelines](design.md),
and include a visual example (screenshot, wireframe, or mockup). Such issues should
be given the `~UX` label (using the reactive label commands) for the Product Design team to provide input and guidance.
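For example, a non-member can apply the labels mentioned above by commenting on the issue with the bot's reactive command. This is a sketch; see the handbook page linked above for the authoritative syntax:

```plaintext
@gitlab-bot label ~"type::feature" ~"UX"
```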
## Finding issues to work on
GitLab has over 75,000 issues that you can work on.
You can use [labels](../../user/project/labels.md) to filter and find suitable issues to work on.
New contributors can look for [issues with the `quick win` label](https://gitlab.com/groups/gitlab-org/-/issues/?sort=created_asc&state=opened&label_name%5B%5D=quick%20win&first_page_size=20).
The `frontend` and `backend` labels are also a good choice to refine the issue list.
## Clarifying/validating an issue
Many issues have not been visited or validated recently.
Before trying to solve an issue, take the following steps:
- Ask the author if the issue is still relevant.
- Ask the community if the issue is still relevant.
- Attempt to validate whether:
- A merge request has already been created (see the related merge requests section).
Sometimes the issue is not closed/updated.
- The `type::bug` still exists (confirm by recreating it).
- The `type::feature` has not already been implemented (confirm by trying it).
## Working on the issue
Leave a note to indicate you wish to work on the issue and would like to be assigned
(mention the author and/or `@gitlab-org/coaches`).
If you are stuck or did not properly understand the issue, you can ask the author or
the community for help.
## Issue triaging
Our issue triage policies are [described in our handbook](https://handbook.gitlab.com/handbook/engineering/infrastructure/engineering-productivity/issue-triage/).
You are very welcome to help the GitLab team triage issues.
The most important thing is making sure valid issues receive feedback from the
development team. Therefore, the priority is mentioning developers who can help
on those issues. Select someone with relevant experience from the
[GitLab team](https://about.gitlab.com/company/team/).
If there is nobody mentioned with that expertise, look in the commit history for
the affected files to find someone.
We also have triage automation in place, described [in our handbook](https://handbook.gitlab.com/handbook/engineering/infrastructure/engineering-productivity/triage-operations/).
For information about which labels to apply to issues, see [Labels](../labels/_index.md).
## Issue weight
Issue weight allows us to get an idea of the amount of work required to solve
one or multiple issues. This makes it possible to schedule work more accurately.
You are encouraged to set the weight of any issue. Following the guidelines
below will make it easy to manage this, without unnecessary overhead.
1. Set the weight for any issue at the earliest possible convenience.
1. If you don't agree with a set weight, discuss with other developers until
consensus is reached about the weight.
1. Issue weights are an abstract measurement of the complexity of the issue. Do not
relate issue weight directly to time. This is called [anchoring](https://en.wikipedia.org/wiki/Anchoring_(cognitive_bias))
and is something you want to avoid.
1. Something that has a weight of 1 (or no weight) is really small and simple.
Something that is 9 is rewriting a large, fundamental part of GitLab,
which might lead to many hard problems to solve. Changing some text in GitLab
is probably a 1, adding a new Git hook is maybe a 4 or 5, and big features are 7-9.
1. If something is very large, it should be split into multiple
issues or chunks. You can leave the weight of a parent issue unset and
set weights on its child issues instead.
## Regression issues
Every monthly release has a corresponding issue on the CE issue tracker to keep
track of functionality broken by that release and any fixes that need to be
included in a patch release (see
[8.3 Regressions](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/4127) as an example).
As outlined in the issue description, the intended workflow is to post one note
with a reference to an issue describing the regression, and then to update that
note with a reference to the merge request that fixes it as it becomes available.
If you're a contributor who doesn't have the required permissions to update
other users' notes, post a new note with a reference to both the issue
and the merge request.
The release manager will
[update the notes](https://gitlab.com/gitlab-org/release-tools/blob/master/doc/pro-tips.md#update-the-regression-issue)
in the regression issue as fixes are addressed.
## Technical debt in follow-up issues
It's common to discover technical debt during development of a new feature. In
the spirit of "minimum viable change", resolution is often deferred to a
follow-up issue. However, this cannot be used as an excuse to merge poor-quality
code that would otherwise not pass review, or to overlook trivial matters that
don't deserve to be scheduled independently, and would be best resolved in the
original merge request - or not tracked at all!
The overheads of scheduling, and rate of change in the GitLab codebase, mean
that the cost of a trivial technical debt issue can quickly exceed the value of
tracking it. This generally means we should resolve these in the original merge
request - or not create a follow-up issue at all.
For example, a typo in a comment that is being copied between files is worth
fixing in the same MR, but not worth creating a follow-up issue for. Renaming a
method that is used in many places to make its intent slightly clearer may be
worth fixing, but it should not happen in the same MR, and is generally not
worth the overhead of having an issue of its own. These issues would invariably
be labeled `~P4 ~S4` if we were to create them.
More severe technical debt can have implications for development velocity. If
it isn't addressed in a timely manner, the codebase becomes needlessly difficult
to change, new features become difficult to add, and regressions abound.
Discoveries of this kind of technical debt should be treated seriously, and
while resolution in a follow-up issue may be appropriate, maintainers should
generally obtain a scheduling commitment from the author of the original MR, or
the engineering or product manager for the relevant area. This may take the form
of appropriate Priority / Severity labels on the issue, or an explicit milestone
and assignee.
The maintainer must always agree before an outstanding discussion is resolved in
this manner, and will be the one to create the issue. The title and description
should be of the same quality as those created
[in the usual manner](../labels/_index.md#technical-debt-and-deferred-ux) - in particular, the issue title
**must not** begin with `Follow-up`! The creating maintainer should also expect
to be involved in some capacity when work begins on the follow-up issue.
# How GitLab Merge Request Coaches can help you
Welcome, GitLab contributor! As you work on your contributions, Merge Request (MR) Coaches are here to help you succeed. This guide explains how we can support you throughout your contribution journey.
## What is a Merge Request Coach?
MR Coaches are GitLab team members with a special interest in helping community contributors like you get their changes merged into GitLab. Think of us as your guides and advocates in the contribution process.
## How we can help you
### Getting started
- We can help you understand GitLab contribution requirements
- We can provide hints and guidance if you're new to Ruby, JavaScript, Go, or programming
### During development
- We can review your merge requests and provide constructive feedback
- We can help you understand and resolve CI pipeline issues
### Code review process
- We can help find the right reviewers for your contribution
- We can help you understand and address code review feedback
- We can provide technical guidance on implementing requested changes
## If you're stuck
Don't hesitate to ask for help if:
- You're unsure how to implement something
- The CI pipeline is failing
- You don't understand review feedback
- You need help with Git or the development process
## Where to find us
You can reach MR Coaches by commenting `@gitlab-bot help` on your merge request or issue.
## What we look for in contributions
To help your MR succeed, we check for:
- Adherence to GitLab [contribution acceptance criteria](merge_request_workflow.md#contribution-acceptance-criteria)
- Test coverage
- Documentation updates when needed
## Tips for working with MR Coaches
1. **Be Responsive**: Even a quick update helps us help you
1. **Ask Questions Early**: We'd rather help prevent issues than fix them later
1. **Share Your Constraints**: Let us know if you have limited time or specific challenges
1. **Be Open to Feedback**: We aim to help your code meet GitLab quality standards
Remember: No question is "stupid". We're here to help you succeed. Your contributions make GitLab better, and we appreciate your efforts to improve the product and grow your skills.
# Contribute code with Gitpod
Now for the fun part. Let's edit some code.
In this example, I found some UI text I'd like to change.
In the upper-right corner in GitLab, I selected my avatar and then **Preferences**.
I want to change `Syntax highlighting theme` to `Code syntax highlighting theme`:
{{< alert type="warning" >}}
This tutorial is designed to be a general introduction to contributing to the GitLab project
and is not an example of a change that should be submitted for review.
{{< /alert >}}
1. Create a new branch for your changes:
Select `master` in the status bar, then from the **Select a branch or tag to checkout** box,
select **Create new branch** and enter a name for the new branch.
If your code change addresses an issue, [start the branch name with the issue number](../../../user/project/repository/branches/_index.md#prefix-branch-names-with-a-number).
The examples in this doc use a new branch called `ui-updates`.
1. Search the repository for the string `Syntax highlighting theme`:
- In VS Code, select the search icon <i class="fa fa-search fa-flip-horizontal" aria-hidden="true"></i> from the side toolbar.
1. Select the `app/views/profiles/preferences/show.html.haml` file.
1. Update the string to `Code syntax highlighting theme`.
1. Save your changes.
1. Use the IDE **Terminal** tab to commit the changes:
```shell
git commit -m "Update UI text
Standardizing the text on this page so
that each area uses consistent language."
```
Follow the GitLab
[commit message guidelines](../merge_request_workflow.md#commit-messages-guidelines).
1. Push the changes to the new branch:
```shell
git push --set-upstream origin ui-updates
```
1. You can [create a merge request](mr-review.md) with the code change,
or continue to update the translation files. A quick way to double-check
your edit before opening the MR is sketched below.
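For example, to confirm the change landed where you expect, you can search the repository again from the `gitlab-development-kit/gitlab` directory (a minimal sketch; the `grep` invocation is illustrative):
```shell
# Should print nothing: the old capitalized string is gone
grep -rn "Syntax highlighting theme" app/views/profiles/preferences/

# Should show the updated line in show.html.haml
grep -rn "Code syntax highlighting theme" app/views/profiles/preferences/
```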
## Update the translation files
English UI strings are localized into many languages.
These strings are saved in a `.pot` file, which must be regenerated
any time you update UI text.
To automatically regenerate the localization file:
1. Ensure you are in the `gitlab-development-kit/gitlab` directory.
1. Run the following command:
```shell
tooling/bin/gettext_extractor locale/gitlab.pot
```
The `.pot` file will be generated in the `/locale` directory.
Now, in the `gitlab-development-kit/gitlab` directory, if you run `git status`,
you should see both modified files listed:
```shell
modified: app/views/profiles/preferences/show.html.haml
modified: locale/gitlab.pot
```
1. Commit and push the changes.
1. [Create a merge request](mr-review.md) or continue to update the documentation.
For more information about localization, see [internationalization](../../i18n/externalization.md).
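For orientation, `locale/gitlab.pot` is a standard gettext template, so after regeneration it should contain an entry like this for the updated string (an illustrative excerpt; surrounding entries omitted):
```plaintext
msgid "Code syntax highlighting theme"
msgstr ""
```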
## Update the documentation
Documentation for GitLab is published on <https://docs.gitlab.com>.
When you add or update a feature, you must update the documentation as well.
1. To find the documentation for a feature, the easiest thing is to search the
documentation site. In this case, the setting is described on this documentation page:
```plaintext
https://docs.gitlab.com/ee/user/profile/preferences.html
```
1. The URL shows you the location of the file in the `/doc` directory.
In this case, the location is:
```plaintext
doc/user/profile/preferences.md
```
1. Go to this location in your local `gitlab` repository and update the `.md` file
and any related images.
Now when you run `git status`, you should have something like:
```plaintext
modified: app/views/profiles/preferences/show.html.haml
modified: doc/user/profile/img/profile-preferences-syntax-themes.png
modified: doc/user/profile/preferences.md
modified: locale/gitlab.pot
```
1. Commit and push the changes.
1. [Create a merge request](mr-review.md) or continue to update the documentation.
# Configure the Gitpod development environment
To contribute code without the overhead of setting up a local development environment,
you should use Gitpod.
## Use Gitpod to contribute without a local environment setup
Set aside about 15 minutes to launch the GDK in Gitpod.
1. [Launch the GDK in Gitpod](https://gitpod.io/#https://gitlab.com/gitlab-community/gitlab/-/tree/master/).
1. Select **Continue with GitLab** to start a Gitpod environment for this fork.
1. If this is your first time using Gitpod, create a free account and connect it
to your GitLab account:
1. Select **Authorize** when prompted to **Authorize Gitpod.io to use your account?**.
1. On the **Welcome to Gitpod** screen, enter your name and select whether you would like
to **Connect with LinkedIn** or **Continue with 10 hours per month**.
1. Choose the `Browser` version of VS Code when prompted to **Choose an editor**.
1. Continue through the settings until the **New Workspace** screen.
1. On the **New Workspace** screen, before you select **Continue**:
- Leave the default repository URL: `gitlab.com/gitlab-community/gitlab/-/tree/master/`.
- Select your preferred **Editor**.
The examples in this tutorial use Visual Studio Code (VS Code) as the editor,
sometimes referred to as an integrated development environment (IDE).
- Leave the default **Class**: `Standard`.
1. Wait a few minutes for Gitpod to launch.
After the editor you chose has launched, you can begin exploring the codebase and making your changes.
1. Wait a little longer for GitLab itself to become available so that you can preview your changes.
When the GitLab GDK is ready, the **Terminal** panel in Gitpod returns
a URL local to the Gitpod environment (a polling sketch follows this list):
```shell
=> GitLab available at http://127.0.0.1:3000.
```
Select `http://127.0.0.1:3000` to open the GitLab development environment in a new browser tab.
1. After the environment loads, sign in as the default `root` user and
follow the prompts to change the default password:
- Username: `root`
- Password: `5iveL!fe`
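If the URL doesn't respond right away, you can poll GitLab's readiness endpoint from the Gitpod terminal until it reports success (a minimal sketch; `/-/readiness` is GitLab's standard health-check endpoint and is accessible from localhost):
```shell
# Retry every 5 seconds until GitLab answers, then print the readiness JSON
until curl --silent --fail http://127.0.0.1:3000/-/readiness; do sleep 5; done
```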
After the Gitpod editor is ready, continue to [Change the code with Gitpod](contribute-gitpod.md).
# Contribute code with GDK
Now for the fun part. Let's edit some code.
In this example, I found some UI text I'd like to change.
In the upper-right corner in GitLab, I selected my avatar and then **Preferences**.
I want to change `Syntax highlighting theme` to `Code syntax highlighting theme`:
{{< alert type="warning" >}}
This tutorial is designed to be a general introduction to contributing to the GitLab project
and is not an example of a change that should be submitted for review.
{{< /alert >}}
Use your local IDE to make changes to the code in the GDK directory.
1. Create a new branch for your changes:
```shell
git checkout -b ui-updates
```
1. Search the `gitlab-development-kit/gitlab` directory for the string `Syntax highlighting theme`.
The results show one `.haml` file and several `.po` files.
1. Open the `app/views/profiles/preferences/show.html.haml` file.
1. Update the string from `Syntax highlighting theme` to
`Code syntax highlighting theme`.
1. Save the file.
1. Check that your change was saved:
in the `gitlab-development-kit/gitlab` directory, run `git status`
to show the file you modified:
```shell
modified: app/views/profiles/preferences/show.html.haml
```
1. Refresh the web browser where you're viewing the GDK.
The changes should be displayed. Take a screenshot.
1. Commit the changes:
```shell
git commit -a -m "Update UI text
Standardizing the text on this page so
that each area uses consistent language."
```
Follow the GitLab
[commit message guidelines](../merge_request_workflow.md#commit-messages-guidelines).
1. Push the changes to the new branch:
```shell
git push --set-upstream origin ui-updates
```
1. You can [create a merge request](mr-review.md) with the code change,
or continue to [update the translation files](#update-the-translation-files). A sketch for
reviewing what you pushed follows this list.
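After pushing, you can review exactly what the merge request will contain (a minimal sketch using standard Git; run from `gitlab-development-kit/gitlab`):
```shell
# Show the commit you just pushed, including its diff
git show HEAD

# Confirm your branch is exactly one commit ahead of master
git log --oneline master..HEAD
```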
## Update the translation files
English UI strings are localized into many languages.
These strings are saved in a `.pot` file, which must be regenerated
any time you update UI text.
To automatically regenerate the localization file:
1. Ensure you are in the `gitlab-development-kit/gitlab` directory.
1. Run the following command:
```shell
tooling/bin/gettext_extractor locale/gitlab.pot
```
The `.pot` file will be generated in the `/locale` directory.
Now, in the `gitlab-development-kit/gitlab` directory, if you run `git status`,
you should see both modified files listed:
```shell
modified: app/views/profiles/preferences/show.html.haml
modified: locale/gitlab.pot
```
1. Commit and push the changes.
1. [Create a merge request](mr-review.md) or continue to update the documentation.
For more information about localization, see [internationalization](../../i18n/externalization.md).
## Update the documentation
Documentation for GitLab is published on <https://docs.gitlab.com>.
When you add or update a feature, you must update the documentation as well.
1. To find the documentation for a feature, the easiest thing is to search the
documentation site. In this case, the setting is described on this documentation page:
```plaintext
https://docs.gitlab.com/ee/user/profile/preferences.html
```
1. The URL shows you the location of the file in the `/doc` directory.
In this case, the location is:
```plaintext
doc/user/profile/preferences.md
```
1. Go to this location in your local `gitlab` repository and update the `.md` file
and any related images.
Now when you run `git status`, you should have something like:
```plaintext
modified: app/views/profiles/preferences/show.html.haml
modified: doc/user/profile/img/profile-preferences-syntax-themes.png
modified: doc/user/profile/preferences.md
modified: locale/gitlab.pot
```
1. Commit and push the changes.
1. [Create a merge request](mr-review.md) or continue to update the documentation.
# Create a merge request
Now you're ready to propose merging your changes from the community fork into the main GitLab repository!
[View an interactive demo of this step](https://gitlab.navattic.com/tu5n0haw).
1. Go to [the community fork on GitLab.com](https://gitlab.com/gitlab-community/gitlab).
You should see a message like this one:

Select **Create merge request**.
If you don't see this message, on the left sidebar, select **Code > Merge requests > New merge request**.
1. Take a look at the branch names. You should be merging from your branch
in the community fork to the `master` branch in the GitLab repository.

1. Fill out the information and then select **Save changes**.
Don't worry if your merge request is not complete.
If it's not yet ready for review, you can select the **Mark as draft** checkbox.
If you're not happy with the merge request after you create it, you can close it, no harm done.
1. Select the **Changes** tab. It should look something like this:

The red text shows the code before you made changes. The green shows what
the code looks like now.
1. If you're happy with this merge request and want to start the review process, type
`@gitlab-bot ready` in a comment and then select **Comment**.

Someone from GitLab will look at your request and let you know what the next steps are.
## Complete the review process
After you create a merge request, GitLab automatically triggers a [CI/CD pipeline](../../../ci/pipelines/_index.md)
that runs tests, linting, security scans, and more.
Your pipeline must be successful for your merge request to be merged.
- To check the status of your pipeline, at the top of your merge request, select **Pipelines**.
- If you need help understanding or fixing the pipeline, use the `@gitlab-bot help` command in a comment to tag an MR coach.
- For more on MR coaching, visit [How GitLab Merge Request Coaches Can Help You](../merge_request_coaches.md).
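If you prefer a terminal, the [`glab` CLI](https://gitlab.com/gitlab-org/cli) can also report pipeline status (an illustrative invocation, assuming `glab` is installed and authenticated; it isn't required for this tutorial):
```shell
# From a checkout of your branch, show the latest pipeline's status
glab ci status
```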
### Getting a review
GitLab triages your merge request automatically, but you can also type `@gitlab-bot ready`
in a comment to alert reviewers that your MR is ready.
- When the label is set to `workflow::ready for review`, [a developer reviews the MR](../../code_review.md).
- After you have resolved all of their feedback and the MR has been approved, the MR is ready for merge.
If you need help at any point in the process, type `@gitlab-bot help` in a comment or initiate a
[mentor session](https://about.gitlab.com/community/contribute/mentor-sessions/) on [Discord](https://discord.com/invite/gitlab).
When the merge request is merged, your change becomes part of the GitLab codebase.
Great job! Thank you for your contribution!
# Contribute code with the Web IDE
The [GitLab Web IDE](../../../user/project/web_ide/_index.md) is a built-in advanced editor with commit staging.
{{< alert type="warning" >}}
This tutorial is designed to be a general introduction to contributing to the GitLab project
and is not an example of a change that should be submitted for review.
{{< /alert >}}
The example in this section shows how to modify a line of code as part of a community contribution
to GitLab code using the Web IDE.
1. Go to the [GitLab community fork](https://gitlab.com/gitlab-community/gitlab-org/gitlab).
1. Search the GitLab code for the string `Syntax highlighting theme`.
From the [GitLab Community Fork](https://gitlab.com/gitlab-community/gitlab-org/gitlab):
1. On the left sidebar, select **Search or go to**.
1. Enter the search string `"Syntax highlighting theme"`.
1. Select the filename
[from the results](https://gitlab.com/search?search=%22Syntax+highlighting+theme%22&nav_source=navbar&project_id=41372369&group_id=60717473&search_code=true).
In this case, `app/views/profiles/preferences/show.html.haml`.
1. Open the file in Web IDE. Select **Edit > Open in Web IDE**.
- Keyboard shortcut: <kbd>.</kbd>
1. Update the string from `Syntax highlighting theme` to `Code syntax highlighting theme`.
1. Save your changes.
1. On the left activity bar, select **Source Control**.
Keyboard shortcut: <kbd>Control</kbd>+<kbd>Shift</kbd>+<kbd>G</kbd>.
1. Enter your commit message:
```plaintext
Update UI text
Standardizing the text on this page so
that each area uses consistent language.
```
Follow the GitLab
[commit message guidelines](../merge_request_workflow.md#commit-messages-guidelines).
1. Select **Commit to new branch** from the **Commit to** dropdown list, and enter `1st-contrib-example`.
If your code change addresses an issue, [start the branch name with the issue number](../../../user/project/repository/branches/_index.md#prefix-branch-names-with-a-number).
1. In the notification that appears in the lower right, select **Create MR**.
1. Continue to [Create a merge request](mr-review.md).
# Install the GDK development environment
If you want to contribute to the GitLab codebase and want a development environment in which to test
your changes, you can use [the GitLab Development Kit (GDK)](https://gitlab.com/gitlab-org/gitlab-development-kit),
a local version of GitLab that's yours to play with.
The GDK is a local development environment that includes an installation of GitLab Self-Managed,
sample projects, and administrator access with which you can test functionality.

If you prefer to use GDK in a local virtual machine, use the steps in [Configure GDK-in-a-box](configure-dev-env-gdk-in-a-box.md)
[View an interactive demo of this step](https://gitlab.navattic.com/xtk20s8x).
## Install and configure GitLab Development Kit (GDK)
If you already have a working GDK,
[update it to use the community fork](#update-an-existing-gdk-installation).
Set aside about two hours to install the GDK. If all goes smoothly, installation
takes about an hour, but it sometimes needs tweaks to work, so also set aside
some time for troubleshooting.
It might seem like a lot of work, but after you have the GDK running,
you'll be able to make and test changes locally.

To install the GDK:
1. Ensure you're on
[one of the supported platforms](https://gitlab.com/gitlab-org/gitlab-development-kit/-/tree/main/#supported-platforms).
1. Confirm that [Git](../../../topics/git/how_to_install_git/_index.md) is installed,
and that you have a source code editor.
1. Choose the directory where you want to install the GDK.
The installation script installs the application to a new subdirectory called `gdk`.
Keep the directory name short. Some users encounter issues with long directory names.
1. From the command line, go to that directory.
In this example, create and change to the `dev` directory:
```shell
mkdir ~/dev && cd "$_"
```
1. Run the one-line installation command:
```shell
curl "https://gitlab.com/gitlab-org/gitlab-development-kit/-/raw/main/support/install" | bash
```
This script clones the GitLab Development Kit (GDK) repository into a new subdirectory, and sets up necessary dependencies using the `asdf` version manager (including Ruby, Node.js, PostgreSQL, Redis, and more).
{{< alert type="note" >}}
If you're using another version manager for those dependencies, refer to the [troubleshooting section](#error-no-version-is-set-for-command) to avoid conflicts.
{{< /alert >}}
1. For the message `Where would you like to install the GDK? [./gdk]`,
press <kbd>Enter</kbd> to accept the default location.
1. For the message `Which GitLab repo URL would you like to clone?`, enter the GitLab community fork URL:
```shell
https://gitlab.com/gitlab-community/gitlab.git
```
1. For the message `GitLab would like to collect basic error and usage data`,
choose your option based on the prompt.
While the installation is running, copy any messages that are displayed.
If you have any problems with the installation, you can use this output as
part of [troubleshooting](#troubleshoot-gdk).
1. After the installation is complete, run the `source` command for your shell,
as shown in the `INFO: To make sure GDK commands are available in this shell` message.
For example:
```shell
source ~/.asdf/asdf.sh
```
1. Go to the directory where the GDK was installed:
```shell
cd gdk
```
1. Run `gdk truncate-legacy-tables` to ensure that the data in the main and CI databases are truncated,
then `gdk doctor` to confirm the GDK installation:
```shell
gdk truncate-legacy-tables && gdk doctor
```
- If `gdk doctor` returns errors, consult the [Troubleshoot GDK](#troubleshoot-gdk) section.
- If `gdk doctor` returns `Your GDK is healthy`, proceed to the next step.
1. Start the GDK:
```shell
gdk start
```
1. Wait for `GitLab available at http://127.0.0.1:3000`,
and connect to the GDK using the URL provided.
1. Sign in with the username `root` and the password `5iveL!fe`. You will be prompted
to reset your password the first time you sign in.
1. Continue to [Change the code with the GDK](contribute-gdk.md).
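Day to day, a few GDK commands cover most needs; `gdk status`, `gdk stop`, and `gdk restart` are all part of the standard GDK CLI (a quick-reference sketch):
```shell
# List the GDK's managed services and whether they are running
gdk status

# Stop everything when you're done for the day
gdk stop

# Restart services, for example after changing GDK configuration
gdk restart
```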
## Update an existing GDK installation
If you have an existing GDK installation, you should update it to use the community fork.
1. Delete the existing `gdk/gitlab` directory.
1. Clone the community fork into that location:
```shell
cd gdk
git clone https://gitlab.com/gitlab-community/gitlab.git
```
To confirm it was successful:
1. Ensure the `gdk/gitlab` directory exists.
1. Go to the top `gdk` directory and run `gdk stop` and `gdk start`.
If you get errors, run `gdk doctor` to troubleshoot.
For more advanced troubleshooting, continue to the [Troubleshoot GDK](#troubleshoot-gdk) section.
## Troubleshoot GDK
{{< alert type="note" >}}
For more advanced troubleshooting, see
the [troubleshooting documentation](https://gitlab.com/gitlab-org/gitlab-development-kit/-/tree/main/doc/troubleshooting)
and the [#contribute channel on Discord](https://discord.com/channels/778180511088640070/997442331202564176).
{{< /alert >}}
If you encounter issues, go to the `gdk/gitlab`
directory and run `gdk doctor`.
If `gdk doctor` returns Node or Ruby-related errors, run:
```shell
yarn install && bundle install
bundle exec rails db:migrate RAILS_ENV=development
```
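If the errors mention tool versions, it can also help to confirm which versions `asdf` has activated (assuming `asdf` manages your GDK dependencies, as in the default installation):
```shell
# Show the tool versions asdf resolves in the current directory
asdf current
```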
### Error: No version is set for command
If you already use another version manager in your system, you may encounter a `No version is set for command <command>` error.
To resolve this issue, you can temporarily comment out the sourcing of `asdf.sh` in your shell:
1. Open your shell configuration file (for example, `.zshrc`, `.bashrc`):
```shell
nano <path-to-shell-config>
```
1. Comment out the `source` line added by the GDK bootstrap so that it reads:
```shell
# Added by GDK bootstrap
# source ~/.asdf/asdf.sh
```
1. After making these changes, restart your shell or terminal session for the modifications to take effect.
To use `asdf` again, revert any previous changes.
## Change the code
After the GDK is ready, continue to [Contribute code with the GDK](contribute-gdk.md).
# Configure GDK-in-a-box
If you want to contribute to the GitLab codebase and want a development environment in which to test
your changes, you can use
[GDK-in-a-box](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/gdk_in_a_box.md). GDK-in-a-box is available as a multi-platform container image, pre-configured with [the GitLab Development Kit (GDK)](https://gitlab.com/gitlab-org/gitlab-development-kit).
{{< alert type="warning" >}}
Virtual machine (VM) images for GDK-in-a-box, from an earlier iteration of the project, are also available. Information about them is retained below, but they are deprecated and no longer actively updated.
{{< /alert >}}
The GDK is a local development environment that includes an installation of GitLab Self-Managed,
sample projects, and administrator access with which you can test functionality.
It requires 30 GB of disk space.

If you prefer to use GDK locally without a VM, use the steps in [Install the GDK development environment](configure-dev-env-gdk.md).
## Download GDK-in-a-box
1. Install a container runtime.
- Multiple options are available, including [Docker Desktop](https://www.docker.com/products/docker-desktop/), [Docker Engine](https://docs.docker.com/engine/install/), and [Rancher Desktop](https://docs.rancherdesktop.io/getting-started/installation).
- Docker Desktop can also be installed through package managers like [Homebrew](https://formulae.brew.sh/formula/docker).
- **Note**: On Rancher Desktop, you may want to disable Kubernetes under "Preferences".
- Other container runtimes that support Docker-compatible commands should also work.
1. Pull the container image. The download is less than 6 GB but might take some time.
- `docker pull registry.gitlab.com/gitlab-org/gitlab-development-kit/gitlab-gdk-in-a-box:latest`
1. Create a container from the image:
```shell
docker run -d -h gdk.local --name gdk \
-p 2022:2022 \
-p 2222:2222 \
-p 3000:3000 \
-p 3005:3005 \
-p 3010:3010 \
-p 3038:3038 \
-p 5100:5100 \
-p 5778:5778 \
-p 9000:9000 \
registry.gitlab.com/gitlab-org/gitlab-development-kit/gitlab-gdk-in-a-box:latest
```
1. Continue to **Use VS Code to connect to GDK**.
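To confirm the container came up and to watch GDK boot, standard Docker commands are enough (a minimal sketch):
```shell
# Confirm the gdk container is running
docker ps --filter name=gdk

# Follow the boot logs; stop watching with Ctrl+C
docker logs --follow gdk
```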
## Use VS Code to connect to GDK
[View a demo video of this step](https://go.gitlab.com/b54mHb).
{{< alert type="note" >}}
You might need to modify the system configuration of your container runtime (CPU cores and RAM) before starting it. A suggested configuration is less than 12 GB of RAM and 4 cores.
{{< /alert >}}
1. Start the container.
1. In VS Code, select **Terminal > New terminal**, then run a `curl` command to add an SSH key to your local `~/.ssh/config`:
```shell
curl "https://gitlab.com/gitlab-org/gitlab-development-kit/-/raw/main/support/gdk-in-a-box/setup-ssh-key" | bash
```
To learn more about the script, you can examine the
[`setup-ssh-key` code](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/support/gdk-in-a-box/setup-ssh-key).
1. In the script, type `1` to select the Container installation.
1. In VS Code, install the **Remote - SSH** extension:
- [VS Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh)
- [VSCodium](https://open-vsx.org/extension/jeanp413/open-remote-ssh)
1. Connect VS Code to the container:
- Select **Remote-SSH: Connect to host** from the command palette.
- Select `gdk.local` to connect.
1. A new VS Code window opens.
You can close the old window to avoid confusion.
Complete the remaining steps in the new window.
1. In the VS Code terminal, run a `curl` command to configure Git in the GDK:
```shell
curl "https://gitlab.com/gitlab-org/gitlab-development-kit/-/raw/main/support/gdk-in-a-box/first_time_setup" | bash
```
- Enter your name and email address when prompted.
- Add the displayed [SSH key to your profile](https://gitlab.com/-/user_settings/ssh_keys).
To learn more about the script, you can examine the
[`first_time_setup` code](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/support/gdk-in-a-box/first_time_setup).
1. In VS Code, select **File > Open folder**, and go to: `/home/gdk/gitlab-development-kit/gitlab/`.
1. Open GitLab in your browser: `http://gdk.local:3000`.
- If the page does not load, add `127.0.0.1 gdk.local` to your local machine's hosts file.
1. Sign in with the username `root` and password `5iveL!fe`.
1. Continue to [change the code with the GDK](contribute-gdk.md).
## Shut down the GDK container
You can stop the container by running the following command on your host:
```shell
docker stop gdk
```
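Stopping preserves the container and its data. To pick up where you left off, start it again (standard Docker, not GDK-specific):
```shell
docker start gdk
```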
## Remove the GDK container
{{< alert type="warning" >}}
This deletes the current container and any data inside. Ensure you have committed any changes before running this command.
{{< /alert >}}
You can remove the container by running the following command on your host:
```shell
docker rm gdk
```
## Update GDK-in-a-box
You can update GDK-in-a-box while connected to `gdk.local` in VS Code.
In the VS Code terminal, enter:
```shell
gdk update
```
## Change the code
After the GDK is ready, continue to [Contribute code with the GDK](contribute-gdk.md).
## Download GDK-in-a-box VM images (deprecated)
1. Download and install virtualization software to run the virtual machine:
- Mac computers with [Apple silicon](https://support.apple.com/en-us/116943): [UTM](https://docs.getutm.app/installation/macos/).
Select **Download from GitHub**.
- Linux / Windows / Mac computers with Intel silicon: [VirtualBox](https://www.virtualbox.org/wiki/Downloads)
1. Download and unzip GDK-in-a-box. The file is up to 15 GB and might take some time to download:
- Mac computers with Apple silicon: [UTM image](https://go.gitlab.com/cCHpCP)
- Linux / Windows / Mac: [VirtualBox image](https://go.gitlab.com/5iydBP)
1. Double-click the virtual machine image to open it:
- UTM: `gdk.utm`
- VirtualBox: `gdk.vbox`
1. Continue to **Use VS Code to connect to GDK (VM)**.
## Use VS Code to connect to GDK (VM)
{{< alert type="note" >}}
You might need to modify the system configuration (CPU cores and RAM) before starting the virtual machine.
{{< /alert >}}
1. Start the VM (you can minimize UTM or VirtualBox).
1. In VS Code, select **Terminal > New terminal**, then run a `curl` command to add an SSH key to your local `~/.ssh/config`:
```shell
curl "https://gitlab.com/gitlab-org/gitlab-development-kit/-/raw/main/support/gdk-in-a-box/setup-ssh-key" | bash
```
To learn more about the script, you can examine the
[`setup-ssh-key` code](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/support/gdk-in-a-box/setup-ssh-key).
1. In the script, type `2` to select the VM installation.
1. In VS Code, install the **Remote - SSH** extension:
- [VS Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh)
- [VSCodium](https://open-vsx.org/extension/jeanp413/open-remote-ssh)
1. Make sure that VS Code has access to the local network (**Privacy & Security > Local Network**).
1. Connect VS Code to the VM:
- Select **Remote-SSH: Connect to host** from the command palette.
- Enter the SSH host: `gdk.local`
1. A new VS Code window opens.
You can close the old window to avoid confusion.
Complete the remaining steps in the new window.
1. In the VS Code terminal, run a `curl` command to configure Git in the GDK:
```shell
curl "https://gitlab.com/gitlab-org/gitlab-development-kit/-/raw/main/support/gdk-in-a-box/first_time_setup" | bash
```
- Enter your name and email address when prompted.
- Add the displayed [SSH key to your profile](https://gitlab.com/-/user_settings/ssh_keys).
To learn more about the script, you can examine the
[`first_time_setup` code](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/support/gdk-in-a-box/first_time_setup).
1. In VS Code, select **File > Open folder**, and go to: `/home/debian/gitlab-development-kit/gitlab/`.
1. Open GitLab in your browser: `http://gdk.local:3000`.
1. Sign in with the username `root` and password `5iveL!fe`.
1. Continue to [change the code with the GDK](contribute-gdk.md).
## Shut down GDK VM
You can select the power icon ({{< icon name="power" >}}) to shut down
the virtual machine, or enter the `shutdown` command in the terminal.
Use the password `debian`:
```shell
sudo shutdown now
```
---
stage: none
group: unassigned
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: 'Tutorial: Make a GitLab contribution'
---
Everyone can contribute to the development of GitLab.
You can contribute new features, changes to code or processes, typo fixes,
or updates to language in the interface.
This tutorial walks you through the contribution process with an example of updating UI text and related files.
You can follow this tutorial to familiarize yourself with the contribution process.
## Before you begin
1. If you don't already have a GitLab account, [create a new one](https://gitlab.com/users/sign_up).
Confirm you can successfully [sign in](https://gitlab.com/users/sign_in).
1. [Request access to the community forks](https://gitlab.com/groups/gitlab-community/community-members/-/group_members/request_access),
a set of forks mirrored from GitLab repositories to improve the contributor experience.
- When you request access to the community forks you will receive an onboarding issue in the
[community onboarding project](https://gitlab.com/gitlab-community/community-members/onboarding/-/issues).
- For more information, read the [community forks blog post](https://about.gitlab.com/blog/2023/04/04/gitlab-community-forks/).
- The access request will be manually verified and should take no more than a few hours.
- If you use a local development environment, you can start making changes locally while you wait
for the team to confirm your access.
You must have access to the community fork to push your changes to it.
1. We recommend you join the [GitLab Discord server](https://discord.com/invite/gitlab), where GitLab team
members and the wider community are ready and waiting to answer your questions and offer support
for making contributions.
1. After your community forks access request is approved, you can start using [GitLab Duo](../../../user/gitlab_duo/_index.md),
our AI-native features, including Code Suggestions, Chat, Root Cause Analysis, and more.
## Choose how you want to contribute
To get started, select the development option that works best for you:
- [**Web IDE**](contribute-web-ide.md) - Make a quick change from your browser.
Use the Web IDE to change code or fix a typo and create a merge request from your browser.
- No configuration or installation required.
- Available within a few seconds.
- [**Gitpod**](configure-dev-env-gitpod.md) - Most contributors should use this option.
- In-browser remote development environment that runs regardless of your local hardware,
operating system, or software.
- Make and preview remote changes in your local browser.
- Takes a few minutes to set up, and is fully ready within thirty minutes.
- GitLab Development Kit (GDK) and GDK-in-a-box - Fully local development.
GDK is a local development environment that includes an installation of GitLab Self-Managed,
sample projects, and administrator access with which you can test functionality.
These options rely on local hardware and may be resource intensive.
- [**GDK-in-a-box**](configure-dev-env-gdk-in-a-box.md): Recommended for local development.
Download and run a pre-configured container image that contains the GDK, then connect to it with VS Code.
- Minimal configuration required.
- After the 10 GB image has downloaded, GDK-in-a-box is ready in a few minutes.
- [**Standalone GDK**](configure-dev-env-gdk.md): Install the GDK and its dependencies.
Install the GDK for a fully local development environment.
- Some configuration required.
- May take up to two hours to install and configure.
- This is the route used by development teams at GitLab.
---
stage: Verify
group: Pipeline Execution
title: Contribute to Verify stage codebase
---
## What are we working on in Verify?
The Verify stage is working on a comprehensive Continuous Integration platform
integrated into the GitLab product. Our goal is to empower our users to make
great technical and business decisions by delivering a fast, reliable, secure
platform that verifies the assumptions our users make and checks them against
the criteria defined in CI/CD configuration. These checks could be unit tests, end-to-end
tests, benchmarking, performance validation, code coverage enforcement, and so on.
Feedback delivered by GitLab CI/CD makes it possible for our users to make
well-informed decisions about the technological and business choices they need to make
to succeed. Why is Continuous Integration a mission-critical product?
GitLab CI/CD is our platform for delivering feedback to our users and customers.
They contribute their continuous integration configuration files,
`.gitlab-ci.yml`, to describe the questions they want answered. Each
time someone pushes a commit or triggers a pipeline, we need to answer the
very important questions that have been asked in the CI/CD configuration.
Failing to answer these questions or, what might be even worse, providing false
answers, might result in a user making a wrong decision. Such wrong decisions
can have very severe consequences.
## Core principles of our CI/CD platform
Data produced by the platform should be:
1. Accurate.
1. Durable.
1. Accessible.
The platform itself should be:
1. Reliable.
1. Secure.
1. Deterministic.
1. Trustworthy.
1. Fast.
1. Simple.
Since the inception of GitLab CI/CD, we have lived by these principles,
and they serve us and our users well. Some examples of these principles are that:
- The feedback delivered by GitLab CI/CD and data produced by the platform should be accurate.
If a job fails and we notify a user that it was successful, it can have severe negative consequences.
- Feedback needs to be available when a user needs it, and data cannot disappear unexpectedly when engineers need it.
- None of this matters if the platform is not secure and we
are leaking credentials or secrets.
- When a user provides a set of preconditions in the form of CI/CD configuration, the result should be deterministic each time a pipeline runs, because otherwise the platform might not be trustworthy.
- If it is fast, simple to use, and has a great UX, it will serve our users well.
## Building things in Verify
### Measure before you optimize, and make data-informed decisions
It is very difficult to optimize something that you cannot measure. How would you
know if you succeeded, or how significant the success was? If you are working on
a performance or reliability improvement, make sure that you measure things before
you optimize them.
The best way to measure something is to add a Prometheus metric. Counters, gauges, and
histograms are great ways to quickly get approximate results. Unfortunately, this
is not the best way to measure tail latency. Prometheus metrics, especially histograms,
are usually approximations.
If you have to measure tail latency, like how slow something could be or how
large a request payload might be, consider adding custom application logs and
always use structured logging.
It's useful to use profiling and flamegraphs to understand what the code execution
path truly looks like!
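For example, after instrumenting a new metric, it helps to confirm what the metric actually reports before drawing conclusions from it. A minimal sketch, assuming a locally running GDK instance; the metric name `gitlab_ci_new_counter` is a placeholder, not a real metric:
```shell
# Fetch the Prometheus metrics endpoint exposed by the Rails application
# and look for the metric you just added. Whether the endpoint is reachable
# depends on your monitoring IP allowlist configuration.
curl --silent "http://gdk.local:3000/-/metrics" | grep "gitlab_ci_new_counter"
```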
### Strive for simple solutions, avoid clever solutions
It is sometimes tempting to use a clever solution to deliver something more
quickly. We want to avoid shipping clever code, because it is usually more
difficult to understand and maintain in the long term. Instead, we want to
focus on boring solutions that make it easier to evolve the codebase and keep the
contribution barrier low. We want to find solutions that are as simple as
possible.
### Do not confuse boring solutions with easy solutions
Boring solutions are sometimes confused with easy solutions. Very often the
opposite is true. An easy solution might not be simple. For example, a complex
new library can be included to add a very small piece of functionality that
could otherwise be implemented quickly: including the library is easier than
building the functionality, but it brings a lot of complexity into the product.
On the other hand, it is also possible to over-engineer a solution when a simple,
well tested, and well maintained library is available. In that case using the
library might make sense. We recognize that we are constantly balancing simple
and easy solutions, and that finding the right balance is important.
### "Simple" is not mutually exclusive with "flexible"
Building simple things does not mean that more advanced and flexible solutions
will not be available. A good example here is the expanding complexity of
writing `.gitlab-ci.yml` configuration. For example, you can use a simple
method to define an environment name:
```yaml
deploy:
environment: production
script: cap deploy
```
But the `environment` keyword can also be expanded into another level of
configuration that offers more flexibility.
```yaml
deploy:
environment:
name: review/$CI_COMMIT_REF_SLUG
url: https://prod.example.com
script: cap deploy
```
This kind of approach shields new users from the complexities of the platform,
but still allows them to go deeper if they need to. This approach can be
applied to many other technical implementations.
### Make things observable
GitLab is a DevOps platform. We popularize DevOps because it helps companies
be more efficient and achieve better results. One important component of
DevOps culture is to take ownership of features and code that you are
building. It is very difficult to do that when you don't know how your features
perform and behave in the production environment.
This is why we want to make our features and code observable. They
should be written in a way that lets an author understand how well or how poorly
the feature or code behaves in the production environment. We usually accomplish
that by introducing the proper mix of Prometheus metrics and application
loggers.
**TODO** document when to use Prometheus metrics, when to use loggers. Write a
few sentences about histograms and counters. Write a few sentences highlighting
importance of metrics when doing incremental rollouts.
### Protect customer data
Making data produced by our CI/CD platform durable is important. We recognize that
data generated in CI/CD by users and customers is
valuable, and we must protect it. This data is not only important
because it can contain critical information; we also have compliance and
auditing responsibilities.
Therefore, we must take extra care when we are writing migrations
that permanently remove data from our database, or when we define
new retention policies.
As a general rule, when you are writing code that is supposed to remove
data from the database, file system, or object storage, you should get an extra pair
of eyes on your changes. When you are defining a new retention policy, you
should double check with PMs and EMs.
### Get your design reviewed
When you are designing a subsystem for pipeline processing and transitioning
CI/CD statuses, request an additional opinion on the design from a Verify maintainer (`@gitlab-org/maintainers/cicd-verify`)
as early as possible and hold others accountable for doing the same. Having your
design reviewed by a Verify maintainer helps to identify any blind spots you might
have overlooked as early as possible and possibly leads to a better solution.
By having the design reviewed before any development work is started, it also helps to
make merge request review more efficient. You would be less likely to encounter
significantly differing opinions or change requests during the maintainer review
if the design has been reviewed by a Verify maintainer. As a result, the merge request
could be merged sooner.
### Get your changes reviewed
When your merge request is ready for review, you must assign reviewers and then
maintainers. Depending on the complexity of a change, you might want to involve
the people that know the most about the codebase area you are changing. We do
have many domain experts and maintainers in Verify and it is absolutely
acceptable to ask them to review your code when you are not certain if a
reviewer or maintainer assigned by the Reviewer Roulette has enough context
about the change.
The Reviewer Roulette offers useful suggestions, but because assigning the right
reviewers is important, you should not accept its suggestions blindly every time. It might
not make sense to assign someone who knows nothing about the area you are
updating, because their feedback might be limited to code style and syntax.
Depending on the complexity and impact of a change, assigning the right people
to review your changes might be very important.
If you don't know who to assign, consult `git blame` or ask in the `#s_verify`
Slack channel (GitLab team members only).
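For example, to find recent authors of the code you are changing, you can inspect the file history. A quick sketch (the file path is illustrative):
```shell
# List the last ten commits that touched the file, with author and date
git log --pretty="%an (%ad): %s" --date=short -n 10 -- app/services/ci/create_pipeline_service.rb
```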
There are two kinds of changes / merge requests that require additional
attention from reviewers and an additional reviewer:
1. Merge requests changing code around pipeline / stage / build statuses.
1. Merge requests changing code around authentication / security features.
In both cases, engineers are expected to request a review from a maintainer and
a domain expert. If the maintainer is the domain expert, involving another person
is recommended.
### Incremental rollouts
After your merge request is merged by a maintainer, it is time to release it to
users and the wider community. We usually do this with feature flags.
While not every merge request needs a feature flag, most merge
requests in Verify should have [feature flags](https://handbook.gitlab.com/handbook/product-development-flow/feature-flag-lifecycle/#when-to-use-feature-flags).
If you already follow the advice on this page, you probably already have a
few metrics and perhaps a few loggers added that make your new code observable
in the production environment. You can now use these metrics to incrementally
roll out your changes!
A typical scenario involves enabling the feature in a few internal projects
while observing your metrics or loggers. Be aware that there might be a
small delay involved in ingesting logs in Elasticsearch or Kibana. After you confirm
the feature works well with internal projects, you can start an
incremental rollout to other projects.
Avoid using "percent of time" incremental rollouts. These are error-prone,
especially when you are checking feature flags in a few places in the codebase
and you have not memoized the result of the check in a single place.
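On GitLab.com, incremental rollouts are typically driven through ChatOps commands in Slack. An illustrative sequence, based on the feature flag controls documentation (the flag name `ci_new_behavior` is a placeholder):
```shell
# Enable the flag for a single internal project first
/chatops run feature set --project=gitlab-org/gitlab ci_new_behavior true
# After verifying metrics and logs, roll out to a percentage of actors,
# not a percentage of time
/chatops run feature set ci_new_behavior 25 --actors
```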
### Do not cause our Universe to implode
During one of the first GitLab Contributes events, we had a discussion about the importance
of keeping CI/CD pipeline, stage, and job statuses accurate. We considered a hypothetical
scenario relating to software being built by one of our [early customers](https://about.gitlab.com/blog/2016/11/23/gitlab-adoption-growing-at-cern/):
> What happens if software deployed to the [Large Hadron Collider (LHC)](https://en.wikipedia.org/wiki/Large_Hadron_Collider)
> breaks because of a bug in GitLab CI/CD that showed that a pipeline
> passed, but this data was not accurate and the software deployed was actually
> invalid? A problem like this could cause the LHC to malfunction, which
> could generate a new particle that would then cause the universe to implode.
That would be quite an undesirable outcome of a small bug in GitLab CI/CD status
processing. Take extra care when you are working on CI/CD statuses:
we don't want to implode our Universe!
This is an extreme and unlikely scenario, but presenting data that is not accurate
can potentially cause a myriad of problems through the
[butterfly effect](https://en.wikipedia.org/wiki/Butterfly_effect).
There are much more likely scenarios that
can have disastrous consequences. GitLab CI/CD is being used by companies
building medical, aviation, and automotive software. Continuous Integration is
a mission critical part of software engineering.
### Definition of Done
In Verify, we follow our Development team's [Definition of Done](../merge_request_workflow.md#definition-of-done).
We also want to keep things efficient and [DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) when we answer questions
and solve problems for our users.
For any issue that is resolved because the solution is supported by existing `.gitlab-ci.yml` syntax,
create a project in the [`ci-sample-projects`](https://gitlab.com/gitlab-org/ci-sample-projects) group
that demonstrates the solution.
The project must have:
- A simple title.
- A clear description.
- A `README.md` with:
- A link to the resolved issue. You should also direct users to collaborate in the
resolved issue if any questions arise.
- A link to any relevant documentation.
- A detailed explanation of what the example is doing.
---
stage: Create
group: Source Code
info: Any user with at least the Maintainer role can merge updates to this content.
For details, see https://docs.gitlab.com/development/development_processes/#development-guidelines-review.
title: Project Repository Storage Moves
---
This document was created to help contributors understand the code design of
[project repository storage moves](../../api/project_repository_storage_moves.md).
Read this document before making changes to the code for this feature.
This document is intentionally limited to an overview of how the code is
designed, as code can change often. To understand how a specific part of the
feature works, view the code and the specs. The details here explain how the
major components of the project repository storage moves feature work.
{{< alert type="note" >}}
This document should be updated when parts of the codebase referenced in this
document are updated, removed, or new parts are added.
{{< /alert >}}
## Business logic
- `Projects::RepositoryStorageMove`: Tracks the move, includes state machine.
- Defined in `app/models/projects/repository_storage_move.rb`.
- `RepositoryStorageMovable`: Contains the state machine logic, validators, and some helper methods.
- Defined in `app/models/concerns/repository_storage_movable.rb`.
- `Project`: The project model.
- Defined in `app/models/project.rb`.
- `CanMoveRepositoryStorage`: Contains helper methods that are mixed into `Project`.
- Defined in `app/models/concerns/can_move_repository_storage.rb`.
- `API::ProjectRepositoryStorageMoves`: API class for project repository storage moves.
- Defined in `lib/api/project_repository_storage_moves.rb`.
- `Entities::Projects::RepositoryStorageMove`: API entity for serializing the `Projects::RepositoryStorageMove` model.
- Defined in `lib/api/entities/projects/repository_storage_moves.rb`.
- `Projects::ScheduleBulkRepositoryShardMovesService`: Service to schedule bulk moves.
- Defined in `app/services/projects/schedule_bulk_repository_shard_moves_service.rb`.
- `ScheduleBulkRepositoryShardMovesMethods`: Generic methods for bulk moves.
- Defined in `app/services/concerns/schedule_bulk_repository_shard_moves_methods.rb`.
- `Projects::ScheduleBulkRepositoryShardMovesWorker`: Worker to handle bulk moves.
- Defined in `app/workers/projects/schedule_bulk_repository_shard_moves_worker.rb`.
- `Projects::UpdateRepositoryStorageWorker`: Finds repository storage move and then calls the update storage service.
- Defined in `app/workers/projects/update_repository_storage_worker.rb`.
- `UpdateRepositoryStorageWorker`: Module containing generic logic for `Projects::UpdateRepositoryStorageWorker`.
- Defined in `app/workers/concerns/update_repository_storage_worker.rb`.
- `Projects::UpdateRepositoryStorageService`: Performs the move.
- Defined in `app/services/projects/update_repository_storage_service.rb`.
- `UpdateRepositoryStorageMethods`: Module with generic methods included in `Projects::UpdateRepositoryStorageService`.
- Defined in `app/services/concerns/update_repository_storage_methods.rb`.
- `Projects::UpdateService`: Schedules move if the passed parameters request a move.
- Defined in `app/services/projects/update_service.rb`.
- `PoolRepository`: Ruby object representing a Gitaly `ObjectPool`.
- Defined in `app/models/pool_repository.rb`.
- `ObjectPool::CreateWorker`: Worker to create an `ObjectPool` with `Gitaly`.
- Defined in `app/workers/object_pool/create_worker.rb`.
- `ObjectPool::JoinWorker`: Worker to join an `ObjectPool` with `Gitaly`.
- Defined in `app/workers/object_pool/join_worker.rb`.
- `ObjectPool::ScheduleJoinWorker`: Worker to schedule an `ObjectPool::JoinWorker`.
- Defined in `app/workers/object_pool/schedule_join_worker.rb`.
- `ObjectPool::DestroyWorker`: Worker to destroy an `ObjectPool` with `Gitaly`.
- Defined in `app/workers/object_pool/destroy_worker.rb`.
- `ObjectPoolQueue`: Module to configure `ObjectPool` workers.
- Defined in `app/workers/concerns/object_pool_queue.rb`.
- `Repositories::ReplicateService`: Handles replication of data from one repository to another.
- Defined in `app/services/repositories/replicate_service.rb`.
## Flow
These flowcharts should help explain the flow from the endpoints down to the
models for different features.
### Schedule a repository storage move with the API
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph TD
A[<code>POST /api/:version/project_repository_storage_moves</code>] --> C
B[<code>POST /api/:version/projects/:id/repository_storage_moves</code>] --> D
C[Schedule move for each project in shard] --> D[Set state to scheduled]
D --> E[<code>after_transition callback</code>]
E --> F{<code>set_repository_read_only!</code>}
F -->|success| H[Schedule repository update worker]
F -->|error| G[Set state to failed]
```
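For reference, the per-project endpoint in this flow can be exercised with a plain API call. A sketch, where the token, host, project ID, and destination storage name are placeholders:
```shell
curl --request POST \
  --header "PRIVATE-TOKEN: <your_access_token>" \
  --header "Content-Type: application/json" \
  --data '{"destination_storage_name": "storage2"}' \
  "https://gitlab.example.com/api/v4/projects/1/repository_storage_moves"
```
Scheduling the move transitions the record to `scheduled`, which in turn marks the repository read-only and enqueues the repository update worker, as shown in the chart above.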
### Moving the storage after being scheduled
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph TD
A[Repository update worker scheduled] --> B{State is scheduled?}
B -->|Yes| C[Set state to started]
B -->|No| D[Return success]
C --> E{Same filesystem?}
E -.-> G[Set project repo to writable]
E -->|Yes| F["Mirror repositories (project, wiki, design, & pool)"]
G --> H[Update repo storage value]
H --> I[Set state to finished]
I --> J[Associate project with new pool repository]
J --> K[Unlink old pool repository]
K --> L[Update project repository storage values]
L --> N[Remove old paths if same filesystem]
N --> M[Set state to finished]
```
---
stage: none
group: unassigned
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Users, groups, namespaces, SSH keys.
title: Manage your organization
---
Configure your organization and its users. Determine user roles
and give everyone access to the projects they need.
{{< cards >}}
- [Tutorial: Set up your organization](../tutorials/manage_user/_index.md)
- [Namespaces](../user/namespace/_index.md)
- [Members](../user/project/members/_index.md)
- [Organization (in development)](../user/organization/_index.md)
- [Groups](../user/group/_index.md)
- [Sharing projects and groups](../user/project/members/sharing_projects_groups.md)
- [Enterprise users](../user/enterprise_user/_index.md)
- [Service accounts](../user/profile/service_accounts.md)
- [User account options](../user/profile/_index.md)
- [SSH keys](../user/ssh.md)
- [GitLab.com settings](../user/gitlab_com/_index.md)
{{< /cards >}}
|
https://docs.gitlab.com/build_your_application
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/build_your_application.md
|
2025-08-13
|
doc/topics
|
[
"doc",
"topics"
] |
build_your_application.md
|
none
|
unassigned
|
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
|
Use CI/CD to build your application
|
Runners, jobs, pipelines, variables.
|
Use CI/CD to generate your application.
{{< cards >}}
- [Getting started](../ci/_index.md)
- [CI/CD YAML syntax reference](../ci/yaml/_index.md)
- [Runners](../ci/runners/_index.md)
- [Pipelines](../ci/pipelines/_index.md)
- [Jobs](../ci/jobs/_index.md)
- [CI/CD components](../ci/components/_index.md)
- [CI/CD variables](../ci/variables/_index.md)
- [Pipeline security](../ci/pipelines/pipeline_security.md)
- [Debugging](../ci/debugging.md)
- [Auto DevOps](autodevops/_index.md)
- [Testing](../ci/testing/_index.md)
- [Google cloud integration](../ci/gitlab_google_cloud_integration/_index.md)
- [Migrate to GitLab CI/CD](../ci/migration/plan_a_migration.md)
- [External repository integrations](../ci/ci_cd_for_external_repos/_index.md)
{{< /cards >}}
---
redirect_to: https://about.gitlab.com/blog/2023/07/27/gitlab-flow-duo/
remove_date: '2025-10-08'
---
<!-- markdownlint-disable -->
<!-- vale off -->
This document was moved to [another location](https://about.gitlab.com/blog/2023/07/27/gitlab-flow-duo/).
<!-- This redirect file can be deleted after <2025-10-08>. -->
<!-- Redirects that point to other docs in the same project expire in three months. -->
<!-- Redirects that point to docs in a different project or site (for example, link is not relative and starts with `https:`) expire in one year. -->
<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html -->
---
stage: none
group: unassigned
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Environments, packages, review apps, GitLab Pages.
title: Deploy and release your application
---
Deployment is the step of the software delivery process in which your
application is deployed to its final, target infrastructure.
You can deploy your application internally or to the public.
Preview a release in a review app, and use feature flags to
release features incrementally.
{{< cards >}}
- [Getting started](../user/get_started/get_started_deploy_release.md)
- [Packages and registries](../user/packages/_index.md)
- [Environments](../ci/environments/_index.md)
- [Deployments](../ci/environments/deployments.md)
- [Releases](../user/project/releases/_index.md)
- [Roll out an application incrementally](../ci/environments/incremental_rollouts.md)
- [Feature flags](../operations/feature_flags.md)
- [GitLab Pages](../user/project/pages/_index.md)
{{< /cards >}}
## Related topics
- [Auto DevOps](autodevops/_index.md) is an automated CI/CD-based workflow that supports the entire software
supply chain: build, test, lint, package, deploy, secure, and monitor applications using GitLab CI/CD.
It provides a set of ready-to-use templates that serve the vast majority of use cases.
- [Auto Deploy](autodevops/stages.md#auto-deploy) is the DevOps stage dedicated to software
deployment using GitLab CI/CD. Auto Deploy has built-in support for EC2 and ECS deployments.
- Deploy to Kubernetes clusters by using the [GitLab agent for Kubernetes](../user/clusters/agent/install/_index.md).
- Use Docker images to run AWS commands from GitLab CI/CD, and a template to
facilitate [deployment to AWS](../ci/cloud_deployment/_index.md).
- Use GitLab CI/CD to target any type of infrastructure accessible by GitLab Runner.
[User and pre-defined environment variables](../ci/variables/_index.md) and CI/CD templates
support setting up a vast number of deployment strategies.
- Use GitLab [Cloud Seed](../cloud_seed/_index.md)
to set up deployment credentials and deploy your application to Google Cloud Run with minimal friction.
---
stage: none
group: unassigned
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Repositories, merge requests, remote development.
title: Manage your code
---
Store your source files in a repository and create merge requests. Write, debug, and collaborate on code.
{{< cards >}}
- [Getting started](../user/get_started/get_started_managing_code.md)
- [Repositories](../user/project/repository/_index.md)
- [Merge requests](../user/project/merge_requests/_index.md)
- [Remote development](../user/project/remote_development/_index.md)
{{< /cards >}}
---
stage: Plan
group: Project Management
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Epics, issues, milestones, and labels.
title: Plan and track work
---
Plan your work by creating requirements, issues, and epics. Schedule work
with milestones and track your team's time. Learn how to save time with
quick actions, see how GitLab renders Markdown text, and learn how to
use Git to interact with GitLab.
<!-- vale gitlab_base.Spelling = NO -->
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For a thorough demo of Plan features, see
[Multi-team planning with GitLab Ultimate](https://www.youtube.com/watch?v=KmASFwSap7c).
In this video, Gabe describes a use case of a multi-team organization that uses GitLab
with Scaled Agile Framework (SAFe).
Alternatively, to learn how to map SAFe to what you can do in GitLab, see
[SAFe without silos in GitLab](https://about.gitlab.com/blog/2025/04/08/safe-without-silos-in-gitlab/).
<!-- vale gitlab_base.Spelling = YES -->
{{< cards >}}
- [Getting started](../user/get_started/get_started_planning_work.md)
- [Tutorial: Use GitLab for scrum](../tutorials/scrum_events/_index.md)
- [Tutorial: Use GitLab for Kanban](../tutorials/kanban/_index.md)
- [Labels](../user/project/labels.md)
- [Iterations](../user/group/iterations/_index.md)
- [Milestones](../user/project/milestones/_index.md)
- [Issues](../user/project/issues/_index.md)
- [Issue boards](../user/project/issue_board.md)
- [Comments and threads](../user/discussions/_index.md)
- [Tasks](../user/tasks.md)
- [Requirements](../user/project/requirements/_index.md)
- [Time tracking](../user/project/time_tracking.md)
- [CRM](../user/crm/_index.md)
- [Wikis](../user/project/wiki/_index.md)
- [Epics](../user/group/epics/_index.md)
- [Roadmaps](../user/group/roadmap/_index.md)
- [Objectives and key results](../user/okrs.md)
- [To-Do List](../user/todos.md)
- [Keyboard shortcuts](../user/shortcuts.md)
- [Quick actions](../user/project/quick_actions.md)
- [Markdown](../user/markdown.md)
{{< /cards >}}
---
stage: CI
group: Runner
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Runner Fleet.
title: GitLab Runner fleet configuration and best practices
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed
{{< /details >}}
Set up and manage your GitLab Runner infrastructure with proven strategies and recommendations.
Use these recommendations to develop a GitLab Runner deployment strategy based on your organization's requirements.
GitLab does not make specific recommendations about the type of infrastructure you should use.
These best practices provide insights from operating the runner fleet on GitLab.com,
which processes millions of CI/CD jobs each month.
---
stage: CI
group: Runner
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Runner Fleet.
title: Design and configure a GitLab Runner fleet on Google Kubernetes Engine
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed
{{< /details >}}
Use these recommendations to analyze your CI/CD build requirements to design, configure,
and validate a GitLab Runner fleet hosted on Google Kubernetes Engine (GKE).
The following diagram illustrates the path of your runner fleet implementation journey.
The guide follows these steps:

You can use this framework to plan a runner deployment for a single group or a GitLab instance that serves your entire organization.
This framework includes the following steps:
1. [Assess the expected CI/CD workloads](#assess-the-expected-cicd-workloads)
1. [Plan the runner fleet configuration](#plan-the-runner-fleet-configuration)
1. [Deploy the runner on GKE](#deploy-the-runner-on-gke)
1. [Optimize](#optimize)
## Assess the expected CI/CD workloads
In this phase, you gather the CI/CD build requirements of the development teams that you support. If applicable, create an inventory of the programming, scripting, and markup languages that are in use.
You might be supporting multiple development teams, various programming languages,
and build requirements. Start with one team, one project, and one set of CI/CD build
requirements for your first in-depth analysis.
To assess expected CI/CD workloads:
- Estimate the CI/CD job demand that you expect to support (hourly, daily, weekly).
- Estimate the CPU and RAM resource requirements for a representative sample CI/CD job for a specific project. These estimates help identify the different profiles you might support. The characteristics of those profiles are important to identify the right GKE cluster needed to support your requirements. Refer to the following example to determine the CPU and RAM requirements.
- Determine if you have any security or policy requirements that require you to segment access to certain runners by groups or projects.
### Estimate the CPU and RAM requirements for a CI/CD job
The CPU and RAM resource requirements vary depending on factors like the type of programming language or the type of CI/CD job (build, integration tests, unit tests, security scans). The following section describes a method to gather CI/CD job CPU and resource requirements. You can adopt and build on this approach for your own needs.
This example runs a CI/CD job similar to the one defined in the FastAPI project fork: [ra-group / fastapi · GitLab](https://gitlab.com/ra-group2/fastapi).
The job uses a Python image, downloads the project's requirements, and runs the existing unit tests.
The `.gitlab-ci.yml` for the job is as follows:
```yaml
tests:
  image: python:3.11.10-bookworm
  parallel: 25
  script:
    - pip install -r requirements.txt
    - pytest
```
To identify the compute and RAM resources needed, use Docker to:
- Create a specific image that uses the FastAPI fork and the CI/CD job script as the entrypoint.
- Run a container with the built image and monitor resource usage.
Complete the following steps to identify the compute and RAM resources needed:
1. Create a script file in your project that contains all the CI commands. The script file is named `entrypoint.sh`.
```shell
#!/bin/bash
cd /fastapi || exit
pip install -r requirements.txt
pytest
```
1. Create a Dockerfile to create an image where the `entrypoint.sh` file runs the CI script.
```dockerfile
FROM python:3.11.10-bookworm
RUN mkdir /fastapi
COPY . /fastapi
RUN chmod +x /fastapi/entrypoint.sh
CMD [ "bash", "/fastapi/entrypoint.sh" ]
```
1. Build the image. To simplify the process, perform all operations such as build, store,
and run the image locally. This approach eliminates the need for an online registry to pull and push the image.
```shell
❯ docker build . -t my-project_dir/fastapi:testing
...
Successfully tagged my-project_dir/fastapi:testing
```
1. Run a container with the built image and simultaneously monitor resource usage during the container execution. Create a script named `metrics.sh` with the following content:
```shell
#!/bin/bash
container_id=$(docker run -d --rm my-project_dir/fastapi:testing)
while true; do
  echo "Collecting metrics..."
  metrics=$(docker stats --no-trunc --no-stream --format "table {{.ID}}\t{{.CPUPerc}}\t{{.MemUsage}}" | grep "$container_id")
  if [ -z "$metrics" ]; then
    exit 0
  fi
  echo "Saving metrics..."
  echo "$metrics" >> metrics.log
  sleep 1
done
```
This script runs a detached container with the image built. The container ID is then used to collect its `CPU` and `Memory` usage until the container exits upon successful completion. The metrics collected are saved in a file called `metrics.log`.
{{< alert type="note" >}}
In the example, the CI/CD job is short-lived, so the sleep between each container poll is set to one second. Adjust this value to better suit your needs.
{{< /alert >}}
1. Analyze the `metrics.log` file to identify the peak usage of the test container.
In the example, the maximum CPU usage is `107.50%` and the maximum memory usage is `303.1Mi`.
```log
223e93dd05c6 94.98% 83.79MiB / 15.58GiB
223e93dd05c6 28.27% 85.4MiB / 15.58GiB
223e93dd05c6 53.92% 121.8MiB / 15.58GiB
223e93dd05c6 70.73% 171.9MiB / 15.58GiB
223e93dd05c6 20.78% 177.2MiB / 15.58GiB
223e93dd05c6 26.19% 180.3MiB / 15.58GiB
223e93dd05c6 77.04% 224.1MiB / 15.58GiB
223e93dd05c6 97.16% 226.5MiB / 15.58GiB
223e93dd05c6 98.52% 259MiB / 15.58GiB
223e93dd05c6 98.78% 303.1MiB / 15.58GiB
223e93dd05c6 100.03% 159.8MiB / 15.58GiB
223e93dd05c6 103.97% 204MiB / 15.58GiB
223e93dd05c6 107.50% 207.8MiB / 15.58GiB
223e93dd05c6 105.96% 215.7MiB / 15.58GiB
223e93dd05c6 101.88% 226.2MiB / 15.58GiB
223e93dd05c6 100.44% 226.7MiB / 15.58GiB
223e93dd05c6 100.20% 226.9MiB / 15.58GiB
223e93dd05c6 100.60% 227.6MiB / 15.58GiB
223e93dd05c6 100.46% 228MiB / 15.58GiB
```
### Analyzing the metrics collected
Based on the metrics collected, for this job profile, you can limit the Kubernetes executor job to
`1 CPU` and `~304 Mi` of memory. Although this conclusion is accurate, it might not be practical for all use cases.
If you use a cluster with a node pool of three `e2-standard-4` nodes to run jobs, the `1 CPU` limit allows only **12 jobs** to run simultaneously (an `e2-standard-4` node has **4 vCPU** and **16 GB** of memory). Additional jobs wait for the running jobs to complete and free up the resources before starting.
The memory limit is critical because Kubernetes terminates any pod that uses more memory than the limit set or than is available on the cluster. The CPU limit is more flexible, but it impacts the job duration: a lower CPU limit increases the time it takes for a job to complete. In the previous example, setting the CPU limit to `250m` (or `0.25`) instead of `1` increased the job duration by four times (from about two minutes to eight to ten minutes).
As the metrics collection method uses a polling mechanism, you should round up the maximum usage identified. For example, instead of `303 Mi` for the memory usage, round it to `400 Mi`.
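To pull the peak values out of `metrics.log` quickly, a small sketch like the following can help. It assumes every sample reports memory in `MiB`, as in the log above:

```shell
# Print the highest CPU and memory samples recorded in metrics.log.
# Each line has the format: <container id> <cpu%> <memory used> / <memory total>
awk '{
  cpu = $2; sub("%", "", cpu)
  mem = $3; sub("MiB", "", mem)
  if (cpu + 0 > max_cpu) max_cpu = cpu + 0
  if (mem + 0 > max_mem) max_mem = mem + 0
} END {
  printf "Peak CPU: %.2f%%  Peak memory: %.1f MiB\n", max_cpu, max_mem
}' metrics.log
```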
Important considerations for the previous example:
- The metrics were collected on the local machine, which doesn't have the same CPU configuration as a Google Kubernetes Engine cluster. However, these metrics were validated by monitoring them on a Kubernetes cluster with an `e2-standard-4` node.
- To get an accurate representation of those metrics, run the tests described in the [Assess phase](#assess-the-expected-cicd-workloads) on a Google Compute Engine VM (see the sketch after this list).
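A minimal sketch for creating such a VM, assuming the `gcloud` CLI is configured; the instance name, image, and zone are illustrative:

```shell
# Create a throwaway e2-standard-4 VM to re-run the Docker measurements
# on the same machine family the GKE node pool uses.
gcloud compute instances create runner-benchmark \
  --machine-type=e2-standard-4 \
  --image-family=debian-12 \
  --image-project=debian-cloud \
  --zone=us-central1-a

# Delete the VM when done to avoid unnecessary cost.
gcloud compute instances delete runner-benchmark --zone=us-central1-a
```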
## Plan the runner fleet configuration
In the planning phase, map out the right runner fleet configuration for your organization. Consider the runner scope (instance, group, project) and the Kubernetes cluster configuration based on:
- Your assessment of the CI/CD job resource demand
- Your inventory of CI/CD job types
### Runner scope
To plan runner scope, consider the following questions:
- Do you want project owners and group owners to create and manage their own runners?
  - By default, project and group owners can create runner configurations and register runners to a project or group in GitLab.
  - This design allows developers to create a build environment quickly, which reduces developer friction when getting started with GitLab CI/CD. However, in large organizations, this approach may lead to many underutilized or unused runners across the environment.
- Does your organization have security or other policies that require segmenting access to certain types of runners to specific groups or projects?
The most straightforward way to deploy a runner in a GitLab Self-Managed environment is to create it for an instance. Runners scoped to an instance are available to all groups and projects by default.
If you can meet all your organization's needs with instance runners, this deployment pattern is the most efficient. It ensures that you can operate a CI/CD build fleet at scale efficiently and cost effectively.
If there are requirements to segment access to specific runners to certain groups or projects, incorporate those into your planning process.
#### Example runner fleet configuration - Instance runners
The configuration in the table demonstrates the flexibility available when configuring a runner fleet for your organization. This example uses multiple runners with different instance sizes and different job tags. These runners enable you to support different types of CI/CD jobs, each with specific CPU and RAM resource requirements. However, this might not be the most efficient pattern when using Kubernetes.
| Runner Type | Runner Tag | Scope | Count of Runner type to offer | Runner Worker Specification | Runner Host Environment | Environment Configuration |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Instance | ci-runner-small | Available to run CI/CD jobs for all groups and projects by default. | 5 | 2 vCPU, 8 GB RAM | Kubernetes | → 3 nodes <br> → Runner worker compute node \= **e2-standard-2** |
| Instance | ci-runner-medium | Available to run CI/CD jobs for all groups and projects by default. | 2 | 4 vCPU, 16 GB RAM | Kubernetes | → 3 nodes <br> → Runner worker compute node \= **e2-standard-4** |
| Instance | ci-runner-large | Available to run CI/CD jobs for all groups and projects by default. | 1 | 8 vCPU, 32 GB RAM | Kubernetes | → 3 nodes <br> → Runner worker compute node \= **e2-standard-8** |
In the runner fleet configuration example, there are a total of three runner configurations and eight runners actively running CI/CD jobs.
With the Kubernetes executor, you can use the Kubernetes scheduler and overwrite container resources.
In theory, you can deploy a single GitLab Runner on a Kubernetes cluster with adequate resources. You
can then overwrite container resources to select the appropriate compute type for each CI/CD job.
Implementing this pattern reduces the number of separate runner configurations you need to deploy and operate.
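For example, a single runner can serve jobs of very different sizes if each job overwrites its own container resources. A sketch, with illustrative job names, images, and values:

```yaml
# One runner, per-job resource overwrites through CI/CD variables
unit-tests:
  image: python:3.11
  variables:
    KUBERNETES_CPU_LIMIT: "0.5"
    KUBERNETES_MEMORY_LIMIT: "512Mi"
  script:
    - pytest

integration-tests:
  image: python:3.11
  variables:
    KUBERNETES_CPU_LIMIT: "4"
    KUBERNETES_MEMORY_LIMIT: "8Gi"
  script:
    - pytest tests/integration
```

The overwrites are only honored up to the `*_overwrite_max_allowed` values set in the runner's `config.toml`.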
### Best practices
- Always dedicate a node pool to the runner managers.
  - Log processing and cache or artifact management can be CPU intensive.
- Always set default limits (CPU and memory for the build, helper, and service containers) in the `config.toml` file. A sketch follows this list.
- Always allow resource overwrites by setting the maximum allowed overwrite values in the `config.toml` file.
- In the job definition (`.gitlab-ci.yml`), specify the right limits needed by the jobs.
  - If not specified, the default values set in the `config.toml` file are used.
  - If a container exceeds its memory limit, the system automatically terminates it using the Out of Memory (OOM) kill process.
- Use the feature flags `FF_RETRIEVE_POD_WARNING_EVENTS` and `FF_PRINT_POD_EVENTS`. For more details, see the [feature flags documentation](https://docs.gitlab.com/runner/configuration/feature-flags.html).
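A minimal sketch of these defaults and overwrite caps in a runner's `config.toml`; the runner name and all values are illustrative:

```toml
[[runners]]
  name     = "k8s-runner"
  executor = "kubernetes"
  [runners.kubernetes]
    # Defaults applied when a job doesn't overwrite its resources
    cpu_limit           = "500m"
    memory_limit        = "512Mi"
    helper_cpu_limit    = "150m"
    helper_memory_limit = "150Mi"
    # Upper bounds a job may request through KUBERNETES_* variables
    cpu_limit_overwrite_max_allowed           = "4"
    memory_limit_overwrite_max_allowed        = "8Gi"
    helper_cpu_limit_overwrite_max_allowed    = "250m"
    helper_memory_limit_overwrite_max_allowed = "250Mi"
```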
## Deploy the runner on GKE
When you are ready to install GitLab Runner on a Google Kubernetes cluster, you have many options. If you have already created your cluster on GKE, you can use either the GitLab Runner Helm chart or the Operator to install the runner on the cluster.
If you are yet to set up the cluster on GKE, GitLab provides the GitLab Runner Infrastructure Toolkit (GRIT), which simultaneously:
- Creates a multi-node-pool GKE cluster (**Standard Edition** and **Standard Mode**).
- Installs GitLab Runner on the cluster using the GitLab Runner Kubernetes operator.
The following example uses GRIT to deploy the Google Kubernetes cluster and GitLab Runner Manager.
To configure the cluster and GitLab Runner properly, consider the following information:
- **How many job types do I need to cover?** This information comes from the assess phase, which aggregates metrics and identifies the number of resulting groups, considering organizational constraints. A **job type** is a collection of categorized jobs identified during the assess phase. This categorization is based on the maximum resources needed by the job.
- **How many GitLab Runner Managers do I need to run?** This information comes from the plan phase. If the organization manages projects separately, apply this framework to each project individually. This approach is relevant only when multiple job profiles are identified (for the entire organization or for a specific project), and they are all handled by an individual or a fleet of GitLab Runners. A basic configuration typically uses one GitLab Runner Manager per GKE cluster.
- **What is the estimated maximum number of concurrent CI/CD jobs?** This is an estimate of the maximum number of CI/CD jobs that run at any point in time. You need it when configuring the GitLab Runner Manager, for example to decide how long it waits during the `Prepare` stage for a job pod to be scheduled on a node with limited available resources.
### Real life applications for the FastAPI fork
For the FastAPI fork, consider the following information:
- **How many job profiles do I need to cover?** We only have one job profile, with the following characteristics: `1 CPU` and `303 Mi` of memory. As explained in the [Analyzing the metrics collected](#analyzing-the-metrics-collected) section, we change those raw values to the following:
  - `400 Mi` for the memory limit instead of `303 Mi`, to avoid any job failure due to the memory limits.
  - `0.20` for the CPU instead of `1 CPU`. We don't mind our jobs taking more time to complete, and prefer to run more of them on the same cluster.
- **How many GitLab Runner Managers do I need to run?** Only one GitLab Runner Manager is enough for our tests.
- **What is the expected Workload?** We want to run up to 20 jobs simultaneously at any time.
Based on these inputs, any GKE cluster with the following minimum characteristics should be enough (a quick sanity check follows the list):
- Minimum CPU: **(0.20 + helper CPU usage) * number of jobs simultaneously**. In our example, we get **7 vCPU** with the limit for the helper container set to **0.15 CPU**.
- Minimum Memory: **(400Mi + helper memory usage) * number of jobs simultaneously**. In our example, we get at least **10 Gi** with the limit for the helper set to **100 Mi**.
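As a quick sanity check of those formulas with the values from this example (a sketch using `bc`):

```shell
# (job CPU limit + helper CPU limit) * number of simultaneous jobs
echo "(0.20 + 0.15) * 20" | bc    # => 7.00 vCPU

# (job memory limit + helper memory limit) * number of simultaneous jobs, in Mi
echo "(400 + 100) * 20" | bc      # => 10000 Mi, about 10 Gi
```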
Other characteristics, such as the minimum storage required, should also be considered. However, we don't take them into consideration in this example.
Possible configurations for our GKE cluster (both configurations allow more than **20 jobs** to run simultaneously):
- GKE cluster with a node pool of `3 e2-standard-4` nodes, for a total of `12 vCPU` and `48 GiB` of memory
- GKE cluster with a node pool of only one `e2-standard-8` node, for a total of `8 vCPU` and `32 GiB` of memory
For the sake of our example, we use the first configuration. To prevent the GitLab Runner Manager's log processing from impacting the jobs, use a dedicated node pool where GitLab Runner is installed.
#### GKE GRIT configuration
The resulting GKE configuration for GRIT looks similar to this:
```terraform
google_project = "GCLOUD_PROJECT"
google_region  = "GCLOUD_REGION"
google_zone    = "GCLOUD_ZONE"

name = "my-grit-gke-cluster"

node_pools = {
  "runner-manager" = {
    node_count = 1,
    node_config = {
      machine_type = "e2-standard-2",
      image_type   = "cos_containerd", # Linux OS containers only. Change to windows_ltsc_containerd for Windows OS containers.
      disk_size_gb = 50,
      disk_type    = "pd-balanced",
      labels = {
        "app" = "gitlab-runner",
      }
    },
  },
  "worker-pool" = {
    node_count = 3,
    node_config = {
      machine_type = "e2-standard-4", # 4 vCPU, 16 GB each
      image_type   = "cos_containerd", # Linux OS containers only. Change to windows_ltsc_containerd for Windows OS containers.
      disk_size_gb = 150,
      disk_type    = "pd-balanced",
      labels = {
        "app" = "gitlab-runner-job"
      }
    },
  },
}
```
In the previous configuration:
- The `runner-manager` block refers to the node pool where GitLab Runner is installed. In our example, an `e2-standard-2` node is more than enough.
- The `labels` section in the `runner-manager` block is used when installing GitLab Runner: a node selector is configured through the operator configuration to make sure that GitLab Runner is installed on a node of this node pool.
- The `worker-pool` block refers to the node pool where the CI/CD job pods are created. The configuration provided creates a node pool of `3 e2-standard-4` nodes labeled `"app" = "gitlab-runner-job"` to host the job pods.
- The `image_type` parameter can be used to set the image used by the nodes. It can be set to `windows_ltsc_containerd` if your workload relies mostly on Windows images.
Here is an illustration of this configuration:

#### GitLab Runner GRIT configuration
The resulting GitLab Runner configuration for GRIT looks similar to this:
```terraform
gitlab_pat        = "glpat-REDACTED"
gitlab_project_id = GITLAB_PROJECT_ID

runner_description = "my-grit-gitlab-runner"
runner_image       = "registry.gitlab.com/gitlab-org/ci-cd/gitlab-runner-ubi-images/gitlab-runner-ocp:amd64-v17.3.1"
helper_image       = "registry.gitlab.com/gitlab-org/ci-cd/gitlab-runner-ubi-images/gitlab-runner-helper-ocp:x86_64-v17.3.1"

concurrent     = 20
check_interval = 1
runner_tags    = ["my-custom-tag"]

config_template = <<EOT
[[runners]]
  name  = "my-grit-gitlab-runner"
  shell = "bash"
  environment = [
    "FF_RETRIEVE_POD_WARNING_EVENTS=true",
    "FF_PRINT_POD_EVENTS=true",
  ]
  [runners.kubernetes]
    image               = "alpine"
    cpu_limit           = "0.25"
    memory_limit        = "400Mi"
    helper_cpu_limit    = "150m"
    helper_memory_limit = "150Mi"
    cpu_limit_overwrite_max_allowed           = "0.25"
    memory_limit_overwrite_max_allowed        = "400Mi"
    helper_cpu_limit_overwrite_max_allowed    = "150m"
    helper_memory_limit_overwrite_max_allowed = "150Mi"
    [runners.kubernetes.node_selector]
      "app" = "gitlab-runner-job"
EOT

pod_spec = [
  {
    name      = "selector",
    patchType = "merge",
    patch     = <<EOT
nodeSelector:
  app: "gitlab-runner"
EOT
  }
]
```
In the previous configuration:
- The `pod_spec` parameter allows us to set a node selector for the pod running GitLab Runner. In the configuration, the node selector is set to `"app" = "gitlab-runner"` to ensure that GitLab Runner is installed on the runner-manager node pool.
- The `config_template` parameter provides default limits for all jobs run by the GitLab Runner Manager. It also allows those limits to be overwritten, as long as the values set are not greater than the defaults.
- The feature flags `FF_RETRIEVE_POD_WARNING_EVENTS` and `FF_PRINT_POD_EVENTS` are also set to ease debugging in the event of a job failure. See the [feature flag documentation](https://docs.gitlab.com/runner/configuration/feature-flags.html) for more details. A sketch of applying these files follows the list.
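A minimal sketch of applying the two GRIT configurations with Terraform, assuming they are saved as `.tfvars` files in a GRIT-based working directory; the directory and file names are illustrative:

```shell
cd my-grit-deployment

# Validate and preview the changes before creating anything
terraform init
terraform plan -var-file="gke.tfvars" -var-file="runner.tfvars" -out=fleet.plan

# Create the GKE cluster and install the runner
terraform apply fleet.plan
```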
### Real life applications for a hypothetical use case
Take the following information into consideration:
- **How many job profiles do I need to cover?** Two profiles (the specifications provided take the helper limits into account):
  - Medium jobs: `300m CPU` and `200 MiB`
  - CPU-intensive jobs: `1 CPU` and `1 GiB`
- **How many GitLab Runner Managers do I need to run?** One.
- **What is the expected workload?**
  - Up to **50 medium** jobs simultaneously
  - Up to **25 CPU-intensive** jobs simultaneously
#### GKE configuration
- Needs for medium jobs:
  - CPU: 300m * 50 = 15 CPU
  - Memory: 200 MiB * 50 = 10 GiB
- Needs for CPU-intensive jobs:
  - CPU: 1 * 25 = 25 CPU
  - Memory: 1 GiB * 25 = 25 GiB
The GKE cluster should have:
- A node pool for GitLab Runner Manager (assuming that its log processing is not demanding): **1 e2-standard-2** node
- A node pool for medium jobs: **4 e2-standard-4** nodes (`16 vCPU` and `64 GiB` of memory in total)
- A node pool for CPU-intensive jobs: **1 e2-highcpu-32** node (`32 vCPU` and `32 GiB` of memory)
```terraform
google_project = "GCLOUD_PROJECT"
google_region  = "GCLOUD_REGION"
google_zone    = "GCLOUD_ZONE"

name = "my-grit-gke-cluster"

node_pools = {
  "runner-manager" = {
    node_count = 1,
    node_config = {
      machine_type = "e2-standard-2",
      image_type   = "cos_containerd", # Linux OS containers only. Change to windows_ltsc_containerd for Windows OS containers.
      disk_size_gb = 50,
      disk_type    = "pd-balanced",
      labels = {
        "app" = "gitlab-runner",
      }
    },
  },
  "medium-pool" = {
    node_count = 4,
    node_config = {
      machine_type = "e2-standard-4", # 4 vCPU, 16 GB each
      image_type   = "cos_containerd", # Linux OS containers only. Change to windows_ltsc_containerd for Windows OS containers.
      disk_size_gb = 150,
      disk_type    = "pd-balanced",
      labels = {
        "app" = "gitlab-runner-job"
      }
    },
  },
  "cpu-intensive-pool" = {
    node_count = 1,
    node_config = {
      machine_type = "e2-highcpu-32", # 32 vCPU, 32 GB each
      image_type   = "cos_containerd",
      disk_size_gb = 150,
      disk_type    = "pd-balanced",
      labels = {
        "app" = "gitlab-runner-job"
      }
    },
  },
}
```
#### GitLab Runner configuration
The current implementation of GRIT doesn't allow the installation of more than one runner at a time. The `config_template` provided doesn't set configurations like the node selector and other limits, as done in the previous example. Instead, a simple configuration sets the maximum allowed overwrite values to fit the CPU-intensive jobs, and each job sets the correct values in its `.gitlab-ci.yml` file. The resulting GitLab Runner configuration looks similar to this:
```terraform
gitlab_pat        = "glpat-REDACTED"
gitlab_project_id = GITLAB_PROJECT_ID

runner_description = "my-grit-gitlab-runner"
runner_image       = "registry.gitlab.com/gitlab-org/ci-cd/gitlab-runner-ubi-images/gitlab-runner-ocp:amd64-v17.3.1"
helper_image       = "registry.gitlab.com/gitlab-org/ci-cd/gitlab-runner-ubi-images/gitlab-runner-helper-ocp:x86_64-v17.3.1"

concurrent     = 100
check_interval = 1
runner_tags    = ["my-custom-tag"]

config_template = <<EOT
[[runners]]
  name  = "my-grit-gitlab-runner"
  shell = "bash"
  environment = [
    "FF_RETRIEVE_POD_WARNING_EVENTS=true",
    "FF_PRINT_POD_EVENTS=true",
  ]
  [runners.kubernetes]
    image = "alpine"
    cpu_limit_overwrite_max_allowed           = "0.75"
    memory_limit_overwrite_max_allowed        = "900Mi"
    helper_cpu_limit_overwrite_max_allowed    = "250m"
    helper_memory_limit_overwrite_max_allowed = "100Mi"
EOT

pod_spec = [
  {
    name      = "selector",
    patchType = "merge",
    patch     = <<EOT
nodeSelector:
  app: "gitlab-runner"
EOT
  }
]
```
The `.gitlab-ci.yml` file looks similar to this:
- For medium jobs:
```yaml
variables:
  KUBERNETES_CPU_LIMIT: "200m"
  KUBERNETES_MEMORY_LIMIT: "100Mi"
  KUBERNETES_HELPER_CPU_LIMIT: "100m"
  KUBERNETES_HELPER_MEMORY_LIMIT: "100Mi"

tests:
  image: some-image:latest
  script:
    - command_1
    - command_2
    # ...
    - command_n
  tags:
    - my-custom-tag
```
- For CPU-intensive jobs:
```yaml
variables:
  KUBERNETES_CPU_LIMIT: "0.75"
  KUBERNETES_MEMORY_LIMIT: "900Mi"
  KUBERNETES_HELPER_CPU_LIMIT: "150m"
  KUBERNETES_HELPER_MEMORY_LIMIT: "100Mi"

tests:
  image: custom-cpu-intensive-image:latest
  script:
    - cpu_intensive_command_1
    - cpu_intensive_command_2
    # ...
    - cpu_intensive_command_n
  tags:
    - my-custom-tag
```
{{< alert type="note" >}}
For an easier configuration, use one GitLab Runner per job profile on each cluster. This approach is recommended until GRIT supports either multiple GitLab Runner installations on the same cluster or multiple `[[runners]]` sections in the `config.toml` template.
{{< /alert >}}
### Set up monitoring and observability
As a final step in the deployment phase, you must establish a solution to monitor the runner host environment and GitLab Runner. The infrastructure-level, runner, and CI/CD job metrics provide insights into the efficiency and reliability of your CI/CD build infrastructure. They also provide the insight needed to tune and optimize the Kubernetes cluster, GitLab Runner, and CI/CD job configuration.
#### Monitoring best practices
- Monitor job-level metrics: job duration, job success and failure rates.
  - Analyze the job-level metrics to understand which CI/CD jobs run most frequently and consume the most compute and RAM resources in aggregate. That job profile is a good starting point for evaluating optimization opportunities.
- Monitor the Kubernetes cluster resource utilization:
  - CPU utilization
  - Memory utilization
  - Network utilization
  - Disk utilization
See the [Dedicated GitLab Runner monitoring page](https://docs.gitlab.com/runner/monitoring/) for more details on how to proceed.
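For ad hoc spot checks of cluster utilization, a quick sketch using `kubectl`, assuming the Kubernetes Metrics Server is installed; the namespace name is illustrative:

```shell
# CPU and memory utilization per node
kubectl top nodes

# Resource usage of the runner manager and the job pods
kubectl top pods --namespace gitlab-runner
```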
## Optimize
Optimizing a CI/CD build environment is an ongoing process. The type and volume of CI/CD jobs are constantly evolving, requiring your active engagement.
You likely have specific organizational goals for CI/CD and CI/CD build infrastructure. Therefore, the first step is to define your optimization requirements and quantifiable objectives.
The following is an example set of optimization requirements from across our customer base:
- CI/CD job startup times
- CI/CD job duration
- CI/CD job reliability
- CI/CD compute cost optimization
The next step is to start analyzing the CI/CD metrics in conjunction with the infrastructure metrics for the Kubernetes cluster. Critical correlations to analyze include:
- CPU utilization by Kubernetes namespace
- Memory utilization by Kubernetes namespace
- CPU utilization by node
- Memory utilization by node
- CI/CD job failure rates
Typically on Kubernetes, high CI/CD job failure rates (independent of failures due to flaky tests) are attributed to resource constraints on the Kubernetes cluster. Analyze these metrics to achieve the optimal balance of CI/CD job start times, job duration, job reliability, and infrastructure resource utilization in your Kubernetes cluster configuration.
### Best practices
- Establish a process to categorize CI/CD jobs across your organization by job type.
- Establish a job type categorization framework to simplify both the monitoring configuration and the approach to optimizing the GitLab CI/CD build infrastructure on Kubernetes for each CI/CD job type.
- Assigning each job type its own node pool on the cluster might result in the best balance of CI/CD job performance, job reliability, and infrastructure utilization.
Using Kubernetes as the infrastructure stack for the CI/CD build environment offers significant benefits. However, it requires continuous monitoring and optimization of the Kubernetes infrastructure. After you establish an observability and optimization framework, you can support millions of CI/CD jobs per month, eliminate resource contention, and achieve deterministic CI/CD job runs and optimal resource usage. These improvements result in operational efficiency and cost optimization.
## Next steps
Next steps to provide a better user experience:
- Support for multiple GitLab Runner installations on the same cluster. This enables better management of scenarios where multiple job profiles must be handled, because each GitLab Runner can be configured to prevent any misuse of the resources.
- Support for GKE node autoscaling. This allows GKE to scale up and down according to the workload, which saves money.
- Job metrics monitoring. This enables administrators to better optimize their cluster and GitLab Runner based on actual usage.
|
---
stage: CI
group: Runner
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Runner Fleet.
title: Design and configure a GitLab Runner fleet on Google Kubernetes Engine
breadcrumbs:
- doc
- topics
- runner_fleet_design_guides
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed
{{< /details >}}
Use these recommendations to analyze your CI/CD build requirements to design, configure,
and validate a GitLab Runner fleet hosted on Google Kubernetes Engine (GKE).
The following diagram illustrates the path of your runner fleet implementation journey.
The guide follows these steps:

You can use this framework to plan a runner deployment for a single group or a GitLab instance that serves your entire organization.
This framework includes the following steps:
1. [Assess the expected CI/CD workloads](#assess-the-expected-cicd-workloads)
1. [Plan the Runner fleet configuration](#plan-the-runner-fleet-configuration)
1. [Deploy the runner on GKE](#deploy-the-runner-on-gke)
1. [Optimize](#optimize)
## Assess the expected CI/CD workloads
In this phase, you gather the CI/CD build requirements of the development teams that you support. If applicable, create an inventory of the programming, scripting, and markup languages that are in use.
You might be supporting multiple development teams, various programming languages,
and build requirements. Start with one team, one project, and one set of CI/CD build
requirements for the first set of in-depth analysis.
To assess expected CI/CD workloads:
- Estimate the CI/CD job demand that you expect to support (hourly, daily, weekly).
- Estimate the CPU and RAM resource requirements for a representative sample CI/CD job for a specific project. These estimates help identify the different profiles you might support. The characteristics of those profiles are important to identify the right GKE Cluster needed to support your requirements. Refer to this example on how to determine the CPU and RAM requirements.
- Determine if you have any security or policy requirements that require you to segment access to certain runners by groups or projects.
### Estimate the CPU and RAM requirements for a CI/CD job
The CPU and RAM resource requirements vary depending on factors like the type of programming language or the type of CI/CD job (build, integration tests, unit tests, security scans). The following section describes a method to gather CI/CD job CPU and resource requirements. You can adopt and build on this approach for your own needs.
For example, to run a CI/CD job similar to the one defined in the FastAPI project fork: [ra-group / fastapi · GitLab](https://gitlab.com/ra-group2/fastapi).
The job in this example uses a Python image, downloads the project's requirements, and runs the existing unit tests.
The `.gitlab-ci.yml` for the job is as follows:
```yaml
tests:
image: python:3.11.10-bookworm
parallel: 25
script:
- pip install -r requirements.txt
- pytest
```
To identify the compute and RAM resources needed, use Docker to:
- Create a specific image that uses the FastAPI fork and the CI/CD job script as entrypoint.
- Run a container with the built image and monitor resource usage.
Complete the following steps to identify the compute and RAM resources needed:
1. Create a script file in your project that contains all the CI commands. The script file is named `entrypoint.sh`.
```shell
#!/bin/bash
cd /fastapi || exit
pip install -r requirements.txt
pytest
1. Create a Dockerfile to create an image where the `entrypoint.sh` file runs the CI script.
```dockerfile
FROM python:3.11.10-bookworm
RUN mkdir /fastapi
COPY . /fastapi
RUN chmod +x /fastapi/entrypoint.sh
CMD [ "bash", "/fastapi/entrypoint.sh" ]
```
1. Build the image. To simplify the process, perform all operations such as build, store,
and run the image locally. This approach eliminates the need of an online registry to pull and push the image.
```shell
❯ docker build . -t my-project_dir/fastapi:testing
...
Successfully tagged my-project_dir/fastapi:testing
```
1. Run a container with the built image and simultaneously monitor the resources usage during the container execution. Create a script named `metrics.sh` with the following command:
```shell
#! /bin/bash
container_id=$(docker run -d --rm my-project_dir/fastapi:testing)
while true; do
echo "Collecting metrics..."
metrics=$(docker stats --no-trunc --no-stream --format "table {{.ID}}\t{{.CPUPerc}}\t{{.MemUsage}}" | grep "$container_id")
if [ -z "$metrics" ]; then
exit 0
fi
echo "Saving metrics..."
echo "$metrics" >> metrics.log
sleep 1
done
```
This script runs a detached container with the image built. The container ID is then used to collect its `CPU` and `Memory` usage until the container exits upon successful completion. The metrics collected are saved in a file called `metrics.log`.
{{< alert type="note" >}}
In the example, the CI/CD job is short-lived, so the sleep between each container poll is set to one second. Adjust this value to better suit your needs.
{{< /alert >}}
1. Analyze the `metrics.log` file to identify the peak usage of the test container.
In the example, the maximum CPU usage is `107.50%` and the maximum memory usage is `303.1Mi`.
```log
223e93dd05c6 94.98% 83.79MiB / 15.58GiB
223e93dd05c6 28.27% 85.4MiB / 15.58GiB
223e93dd05c6 53.92% 121.8MiB / 15.58GiB
223e93dd05c6 70.73% 171.9MiB / 15.58GiB
223e93dd05c6 20.78% 177.2MiB / 15.58GiB
223e93dd05c6 26.19% 180.3MiB / 15.58GiB
223e93dd05c6 77.04% 224.1MiB / 15.58GiB
223e93dd05c6 97.16% 226.5MiB / 15.58GiB
223e93dd05c6 98.52% 259MiB / 15.58GiB
223e93dd05c6 98.78% 303.1MiB / 15.58GiB
223e93dd05c6 100.03% 159.8MiB / 15.58GiB
223e93dd05c6 103.97% 204MiB / 15.58GiB
223e93dd05c6 107.50% 207.8MiB / 15.58GiB
223e93dd05c6 105.96% 215.7MiB / 15.58GiB
223e93dd05c6 101.88% 226.2MiB / 15.58GiB
223e93dd05c6 100.44% 226.7MiB / 15.58GiB
223e93dd05c6 100.20% 226.9MiB / 15.58GiB
223e93dd05c6 100.60% 227.6MiB / 15.58GiB
223e93dd05c6 100.46% 228MiB / 15.58GiB
```
### Analyzing the metrics collected
Based on the metrics collected, for this job profile, you can limit the Kubernetes executor job to
`1 CPU` and `~304 Mi of Memory`. Even if this conclusion is accurate, it might not be practical for all use cases.
If you use a cluster with a node pool of three `e2-standard-4` nodes to run jobs, the `1 CPU` limit allows only **12 jobs** to run simultaneously (an `e2-standard-4` node has **4 vCPU** and **16 GB** of memory). Additional jobs wait for the running jobs to complete and free up the resources before starting.
The memory requested is critical because Kubernetes terminates any pod that uses more memory than the limit set or available on the cluster. However, the CPU limit is more flexible but impacts the job duration. A lower CPU limit set increases the time it takes for a job to complete. In the previous example, setting the CPU limit to `250m` (or `0.25`) instead `1` increased the job duration by four times (from about two minutes to eight to ten minutes).
As the metrics collection method uses a polling mechanism, you should round up the maximum usage identified. For example, instead of `303 Mi` for the memory usage, round it to `400 Mi`.
Important considerations for the previous example:
- The metrics were collected on the local machine, which doesn't have the same CPU configuration than a Google Kubernetes Engine Cluster. However, these metrics were validated by monitoring them on a Kubernetes cluster with an `e2-standard-4` node.
- To get an accurate representation of those metrics, run the tests described in the [Assess phase](#assess-the-expected-cicd-workloads) on a Google Compute Engine VM.
## Plan the runner fleet configuration
In the planning phase, map out the right runner fleet configuration for your organization. Consider the runner scope (instance, group, project) and the Kubernetes cluster configuration based on:
- Your assessment of the CI/CD job resource demand
- Your inventory of CI/CD job types
### Runner scope
To plan runner scope, consider the following questions:
- Do you want project owners and group owners to create and manage their own runners?
- By default, project and group owners can create runner configuration and register runners to a project or group in GitLab.
- This design allows developers to create a build environment quickly. This approach reduces developer friction when getting started with GitLab CI/CD. However, in large organizations, this approach may lead to many underutilized or unused runners across the environment.
- Does your organization have security or other policies that require segmenting access to certain types of runners to specific groups or projects?
The most straightforward way to deploy a runner in a GitLab Self-Managed environments is to create it for an instance. Runners scoped for an instance are available to all groups and projects by default.
If you can meet all your organization's needs with instance runners, then this deployment pattern is the most efficient pattern. It ensures that you can operate a CI/CD build fleet at scale efficiently and cost effectively.
If there are requirements to segment access to specific runners to certain groups or projects, incorporate those into your planning process.
#### Example runner fleet configuration - Instance runners
The configuration in the table demonstrates the flexibility available when configuring a runner fleet for your organization. This example uses multiple runners with different instance sizes and different job tags. These runners enables you to support different types of CI/CD jobs, each with specific CPU and RAM resource requirements. However, it may not be the most efficient pattern when using Kubernetes.
| Runner Type | Runner Tag | Scope | Count of Runner type to offer | Runner Worker Specification | Runner Host Environment | Environment Configuration |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Instance | ci-runner-small | Available to run CI/CD jobs for all groups and projects by default. | 5 | 2 vCPU, 8 GB RAM | Kubernetes | → 3 nodes <br> → Runner worker compute node \= **e2-standard-2** |
| Instance | ci-runner-medium | Available to run CI/CD jobs for all groups and projects by default. | 2 | 4 vCPU, 16 GB RAM | Kubernetes | → 3 nodes <br> → Runner worker compute node \= **e2-standard-4** |
| Instance | ci-runner-large | Available to run CI/CD jobs for all groups and projects by default. | 1 | 8 vCPU, 32 GB RAM | Kubernetes | → 3 nodes <br> → Runner worker compute node \= **e2-standard-8** |
In the runner fleet configuration example, there are a total of three runner configurations and eight runners actively running CI/CD jobs.
With the Kubernetes executor, you can use the Kubernetes scheduler and overwrite container resources.
In theory, you can deploy a single GitLab Runner on a Kubernetes cluster with adequate resources. You
can then overwrite container resources to select the appropriate compute type for each CI/CD job.
Implementing this pattern reduces the number of separate runner configurations you need to deploy and operate.
### Best practices
- Always dedicate a node pool to the runner managers.
- Log processing and cache or artifacts management can be CPU intensive.
- Always set a default limit (CPU/Memory for Build/Helper/Service containers) in the `config.toml` file.
- Always allow maximum overwrite for the resources in the `config.toml` file.
- In the job definition (`.gitlab-ci.yml`), specify the right limit needed by the jobs.
- If not specified, the default values set in the `config.toml` file is used.
- If a container exceeds its memory limit, the system automatically terminates it using the Out of Memory (OOM) kill process.
- Use the feature flags `FF_RETRIEVE_POD_WARNING_EVENTS` and `FF_PRINT_POD_EVENTS`. For more details, see the [feature flags documentation](https://docs.gitlab.com/runner/configuration/feature-flags.html).
## Deploy the runner on GKE
When you are ready to install GitLab Runner on a Google Kubernetes cluster, you have many options. If you have created your cluster on GKE, you can use either the GitLab Runner Helm Chart or Operator to install the runner on the cluster.
If you are yet to set up the cluster on GKE, GitLab provides the GitLab Runner Infrastructure Toolkit (GRIT) which simultaneously:
- Create a multi node pool GKE cluster: **Standard Edition** and **Standard Mode**.
- Install GitLab Runner on the cluster using the GitLab Runner Kubernetes operator
The following example uses GRIT to deploy the Google Kubernetes cluster and GitLab Runner Manager.
To have the cluster and GitLab Runner well configured, consider the following information:
- **How many job types do I need to cover?** This information comes from the assess phase. The assess phase aggregates metrics and identifies the number of resulting groups, considering organizational constraints. A **job type** is a collection of categorized jobs identified during the access phase. This categorization is based on the maximum resources needed by the job.
- **How many GitLab Runner Managers do I need to run?** This information comes from the plan phase. If the organization manages projects separately, apply this framework to each project individually. This approach is relevant only when multiple job profiles are identified (for the entire organization or for a specific project), and they are all handled by an individual or a fleet of GitLab Runners. A basic configuration typically uses one GitLab Runner Manager per GKE cluster.
- **What is the estimated maximum number of concurrent CI/CD jobs?** This information represents an estimate of the maximum number of CI/CD jobs that run at any point in time. You need it when configuring the GitLab Runner Manager, for example to determine how long it waits during the `Prepare` stage when job pods are scheduled on nodes with limited available resources.
### Real-life application for the FastAPI fork

For the FastAPI fork, consider the following information:

- **How many job profiles do I need to cover?** We only have one job profile, with the following characteristics: `1 CPU` and `303 Mi` of memory. As explained in the [Analyzing the metrics collected](#analyzing-the-metrics-collected) section, we change those raw values to the following:
- `400 Mi` for the memory limit instead of `303 Mi` to avoid any job failure due to the memory limits.
  - `0.20` CPU instead of `1 CPU`. We don't mind our jobs taking more time to complete; we prioritize accuracy and quality over speed.
- **How many GitLab Runner Managers do I need to run?** Only one GitLab Runner Manager is enough for our tests.
- **What is the expected workload?** We want to run up to 20 jobs simultaneously at any time.
Based on these inputs, any GKE Cluster with the following minimum characteristics should be enough:
- Minimum CPU: **(0.20 + helper CPU usage) × number of simultaneous jobs**. In our example, with the helper container limit set to **0.15 CPU**, we get **7 vCPU**.
- Minimum memory: **(400 Mi + helper memory usage) × number of simultaneous jobs**. In our example, with the helper limit set to **100 Mi**, we get at least **10 Gi** (see the sizing sketch after this list).
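A minimal sizing sketch in Python, using the values from this example (the variable names are illustrative):

```python
# Per-job limits from this example (CPU in cores, memory in MiB)
job_cpu, helper_cpu = 0.20, 0.15
job_mem_mi, helper_mem_mi = 400, 100
concurrent_jobs = 20

min_cpu = (job_cpu + helper_cpu) * concurrent_jobs
min_mem_gi = (job_mem_mi + helper_mem_mi) * concurrent_jobs / 1024

print(f"Minimum CPU: {min_cpu:.2f} vCPU")      # 7.00 vCPU
print(f"Minimum memory: {min_mem_gi:.1f} Gi")  # ~9.8 Gi, rounded up to 10 Gi
```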
Other characteristics, such as the minimum storage required, should also be considered. However, we don't take them into consideration in this example.

Possible configurations for our GKE cluster include (both configurations allow running more than **20 jobs** simultaneously):
- GKE Cluster with a node pool of `3 e2-standard-4` nodes for a total of `12 vCPU` and `48 GiB` of memory
- GKE Cluster with a node pool of only one `e2-standard-8` node, for a total of `8 vCPU` and `32 GiB` of memory
For the sake of our example, we use the first configuration. To prevent GitLab Runner Manager log processing from impacting the overall job processing, install GitLab Runner on a dedicated node pool.
#### GKE GRIT configuration
The resulting GKE configuration for GRIT looks similar to this:
```terraform
google_project = "GCLOUD_PROJECT"
google_region  = "GCLOUD_REGION"
google_zone    = "GCLOUD_ZONE"
name           = "my-grit-gke-cluster"

node_pools = {
  "runner-manager" = {
    node_count = 1,
    node_config = {
      machine_type = "e2-standard-2",
      image_type   = "cos_containerd", # Linux OS containers only. Change to windows_ltsc_containerd for Windows OS containers
      disk_size_gb = 50,
      disk_type    = "pd-balanced",
      labels = {
        "app" = "gitlab-runner",
      }
    },
  },
  "worker-pool" = {
    node_count = 3,
    node_config = {
      machine_type = "e2-standard-4", # 4 vCPU, 16 GB each
      image_type   = "cos_containerd", # Linux OS containers only. Change to windows_ltsc_containerd for Windows OS containers
      disk_size_gb = 150,
      disk_type    = "pd-balanced",
      labels = {
        "app" = "gitlab-runner-job"
      }
    },
  },
}
```
In the previous configuration:
- The `runner-manager` block refers to the node pool where GitLab Runner is installed. In our example, an `e2-standard-2` node is more than enough.
- The `labels` section in the `runner-manager` block is useful when installing GitLab Runner. A node selector is configured through the Operator configuration to make sure that GitLab Runner is installed on a node of this node pool.
- The `worker-pool` block refers to the node pool where the CI/CD job pod is created. The configuration provided creates a node pool of `3 e2-standard-4` nodes labeled `"app" = "gitlab-runner-job"` to host the job pod.
- The `image_type` parameter sets the image used by the nodes. Set it to `windows_ltsc_containerd` if your workload relies mostly on Windows images.
Here is an illustration of this configuration:

#### GitLab Runner GRIT configuration
The resulting GitLab Runner configuration for GRIT looks similar to this:
```terraform
gitlab_pat         = "glpat-REDACTED"
gitlab_project_id  = GITLAB_PROJECT_ID
runner_description = "my-grit-gitlab-runner"
runner_image       = "registry.gitlab.com/gitlab-org/ci-cd/gitlab-runner-ubi-images/gitlab-runner-ocp:amd64-v17.3.1"
helper_image       = "registry.gitlab.com/gitlab-org/ci-cd/gitlab-runner-ubi-images/gitlab-runner-helper-ocp:x86_64-v17.3.1"
concurrent         = 20
check_interval     = 1
runner_tags        = ["my-custom-tag"]

config_template = <<EOT
[[runners]]
  name = "my-grit-gitlab-runner"
  shell = "bash"
  environment = [
    "FF_RETRIEVE_POD_WARNING_EVENTS=true",
    "FF_PRINT_POD_EVENTS=true",
  ]
  [runners.kubernetes]
    image = "alpine"
    cpu_limit = "0.25"
    memory_limit = "400Mi"
    helper_cpu_limit = "150m"
    helper_memory_limit = "150Mi"
    cpu_limit_overwrite_max_allowed = "0.25"
    memory_limit_overwrite_max_allowed = "400Mi"
    helper_cpu_limit_overwrite_max_allowed = "150m"
    helper_memory_limit_overwrite_max_allowed = "150Mi"
    [runners.kubernetes.node_selector]
      "app" = "gitlab-runner-job"
EOT

pod_spec = [
  {
    name      = "selector",
    patchType = "merge",
    patch     = <<EOT
nodeSelector:
  app: "gitlab-runner"
EOT
  }
]
```
In the previous configuration:
- The `pod_spec` parameter allows us to set a node selector for the pod running GitLab Runner. In the configuration, the node selector is set to `"app" = "gitlab-runner"` to ensure that GitLab Runner is installed on the `runner-manager` node pool.
- The `config_template` parameter provides default limits for all jobs run by the GitLab Runner Manager. It also allows those limits to be overwritten, as long as the requested values are not greater than the configured maximums.
- The feature flags `FF_RETRIEVE_POD_WARNING_EVENTS` and `FF_PRINT_POD_EVENTS` are also set to ease debugging in the event of a job failure. See the [feature flags documentation](https://docs.gitlab.com/runner/configuration/feature-flags.html) for more details.
### Real-life application for a hypothetical use case

Take the following information into consideration:

- **How many job profiles do I need to cover?** Two profiles (the specifications provided take the helper limits into account):
- Medium jobs: `300m CPU` and `200 MiB`
- CPU-intensive jobs: `1 CPU` and `1 GiB`
- **How many GitLab Runner Managers do I need to run?** One.
- **What is the expected workload?**
- Up to **50 medium** jobs simultaneously
- Up to **25 CPU-intensive** jobs simultaneously
#### GKE configuration
- Needs for medium jobs:
- CPU: 300m * 50 = 5 CPU (approximate)
- Memory: 200 MiB * 50 = 10 GiB
- Needs for CPU-intensive jobs:
- CPU: 1 * 25 = 25
- Memory: 1 GiB * 25 = 25 GiB
The GKE cluster should have:
- A node pool for GitLab Runner Manager (let's consider that the log processing is not demanding): **1 e2-standard-2** node
- A node pool for medium jobs: **3 e2-standard-4** nodes
- A node pool for CPU-intensive jobs: **1 e2-highcpu-32** node (`32 vCPU` and `32 GiB` Memory)
```terraform
google_project = "GCLOUD_PROJECT"
google_region  = "GCLOUD_REGION"
google_zone    = "GCLOUD_ZONE"
name           = "my-grit-gke-cluster"

node_pools = {
  "runner-manager" = {
    node_count = 1,
    node_config = {
      machine_type = "e2-standard-2",
      image_type   = "cos_containerd", # Linux OS containers only. Change to windows_ltsc_containerd for Windows OS containers
      disk_size_gb = 50,
      disk_type    = "pd-balanced",
      labels = {
        "app" = "gitlab-runner",
      }
    },
  },
  "medium-pool" = {
    node_count = 3,
    node_config = {
      machine_type = "e2-standard-4", # 4 vCPU, 16 GB each
      image_type   = "cos_containerd", # Linux OS containers only. Change to windows_ltsc_containerd for Windows OS containers
      disk_size_gb = 150,
      disk_type    = "pd-balanced",
      labels = {
        "app" = "gitlab-runner-job"
      }
    },
  },
  "cpu-intensive-pool" = {
    node_count = 1,
    node_config = {
      machine_type = "e2-highcpu-32", # 32 vCPU, 32 GB each
      image_type   = "cos_containerd",
      disk_size_gb = 150,
      disk_type    = "pd-balanced",
      labels = {
        "app" = "gitlab-runner-job"
      }
    },
  },
}
```
#### GitLab Runner configuration
The current implementation of GRIT doesn't allow the installation of more than one runner at a time. The `config_template` provided doesn't set configurations such as the node selector and the default limits, as done in the previous example. Instead, a simple configuration sets the maximum allowed overwrite values so that CPU-intensive jobs can request what they need, and each job sets the correct values in its `.gitlab-ci.yml` file. The resulting GitLab Runner configuration looks similar to this:
```terraform
gitlab_pat         = "glpat-REDACTED"
gitlab_project_id  = GITLAB_PROJECT_ID
runner_description = "my-grit-gitlab-runner"
runner_image       = "registry.gitlab.com/gitlab-org/ci-cd/gitlab-runner-ubi-images/gitlab-runner-ocp:amd64-v17.3.1"
helper_image       = "registry.gitlab.com/gitlab-org/ci-cd/gitlab-runner-ubi-images/gitlab-runner-helper-ocp:x86_64-v17.3.1"
concurrent         = 100
check_interval     = 1
runner_tags        = ["my-custom-tag"]

config_template = <<EOT
[[runners]]
  name = "my-grit-gitlab-runner"
  shell = "bash"
  environment = [
    "FF_RETRIEVE_POD_WARNING_EVENTS=true",
    "FF_PRINT_POD_EVENTS=true",
  ]
  [runners.kubernetes]
    image = "alpine"
    cpu_limit_overwrite_max_allowed = "0.75"
    memory_limit_overwrite_max_allowed = "900Mi"
    helper_cpu_limit_overwrite_max_allowed = "250m"
    helper_memory_limit_overwrite_max_allowed = "100Mi"
EOT

pod_spec = [
  {
    name      = "selector",
    patchType = "merge",
    patch     = <<EOT
nodeSelector:
  app: "gitlab-runner"
EOT
  }
]
```
The `.gitlab-ci.yml` file looks similar to this:
- For medium jobs:
```yaml
  variables:
    KUBERNETES_CPU_LIMIT: "200m"
    KUBERNETES_MEMORY_LIMIT: "100Mi"
    KUBERNETES_HELPER_CPU_LIMIT: "100m"
    KUBERNETES_HELPER_MEMORY_LIMIT: "100Mi"

  tests:
    image: some-image:latest
    script:
      - command_1
      - command_2
      # ...
      - command_n
    tags:
      - my-custom-tag
```
- For CPU-intensive jobs:
```yaml
  variables:
    KUBERNETES_CPU_LIMIT: "0.75"
    KUBERNETES_MEMORY_LIMIT: "900Mi"
    KUBERNETES_HELPER_CPU_LIMIT: "150m"
    KUBERNETES_HELPER_MEMORY_LIMIT: "100Mi"

  tests:
    image: custom-cpu-intensive-image:latest
    script:
      - cpu_intensive_command_1
      - cpu_intensive_command_2
      # ...
      - cpu_intensive_command_n
    tags:
      - my-custom-tag
```
{{< alert type="note" >}}
For an easier configuration, use one GitLab Runner per cluster per job profile. This approach is recommended until GitLab supports either multiple GitLab Runner installations on the same cluster or multiple `[[runners]]` sections in the `config.toml` template.
{{< /alert >}}
### Set up monitoring and observability
As a final step in the deployment phase, you must establish a solution to monitor the runner host environment and GitLab Runner. The infrastructure-level, runner, and CI/CD job metrics provide insights into the efficiency and reliability of your CI/CD build infrastructure. They also provide the insights needed to tune and optimize the Kubernetes cluster, GitLab Runner, and CI/CD job configuration.
#### Monitoring best practices
- Monitor job-level metrics: job duration, and job success and failure rates.
- Analyze the job-level metrics to understand which CI/CD jobs run most frequently and consume the most compute and RAM resources in aggregate. This job profile is a good starting point for evaluating optimization opportunities.
- Monitor the Kubernetes cluster resource utilization:
- CPU utilization
- Memory utilization
- Network utilization
- Disk utilization
See the [Dedicated GitLab Runner monitoring page](https://docs.gitlab.com/runner/monitoring/) for more details on how to proceed.
## Optimize
Optimizing a CI/CD build environment is an ongoing process. The type and volume of CI/CD jobs are constantly evolving, requiring your active engagement.
You likely have specific organizational goals for CI/CD and CI/CD build infrastructure. Therefore, the first step is to define your optimization requirements and quantifiable objectives.
The following is an example set of optimization requirements from across our customer base:
- CI/CD job startup times
- CI/CD job duration
- CI/CD job reliability
- CI/CD compute cost optimization
The next step is to analyze the CI/CD metrics in conjunction with the infrastructure metrics for the Kubernetes cluster. The critical correlations to analyze are:
- CPU utilization by Kubernetes namespace
- Memory utilization by Kubernetes namespace
- CPU utilization by node
- Memory utilization by node
- CI/CD job failure rates
Typically on Kubernetes, high CI/CD job failure rates (independent of failures due to flaky tests) are attributed to resource constraints on the Kubernetes cluster. Analyze these metrics to achieve the optimal balance of CI/CD job start times, job duration, job reliability, and infrastructure resource utilization in your Kubernetes cluster configuration.
### Best practices
- Establish a process to categorize CI/CD jobs across your organization by job type.
- Establish a job type categorization framework to simplify both the monitoring configuration and your approach to optimizing the GitLab CI/CD build infrastructure on Kubernetes for each CI/CD job type.
- Assigning each job type its own node pool on the cluster might result in the best balance of CI/CD job performance, job reliability, and infrastructure utilization.

Using Kubernetes as the infrastructure stack for the CI/CD build environment offers significant benefits. However, it requires continuous monitoring and optimization of the Kubernetes infrastructure. After you establish an observability and optimization framework, you can support millions of CI/CD jobs per month, eliminate resource contention, and achieve deterministic CI/CD job runs and optimal resource usage. These improvements result in operational efficiency and cost optimization.
## Next steps
The following enhancements would provide a better user experience:

- Support for multiple GitLab Runner installations on the same cluster. This enables better management of scenarios where multiple job profiles must be handled, because each GitLab Runner can be configured to prevent any misuse of resources.
- Support for GKE node autoscaling. This allows GKE to scale up and down according to the workload, which reduces cost.
- Job metrics monitoring. This enables administrators to better optimize their cluster and GitLab Runner based on actual usage.
---
stage: Verify
group: Runner
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Optimize GitLab Runner Manager Pod performance in Kubernetes environments.
title: Optimize GitLab Runner manager pod performance
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
To monitor and optimize GitLab Runner manager pod performance in Kubernetes environments, GitLab recommends
the following best practices. Apply them to identify performance bottlenecks and implement solutions for
optimal CI/CD pipeline execution.
## Prerequisites
Before you implement these recommendations:
- Deploy GitLab Runner in Kubernetes using the [Kubernetes executor](https://docs.gitlab.com/runner/executors/kubernetes/)
- Have administrator access to your Kubernetes cluster
- Configure [Prometheus monitoring](../../administration/monitoring/_index.md) for GitLab Runner
- Have basic understanding of Kubernetes resource management
## GitLab Runner manager pod responsibilities
The GitLab Runner manager pod coordinates all CI/CD job execution in Kubernetes.
Its performance directly impacts your pipeline efficiency.
It handles:
- **Log processing**: Collects and forwards job logs from worker pods to GitLab
- **Cache management**: Coordinates local and cloud-based caching operations
- **Kubernetes API requests**: Creates, monitors, and deletes worker pods
- **GitLab API communication**: Polls for jobs and reports status updates
- **Pod lifecycle management**: Manages worker pod provisioning and cleanup
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
flowchart LR
accTitle: GitLab Runner manager pod architecture
accDescr: The manager pod polls GitLab for jobs, creates job pods through the Kubernetes API, manages the S3 cache, and forwards logs from job pods to GitLab.
subgraph "External Services"
GL[GitLab Instance]
S3[S3 Cache Storage]
end
subgraph "Manager Pod"
MP[Manager Process]
LB[Log Buffer]
CM[Cache Manager]
end
subgraph "Kubernetes API"
K8S[API Server]
end
subgraph "Job Pods"
JP1[Job Pod 1]
JP2[Job Pod 2]
JP3[Job Pod N]
end
GL <-->|Poll Jobs<br/>Update Status| MP
MP <-->|Create/Delete<br/>Monitor Pods| K8S
MP <-->|Cache Operations| S3
JP1 -->|Stream Logs| LB
JP2 -->|Stream Logs| LB
JP3 -->|Stream Logs| LB
LB -->|Forward Logs| GL
CM <-->|Manage Cache| S3
```
Each responsibility affects performance differently:
- **CPU intensive**: Kubernetes API operations, log processing
- **Memory intensive**: Log buffering, job queue management
- **Network intensive**: GitLab API communication, log streaming
## Deploy GitLab Runner in Kubernetes
Install GitLab Runner through the [GitLab Runner Operator](https://gitlab.com/gitlab-org/gl-openshift/gitlab-runner-operator).
The Operator actively receives new features and improvements.
The GitLab Runner team installs the Operator through the
[Experimental GRIT framework](https://gitlab.com/gitlab-org/ci-cd/runner-tools/grit/-/tree/main/scenarios/google/gke/operator?ref_type=heads).
The easiest way to install GitLab Runner in Kubernetes is to apply the
[`operator.k8s.yaml` manifest from the latest release](https://gitlab.com/gitlab-org/gl-openshift/gitlab-runner-operator/-/releases)
and then follow the instructions in the [Operator install documentation](https://docs.gitlab.com/runner/install/operator/#install-on-kubernetes).
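A minimal sketch of those steps, assuming you downloaded `operator.k8s.yaml` from the release page and that the Operator uses its default `gitlab-runner-system` namespace:

```shell
# Apply the Operator manifest downloaded from the latest release
kubectl apply -f operator.k8s.yaml

# Verify that the Operator controller is running
kubectl get pods -n gitlab-runner-system
```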
## Configure monitoring
Observability is critical for GitLab Runner administration in Kubernetes because
pods are ephemeral and metrics provide the primary operational visibility.
For monitoring, install [`kube-prometheus-stack`](https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/README.md).
To configure monitoring for the Operator, see [Monitor GitLab Runner Operator](https://docs.gitlab.com/runner/monitoring#monitor-gitlab-runner-operator).
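If your setup does not already create one, a minimal `PodMonitor` sketch could look like the following. The `release` selector and pod labels are assumptions; adjust them to match your Prometheus configuration and runner pods, which expose metrics on port `9252`:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: gitlab-runner
  labels:
    release: kube-prometheus-stack   # assumed Prometheus selector label
spec:
  selector:
    matchLabels:
      app: gitlab-runner             # assumed label on the manager pods
  podMetricsEndpoints:
    - targetPort: 9252
      path: /metrics
```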
## Performance monitoring
Effective monitoring is crucial for maintaining optimal manager pod performance.
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
flowchart TD
accTitle: Metrics collection and monitoring flow
accDescr: The manager pod exposes metrics, Prometheus scrapes the metrics using PodMonitor configuration, Grafana visualizes the data, and Alertmanager notifies operators.
subgraph "Metrics Collection Flow"
MP[Manager Pod<br/>:9252/metrics]
PM[PodMonitor]
P[Prometheus]
G[Grafana]
A[Alertmanager]
MP -->|Expose Metrics| PM
PM -->|Scrape| P
P -->|Query| G
P -->|Alerts| A
A -->|Notify| O[Operators]
end
```
### Key performance metrics
Monitor these essential metrics:
| Metric | Description | Performance Indicator |
| -------------------------------------------------- | -------------------------------- | --------------------- |
| `gitlab_runner_jobs` | Current running jobs | Job queue saturation |
| `gitlab_runner_limit` | Configured job concurrency limit | Capacity utilization |
| `gitlab_runner_request_concurrency_exceeded_total` | Requests above concurrency limit | API throttling |
| `gitlab_runner_errors_total` | Total caught errors | System stability |
| `container_cpu_usage_seconds_total` | Container CPU usage | Resource consumption |
| `container_memory_working_set_bytes` | Container memory usage | Memory pressure |
### Prometheus queries
Track manager pod performance with these queries:
```prometheus
# Manager pod memory usage in MB
container_memory_working_set_bytes{pod=~"gitlab-runner.*"} / 1024 / 1024
# Manager pod CPU utilization in Millicores
rate(container_cpu_usage_seconds_total{pod=~"gitlab-runner.*"}[5m]) * 1000
# Job queue saturation
gitlab_runner_jobs / gitlab_runner_limit
# Jobs per runner
gitlab_runner_jobs
# API request rate
sum(rate(apiserver_request_total[5m]))
```
### Example dashboard
The following dashboard shows manager pod utilization across all pods by using the Prometheus queries described previously:

This dashboard can help you visualize:
- Memory usage trends across manager pods
- CPU utilization patterns during job execution
- Job queue saturation levels
- Individual pod resource consumption
## Identify overloaded manager pods
Recognize performance degradation before it impacts your pipelines.
### Resource utilization indicators
By default, GitLab Runner Operator does not apply CPU or memory limits to manager pods.
To set resource limits:
```shell
kubectl patch deployment gitlab-runner -p '{"spec":{"template":{"spec":{"containers":[{"name":"gitlab-runner","resources":{"requests":{"cpu":"500m","memory":"256Mi"},"limits":{"cpu":"1000m","memory":"512Mi"}}}]}}}}'
```
{{< alert type="note" >}}
The feature to allow deployment patching from the Operator configuration is under development.
For more information, see [merge request 197](https://gitlab.com/gitlab-org/gl-openshift/gitlab-runner-operator/-/merge_requests/197).
{{< /alert >}}
**High CPU usage patterns:**
- CPU consistently above 70% during standard operations
- CPU spikes exceeding 90% during job creation
- Sustained high CPU without corresponding job activity
**Memory consumption trends:**
- Memory usage above 80% of allocated limits
- Continuous memory growth without workload increase
- Out-of-memory (OOM) events in manager pod logs
### Performance degradation signs
Watch for these operational symptoms:
- Jobs remaining pending longer than usual
- Pod creation times exceeding 30 seconds
- Delayed log output in GitLab job interfaces
- `etcdserver: request timed out` errors in logs
### Diagnostic commands
```shell
# Current resource usage
kubectl top pods --containers
> POD NAME CPU(cores) MEMORY(bytes)
> gitlab-runner-runner-86cd68d899-m6qqm runner 7m 32Mi
# Check for performance errors
kubectl logs gitlab-runner-runner-86cd68d899-m6qqm --since=2h | grep -E "(error|timeout|failed)"
```
## Resource configuration
Proper resource configuration is essential for optimal performance.
### Performance testing methodology
GitLab Runner manager pod performance is tested by using a job that maximizes log output:
<details>
<summary>Performance test job definition</summary>
```yaml
performance_test:
  stage: build
  timeout: 30m
  tags:
    - kubernetes_runner
  image: alpine:latest
  parallel: 100
  variables:
    FILE_SIZE_MB: 4
    CHUNK_SIZE_BYTES: 1024
    FILE_NAME: "test_file_${CI_JOB_ID}_${FILE_SIZE_MB}MB.dat"
    KUBERNETES_CPU_REQUEST: "200m"
    KUBERNETES_CPU_LIMIT: "200m"
    KUBERNETES_MEMORY_REQUEST: "200Mi"
    KUBERNETES_MEMORY_LIMIT: "200Mi"
  script:
    - echo "Starting performance test job ${CI_NODE_INDEX}/${CI_NODE_TOTAL} with ${FILE_SIZE_MB}MB file size, ${CHUNK_SIZE_BYTES} bytes chunk size"
    - dd if=/dev/urandom of="${FILE_NAME}" bs=1M count=${FILE_SIZE_MB}
    - echo "File generated successfully. Size:"
    - ls -lh "${FILE_NAME}"
    - echo "Reading file in ${CHUNK_SIZE_BYTES} byte chunks"
    - |
      TOTAL_SIZE=$(stat -c%s "${FILE_NAME}")
      BLOCKS=$((TOTAL_SIZE / CHUNK_SIZE_BYTES))
      echo "Processing $BLOCKS blocks of $CHUNK_SIZE_BYTES bytes each"
      for i in $(seq 0 99 $BLOCKS); do
        echo "Processing blocks $i to $((i+99))"
        dd if="${FILE_NAME}" bs=${CHUNK_SIZE_BYTES} skip=$i count=100 2>/dev/null | xxd -l $((CHUNK_SIZE_BYTES * 100)) -c 16
        sleep 0.5
      done
```
</details>
This test generates 4 MB of log output per job, which reaches the default
[`output_limit`](https://docs.gitlab.com/runner/configuration/advanced-configuration/#the-runners-section)
to stress test the manager pod's log processing capabilities.
**Test results:**
| Parallel Jobs | Peak CPU Usage | Peak Memory Usage |
| ------------- | -------------- | ----------------- |
| 50 | 308m | 261 MB |
| 100 | 657m | 369 MB |
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
xychart-beta
accTitle: Manager pod resource usage compared to concurrent jobs
accDescr: Chart showing CPU usage (10-610 millicores) and memory usage (50-300 MB) that scale with concurrent jobs (0-100).
x-axis [0, 25, 50, 75, 100]
y-axis "Resource Usage" 0 --> 700
line "CPU (millicores)" [10, 160, 310, 460, 610]
line "Memory (MB)" [50, 112, 175, 237, 300]
```
**Key findings:**
- CPU usage scales approximately linearly with concurrent jobs
- Memory usage increases with job count but not linearly
- All jobs run concurrently without queuing
### CPU requirements
Based on GitLab performance testing, calculate manager pod CPU requirements:
Manager pod CPU = Base CPU + (Concurrent jobs × CPU per job factor)
Where:
- Base CPU: 10m (baseline overhead)
- CPU per job factor: ~6m per concurrent job (based on testing)
**Examples based on test results:**
For 50 concurrent jobs:
```yaml
resources:
  requests:
    cpu: "310m"  # 10m + (50 × 6m) = 310m
  limits:
    cpu: "465m"  # 50% headroom for burst traffic
```
For 100 concurrent jobs:
```yaml
resources:
  requests:
    cpu: "610m"  # 10m + (100 × 6m) = 610m
  limits:
    cpu: "915m"  # 50% headroom
```
### Memory requirements
Based on GitLab testing, calculate memory requirements:
Manager pod memory = Base memory + (Concurrent jobs × Memory per job)
Where:
- Base memory: 50 MB (baseline overhead)
- Memory per job: ~2.5 MB per concurrent job (with 4MB log output)
**Examples based on test results:**
For 50 concurrent jobs:
```yaml
resources:
  requests:
    memory: "175Mi"  # 50 + (50 × 2.5) = 175 MB
  limits:
    memory: "350Mi"  # 100% headroom
```
For 100 concurrent jobs:
```yaml
resources:
  requests:
    memory: "300Mi"  # 50 + (100 × 2.5) = 300 MB
  limits:
    memory: "600Mi"  # 100% headroom
```
{{< alert type="note" >}}
Memory usage varies significantly based on log volume. Jobs producing more than
4 MB of logs require proportionally more memory.
{{< /alert >}}
### Configuration examples
**Small-scale (1-20 concurrent jobs):**
```yaml
resources:
  limits:
    cpu: 300m
    memory: 256Mi
  requests:
    cpu: 150m
    memory: 128Mi
runners:
  config: |
    concurrent = 20
    [[runners]]
      limit = 20
      request_concurrency = 5
```
**Large-scale (75+ concurrent jobs):**
```yaml
resources:
  limits:
    cpu: 1000m
    memory: 1Gi
  requests:
    cpu: 600m
    memory: 600Mi
runners:
  config: |
    concurrent = 150
    [[runners]]
      limit = 150
      request_concurrency = 20
```
### Horizontal pod autoscaler
Configure automatic scaling:
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: gitlab-runner-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: gitlab-runner
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```
## Troubleshoot performance issues
Address common manager pod performance problems with these solutions.
### API rate limiting
**Problem:** Manager pod exceeds Kubernetes API rate limits.
**Solution:** Optimize API polling:
```toml
[[runners]]
  [runners.kubernetes]
    poll_interval = 5   # Increase from the default of 3 seconds
    poll_timeout = 180
```
## Performance optimization
Apply these performance optimization strategies for challenging scenarios.
### Cache optimization
Configure distributed caching to reduce manager pod load. This action reduces computation
required for job pods by sharing cached files:
```toml
[runners.cache]
Type = "s3"
Shared = true
[runners.cache.s3]
ServerAddress = "cache.example.com"
BucketName = "gitlab-runner-cache"
PreSignedURLDisabled = false
```
## Node segregation
Segregate manager pods from job pods by using dedicated nodes to ensure stable
performance and prevent resource contention. This isolation prevents job pods
from disrupting critical manager pod operations.
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph TB
accTitle: Kubernetes node segregation architecture
accDescr: Node segregation with manager pods on dedicated manager nodes and job pods on worker nodes, separated by taints.
subgraph "Kubernetes Cluster"
subgraph "Manager Nodes"
MN1[Manager Node 1<br/>Taint: runner.gitlab.com/manager]
MN2[Manager Node 2<br/>Taint: runner.gitlab.com/manager]
MP1[Manager Pod 1]
MP2[Manager Pod 2]
MN1 --> MP1
MN2 --> MP2
end
subgraph "Worker Nodes"
WN1[Worker Node 1<br/>Taint: runner.gitlab.com/job]
WN2[Worker Node 2<br/>Taint: runner.gitlab.com/job]
WN3[Worker Node 3<br/>Taint: runner.gitlab.com/job]
JP1[Job Pod 1]
JP2[Job Pod 2]
JP3[Job Pod 3]
JP4[Job Pod 4]
WN1 --> JP1
WN1 --> JP2
WN2 --> JP3
WN3 --> JP4
end
end
MP1 -.->|Creates & Manages| JP1
MP1 -.->|Creates & Manages| JP2
MP2 -.->|Creates & Manages| JP3
MP2 -.->|Creates & Manages| JP4
```
### Configure node taints
**For manager nodes:**
```shell
# Taint nodes dedicated to Manager Pods
kubectl taint nodes <manager-node-name> runner.gitlab.com/manager=:NoExecute
# Label nodes for easier selection
kubectl label nodes <manager-node-name> runner.gitlab.com/workload-type=manager
```
**For worker nodes:**
```shell
# Taint nodes dedicated to job pods
kubectl taint nodes <worker-node-name> runner.gitlab.com/job=:NoExecute
# Label nodes for job scheduling
kubectl label nodes <worker-node-name> runner.gitlab.com/workload-type=job
```
### Configure manager pod scheduling
Update the GitLab Runner Operator configuration to schedule manager pods only on dedicated nodes:
```yaml
apiVersion: apps.gitlab.com/v1beta2
kind: Runner
metadata:
  name: gitlab-runner
spec:
  gitlabUrl: https://gitlab.example.com
  token: gitlab-runner-secret
  buildImage: alpine
  podSpec:
    name: "manager-node-affinity"
    patch: |
      {
        "spec": {
          "nodeSelector": {
            "runner.gitlab.com/workload-type": "manager"
          },
          "tolerations": [
            {
              "key": "runner.gitlab.com/manager",
              "operator": "Exists",
              "effect": "NoExecute"
            }
          ]
        }
      }
    patchType: "strategic"
```
### Configure job pod scheduling
Ensure job pods run only on worker nodes by updating `config.toml`.
```toml
[runners.kubernetes.node_selector]
"runner.gitlab.com/workload-type" = "job"
[runners.kubernetes.node_tolerations]
"runner.gitlab.com/job=" = "NoExecute"
```
**Benefits of node segregation:**
- Dedicated resources for manager pods without job interference
- Predictable performance without resource contention
- Option to run without resource limits when using dedicated nodes
- Simplified capacity planning with node-based scaling
### Emergency procedures
**Graceful restart:**
```shell
# Scale down to stop accepting new jobs
kubectl scale deployment gitlab-runner --replicas=0
# Wait for active jobs to complete (max 10 minutes)
timeout 600 bash -c 'while kubectl get pods -l job-type=user-job | grep Running; do sleep 10; done'
# Scale back up
kubectl scale deployment gitlab-runner --replicas=1
```
## Capacity planning
These calculations are based on tests with 4 MB log output per job.
Your resource requirements might vary based on:
- Log volume per job
- Job execution patterns
- Cache usage
- Network latency to GitLab
Calculate optimal resources using this Python function:
```python
def calculate_manager_resources(concurrent_jobs, avg_log_mb_per_job=4):
    """Calculate manager pod resources based on performance testing."""
    # CPU: ~6m per concurrent job + 10m base
    base_cpu = 0.01      # 10m
    cpu_per_job = 0.006  # 6m per job
    total_cpu = base_cpu + (concurrent_jobs * cpu_per_job)

    # Memory: ~2.5 MB per job + 50 MB base (for 4 MB log output)
    base_memory = 50
    memory_per_job = 2.5 * (avg_log_mb_per_job / 4)  # Scale with log size
    total_memory = base_memory + (concurrent_jobs * memory_per_job)

    return {
        'cpu_request': f"{int(total_cpu * 1000)}m",
        'cpu_limit': f"{int(total_cpu * 1.5 * 1000)}m",   # 50% headroom
        'memory_request': f"{int(total_memory)}Mi",
        'memory_limit': f"{int(total_memory * 2.0)}Mi"    # 100% headroom
    }
```
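For example, sizing a manager pod for 80 concurrent jobs with roughly 4 MB of logs each:

```python
print(calculate_manager_resources(80))
# {'cpu_request': '490m', 'cpu_limit': '735m',
#  'memory_request': '250Mi', 'memory_limit': '500Mi'}
```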
## Performance thresholds
Establish thresholds for proactive intervention:
| Metric | Warning | Critical | Action Required |
| -------------- | -------------- | -------------- | ----------------------- |
| CPU Usage | 70% sustained | 85% sustained | Scale or optimize |
| Memory Usage | 80% of limit | 90% of limit | Increase limits |
| API Error Rate | 2% of requests | 5% of requests | Investigate bottlenecks |
| Job Queue Time | 30 seconds | 2 minutes | Review capacity |
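As a starting point, you can encode these thresholds as Prometheus alert rules. A minimal sketch, assuming the `kube-prometheus-stack` PrometheusRule CRD and a manager pod with a 1-core CPU limit (adjust the expression and labels to your own limits and selectors):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: gitlab-runner-manager-alerts
  labels:
    release: kube-prometheus-stack   # assumed Prometheus rule selector
spec:
  groups:
    - name: gitlab-runner-manager
      rules:
        - alert: RunnerManagerHighCpu
          # Warning threshold: CPU above 70% of an assumed 1-core limit, sustained
          expr: rate(container_cpu_usage_seconds_total{pod=~"gitlab-runner.*"}[5m]) > 0.7
          for: 15m
          labels:
            severity: warning
```

A similar rule can cover the memory threshold by using `container_memory_working_set_bytes` against your configured limit.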
## Related topics
- [GitLab Runner fleet configuration and best practices](gitlab_runner_fleet_config_and_best_practices.md) - Job pod performance optimization
- [GitLab Runner executors](https://docs.gitlab.com/runner/executors/) - Execution environment performance characteristics
- [GitLab Runner monitoring](../../administration/monitoring/_index.md) - General monitoring setup
- [Plan and operate a fleet of runners](https://docs.gitlab.com/runner/fleet_scaling/) - Strategic fleet deployment
## Summary
Optimizing GitLab Runner manager pod performance requires systematic monitoring,
proper resource allocation, and proactive troubleshooting.
Key strategies include:
- **Proactive monitoring** by using Prometheus metrics and Grafana dashboards
- **Resource planning** based on concurrent job capacity and log volume
- **Multi-manager architecture** for fault tolerance and load distribution
- **Emergency procedures** for quick issue resolution
Implement these strategies to ensure reliable CI/CD pipeline execution while maintaining optimal resource utilization.
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Upgrading deployments for newer Auto Deploy dependencies
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
[Auto Deploy](stages.md#auto-deploy) is a feature that deploys your application to a Kubernetes cluster.
It consists of several dependencies:
- [Auto Deploy template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml) is a set of pipeline jobs and scripts that makes use of `auto-deploy-image`.
- [`auto-deploy-image`](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image) is the executable image that communicates with the Kubernetes cluster.
- [`auto-deploy-app chart`](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/tree/master/assets/auto-deploy-app) is the Helm chart for deploying your application.
The `auto-deploy-image` and `auto-deploy-app` charts use [Semantic Versioning](https://semver.org/).
By default, your Auto DevOps project keeps using the stable and non-breaking version.
However, these dependencies could be upgraded in a major GitLab release,
with breaking changes that require you to upgrade your deployments.
This guide explains how to upgrade your deployments with newer or different major versions of Auto Deploy dependencies.
## Verify dependency versions
The process to check the current versions differs depending on which template you
are using. First verify which template is in use:
- For GitLab Self-Managed instances, the [stable Auto Deploy template bundled with the GitLab package](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml)
is being used.
- [The GitLab.com stable Auto Deploy template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml)
is being used if **one** of the following is true:
- Your Auto DevOps project doesn't have a `.gitlab-ci.yml` file.
- Your Auto DevOps project has a `.gitlab-ci.yml` and [includes](../../ci/yaml/_index.md#includetemplate)
the `Auto-DevOps.gitlab-ci.yml` template.
- [The latest Auto Deploy template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.latest.gitlab-ci.yml)
  is being used if **both** of the following are true:
- Your Auto DevOps project has a `.gitlab-ci.yml` file and [includes](../../ci/yaml/_index.md#includetemplate)
the `Auto-DevOps.gitlab-ci.yml` template.
  - It also includes [the latest Auto Deploy template](#early-adopters).
If you know what template is being used:
- The `auto-deploy-image` version is in the template (for example `auto-deploy-image:v1.0.3`).
- The `auto-deploy-app` chart version is [in the auto-deploy-image repository](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/blob/v1.0.3/assets/auto-deploy-app/Chart.yaml)
(for example `version: 1.0.3`).
## Compatibility
The following table explains the version compatibility between GitLab and Auto Deploy dependencies:
| GitLab version | `auto-deploy-image` version | Notes |
|------------------|-----------------------------|-------|
| v10.0 to v14.0 | v0.1.0 to v2.0.0 | v0 and v1 auto-deploy-image are backwards compatible. |
| v13.4 and later | v2.0.0 and later | v2 auto-deploy-image contains breaking changes, as explained in the [upgrade guide](#upgrade-deployments-to-the-v2-auto-deploy-image). |
You can find the current stable version of auto-deploy-image in the [Auto Deploy stable template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml).
## Upgrade guide
Projects using Auto DevOps must use the unmodified chart managed by GitLab.
[Customized charts](customize.md#custom-helm-chart) are unsupported.
### Upgrade deployments to the v1 `auto-deploy-image`
The v1 chart is backward compatible with the v0 chart, so no configuration changes are needed.
### Upgrade deployments to the v2 `auto-deploy-image`
The v2 `auto-deploy-image` contains multiple dependency and architectural changes.
If your Auto DevOps project has an active environment deployed with the v1 `auto-deploy-image`, proceed with the following upgrade guide. Otherwise, you can skip this process.
#### Kubernetes 1.16+
The v2 `auto-deploy-image` drops support for Kubernetes 1.15 and earlier. If you need to upgrade your
Kubernetes cluster, follow your cloud provider's instructions. Here's
[an example on GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/upgrading-a-cluster).
#### Helm v3
The `auto-deploy-image` uses the Helm binary to manage releases.
Previously, `auto-deploy-image` used Helm v2, which required Tiller in the cluster.
The v2 `auto-deploy-image` uses Helm v3, which no longer requires Tiller.
If your Auto DevOps project has an active environment that was deployed with the v1
`auto-deploy-image`, use the following steps to upgrade to v2, which uses Helm v3:
1. Include the [Helm 2to3 migration CI/CD template](https://gitlab.com/gitlab-org/gitlab/-/raw/master/lib/gitlab/ci/templates/Jobs/Helm-2to3.gitlab-ci.yml):
- If you are on GitLab.com, or GitLab 14.0.1 or later, this template is already included in Auto DevOps.
- On other versions of GitLab, you can modify your `.gitlab-ci.yml` to include the templates:
```yaml
include:
- template: Auto-DevOps.gitlab-ci.yml
- remote: https://gitlab.com/gitlab-org/gitlab/-/raw/master/lib/gitlab/ci/templates/Jobs/Helm-2to3.gitlab-ci.yml
```
1. Set the following CI/CD variables (a `.gitlab-ci.yml` sketch follows these steps):
- `MIGRATE_HELM_2TO3` to `true`. If this variable is not present, migration jobs do not run.
- `AUTO_DEVOPS_FORCE_DEPLOY_V2` to `1`.
- **Optional**: `BACKUP_HELM2_RELEASES` to `1`. If you set this variable, the migration
job saves a backup for 1 week in a job artifact called `helm-2-release-backups`.
If you accidentally delete the Helm v2 releases before you are ready, you can restore
this backup from a Kubernetes manifest file by using `kubectl apply -f $backup`.
{{< alert type="warning" >}}
Do not use this if you have public pipelines.
This artifact can contain secrets and is visible to any
user who can see your job.
{{< /alert >}}
1. Run a pipeline and trigger the `<environment-name>:helm-2to3:migrate` job.
1. Deploy your environment as usual. This deployment uses Helm v3.
1. If the deployment succeeds, you can safely run `<environment-name>:helm-2to3:cleanup`.
This deletes all Helm v2 release data from the namespace.
1. Remove the `MIGRATE_HELM_2TO3` CI/CD variable or set it to `false`. You can do this one environment at a time using [environment scopes](../../ci/environments/_index.md#limit-the-environment-scope-of-a-cicd-variable).
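If you prefer to set the migration variables above in `.gitlab-ci.yml` instead of the project's CI/CD settings, a minimal sketch might look like this:

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  MIGRATE_HELM_2TO3: "true"        # without this, the migration jobs do not run
  AUTO_DEVOPS_FORCE_DEPLOY_V2: "1"
  BACKUP_HELM2_RELEASES: "1"       # optional: one-week backup in the `helm-2-release-backups` artifact
```

Remember to remove `MIGRATE_HELM_2TO3` or set it to `false` after the migration, as described in the last step.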
#### In-cluster PostgreSQL Channel 2
The v2 `auto-deploy-image` drops support for [legacy in-cluster PostgreSQL](upgrading_postgresql.md).
If your Kubernetes cluster still depends on it, [upgrade and migrate your data](upgrading_postgresql.md)
with the [v1 `auto-deploy-image`](#use-a-specific-version-of-auto-deploy-dependencies).
#### Traffic routing change for canary deployments and incremental rollouts
Auto Deploy supports advanced deployment strategies such as [canary deployments](cicd_variables.md#deploy-policy-for-canary-environments)
and [incremental rollouts](../../ci/environments/incremental_rollouts.md).
Previously, `auto-deploy-image` created one service to balance the traffic between
unstable and stable tracks by changing the replica ratio. The v2 `auto-deploy-image`
instead controls the traffic with [Canary Ingress](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary).
For more details, see the [v2 `auto-deploy-app` chart resource architecture](#v2-chart-resource-architecture).
If your Auto DevOps project has active `canary` or `rollout` track releases in the
`production` environment deployed with the v1 `auto-deploy-image`, use the following
steps to upgrade to v2:
1. Verify your project is [using the v1 `auto-deploy-image`](#verify-dependency-versions).
If not, [specify the version](#use-a-specific-version-of-auto-deploy-dependencies).
1. If you're in the process of deploying `canary` or `rollout` deployments, promote
them to `production` first to delete the unstable tracks.
1. Verify your project is [using the v2 `auto-deploy-image`](#verify-dependency-versions).
If not, [specify the version](#use-a-specific-version-of-auto-deploy-dependencies).
1. Add an `AUTO_DEVOPS_FORCE_DEPLOY_V2` CI/CD variable with a value of `true`
in the GitLab CI/CD settings.
1. Create a new pipeline and run the `production` job to renew the resource architecture
   with the v2 `auto-deploy-app` chart.
1. Remove the `AUTO_DEVOPS_FORCE_DEPLOY_V2` variable.
### Use a specific version of Auto Deploy dependencies
To use a specific version of Auto Deploy dependencies, specify the previous Auto Deploy
stable template that contains the [desired version of `auto-deploy-image` and `auto-deploy-app`](#verify-dependency-versions).
For example, if the template is bundled in GitLab 16.10, change your `.gitlab-ci.yml` to:
```yaml
include:
- template: Auto-DevOps.gitlab-ci.yml
- remote: https://gitlab.com/gitlab-org/gitlab/-/raw/v16.10.0-ee/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml
```
### Ignore warnings and continue deploying
If you are certain that the new chart version is safe to deploy, you can add
the `AUTO_DEVOPS_FORCE_DEPLOY_V<major-version-number>` [CI/CD variable](cicd_variables.md#build-and-deployment-variables)
to force the deployment to continue.
For example, if you want to deploy the `v2.0.0` chart on a deployment that previously
used the `v0.17.0` chart, add `AUTO_DEVOPS_FORCE_DEPLOY_V2`.
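For example, a minimal sketch in `.gitlab-ci.yml` (`true` and `1` are both used as values earlier in this guide):

```yaml
variables:
  AUTO_DEVOPS_FORCE_DEPLOY_V2: "true"
```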
## Early adopters
If you want to use the latest [beta](../../policy/development_stages_support.md#beta) or unstable version of `auto-deploy-image`, include
the latest Auto Deploy template into your `.gitlab-ci.yml`:
```yaml
include:
- template: Auto-DevOps.gitlab-ci.yml
- template: Jobs/Deploy.latest.gitlab-ci.yml
```
{{< alert type="warning" >}}
Using a [beta](../../policy/development_stages_support.md#beta) or unstable `auto-deploy-image` could cause unrecoverable damage to
your environments. Do not test it with important projects or environments.
{{< /alert >}}
## Resource architectures of the `auto-deploy-app` chart
### v0 and v1 chart resource architecture
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph TD;
accTitle: v0 and v1 chart resource architecture
accDescr: Shows the relationships between the components of the v0 and v1 charts.
subgraph gl-managed-app
Z[Nginx Ingress]
end
Z[Nginx Ingress] --> A(Ingress);
Z[Nginx Ingress] --> B(Ingress);
subgraph stg namespace
B[Ingress] --> H(...);
end
subgraph prd namespace
A[Ingress] --> D(Service);
D[Service] --> E(Deployment:Pods:app:stable);
D[Service] --> F(Deployment:Pods:app:canary);
D[Service] --> I(Deployment:Pods:app:rollout);
E(Deployment:Pods:app:stable)---id1[(Pods:Postgres)]
F(Deployment:Pods:app:canary)---id1[(Pods:Postgres)]
I(Deployment:Pods:app:rollout)---id1[(Pods:Postgres)]
end
```
### v2 chart resource architecture
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
graph TD;
accTitle: v2 chart resource architecture
accDescr: Shows the relationships between the components of the v2 chart.
subgraph gl-managed-app
Z[Nginx Ingress]
end
Z[Nginx Ingress] --> A(Ingress);
Z[Nginx Ingress] --> B(Ingress);
  Z[Nginx Ingress] --> |If canary is present or incremental rollout|J(Canary Ingress);
subgraph stg namespace
B[Ingress] --> H(...);
end
subgraph prd namespace
subgraph stable track
A[Ingress] --> D[Service];
D[Service] --> E(Deployment:Pods:app:stable);
end
subgraph canary track
J(Canary Ingress) --> K[Service]
K[Service] --> F(Deployment:Pods:app:canary);
end
E(Deployment:Pods:app:stable)---id1[(Pods:Postgres)]
F(Deployment:Pods:app:canary)---id1[(Pods:Postgres)]
end
```
## Troubleshooting
### Major version mismatch warning
If you deploy a chart whose major version differs from the previously deployed one,
the new chart might not deploy correctly, for example because of an architectural
change. In that case, the deployment job fails with a message similar to:
```plaintext
*************************************************************************************
[WARNING]
Detected a major version difference between the chart that is currently deploying (auto-deploy-app-v0.7.0), and the previously deployed chart (auto-deploy-app-v1.0.0).
A new major version might not be backward compatible with the current release (production). The deployment could fail or be stuck in an unrecoverable status.
...
```
To clear this error message and resume deployments, you must do one of the following:
- Manually [upgrade the chart version](#upgrade-guide).
- [Use a specific chart version](#use-a-specific-version-of-auto-deploy-dependencies).
### Error: `missing key "app.kubernetes.io/managed-by": must be set to "Helm"`
If your cluster has a deployment that was deployed with the v1 `auto-deploy-image`,
you might encounter the following error:
- `Error: rendered manifests contain a resource that already exists. Unable to continue with install: Secret "production-postgresql" in namespace "<project-name>-production" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "production-postgresql"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "<project-name>-production"`
This is because the previous deployment used Helm v2, which is not compatible with Helm v3.
To resolve the problem, follow the [upgrade guide](#upgrade-deployments-to-the-v2-auto-deploy-image).
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: CI/CD variables
---
Use CI/CD variables to set up the Auto DevOps domain, provide a custom
Helm chart, or scale your application.
## Build and deployment variables
Use these variables to customize and deploy your build. An example snippet follows the table.
<!-- markdownlint-disable MD056 -->
| **CI/CD variable** | **Description** |
|-----------------------------------------|-----------------|
| `ADDITIONAL_HOSTS` | Fully qualified domain names specified as a comma-separated list that are added to the Ingress hosts. |
| `<ENVIRONMENT>_ADDITIONAL_HOSTS` | For a specific environment, the fully qualified domain names specified as a comma-separated list that are added to the Ingress hosts. This takes precedence over `ADDITIONAL_HOSTS`. |
| `AUTO_BUILD_IMAGE_VERSION` | Customize the image version used for the `build` job. See [list of versions](https://gitlab.com/gitlab-org/cluster-integration/auto-build-image/-/releases). |
| `AUTO_DEPLOY_IMAGE_VERSION` | Customize the image version used for Kubernetes deployment jobs. See [list of versions](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/releases). |
| `AUTO_DEVOPS_ATOMIC_RELEASE` | Auto DevOps uses [`--atomic`](https://v2.helm.sh/docs/helm/#options-43) for Helm deployments by default. Set this variable to `false` to disable the use of `--atomic`. |
| `AUTO_DEVOPS_BUILD_IMAGE_CNB_BUILDER` | The builder used when building with Cloud Native Buildpacks. The default builder is `heroku/buildpacks:22`. [More details](stages.md#auto-build-using-cloud-native-buildpacks). |
| `AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS` | Extra arguments to be passed to the `docker build` command. Using quotes doesn't prevent word splitting. [More details](customize.md#pass-arguments-to-docker-build). |
| `AUTO_DEVOPS_BUILD_IMAGE_FORWARDED_CI_VARIABLES` | A [comma-separated list of CI/CD variable names](customize.md#forward-cicd-variables-to-the-build-environment) to be forwarded to the build environment (the buildpack builder or `docker build`). |
| `AUTO_DEVOPS_BUILD_IMAGE_CNB_PORT` | In GitLab 15.0 and later, port exposed by the generated Docker image. Set to `false` to prevent exposing any ports. Defaults to `5000`. |
| `AUTO_DEVOPS_BUILD_IMAGE_CONTEXT` | Used to set the build context directory for Dockerfile and Cloud Native Buildpacks. Defaults to the root directory. |
| `AUTO_DEVOPS_CHART` | Helm Chart used to deploy your apps. Defaults to the one [provided by GitLab](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/tree/master/assets/auto-deploy-app). |
| `AUTO_DEVOPS_CHART_REPOSITORY` | Helm Chart repository used to search for charts. Defaults to `https://charts.gitlab.io`. |
| `AUTO_DEVOPS_CHART_REPOSITORY_NAME` | Used to set the name of the Helm repository. Defaults to `gitlab`. |
| `AUTO_DEVOPS_CHART_REPOSITORY_USERNAME` | Used to set a username to connect to the Helm repository. Defaults to no credentials. Also set `AUTO_DEVOPS_CHART_REPOSITORY_PASSWORD`. |
| `AUTO_DEVOPS_CHART_REPOSITORY_PASSWORD` | Used to set a password to connect to the Helm repository. Defaults to no credentials. Also set `AUTO_DEVOPS_CHART_REPOSITORY_USERNAME`. |
| `AUTO_DEVOPS_CHART_REPOSITORY_PASS_CREDENTIALS` | Set to a non-empty value to enable forwarding of the Helm repository credentials to the chart server when the chart artifacts are on a different host than the repository. |
| `AUTO_DEVOPS_CHART_REPOSITORY_INSECURE` | Set to a non-empty value to add a `--insecure-skip-tls-verify` argument to the Helm commands. By default, Helm uses TLS verification. |
| `AUTO_DEVOPS_CHART_CUSTOM_ONLY` | Set to a non-empty value to use only a custom chart. By default, the latest chart is downloaded from GitLab. |
| `AUTO_DEVOPS_CHART_VERSION` | Set the version of the deployment chart. Defaults to the latest available version. |
| `AUTO_DEVOPS_COMMON_NAME` | From GitLab 15.5, set to a valid domain name to customize the common name used for the TLS certificate. Defaults to `le-$CI_PROJECT_ID.$KUBE_INGRESS_BASE_DOMAIN`. Set to `false` to not set this alternative host on the Ingress. |
| `AUTO_DEVOPS_DEPLOY_DEBUG` | If this variable is present, Helm outputs debug logs. |
| `AUTO_DEVOPS_ALLOW_TO_FORCE_DEPLOY_V<N>` | From [auto-deploy-image](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image) v1.0.0, if this variable is present, a new major version of chart is forcibly deployed. For more information, see [Ignore warnings and continue deploying](upgrading_auto_deploy_dependencies.md#ignore-warnings-and-continue-deploying). |
| `BUILDPACK_URL` | A full Buildpack URL. [Must point to a URL supported by Pack](customize.md#custom-buildpacks). |
| `CANARY_ENABLED` | Used to define a [deploy policy for canary environments](#deploy-policy-for-canary-environments). |
| `BUILDPACK_VOLUMES` | Specify one or more [Buildpack volumes to mount](stages.md#mount-volumes-into-the-build-container). Use a pipe `|` as list separator. |
| `CANARY_PRODUCTION_REPLICAS` | Number of canary replicas to deploy for [Canary Deployments](../../user/project/canary_deployments.md) in the production environment. Takes precedence over `CANARY_REPLICAS`. Defaults to 1. |
| `CANARY_REPLICAS` | Number of canary replicas to deploy for [Canary Deployments](../../user/project/canary_deployments.md). Defaults to 1. |
| `CI_APPLICATION_REPOSITORY` | The repository of the container image being built or deployed, `$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG`. For more details, read [Custom container image](customize.md#custom-container-image). |
| `CI_APPLICATION_TAG` | The tag of the container image being built or deployed, `$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG`. For more details, read [Custom container image](customize.md#custom-container-image). |
| `DAST_AUTO_DEPLOY_IMAGE_VERSION` | Customize the image version used for DAST deployments on the default branch. Should usually be the same as `AUTO_DEPLOY_IMAGE_VERSION`. See [list of versions](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/releases). |
| `DOCKERFILE_PATH` | Allows overriding the [default Dockerfile path for the build stage](customize.md#custom-dockerfiles). |
| `HELM_RELEASE_NAME` | Allows the `helm` release name to be overridden. Can be used to assign unique release names when deploying multiple projects to a single namespace. |
| `HELM_UPGRADE_VALUES_FILE` | Allows the `helm upgrade` values file to be overridden. Defaults to `.gitlab/auto-deploy-values.yaml`. |
| `HELM_UPGRADE_EXTRA_ARGS` | Allows extra options in `helm upgrade` commands when deploying the application. Using quotes doesn't prevent word splitting. |
| `INCREMENTAL_ROLLOUT_MODE` | If present, can be used to enable an [incremental rollout](#incremental-rollout-to-production) of your application for the production environment. Set to `manual` for manual deployment jobs or `timed` for automatic rollout deployments with a 5-minute delay between each one. |
| `K8S_SECRET_*` | Any variable prefixed with [`K8S_SECRET_`](#configure-application-secret-variables) is made available by Auto DevOps as environment variables to the deployed application. |
| `KUBE_CONTEXT` | Can be used to select a context to use from `KUBECONFIG`. When `KUBE_CONTEXT` is blank, the default context in `KUBECONFIG` (if any) is used. A context must be selected when used [with the agent for Kubernetes](../../user/clusters/agent/ci_cd_workflow.md). |
| `KUBE_INGRESS_BASE_DOMAIN` | Can be used to set a domain per cluster. See [cluster domains](../../user/project/clusters/gitlab_managed_clusters.md#base-domain) for more information. |
| `KUBE_NAMESPACE` | The namespace used for deployments. When using certificate-based clusters, [this value should not be overwritten directly](../../user/project/clusters/deploy_to_cluster.md#custom-namespace). |
| `KUBECONFIG` | The kubeconfig to use for deployments. User-provided values take priority over GitLab-provided values. |
| `PRODUCTION_REPLICAS` | Number of replicas to deploy in the production environment. Takes precedence over `REPLICAS` and defaults to 1. For zero-downtime upgrades, set to 2 or greater. |
| `REPLICAS` | Number of replicas to deploy. Defaults to 1. Change this variable instead of [modifying](customize.md#customize-helm-chart-values) `replicaCount`. |
| `ROLLOUT_RESOURCE_TYPE` | Allows specification of the resource type being deployed when using a custom Helm chart. Default value is `deployment`. |
| `ROLLOUT_STATUS_DISABLED` | Used to disable rollout status check because it does not support all resource types, for example, `cronjob`. |
| `STAGING_ENABLED` | Used to define a [deploy policy for staging and production environments](#deploy-policy-for-staging-and-production-environments). |
| `TRACE` | Set to any value to make Helm commands produce verbose output. You can use this setting to help diagnose Auto DevOps deployment problems. |
<!-- markdownlint-enable MD056 -->
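For example, to override a few of these defaults in your `.gitlab-ci.yml` (a minimal sketch; the values are illustrative):

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  REPLICAS: "3"                              # deploy three pods instead of the default one
  ADDITIONAL_HOSTS: "example.com,www.example.com"
  HELM_UPGRADE_EXTRA_ARGS: "--timeout 600s"  # extra options passed to `helm upgrade`
```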
## Database variables
{{< alert type="warning" >}}
From [GitLab 16.0](https://gitlab.com/gitlab-org/gitlab/-/issues/343988), `POSTGRES_ENABLED` is no longer set by default.
{{< /alert >}}
Use these variables to integrate CI/CD with PostgreSQL databases. An example snippet follows the table.
| **CI/CD variable** | **Description** |
|-----------------------------------------|------------------------------------|
| `DB_INITIALIZE` | Used to specify the command to run to initialize the application's PostgreSQL database. Runs inside the application pod. |
| `DB_MIGRATE` | Used to specify the command to run to migrate the application's PostgreSQL database. Runs inside the application pod. |
| `POSTGRES_ENABLED` | Whether PostgreSQL is enabled. Set to `true` to enable the automatic deployment of PostgreSQL. |
| `POSTGRES_USER` | The PostgreSQL user. Defaults to `user`. Set it to use a custom username. |
| `POSTGRES_PASSWORD` | The PostgreSQL password. Defaults to `testing-password`. Set it to use a custom password. |
| `POSTGRES_DB` | The PostgreSQL database name. Defaults to the value of [`$CI_ENVIRONMENT_SLUG`](../../ci/variables/_index.md#predefined-cicd-variables). Set it to use a custom database name. |
| `POSTGRES_VERSION` | Tag for the [`postgres` Docker image](https://hub.docker.com/_/postgres) to use. Defaults to `9.6.16` for tests and deployments. If `AUTO_DEVOPS_POSTGRES_CHANNEL` is set to `1`, deployments use the default version `9.6.2`. |
| `POSTGRES_HELM_UPGRADE_VALUES_FILE` | When using [auto-deploy-image v2](upgrading_auto_deploy_dependencies.md), this variable allows the `helm upgrade` values file for PostgreSQL to be overridden. Defaults to `.gitlab/auto-deploy-postgres-values.yaml`. |
| `POSTGRES_HELM_UPGRADE_EXTRA_ARGS` | When using [auto-deploy-image v2](upgrading_auto_deploy_dependencies.md), this variable allows extra PostgreSQL options in `helm upgrade` commands when deploying the application. Using quotes doesn't prevent word splitting. |
| `POSTGRES_CHART_REPOSITORY` | Helm Chart repository used to search for PostgreSQL chart. Defaults to `https://raw.githubusercontent.com/bitnami/charts/eb5f9a9513d987b519f0ecd732e7031241c50328/bitnami`. |
| `POSTGRES_CHART_VERSION` | Helm Chart version used for PostgreSQL chart. Defaults to `8.2.1`. |
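For example, to enable the in-cluster PostgreSQL deployment and initialize the database (a sketch assuming a Rails application; `bundle exec rails db:setup` is an illustrative command, not a requirement):

```yaml
variables:
  POSTGRES_ENABLED: "true"
  POSTGRES_USER: "my_user"                     # custom username instead of the default `user`
  POSTGRES_DB: "my_database"
  DB_INITIALIZE: "bundle exec rails db:setup"  # runs inside the application pod
```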
## Job-skipping variables
Use these variables to skip specific types of CI/CD jobs. When skipped, the CI/CD jobs don't get created or run.
| **Job name** | **CI/CD variable** | **GitLab version** | **Description** |
|----------------------------------------|---------------------------------|-----------------------|-----------------|
| `.fuzz_base` | `COVFUZZ_DISABLED` | | [Read more](../../user/application_security/coverage_fuzzing/_index.md) about how `.fuzz_base` provides capabilities for your own jobs. The job isn't created if the value is `"true"`. |
| `apifuzzer_fuzz` | `API_FUZZING_DISABLED` | | The job isn't created if the value is `"true"`. |
| `build` | `BUILD_DISABLED` | | If the variable is present, the job isn't created. |
| `build_artifact` | `BUILD_DISABLED` | | If the variable is present, the job isn't created. |
| `brakeman-sast` | `SAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `canary` | `CANARY_ENABLED` | | This manual job is created if the variable is present. |
| `code_intelligence` | `CODE_INTELLIGENCE_DISABLED` | | If the variable is present, the job isn't created. |
| `code_quality` | `CODE_QUALITY_DISABLED` | | The job isn't created if the value is `"true"`. |
| `container_scanning` | `CONTAINER_SCANNING_DISABLED` | | The job isn't created if the value is `"true"`. |
| `dast` | `DAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `dast_environment_deploy` | `DAST_DISABLED_FOR_DEFAULT_BRANCH` or `DAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `dependency_scanning` | `DEPENDENCY_SCANNING_DISABLED` | | The job isn't created if the value is `"true"`. |
| `flawfinder-sast` | `SAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `gemnasium-dependency_scanning` | `DEPENDENCY_SCANNING_DISABLED` | | The job isn't created if the value is `"true"`. |
| `gemnasium-maven-dependency_scanning` | `DEPENDENCY_SCANNING_DISABLED` | | The job isn't created if the value is `"true"`. |
| `gemnasium-python-dependency_scanning` | `DEPENDENCY_SCANNING_DISABLED` | | The job isn't created if the value is `"true"`. |
| `kubesec-sast` | `SAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `license_management` | `LICENSE_MANAGEMENT_DISABLED` | GitLab 12.7 and earlier | If the variable is present, the job isn't created. Job deprecated [from GitLab 12.8](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/22773). |
| `license_scanning` | `LICENSE_MANAGEMENT_DISABLED` | | The job isn't created if the value is `"true"`. Job deprecated [from GitLab 15.9](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/111071). |
| `load_performance` | `LOAD_PERFORMANCE_DISABLED` | | If the variable is present, the job isn't created. |
| `nodejs-scan-sast` | `SAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `performance` | `PERFORMANCE_DISABLED` | GitLab 13.12 and earlier | Browser performance. If the variable is present, the job isn't created. Replaced by `browser_performance`. |
| `browser_performance` | `BROWSER_PERFORMANCE_DISABLED` | | Browser performance. If the variable is present, the job isn't created. Replaces `performance`. |
| `phpcs-security-audit-sast` | `SAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `pmd-apex-sast` | `SAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `review` | `REVIEW_DISABLED` | | If the variable is present, the job isn't created. |
| `review:stop` | `REVIEW_DISABLED` | | Manual job. If the variable is present, the job isn't created. |
| `secret_detection` | `SECRET_DETECTION_DISABLED` | | The job isn't created if the value is `"true"`. |
| `secret_detection_default_branch` | `SECRET_DETECTION_DISABLED` | | The job isn't created if the value is `"true"`. |
| `semgrep-sast` | `SAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `sobelow-sast` | `SAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `stop_dast_environment` | `DAST_DISABLED_FOR_DEFAULT_BRANCH` or `DAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `spotbugs-sast` | `SAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `test` | `TEST_DISABLED` | | If the variable is present, the job isn't created. |
| `staging` | `STAGING_ENABLED` | | The job is created if the variable is present. |
| `stop_review` | `REVIEW_DISABLED` | | If the variable is present, the job isn't created. |
## Configure application secret variables
Some deployed applications require access to secret variables.
Auto DevOps detects CI/CD variables starting with `K8S_SECRET_`,
and makes them available to the deployed application as
environment variables.
Prerequisites:
- The variable value must be a single line.
To configure secret variables:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > CI/CD**.
1. Expand **Variables**.
1. Create a CI/CD variable with the prefix `K8S_SECRET_`. For example, you
can create a variable called `K8S_SECRET_RAILS_MASTER_KEY`.
1. Run an Auto DevOps pipeline, either by manually creating a new
pipeline or by pushing a code change to GitLab.
### Kubernetes secrets
Auto DevOps pipelines use your application secret variables to
populate a Kubernetes secret. This secret is unique per environment.
When deploying your application, the secret is loaded as environment
variables in the container running the application. For example, if
you create a secret called `K8S_SECRET_RAILS_MASTER_KEY`, your
Kubernetes secret might look like:
```shell
$ kubectl get secret production-secret -n minimal-ruby-app-54 -o yaml
apiVersion: v1
data:
RAILS_MASTER_KEY: MTIzNC10ZXN0
kind: Secret
metadata:
creationTimestamp: 2018-12-20T01:48:26Z
name: production-secret
namespace: minimal-ruby-app-54
resourceVersion: "429422"
selfLink: /api/v1/namespaces/minimal-ruby-app-54/secrets/production-secret
uid: 57ac2bfd-03f9-11e9-b812-42010a9400e4
type: Opaque
```
## Update application secrets
Environment variables are generally immutable in a Kubernetes pod.
If you update an application secret and then manually
create a new pipeline, running applications do not receive the
updated secret.
To update application secrets, either:
- Push a code update to GitLab to force the Kubernetes deployment to recreate pods.
- Manually delete running pods to cause Kubernetes to create new pods with updated
  secrets (a sketch follows this list).
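Instead of deleting pods one by one, you can also trigger a rolling restart, which recreates the pods so they pick up the updated secret (a sketch; substitute your own deployment name and namespace):

```shell
kubectl rollout restart deployment/production -n my-project-production
```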
Variables with multi-line values are not supported due to
limitations with the Auto DevOps scripting environment.
## Configure replica variables
Add replica variables when you want to scale your deployments:
1. Add a replica variable as a [project CI/CD variable](../../ci/variables/_index.md#for-a-project).
1. To scale your application, redeploy it.
{{< alert type="warning" >}}
Do not scale your application using Kubernetes directly. Helm might not detect the change,
and subsequent deployments with Auto DevOps can undo your changes.
{{< /alert >}}
### Custom replica variables
You can create custom replica variables with the format `<TRACK>_<ENV>_REPLICAS`:
- `<TRACK>` is the all-caps value of the `track`
[Kubernetes label](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/)
set in the Helm Chart app definition. If `track` is not set, omit `<TRACK>` from the custom variable.
- `<ENV>` is the all-caps environment name of the deploy job set in
`.gitlab-ci.yml`.
For example, if the environment is `qa` and the track is
`foo`, create an environment variable called `FOO_QA_REPLICAS`:
```yaml
QA testing:
stage: deploy
environment:
name: qa
script:
- deploy foo
```
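You can then set the variable in the project's CI/CD settings, or directly in `.gitlab-ci.yml` (a sketch):

```yaml
variables:
  FOO_QA_REPLICAS: "3"
```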
The track `foo` must be defined in the application's Helm chart.
For example:
```yaml
replicaCount: 1
image:
repository: gitlab.example.com/group/project
tag: stable
pullPolicy: Always
secrets:
- name: gitlab-registry
application:
track: foo
tier: web
service:
enabled: true
name: web
type: ClusterIP
url: http://my.host.com/
externalPort: 5000
internalPort: 5000
```
## Deploy policy for staging and production environments
Auto DevOps typically uses continuous deployment, and pushes
automatically to the `production` environment whenever a new pipeline
runs on the default branch. To deploy to production manually, you can
use the `STAGING_ENABLED` CI/CD variable.
If you set `STAGING_ENABLED`, GitLab automatically deploys the
application to a `staging` environment. When you're ready to deploy to
production, GitLab creates a `production_manual` job.
You can also enable manual deployment in your [project settings](requirements.md#auto-devops-deployment-strategy).
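For example, a minimal sketch in `.gitlab-ci.yml` (per the job-skipping table, the `staging` job is created when the variable is present):

```yaml
variables:
  STAGING_ENABLED: "1"
```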
## Deploy policy for canary environments
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
You can use a [canary environment](../../user/project/canary_deployments.md) before
deploying any changes to production. A configuration sketch follows the list below.
If you set `CANARY_ENABLED`, GitLab creates two [manual jobs](../../ci/pipelines/_index.md#add-manual-interaction-to-your-pipeline):
- `canary` - Deploys the application to the canary environment.
- `production_manual` - Deploys the application to production.
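For example, a minimal sketch that creates both jobs (`CANARY_REPLICAS` is optional and defaults to `1`):

```yaml
variables:
  CANARY_ENABLED: "1"
  CANARY_REPLICAS: "2"   # number of canary pods
```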
## Incremental rollout to production
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
Use an incremental rollout to continuously deploy your application,
starting with only a few pods. You can increase the number of pods
manually.
You can enable manual deployment in your [project settings](requirements.md#auto-devops-deployment-strategy),
or by setting `INCREMENTAL_ROLLOUT_MODE` to `manual`.
If you set `INCREMENTAL_ROLLOUT_MODE` to `manual`, GitLab creates four
manual jobs:
1. `rollout 10%`
1. `rollout 25%`
1. `rollout 50%`
1. `rollout 100%`
The percentage is based on the `REPLICAS` CI/CD variable, and defines the number of
pods used for deployment. For example, if the value is `10` and you run the
`10%` rollout job, your application is deployed to only one pod.
You can run the rollout jobs in any order. To scale down, rerun a
lower percentage job.
After you run the `rollout 100%` job, you cannot scale down, and must
[roll back your deployment](../../ci/environments/deployments.md#retry-or-roll-back-a-deployment).
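For example, to get the four manual rollout jobs with ten total replicas (a minimal sketch):

```yaml
variables:
  INCREMENTAL_ROLLOUT_MODE: "manual"
  REPLICAS: "10"   # the `rollout 10%` job then deploys one pod
```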
### Example incremental rollout configurations
Without `INCREMENTAL_ROLLOUT_MODE` and without `STAGING_ENABLED`:

Without `INCREMENTAL_ROLLOUT_MODE` and with `STAGING_ENABLED`:

With `INCREMENTAL_ROLLOUT_MODE` set to `manual` and without `STAGING_ENABLED`:

With `INCREMENTAL_ROLLOUT_MODE` set to `manual` and with `STAGING_ENABLED`:

## Timed incremental rollout to production
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
Use a timed incremental rollout to continuously deploy your application, starting with
only a few pods.
You can enable timed incremental deployment in your [project settings](requirements.md#auto-devops-deployment-strategy),
or by setting the `INCREMENTAL_ROLLOUT_MODE` CI/CD variable to `timed`.
If you set `INCREMENTAL_ROLLOUT_MODE` to `timed`, GitLab creates four jobs:
1. `timed rollout 10%`
1. `timed rollout 25%`
1. `timed rollout 50%`
1. `timed rollout 100%`
There is a five-minute delay between jobs.
|
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: CI/CD variables
breadcrumbs:
- doc
- topics
- autodevops
---
Use CI/CD variables to set up the Auto DevOps domain, provide a custom
Helm chart, or scale your application.
## Build and deployment variables
Use these variables to customize and deploy your build.
<!-- markdownlint-disable MD056 -->
| **CI/CD variable** | **Description** |
|-----------------------------------------|-----------------|
| `ADDITIONAL_HOSTS` | Fully qualified domain names specified as a comma-separated list that are added to the Ingress hosts. |
| `<ENVIRONMENT>_ADDITIONAL_HOSTS` | For a specific environment, the fully qualified domain names specified as a comma-separated list that are added to the Ingress hosts. This takes precedence over `ADDITIONAL_HOSTS`. |
| `AUTO_BUILD_IMAGE_VERSION` | Customize the image version used for the `build` job. See [list of versions](https://gitlab.com/gitlab-org/cluster-integration/auto-build-image/-/releases). |
| `AUTO_DEPLOY_IMAGE_VERSION` | Customize the image version used for Kubernetes deployment jobs. See [list of versions](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/releases). |
| `AUTO_DEVOPS_ATOMIC_RELEASE` | Auto DevOps uses [`--atomic`](https://v2.helm.sh/docs/helm/#options-43) for Helm deployments by default. Set this variable to `false` to disable the use of `--atomic` |
| `AUTO_DEVOPS_BUILD_IMAGE_CNB_BUILDER` | The builder used when building with Cloud Native Buildpacks. The default builder is `heroku/buildpacks:22`. [More details](stages.md#auto-build-using-cloud-native-buildpacks). |
| `AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS` | Extra arguments to be passed to the `docker build` command. Using quotes doesn't prevent word splitting. [More details](customize.md#pass-arguments-to-docker-build). |
| `AUTO_DEVOPS_BUILD_IMAGE_FORWARDED_CI_VARIABLES` | A [comma-separated list of CI/CD variable names](customize.md#forward-cicd-variables-to-the-build-environment) to be forwarded to the build environment (the buildpack builder or `docker build`). |
| `AUTO_DEVOPS_BUILD_IMAGE_CNB_PORT` | In GitLab 15.0 and later, port exposed by the generated Docker image. Set to `false` to prevent exposing any ports. Defaults to `5000`. |
| `AUTO_DEVOPS_BUILD_IMAGE_CONTEXT` | Used to set the build context directory for Dockerfile and Cloud Native Buildpacks. Defaults to the root directory. |
| `AUTO_DEVOPS_CHART` | Helm Chart used to deploy your apps. Defaults to the one [provided by GitLab](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/tree/master/assets/auto-deploy-app). |
| `AUTO_DEVOPS_CHART_REPOSITORY` | Helm Chart repository used to search for charts. Defaults to `https://charts.gitlab.io`. |
| `AUTO_DEVOPS_CHART_REPOSITORY_NAME` | Used to set the name of the Helm repository. Defaults to `gitlab`. |
| `AUTO_DEVOPS_CHART_REPOSITORY_USERNAME` | Used to set a username to connect to the Helm repository. Defaults to no credentials. Also set `AUTO_DEVOPS_CHART_REPOSITORY_PASSWORD`. |
| `AUTO_DEVOPS_CHART_REPOSITORY_PASSWORD` | Used to set a password to connect to the Helm repository. Defaults to no credentials. Also set `AUTO_DEVOPS_CHART_REPOSITORY_USERNAME`. |
| `AUTO_DEVOPS_CHART_REPOSITORY_PASS_CREDENTIALS` | Set to a non-empty value to enable forwarding of the Helm repository credentials to the chart server when the chart artifacts are on a different host than repository. |
| `AUTO_DEVOPS_CHART_REPOSITORY_INSECURE` | Set to a non-empty value to add a `--insecure-skip-tls-verify` argument to the Helm commands. By default, Helm uses TLS verification. |
| `AUTO_DEVOPS_CHART_CUSTOM_ONLY` | Set to a non-empty value to use only a custom chart. By default, the latest chart is downloaded from GitLab. |
| `AUTO_DEVOPS_CHART_VERSION` | Set the version of the deployment chart. Defaults to the latest available version. |
| `AUTO_DEVOPS_COMMON_NAME` | From GitLab 15.5, set to a valid domain name to customize the common name used for the TLS certificate. Defaults to `le-$CI_PROJECT_ID.$KUBE_INGRESS_BASE_DOMAIN`. Set to `false` to not set this alternative host on the Ingress. |
| `AUTO_DEVOPS_DEPLOY_DEBUG` | If this variable is present, Helm outputs debug logs. |
| `AUTO_DEVOPS_ALLOW_TO_FORCE_DEPLOY_V<N>` | From [auto-deploy-image](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image) v1.0.0, if this variable is present, a new major version of chart is forcibly deployed. For more information, see [Ignore warnings and continue deploying](upgrading_auto_deploy_dependencies.md#ignore-warnings-and-continue-deploying). |
| `BUILDPACK_URL` | A full Buildpack URL. [Must point to a URL supported by Pack](customize.md#custom-buildpacks). |
| `CANARY_ENABLED` | Used to define a [deploy policy for canary environments](#deploy-policy-for-canary-environments). |
| `BUILDPACK_VOLUMES` | Specify one or more [Buildpack volumes to mount](stages.md#mount-volumes-into-the-build-container). Use a pipe `|` as list separator. |
| `CANARY_PRODUCTION_REPLICAS` | Number of canary replicas to deploy for [Canary Deployments](../../user/project/canary_deployments.md) in the production environment. Takes precedence over `CANARY_REPLICAS`. Defaults to 1. |
| `CANARY_REPLICAS` | Number of canary replicas to deploy for [Canary Deployments](../../user/project/canary_deployments.md). Defaults to 1. |
| `CI_APPLICATION_REPOSITORY` | The repository of container image being built or deployed, `$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG`. For more details, read [Custom container image](customize.md#custom-container-image). |
| `CI_APPLICATION_TAG` | The tag of the container image being built or deployed, `$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG`. For more details, read [Custom container image](customize.md#custom-container-image). |
| `DAST_AUTO_DEPLOY_IMAGE_VERSION` | Customize the image version used for DAST deployments on the default branch. Should usually be the same as `AUTO_DEPLOY_IMAGE_VERSION`. See [list of versions](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/releases). |
| `DOCKERFILE_PATH` | Allows overriding the [default Dockerfile path for the build stage](customize.md#custom-dockerfiles) |
| `HELM_RELEASE_NAME` | Allows the `helm` release name to be overridden. Can be used to assign unique release names when deploying multiple projects to a single namespace. |
| `HELM_UPGRADE_VALUES_FILE` | Allows the `helm upgrade` values file to be overridden. Defaults to `.gitlab/auto-deploy-values.yaml`. |
| `HELM_UPGRADE_EXTRA_ARGS` | Allows extra options in `helm upgrade` commands when deploying the application. Using quotes doesn't prevent word splitting. |
| `INCREMENTAL_ROLLOUT_MODE` | If present, can be used to enable an [incremental rollout](#incremental-rollout-to-production) of your application for the production environment. Set to `manual` for manual deployment jobs or `timed` for automatic rollout deployments with a 5 minute delay each one. |
| `K8S_SECRET_*` | Any variable prefixed with [`K8S_SECRET_`](#configure-application-secret-variables) is made available by Auto DevOps as environment variables to the deployed application. |
| `KUBE_CONTEXT` | Can be used to select a context to use from `KUBECONFIG`. When `KUBE_CONTEXT` is blank, the default context in `KUBECONFIG` (if any) is used. A context must be selected when used [with the agent for Kubernetes](../../user/clusters/agent/ci_cd_workflow.md). |
| `KUBE_INGRESS_BASE_DOMAIN` | Can be used to set a domain per cluster. See [cluster domains](../../user/project/clusters/gitlab_managed_clusters.md#base-domain) for more information. |
| `KUBE_NAMESPACE` | The namespace used for deployments. When using certificate-based clusters, [this value should not be overwritten directly](../../user/project/clusters/deploy_to_cluster.md#custom-namespace). |
| `KUBECONFIG` | The kubeconfig to use for deployments. User-provided values take priority over GitLab-provided values. |
| `PRODUCTION_REPLICAS` | Number of replicas to deploy in the production environment. Takes precedence over `REPLICAS` and defaults to 1. For zero-downtime upgrades, set to 2 or greater. |
| `REPLICAS` | Number of replicas to deploy. Defaults to 1. Change this variable instead of [modifying](customize.md#customize-helm-chart-values) `replicaCount`. |
| `ROLLOUT_RESOURCE_TYPE` | Allows specification of the resource type being deployed when using a custom Helm chart. Default value is `deployment`. |
| `ROLLOUT_STATUS_DISABLED` | Used to disable rollout status check because it does not support all resource types, for example, `cronjob`. |
| `STAGING_ENABLED` | Used to define a [deploy policy for staging and production environments](#deploy-policy-for-staging-and-production-environments). |
| `TRACE` | Set to any value to make Helm commands produce verbose output. You can use this setting to help diagnose Auto DevOps deployment problems. |
<!-- markdownlint-enable MD056 -->
## Database variables
{{< alert type="warning" >}}
From [GitLab 16.0](https://gitlab.com/gitlab-org/gitlab/-/issues/343988), `POSTGRES_ENABLED` is no longer set by default.
{{< /alert >}}
Use these variables to integrate CI/CD with PostgreSQL databases.
| **CI/CD variable** | **Description** |
|-----------------------------------------|------------------------------------|
| `DB_INITIALIZE` | Used to specify the command to run to initialize the application's PostgreSQL database. Runs inside the application pod. |
| `DB_MIGRATE` | Used to specify the command to run to migrate the application's PostgreSQL database. Runs inside the application pod. |
| `POSTGRES_ENABLED` | Whether PostgreSQL is enabled. Set to `true` to enable the automatic deployment of PostgreSQL. |
| `POSTGRES_USER` | The PostgreSQL user. Defaults to `user`. Set it to use a custom username. |
| `POSTGRES_PASSWORD` | The PostgreSQL password. Defaults to `testing-password`. Set it to use a custom password. |
| `POSTGRES_DB` | The PostgreSQL database name. Defaults to the value of [`$CI_ENVIRONMENT_SLUG`](../../ci/variables/_index.md#predefined-cicd-variables). Set it to use a custom database name. |
| `POSTGRES_VERSION` | Tag for the [`postgres` Docker image](https://hub.docker.com/_/postgres) to use. Defaults to `9.6.16` for tests and deployments. If `AUTO_DEVOPS_POSTGRES_CHANNEL` is set to `1`, deployments use the default version `9.6.2`. |
| `POSTGRES_HELM_UPGRADE_VALUES_FILE` | When using [auto-deploy-image v2](upgrading_auto_deploy_dependencies.md), this variable allows the `helm upgrade` values file for PostgreSQL to be overridden. Defaults to `.gitlab/auto-deploy-postgres-values.yaml`. |
| `POSTGRES_HELM_UPGRADE_EXTRA_ARGS` | When using [auto-deploy-image v2](upgrading_auto_deploy_dependencies.md), this variable allows extra PostgreSQL options in `helm upgrade` commands when deploying the application. Using quotes doesn't prevent word splitting. |
| `POSTGRES_CHART_REPOSITORY` | Helm chart repository used to search for the PostgreSQL chart. Defaults to `https://raw.githubusercontent.com/bitnami/charts/eb5f9a9513d987b519f0ecd732e7031241c50328/bitnami`. |
| `POSTGRES_CHART_VERSION` | Helm chart version used for the PostgreSQL chart. Defaults to `8.2.1`. |
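For example, a sketch that enables the in-cluster database and customizes it. The command values are hypothetical and depend on your framework:

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  POSTGRES_ENABLED: "true"                       # opt in, because GitLab 16.0 no longer sets this
  POSTGRES_VERSION: "9.6.16"
  POSTGRES_DB: my_app_db                         # hypothetical database name
  DB_INITIALIZE: "bundle exec rails db:setup"    # hypothetical Rails init command
  DB_MIGRATE: "bundle exec rails db:migrate"     # hypothetical Rails migration command
```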
## Job-skipping variables
Use these variables to skip specific types of CI/CD jobs. When skipped, the CI/CD jobs don't get created or run.
| **Job name** | **CI/CD variable** | **GitLab version** | **Description** |
|----------------------------------------|---------------------------------|-----------------------|-----------------|
| `.fuzz_base` | `COVFUZZ_DISABLED` | | [Read more](../../user/application_security/coverage_fuzzing/_index.md) about how `.fuzz_base` provides the base for your own fuzzing jobs. The job isn't created if the value is `"true"`. |
| `apifuzzer_fuzz` | `API_FUZZING_DISABLED` | | The job isn't created if the value is `"true"`. |
| `build` | `BUILD_DISABLED` | | If the variable is present, the job isn't created. |
| `build_artifact` | `BUILD_DISABLED` | | If the variable is present, the job isn't created. |
| `brakeman-sast` | `SAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `canary` | `CANARY_ENABLED` | | This manual job is created if the variable is present. |
| `code_intelligence` | `CODE_INTELLIGENCE_DISABLED` | | If the variable is present, the job isn't created. |
| `code_quality` | `CODE_QUALITY_DISABLED` | | The job isn't created if the value is `"true"`. |
| `container_scanning` | `CONTAINER_SCANNING_DISABLED` | | The job isn't created if the value is `"true"`. |
| `dast` | `DAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `dast_environment_deploy` | `DAST_DISABLED_FOR_DEFAULT_BRANCH` or `DAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `dependency_scanning` | `DEPENDENCY_SCANNING_DISABLED` | | The job isn't created if the value is `"true"`. |
| `flawfinder-sast` | `SAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `gemnasium-dependency_scanning` | `DEPENDENCY_SCANNING_DISABLED` | | The job isn't created if the value is `"true"`. |
| `gemnasium-maven-dependency_scanning` | `DEPENDENCY_SCANNING_DISABLED` | | The job isn't created if the value is `"true"`. |
| `gemnasium-python-dependency_scanning` | `DEPENDENCY_SCANNING_DISABLED` | | The job isn't created if the value is `"true"`. |
| `kubesec-sast` | `SAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `license_management` | `LICENSE_MANAGEMENT_DISABLED` | GitLab 12.7 and earlier | If the variable is present, the job isn't created. Job deprecated [from GitLab 12.8](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/22773). |
| `license_scanning` | `LICENSE_MANAGEMENT_DISABLED` | | The job isn't created if the value is `"true"`. Job deprecated [from GitLab 15.9](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/111071). |
| `load_performance` | `LOAD_PERFORMANCE_DISABLED` | | If the variable is present, the job isn't created. |
| `nodejs-scan-sast` | `SAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `performance` | `PERFORMANCE_DISABLED` | GitLab 13.12 and earlier | Browser performance. If the variable is present, the job isn't created. Replaced by `browser_performance`. |
| `browser_performance` | `BROWSER_PERFORMANCE_DISABLED` | | Browser performance. If the variable is present, the job isn't created. Replaces `performance`. |
| `phpcs-security-audit-sast` | `SAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `pmd-apex-sast` | `SAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `review` | `REVIEW_DISABLED` | | If the variable is present, the job isn't created. |
| `review:stop` | `REVIEW_DISABLED` | | Manual job. If the variable is present, the job isn't created. |
| `secret_detection` | `SECRET_DETECTION_DISABLED` | | The job isn't created if the value is `"true"`. |
| `secret_detection_default_branch` | `SECRET_DETECTION_DISABLED` | | The job isn't created if the value is `"true"`. |
| `semgrep-sast` | `SAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `sobelow-sast` | `SAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `stop_dast_environment` | `DAST_DISABLED_FOR_DEFAULT_BRANCH` or `DAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `spotbugs-sast` | `SAST_DISABLED` | | The job isn't created if the value is `"true"`. |
| `test` | `TEST_DISABLED` | | If the variable is present, the job isn't created. |
| `staging` | `STAGING_ENABLED` | | The job is created if the variable is present. |
| `stop_review` | `REVIEW_DISABLED` | | If the variable is present, the job isn't created. |
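For example, to skip several job types in a single pipeline, you might set the corresponding variables together. This is a sketch; choose the variables that match the jobs you want to skip:

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  TEST_DISABLED: "true"            # presence alone disables the test job
  CODE_QUALITY_DISABLED: "true"
  DAST_DISABLED: "true"
```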
## Configure application secret variables
Some deployed applications require access to secret variables.
Auto DevOps detects CI/CD variables starting with `K8S_SECRET_`,
and makes them available to the deployed application as
environment variables.
Prerequisites:
- The variable value must be a single line.
To configure secret variables:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > CI/CD**.
1. Expand **Variables**.
1. Create a CI/CD variable with the prefix `K8S_SECRET_`. For example, you
can create a variable called `K8S_SECRET_RAILS_MASTER_KEY`.
1. Run an Auto DevOps pipeline, either by manually creating a new
pipeline or by pushing a code change to GitLab.
### Kubernetes secrets
Auto DevOps pipelines use your application secret variables to
populate a Kubernetes secret. This secret is unique per environment.
When deploying your application, the secret is loaded as environment
variables in the container running the application. For example, if
you create a secret called `K8S_SECRET_RAILS_MASTER_KEY`, your
Kubernetes secret might look like:
```shell
$ kubectl get secret production-secret -n minimal-ruby-app-54 -o yaml
apiVersion: v1
data:
RAILS_MASTER_KEY: MTIzNC10ZXN0
kind: Secret
metadata:
creationTimestamp: 2018-12-20T01:48:26Z
name: production-secret
namespace: minimal-ruby-app-54
resourceVersion: "429422"
selfLink: /api/v1/namespaces/minimal-ruby-app-54/secrets/production-secret
uid: 57ac2bfd-03f9-11e9-b812-42010a9400e4
type: Opaque
```
## Update application secrets
Environment variables are generally immutable in a Kubernetes pod.
If you update an application secret and then manually
create a new pipeline, running applications do not receive the
updated secret.
To update application secrets, either:
- Push a code update to GitLab to force the Kubernetes deployment to recreate pods.
- Manually delete running pods to cause Kubernetes to create new pods with updated
secrets.
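For example, assuming the deployment and namespace names from the earlier `kubectl` output, you might delete the running pods like this. The label selector is illustrative and depends on your chart:

```shell
# Delete the pods for the production environment so Kubernetes
# recreates them with the updated secret values.
kubectl delete pods -n minimal-ruby-app-54 -l app=production
```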
Variables with multi-line values are not supported due to
limitations with the Auto DevOps scripting environment.
## Configure replica variables
Add replica variables when you want to scale your deployments:
1. Add a replica variable as a [project CI/CD variable](../../ci/variables/_index.md#for-a-project).
1. To scale your application, redeploy it.
{{< alert type="warning" >}}
Do not scale your application using Kubernetes directly. Helm might not detect the change,
and subsequent deployments with Auto DevOps can undo your changes.
{{< /alert >}}
### Custom replica variables
You can create custom replica variables with the format `<TRACK>_<ENV>_REPLICAS`:
- `<TRACK>` is the all-caps value of the `track`
[Kubernetes label](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/)
set in the Helm Chart app definition. If `track` is not set, omit `<TRACK>` from the custom variable.
- `<ENV>` is the all-caps environment name of the deploy job set in
`.gitlab-ci.yml`.
For example, if the environment is `qa` and the track is
`foo`, create an environment variable called `FOO_QA_REPLICAS`:
```yaml
QA testing:
stage: deploy
environment:
name: qa
script:
- deploy foo
```
The track `foo` must be defined in the application's Helm chart.
For example:
```yaml
replicaCount: 1
image:
repository: gitlab.example.com/group/project
tag: stable
pullPolicy: Always
secrets:
- name: gitlab-registry
application:
track: foo
tier: web
service:
enabled: true
name: web
type: ClusterIP
url: http://my.host.com/
externalPort: 5000
internalPort: 5000
```
## Deploy policy for staging and production environments
Auto DevOps typically uses continuous deployment, and pushes
automatically to the `production` environment whenever a new pipeline
runs on the default branch. To deploy to production manually, you can
use the `STAGING_ENABLED` CI/CD variable.
If you set `STAGING_ENABLED`, GitLab automatically deploys the
application to a `staging` environment. When you're ready to deploy to
production, GitLab creates a `production_manual` job.
You can also enable manual deployment in your [project settings](requirements.md#auto-devops-deployment-strategy).
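For example, a sketch that enables the staging deploy policy from `.gitlab-ci.yml`:

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  STAGING_ENABLED: "1"   # deploy to staging automatically; promote to production manually
```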
## Deploy policy for canary environments
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
You can use a [canary environment](../../user/project/canary_deployments.md) before
deploying any changes to production.
If you set `CANARY_ENABLED`, GitLab creates two [manual jobs](../../ci/pipelines/_index.md#add-manual-interaction-to-your-pipeline):
- `canary` - Deploys the application to the canary environment.
- `production_manual` - Deploys the application to production.
## Incremental rollout to production
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
Use an incremental rollout to continuously deploy your application,
starting with only a few pods. You can increase the number of pods
manually.
You can enable manual deployment in your [project settings](requirements.md#auto-devops-deployment-strategy),
or by setting `INCREMENTAL_ROLLOUT_MODE` to `manual`.
If you set `INCREMENTAL_ROLLOUT_MODE` to `manual`, GitLab creates four
manual jobs:
1. `rollout 10%`
1. `rollout 25%`
1. `rollout 50%`
1. `rollout 100%`
The percentage is based on the `REPLICAS` CI/CD variable, and defines the number of
pods used for deployment. For example, if the value is `10` and you run the
`10%` rollout job, your application is deployed to only one pod.
You can run the rollout jobs in any order. To scale down, rerun a
lower percentage job.
After you run the `rollout 100%` job, you cannot scale down, and must
[roll back your deployment](../../ci/environments/deployments.md#retry-or-roll-back-a-deployment).
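For example, a sketch that combines a manual incremental rollout with ten replicas, so each percentage maps to a whole number of pods:

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  INCREMENTAL_ROLLOUT_MODE: "manual"
  REPLICAS: "10"   # rollout 10% = 1 pod, rollout 50% = 5 pods, rollout 100% = 10 pods
```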
### Example incremental rollout configurations
Without `INCREMENTAL_ROLLOUT_MODE` and without `STAGING_ENABLED`:

Without `INCREMENTAL_ROLLOUT_MODE` and with `STAGING_ENABLED`:

With `INCREMENTAL_ROLLOUT_MODE` set to `manual` and without `STAGING_ENABLED`:

With `INCREMENTAL_ROLLOUT_MODE` set to `manual` and with `STAGING_ENABLED`:

## Timed incremental rollout to production
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
Use a timed incremental rollout to continuously deploy your application, starting with
only a few pods.
You can enable timed incremental deployment in your [project settings](requirements.md#auto-devops-deployment-strategy),
or by setting the `INCREMENTAL_ROLLOUT_MODE` CI/CD variable to `timed`.
If you set `INCREMENTAL_ROLLOUT_MODE` to `timed`, GitLab creates four jobs:
1. `timed rollout 10%`
1. `timed rollout 25%`
1. `timed rollout 50%`
1. `timed rollout 100%`
There is a five-minute delay between jobs.
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Customize Auto DevOps
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
You can customize components of Auto DevOps to fit your needs. For example, you can:
- Add custom [buildpacks](#custom-buildpacks), [Dockerfiles](#custom-dockerfiles), and [Helm charts](#custom-helm-chart).
- Enable staging and canary deployments with a custom [CI/CD configuration](#customize-gitlab-ciyml).
- Extend Auto DevOps with the [GitLab API](#extend-auto-devops-with-the-api).
## Auto DevOps banner
When Auto DevOps is not enabled, a banner displays for users with at
least the Maintainer role:

The banner can be disabled for:
- A user, when they dismiss it themselves.
- A project, by explicitly [disabling Auto DevOps](_index.md#enable-or-disable-auto-devops).
- An entire GitLab instance:
- By an administrator running the following in a Rails console:
```ruby
Feature.enable(:auto_devops_banner_disabled)
```
- Through the REST API with an administrator access token:
```shell
curl --data "value=true" --header "PRIVATE-TOKEN: <personal_access_token>" "https://gitlab.example.com/api/v4/features/auto_devops_banner_disabled"
```
## Custom buildpacks
You can customize your buildpacks when either:
- The automatic buildpack detection fails for your project.
- You need more control over your build.
### Customize buildpacks with Cloud Native Buildpacks
Specify either:
- The CI/CD variable `BUILDPACK_URL` with any of [`pack`'s URI specification formats](https://buildpacks.io/docs/app-developer-guide/specify-buildpacks/).
- A [`project.toml` project descriptor](https://buildpacks.io/docs/app-developer-guide/using-project-descriptor/) with the buildpacks you would like to include.
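For example, a sketch that points `BUILDPACK_URL` at a specific buildpack. The value shown is illustrative; any of `pack`'s URI formats works:

```yaml
variables:
  BUILDPACK_URL: heroku/ruby   # hypothetical buildpack reference
```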
### Multiple buildpacks
Because Auto Test cannot use the `.buildpacks` file, Auto DevOps does
not support multiple buildpacks. The buildpack
[heroku-buildpack-multi](https://github.com/heroku/heroku-buildpack-multi/),
used in the backend to parse the `.buildpacks` file, does not provide
the necessary commands `bin/test-compile` and `bin/test`.
To use only a single custom buildpack, you should provide the project CI/CD variable
`BUILDPACK_URL` instead.
## Custom Dockerfiles
If you have a Dockerfile in the root of your project repository, Auto
DevOps builds a Docker image based on the Dockerfile. This can be
faster than using a buildpack. It can also result in smaller images,
especially if your Dockerfile is based on
[Alpine](https://hub.docker.com/_/alpine/).
If you set the `DOCKERFILE_PATH` CI/CD variable, Auto Build looks for a Dockerfile there
instead.
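For example, a sketch that points Auto Build at a Dockerfile outside the repository root. The path is hypothetical:

```yaml
variables:
  DOCKERFILE_PATH: docker/production.Dockerfile
```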
### Pass arguments to `docker build`
You can pass arguments to `docker build` with the
`AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS` project CI/CD variable.
For example, to build a Docker image based on
`ruby:alpine` instead of the default `ruby:latest`:
1. Set `AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS` to `--build-arg=RUBY_VERSION=alpine`.
1. Add the following to a custom Dockerfile:
```dockerfile
ARG RUBY_VERSION=latest
FROM ruby:$RUBY_VERSION
# Include your content here
```
To pass complex values like spaces and newlines, use Base64 encoding.
Complex, unencoded values can cause issues with character escaping.
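For example, a sketch of encoding a multi-word value on the command line before storing the encoded string as the variable's value:

```shell
# Encode the value (illustrative); use the output as the variable's value.
echo -n 'multi word value' | base64
# bXVsdGkgd29yZCB2YWx1ZQ==
```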
{{< alert type="warning" >}}
Do not pass secrets as Docker build arguments. Secrets might persist in your image. For more information, see
[this discussion of best practices with secrets](https://github.com/moby/moby/issues/13490).
{{< /alert >}}
## Custom container image
By default, [Auto Deploy](stages.md#auto-deploy) deploys a container image built and pushed to the GitLab registry by [Auto Build](stages.md#auto-build).
You can override this behavior by defining specific variables:
| Entry | Default | Can be overridden by |
| ----- | ----- | ----- |
| Image Path | `$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG` for branch pipelines. `$CI_REGISTRY_IMAGE` for tag pipelines. | `$CI_APPLICATION_REPOSITORY` |
| Image Tag | `$CI_COMMIT_SHA` for branch pipelines. `$CI_COMMIT_TAG` for tag pipelines. | `$CI_APPLICATION_TAG` |
These variables also affect Auto Build and Auto Container Scanning. If you don't want to build and push an image to
`$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG`, include only `Jobs/Deploy.gitlab-ci.yml`, or
[skip the `build` jobs](cicd_variables.md#job-skipping-variables).
If you use Auto Container Scanning and set a value for `$CI_APPLICATION_REPOSITORY`, then you should
also update `$CS_DEFAULT_BRANCH_IMAGE`. For more information, see
[Setting the default branch image](../../user/application_security/container_scanning/_index.md#setting-the-default-branch-image).
Here is an example setup in your `.gitlab-ci.yml`:
```yaml
variables:
CI_APPLICATION_REPOSITORY: <your-image-repository>
CI_APPLICATION_TAG: <the-tag>
```
## Extend Auto DevOps with the API
You can extend and manage your Auto DevOps configuration with GitLab APIs:
- [Use API calls to access settings](../../api/settings.md#available-settings),
which include `auto_devops_enabled`, to enable Auto DevOps on projects by default.
- [Create a new project](../../api/projects.md#create-a-project).
- [Edit groups](../../api/groups.md#update-group-attributes).
- [Edit projects](../../api/projects.md#edit-a-project).
## Forward CI/CD variables to the build environment
To forward CI/CD variables to the build environment, add the names of the variables
you want to forward to the `AUTO_DEVOPS_BUILD_IMAGE_FORWARDED_CI_VARIABLES` CI/CD variable.
Separate multiple variables with commas.
For example, to forward the variables `CI_COMMIT_SHA` and `CI_ENVIRONMENT_NAME`:
```yaml
variables:
AUTO_DEVOPS_BUILD_IMAGE_FORWARDED_CI_VARIABLES: CI_COMMIT_SHA,CI_ENVIRONMENT_NAME
```
If you use buildpacks, the forwarded variables are available automatically as environment variables.
If you use a Dockerfile:
1. To activate the experimental Dockerfile syntax, add the following to your Dockerfile:
```dockerfile
# syntax = docker/dockerfile:experimental
```
1. To make secrets available in any `RUN $COMMAND` in the `Dockerfile`, mount
the secret file and source it before you run `$COMMAND`:
```dockerfile
RUN --mount=type=secret,id=auto-devops-build-secrets . /run/secrets/auto-devops-build-secrets && $COMMAND
```
When `AUTO_DEVOPS_BUILD_IMAGE_FORWARDED_CI_VARIABLES` is set, Auto DevOps
enables the experimental [Docker BuildKit](https://docs.docker.com/build/buildkit/)
feature to use the `--secret` flag.
## Custom Helm chart
Auto DevOps uses [Helm](https://helm.sh/) to deploy your application to Kubernetes.
You can override the Helm chart used by bundling a chart in your project
repository or by specifying a project CI/CD variable:
- **Bundled chart** - If your project has a `./chart` directory with a `Chart.yaml`
file in it, Auto DevOps detects the chart and uses it instead of the
[default chart](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/tree/master/assets/auto-deploy-app).
- **Project variable** - Create a [project CI/CD variable](../../ci/variables/_index.md)
`AUTO_DEVOPS_CHART` with the URL of a custom chart. You can also create five project
variables:
- `AUTO_DEVOPS_CHART_REPOSITORY` - The URL of a custom chart repository.
- `AUTO_DEVOPS_CHART` - The path to the chart.
- `AUTO_DEVOPS_CHART_REPOSITORY_INSECURE` - Set to a non-empty value to add a `--insecure-skip-tls-verify` argument to the Helm commands.
- `AUTO_DEVOPS_CHART_CUSTOM_ONLY` - Set to a non-empty value to use only a custom chart. By default, the latest chart is downloaded from GitLab.
- `AUTO_DEVOPS_CHART_VERSION` - The version of the deployment chart.
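For example, a sketch that pulls a chart from a custom repository. All values are hypothetical:

```yaml
variables:
  AUTO_DEVOPS_CHART_REPOSITORY: https://charts.example.com
  AUTO_DEVOPS_CHART: my-repo/my-app-chart
  AUTO_DEVOPS_CHART_VERSION: "1.2.3"
```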
### Customize Helm chart values
To override the default values in the `values.yaml` file in the
[default Helm chart](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/tree/master/assets/auto-deploy-app), either:
- Add a file named `.gitlab/auto-deploy-values.yaml` to your repository. This file is used by default for Helm upgrades.
- Add a file with a different name or path to the repository. Set the
`HELM_UPGRADE_VALUES_FILE` [CI/CD variable](cicd_variables.md) with the path and name of the file.
Some values cannot be overridden with the previous options, but [this issue](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/issues/31) proposes to change this behavior.
To override settings like `replicaCount`, use the `REPLICAS` [build and deployment](cicd_variables.md#build-and-deployment-variables) CI/CD variable.
### Customize `helm upgrade`
The [auto-deploy-image](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image) uses the `helm upgrade` command.
To customize this command, pass it options with the `HELM_UPGRADE_EXTRA_ARGS` CI/CD variable.
For example, to disable pre- and post-upgrade hooks when `helm upgrade` runs:
```yaml
variables:
HELM_UPGRADE_EXTRA_ARGS: --no-hooks
```
For a full list of options, see [the official `helm upgrade` documentation](https://helm.sh/docs/helm/helm_upgrade/).
### Limit a Helm chart to one environment
To limit a custom chart to one environment, add the environment scope to your CI/CD variables.
For more information, see [Limit the environment scope of CI/CD variables](../../ci/environments/_index.md#limit-the-environment-scope-of-a-cicd-variable).
## Customize `.gitlab-ci.yml`
Auto DevOps is highly customizable because the [Auto DevOps template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Auto-DevOps.gitlab-ci.yml)
is an implementation of a `.gitlab-ci.yml` file.
The template uses only features available to any implementation of `.gitlab-ci.yml`.
To add custom behaviors to the CI/CD pipeline used by Auto DevOps:
1. To the root of your repository, add a `.gitlab-ci.yml` file with the following contents:
```yaml
include:
- template: Auto-DevOps.gitlab-ci.yml
```
1. Add your changes to the `.gitlab-ci.yml` file. Your changes are merged with the Auto DevOps template. For more information about
how `include` merges your changes, see [the `include` documentation](../../ci/yaml/_index.md#include).
To remove behaviors from the Auto DevOps pipeline:
1. Copy the [Auto DevOps template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Auto-DevOps.gitlab-ci.yml)
into your project.
1. Edit your copy of the template as needed.
### Use individual components of Auto DevOps
If you only require a subset of the features offered by Auto DevOps,
you can include individual Auto DevOps jobs in your own
`.gitlab-ci.yml`. Be sure to also define the stage required by each
job in your `.gitlab-ci.yml` file.
For example, to use [Auto Build](stages.md#auto-build), you can add the following to
your `.gitlab-ci.yml`:
```yaml
stages:
- build
include:
- template: Jobs/Build.gitlab-ci.yml
```
For a list of available jobs, see the [Auto DevOps template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Auto-DevOps.gitlab-ci.yml).
## Use multiple Kubernetes clusters
See [Multiple Kubernetes clusters for Auto DevOps](multiple_clusters_auto_devops.md).
## Customizing the Kubernetes namespace
In GitLab 14.5 and earlier, you could use `environment:kubernetes:namespace`
to specify a namespace for the environment.
However, this feature was [deprecated](https://gitlab.com/groups/gitlab-org/configure/-/epics/8),
along with certificate-based integration.
You should now use the `KUBE_NAMESPACE` environment variable and
[limit its environment scope](../../ci/environments/_index.md#limit-the-environment-scope-of-a-cicd-variable).
## Use images hosted in a local Docker registry
You can configure many Auto DevOps jobs to run in an [offline environment](../../user/application_security/offline_deployments/_index.md):
1. Copy the required Auto DevOps Docker images from Docker Hub and `registry.gitlab.com` to their local GitLab container registry.
1. After the images are hosted and available in a local registry, edit `.gitlab-ci.yml` to point to the locally hosted images. For example:
```yaml
include:
- template: Auto-DevOps.gitlab-ci.yml
variables:
REGISTRY_URL: "registry.gitlab.example"
build:
image: "$REGISTRY_URL/docker/auto-build-image:v0.6.0"
services:
- name: "$REGISTRY_URL/greg/docker/docker:20.10.16-dind"
command: ['--tls=false', '--host=tcp://0.0.0.0:2375']
```
## PostgreSQL database support
{{< alert type="warning" >}}
Provisioning a PostgreSQL database by default was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/387766)
in GitLab 15.8 and will no longer be the default from 16.0. To enable database provisioning, set
the associated [CI/CD variable](cicd_variables.md#database-variables).
{{< /alert >}}
To support applications that require a database,
[PostgreSQL](https://www.postgresql.org/) is provisioned by default.
The credentials to access the database are preconfigured.
To customize the credentials, set the associated
[CI/CD variables](cicd_variables.md). You can also
define a custom `DATABASE_URL`:
```plaintext
postgres://user:password@postgres-host:postgres-port/postgres-database
```
### Upgrading PostgreSQL
GitLab uses chart version 8.2.1 to provision PostgreSQL by default.
You can set the version from 0.7.1 to 8.2.1.
If you use an older chart version, you should [migrate your database](upgrading_postgresql.md)
to the newer PostgreSQL.
The CI/CD variable `AUTO_DEVOPS_POSTGRES_CHANNEL` that controls default provisioned
PostgreSQL changed to `2` in [GitLab 13.0](https://gitlab.com/gitlab-org/gitlab/-/issues/210499).
To use the old PostgreSQL, set the `AUTO_DEVOPS_POSTGRES_CHANNEL` variable to
`1`.
### Customize values for PostgreSQL Helm Chart
To set custom values, do one of the following:
- Add a file named `.gitlab/auto-deploy-postgres-values.yaml` to your repository. If found, this
  file is used by default for PostgreSQL Helm upgrades.
- Add a file with a different name or path to the repository, and set the
`POSTGRES_HELM_UPGRADE_VALUES_FILE` [environment variable](cicd_variables.md#database-variables) with the path
and name.
- Set the `POSTGRES_HELM_UPGRADE_EXTRA_ARGS` [environment variable](cicd_variables.md#database-variables).
### Use external PostgreSQL database providers
Auto DevOps provides out-of-the-box support for a PostgreSQL container
for production environments. However, you might want to use an
external managed provider like AWS Relational Database Service.
To use an external managed provider:
1. Disable the built-in PostgreSQL installation for the required environments with
environment-scoped [CI/CD variables](../../ci/environments/_index.md#limit-the-environment-scope-of-a-cicd-variable).
Because the built-in PostgreSQL setup for review apps and staging is sufficient, you might only need to
disable the installation for `production`.

1. Define the `DATABASE_URL` variable as an environment-scoped variable
available to your application. This should be a URL in the following format:
```plaintext
postgres://user:password@postgres-host:postgres-port/postgres-database
```
1. Ensure your Kubernetes cluster has network access to where PostgreSQL is hosted.
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Troubleshooting Auto DevOps
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
This page describes common errors you might encounter when using
Auto DevOps, and the available workarounds.
## Trace Helm commands
Set the CI/CD variable `TRACE` to any value to make Helm commands produce verbose output. You can use this output to diagnose Auto DevOps deployment problems.
You can resolve some problems with Auto DevOps deployment by changing advanced Auto DevOps configuration variables. Read more about [customizing Auto DevOps CI/CD variables](cicd_variables.md).
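For example, a minimal sketch that enables tracing for every Auto DevOps job:

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  TRACE: "1"   # any value enables verbose Helm output
```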
## Unable to select a buildpack
Auto Test may fail to detect your language or framework with the
following error:
```plaintext
Step 5/11 : RUN /bin/herokuish buildpack build
---> Running in eb468cd46085
-----> Unable to select a buildpack
The command '/bin/sh -c /bin/herokuish buildpack build' returned a non-zero code: 1
```
The following are possible reasons:
- Your application may be missing the key files the buildpack is looking for.
Ruby applications require a `Gemfile` to be properly detected,
even though it's possible to write a Ruby app without a `Gemfile`.
- No buildpack may exist for your application. Try specifying a
[custom buildpack](customize.md#custom-buildpacks).
## Builder sunset error
Because of this [Heroku update](https://github.com/heroku/cnb-builder-images/pull/478), legacy shimmed `heroku/buildpacks:20` and `heroku/builder-classic:22` images now generate errors instead of warnings.
To resolve this issue, you should migrate to the `heroku/builder:*` builder images. As a temporary workaround, you can also set an environment variable to skip the errors.
### Migrating to `heroku/builder:*`
Before you migrate, you should read the release notes for each [spec release](https://github.com/buildpacks/spec/releases) to determine potential breaking changes.
In this case, the relevant buildpack API versions are 0.6 and 0.7.
These breaking changes are especially relevant to buildpack maintainers.
For more information about the changes, you can also diff the [spec itself](https://github.com/buildpacks/spec/compare/buildpack/v0.5...buildpack/v0.7#files_bucket).
### Skipping errors
As a temporary workaround, you can skip the errors by setting and forwarding the `ALLOW_EOL_SHIMMED_BUILDER` environment variable:
```yaml
variables:
ALLOW_EOL_SHIMMED_BUILDER: "1"
AUTO_DEVOPS_BUILD_IMAGE_FORWARDED_CI_VARIABLES: ALLOW_EOL_SHIMMED_BUILDER
```
## Pipeline that extends Auto DevOps with only / except fails
If your pipeline fails with the following message:
```plaintext
Unable to create pipeline
jobs:test config key may not be used with `rules`: only
```
This error appears when the included job's rules configuration has been overridden with the `only` or `except` syntax.
To fix this issue, you must either:
- Transition your `only/except` syntax to `rules`.
- (Temporarily) Pin your templates to the [GitLab 12.10 based templates](https://gitlab.com/gitlab-org/auto-devops-v12-10).
## Failure to create a Kubernetes namespace
Auto Deploy fails if GitLab can't create a Kubernetes namespace and
service account for your project. For help debugging this issue, see
[Troubleshooting failed deployment jobs](../../user/project/clusters/deploy_to_cluster.md#troubleshooting).
## Detected an existing PostgreSQL database
After upgrading to GitLab 13.0, you may encounter this message when deploying
with Auto DevOps:
```plaintext
Detected an existing PostgreSQL database installed on the
deprecated channel 1, but the current channel is set to 2. The default
channel changed to 2 in GitLab 13.0.
[...]
```
Auto DevOps, by default, installs an in-cluster PostgreSQL database alongside
your application. The default installation method changed in GitLab 13.0, and
upgrading existing databases requires user involvement. The two installation
methods are:
- **channel 1 (deprecated)**: Pulls in the database as a dependency of the associated
Helm chart. Only supports Kubernetes versions up to version 1.15.
- **channel 2 (current)**: Installs the database as an independent Helm chart. Required
for using the in-cluster database feature with Kubernetes versions 1.16 and greater.
If you receive this error, you can do one of the following actions:
- You can safely ignore the warning and continue using the channel 1 PostgreSQL
database by setting `AUTO_DEVOPS_POSTGRES_CHANNEL` to `1` and redeploying.
- You can delete the channel 1 PostgreSQL database and install a fresh channel 2
database by setting `AUTO_DEVOPS_POSTGRES_DELETE_V1` to a non-empty value and
redeploying.
{{< alert type="warning" >}}
Deleting the channel 1 PostgreSQL database permanently deletes the existing
channel 1 database and all its data. See
[Upgrading PostgreSQL](upgrading_postgresql.md)
for more information on backing up and upgrading your database.
{{< /alert >}}
- If you are not using the in-cluster database, you can set
`POSTGRES_ENABLED` to `false` and re-deploy. This option is especially relevant to
users of custom charts without the in-chart PostgreSQL dependency.
Database auto-detection is based on the `postgresql.enabled` Helm value for
your release. This value is set based on the `POSTGRES_ENABLED` CI/CD variable
and persisted by Helm, regardless of whether or not your chart uses the
variable.
{{< alert type="warning" >}}
Setting `POSTGRES_ENABLED` to `false` permanently deletes any existing
channel 1 database for your environment.
{{< /alert >}}
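For example, a sketch of the first option, which keeps the deployment on the channel 1 database:

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  AUTO_DEVOPS_POSTGRES_CHANNEL: "1"   # continue using the deprecated channel 1 database
```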
## Auto DevOps is automatically disabled for a project
If Auto DevOps is automatically disabled for a project, it may be due to the following reasons:
- The Auto DevOps setting has not been explicitly enabled in the [project](_index.md#per-project) itself. It is enabled only in the parent [group](_index.md#per-group) or its [instance](../../administration/settings/continuous_integration.md#configure-auto-devops-for-all-projects).
- The project has no history of successful Auto DevOps pipelines.
- An Auto DevOps pipeline failed.
To resolve this issue:
- Enable the Auto DevOps setting in the project.
- Fix errors that are breaking the pipeline so the pipeline reruns.
## `Error: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"`
After upgrading your Kubernetes cluster to [v1.16+](stages.md#kubernetes-116),
you may encounter this message when deploying with Auto DevOps:
```plaintext
UPGRADE FAILED
Error: failed decoding reader into objects: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"
```
This can occur if your current deployments on the environment namespace were deployed with a
deprecated/removed API that doesn't exist in Kubernetes v1.16+. For example,
if [your in-cluster PostgreSQL was installed in a legacy way](#detected-an-existing-postgresql-database),
the resource was created via the `extensions/v1beta1` API. However, the deployment resource
was moved to the `apps/v1` API in v1.16.
To recover such outdated resources, you must convert the current deployments by mapping legacy APIs
to newer APIs. There is a helper tool called [`mapkubeapis`](https://github.com/hickeyma/helm-mapkubeapis)
that works for this problem. Follow these steps to use the tool in Auto DevOps:
1. Modify your `.gitlab-ci.yml` with:
```yaml
include:
- template: Auto-DevOps.gitlab-ci.yml
- remote: https://gitlab.com/shinya.maeda/ci-templates/-/raw/master/map-deprecated-api.gitlab-ci.yml
variables:
  HELM_VERSION_FOR_MAPKUBEAPIS: "v2" # If you're using auto-deploy-image v2 or later, specify "v3".
```
1. Run the job `<environment-name>:map-deprecated-api`. Ensure that this job succeeds before moving
to the next step. You should see something like the following output:
```shell
2020/10/06 07:20:49 Found deprecated or removed Kubernetes API:
"apiVersion: extensions/v1beta1
kind: Deployment"
Supported API equivalent:
"apiVersion: apps/v1
kind: Deployment"
```
1. Revert your `.gitlab-ci.yml` to the previous version. You no longer need to include the
supplemental template `map-deprecated-api`.
1. Continue the deployments as usual.
## `Error: not a valid chart repository or cannot be reached`
As [announced in the official CNCF blog post](https://www.cncf.io/blog/2020/10/07/important-reminder-for-all-helm-users-stable-incubator-repos-are-deprecated-and-all-images-are-changing-location/),
the stable Helm chart repository was deprecated and removed on November 13th, 2020.
You may encounter this error after that date:
```plaintext
Error: error initializing: Looks like "https://kubernetes-charts.storage.googleapis.com"
is not a valid chart repository or cannot be reached
```
Some GitLab features had dependencies on the stable chart. To mitigate the impact, we changed them
to use new official repositories or the [Helm Stable Archive repository maintained by GitLab](https://gitlab.com/gitlab-org/cluster-integration/helm-stable-archive).
Auto Deploy contains [an example fix](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/merge_requests/127).
In Auto Deploy, `v1.0.6+` of `auto-deploy-image` no longer adds the deprecated stable repository to
the `helm` command. If you use a custom chart and it relies on the deprecated stable repository,
specify an older `auto-deploy-image` like this example:
```yaml
include:
- template: Auto-DevOps.gitlab-ci.yml
.auto-deploy:
image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v1.0.5"
```
Keep in mind that this approach stops working when the stable repository is removed,
so you must eventually fix your custom chart.
To fix your custom chart:
1. In your chart directory, update the `repository` value in your `requirements.yaml` file from:
```yaml
repository: "https://kubernetes-charts.storage.googleapis.com/"
```
to:
```yaml
repository: "https://charts.helm.sh/stable"
```
1. In your chart directory, run `helm dep update .` using the same Helm major version as Auto DevOps.
1. Commit the changes for the `requirements.yaml` file.
1. If you previously had a `requirements.lock` file, commit the changes to the file.
If you did not previously have a `requirements.lock` file in your chart,
you do not need to commit the new one. This file is optional, but when present,
it's used to verify the integrity of the downloaded dependencies.
You can find more information in
[issue #263778, "Migrate PostgreSQL from stable Helm repository"](https://gitlab.com/gitlab-org/gitlab/-/issues/263778).
## `Error: release .... failed: timed out waiting for the condition`
When getting started with Auto DevOps, you may encounter this error when first
deploying your application:
```plaintext
INSTALL FAILED
PURGING CHART
Error: release staging failed: timed out waiting for the condition
```
This is most likely caused by a failed liveness (or readiness) probe attempted
during the deployment process. By default, these probes are run against the root
page of the deployed application on port 5000. If your application isn't configured
to serve anything at the root page, or is configured to run on a specific port
*other* than 5000, this check fails.
If it fails, you should see these failures in the events for the relevant
Kubernetes namespace. These events look like the following example:
```plaintext
LAST SEEN TYPE REASON OBJECT MESSAGE
3m20s Warning Unhealthy pod/staging-85db88dcb6-rxd6g Readiness probe failed: Get http://10.192.0.6:5000/: dial tcp 10.192.0.6:5000: connect: connection refused
3m32s Warning Unhealthy pod/staging-85db88dcb6-rxd6g Liveness probe failed: Get http://10.192.0.6:5000/: dial tcp 10.192.0.6:5000: connect: connection refused
```
To change the port used for the liveness checks, pass
[custom values to the Helm chart](customize.md#customize-helm-chart-values)
used by Auto DevOps:
1. Create a directory and file at the root of your repository named `.gitlab/auto-deploy-values.yaml`.
1. Populate the file with the following content, replacing the port values with
the actual port number your application is configured to use:
```yaml
service:
internalPort: <port_value>
externalPort: <port_value>
```
1. Commit your changes.
After committing your changes, subsequent probes should use the newly defined ports.
The page that's probed can also be changed by overriding the `livenessProbe.path`
and `readinessProbe.path` values (shown in the
[default `values.yaml`](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/blob/master/assets/auto-deploy-app/values.yaml)
file) in the same fashion.
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Upgrading PostgreSQL for Auto DevOps
breadcrumbs:
- doc
- topics
- autodevops
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
When `POSTGRES_ENABLED` is `true`, Auto DevOps provides an
[in-cluster PostgreSQL database](customize.md#postgresql-database-support) for your application.
The version of the chart used to provision PostgreSQL can be set from 0.7.1 to 8.2.1.
GitLab encourages users to migrate their database to the newer PostgreSQL chart.
This guide provides instructions on how to migrate your PostgreSQL database, which
involves:
1. Taking a database dump of your data.
1. Installing a new PostgreSQL database using the newer version 8.2.1 of the chart
and removing the old PostgreSQL installation.
1. Restoring the database dump into the new PostgreSQL.
## Prerequisites
1. Install
[`kubectl`](https://kubernetes.io/docs/tasks/tools/).
1. Ensure that you can access your Kubernetes cluster using `kubectl`.
This varies based on Kubernetes providers.
1. Prepare for downtime. The steps below include taking the application offline
so that the in-cluster database does not get modified after the database dump is created.
1. Ensure you have not set `POSTGRES_ENABLED` to `false`, as this setting deletes
any existing channel 1 database. For more information, see
[Detected an existing PostgreSQL database](troubleshooting.md#detected-an-existing-postgresql-database).
{{< alert type="note" >}}
If you have configured Auto DevOps to use a staging environment,
consider trying the backup and restore steps on staging first, or
trying them out on a review app.
{{< /alert >}}
## Take your application offline
If required, take your application offline to prevent the database from
being modified after the database dump is created.
1. Get the Kubernetes namespace for the environment. It typically looks like `<project-name>-<project-id>-<environment>`.
In our example, the namespace is called `minimal-ruby-app-4349298-production`.
```shell
$ kubectl get ns
NAME STATUS AGE
minimal-ruby-app-4349298-production Active 7d14h
```
1. For ease of use, export the namespace name:
```shell
export APP_NAMESPACE=minimal-ruby-app-4349298-production
```
1. Get the deployment name for your application with the following command. In our example, the deployment name is `production`.
```shell
$ kubectl get deployment --namespace "$APP_NAMESPACE"
NAME READY UP-TO-DATE AVAILABLE AGE
production 2/2 2 2 7d21h
production-postgres 1/1 1 1 7d21h
```
1. To prevent the database from being modified, set replicas to 0 for the deployment with the following command.
We use the deployment name from the previous step (`deployments/<DEPLOYMENT_NAME>`).
```shell
$ kubectl scale --replicas=0 deployments/production --namespace "$APP_NAMESPACE"
deployment.extensions/production scaled
```
1. If you have any workers, you must also set their replicas to zero, as in the sketch after this list.
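For example, a sketch assuming a hypothetical worker deployment named `production-worker` (check the `kubectl get deployment` output for your actual worker names):
```shell
kubectl scale --replicas=0 deployments/production-worker --namespace "$APP_NAMESPACE"
```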
## Backup
1. Get the service name for PostgreSQL. The name of the service should end with `-postgres`. In our example the service name is `production-postgres`.
```shell
$ kubectl get svc --namespace "$APP_NAMESPACE"
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
production-auto-deploy ClusterIP 10.30.13.90 <none> 5000/TCP 7d14h
production-postgres ClusterIP 10.30.4.57 <none> 5432/TCP 7d14h
```
1. Get the pod name for PostgreSQL with the following command. In our example, the pod name is `production-postgres-5db86568d7-qxlxv`.
```shell
$ kubectl get pod --namespace "$APP_NAMESPACE" -l app=production-postgres
NAME READY STATUS RESTARTS AGE
production-postgres-5db86568d7-qxlxv 1/1 Running 0 7d14h
```
1. Connect to the pod with:
```shell
kubectl exec -it production-postgres-5db86568d7-qxlxv --namespace "$APP_NAMESPACE" -- bash
```
1. Once connected, create a dump file with the following command.
- `SERVICE_NAME` is the service name obtained in a previous step.
- `USERNAME` is the username you have configured for PostgreSQL. The default is `user`.
- `DATABASE_NAME` is usually the environment name.
- When prompted for the database password, the default is `testing-password`.
```shell
## Format is:
# pg_dump -h SERVICE_NAME -U USERNAME DATABASE_NAME > /tmp/backup.sql
pg_dump -h production-postgres -U user production > /tmp/backup.sql
```
1. Once the backup dump is complete, exit the Kubernetes exec process with `Control-D` or `exit`.
1. Download the dump file with the following command:
```shell
kubectl cp --namespace "$APP_NAMESPACE" production-postgres-5db86568d7-qxlxv:/tmp/backup.sql backup.sql
```
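Optionally, you can sanity-check the downloaded dump before proceeding:
```shell
# Confirm the file is non-empty and begins with pg_dump output.
ls -lh backup.sql
head -n 20 backup.sql
```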
## Retain persistent volumes
By default, the [persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
used to store the underlying data for PostgreSQL are marked as `Delete`
when the pods and pod claims that use the volumes are deleted.
This is significant because, when you opt into the newer 8.2.1 PostgreSQL, the older 0.7.1 PostgreSQL is
deleted, causing the persistent volumes to be deleted as well.
You can verify this by using the following command:
```shell
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-0da80c08-5239-11ea-9c8d-42010a8e0096 8Gi RWO Delete Bound minimal-ruby-app-4349298-staging/staging-postgres standard 7d22h
pvc-9085e3d3-5239-11ea-9c8d-42010a8e0096 8Gi RWO Delete Bound minimal-ruby-app-4349298-production/production-postgres standard 7d22h
```
To retain the persistent volume, even when the older 0.7.1 PostgreSQL is
deleted, we can change the reclaim policy to `Retain`. In this example, we find
the persistent volume names by looking at the claim names. As we are
interested in keeping the volumes for the staging and production environments of the
`minimal-ruby-app-4349298` application, the volume names here are
`pvc-0da80c08-5239-11ea-9c8d-42010a8e0096` and `pvc-9085e3d3-5239-11ea-9c8d-42010a8e0096`:
```shell
$ kubectl patch pv pvc-0da80c08-5239-11ea-9c8d-42010a8e0096 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
persistentvolume/pvc-0da80c08-5239-11ea-9c8d-42010a8e0096 patched
$ kubectl patch pv pvc-9085e3d3-5239-11ea-9c8d-42010a8e0096 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
persistentvolume/pvc-9085e3d3-5239-11ea-9c8d-42010a8e0096 patched
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-0da80c08-5239-11ea-9c8d-42010a8e0096 8Gi RWO Retain Bound minimal-ruby-app-4349298-staging/staging-postgres standard 7d22h
pvc-9085e3d3-5239-11ea-9c8d-42010a8e0096 8Gi RWO Retain Bound minimal-ruby-app-4349298-production/production-postgres standard 7d22h
```
## Install new PostgreSQL
{{< alert type="warning" >}}
Using the newer version of PostgreSQL deletes
the older 0.7.1 PostgreSQL. To prevent the underlying data from being
deleted, you can choose to retain the [persistent volume](#retain-persistent-volumes).
{{< /alert >}}
{{< alert type="note" >}}
You can also
[scope](../../ci/environments/_index.md#limit-the-environment-scope-of-a-cicd-variable) the
`AUTO_DEVOPS_POSTGRES_CHANNEL`, `AUTO_DEVOPS_POSTGRES_DELETE_V1` and
`POSTGRES_VERSION` variables to specific environments, for example, `staging`.
{{< /alert >}}
1. Set `AUTO_DEVOPS_POSTGRES_CHANNEL` to `2`. This opts into using the
newer 8.2.1-based PostgreSQL, and removes the older 0.7.1-based
PostgreSQL.
1. Set `AUTO_DEVOPS_POSTGRES_DELETE_V1` to a non-empty value. This flag is a
safeguard to prevent accidental deletion of databases.
<!-- DO NOT REPLACE when upgrading GitLab's supported version. This is NOT related to GitLab's PostgreSQL version support, but the one deployed by Auto DevOps. -->
1. If you have a `POSTGRES_VERSION` set, make sure it is set to `9.6.16` or later. This is the
minimum PostgreSQL version supported by Auto DevOps. See also the list of
[tags available](https://hub.docker.com/r/bitnami/postgresql/tags).
1. Set `PRODUCTION_REPLICAS` to `0`. For other environments, use
`REPLICAS` with an [environment scope](../../ci/environments/_index.md#limit-the-environment-scope-of-a-cicd-variable).
1. If you have set the `DB_INITIALIZE` or `DB_MIGRATE` variables, either
   remove the variables, or temporarily rename them to
   `XDB_INITIALIZE` and `XDB_MIGRATE` to effectively disable them.
1. Run a new CI pipeline for the branch. In this case, we run a new CI
pipeline for `main`.
1. After the pipeline is successful, your application is upgraded
with the new PostgreSQL installed. Zero replicas exist at this time, so
no traffic is served for your application (to prevent
new data from coming in).
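For reference, here is a minimal sketch of the variables set in the previous steps, expressed as pipeline-level settings in `.gitlab-ci.yml`. This assumes you set them there rather than in the project's CI/CD settings; environment-scoped variables must still be set in the project settings:
```yaml
variables:
  AUTO_DEVOPS_POSTGRES_CHANNEL: "2"
  AUTO_DEVOPS_POSTGRES_DELETE_V1: "true"  # any non-empty value acts as the safeguard flag
  POSTGRES_VERSION: "9.6.16"              # example; 9.6.16 is the minimum supported version
  PRODUCTION_REPLICAS: "0"
```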
## Restore
1. Get the pod name for the new PostgreSQL. In our example, the pod name is
   `production-postgresql-0`:
```shell
$ kubectl get pod --namespace "$APP_NAMESPACE" -l app=postgresql
NAME READY STATUS RESTARTS AGE
production-postgresql-0 1/1 Running 0 19m
```
1. Copy the dump file from the backup steps to the pod:
```shell
kubectl cp --namespace "$APP_NAMESPACE" backup.sql production-postgresql-0:/tmp/backup.sql
```
1. Connect to the pod:
```shell
kubectl exec -it production-postgresql-0 --namespace "$APP_NAMESPACE" -- bash
```
1. Once connected to the pod, run the following command to restore the database.
- When asked for the database password, the default is `testing-password`.
- `USERNAME` is the username you have configured for PostgreSQL. The default is `user`.
- `DATABASE_NAME` is usually the environment name.
```shell
## Format is:
# psql -U USERNAME -d DATABASE_NAME < /tmp/backup.sql
psql -U user -d production < /tmp/backup.sql
```
1. After the restore is complete, check that your data was restored correctly.
   You can perform spot checks of your data by using `psql`, as in the
   sketch after this list.
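For example, a spot-check sketch assuming a hypothetical `users` table exists in your schema:
```shell
# Hypothetical table name; substitute a table from your own schema.
psql -U user -d production -c 'SELECT COUNT(*) FROM users;'
```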
## Reinstate your application
Once you are satisfied the database has been restored, follow these
steps to reinstate your application:
1. Restore the `DB_INITIALIZE` and `DB_MIGRATE` variables, if previously
removed or disabled.
1. Restore the `PRODUCTION_REPLICAS` or `REPLICAS` variable to its original value.
1. Run a new CI pipeline for the branch. In this case, we run a new CI
pipeline for `main`. After the pipeline is successful, your
application should be serving traffic as before.
|
https://docs.gitlab.com/topics/stages
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/topics/stages.md
|
2025-08-13
|
doc/topics/autodevops
|
[
"doc",
"topics",
"autodevops"
] |
stages.md
|
Deploy
|
Environments
|
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
|
Stages of Auto DevOps
| null |
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
The following sections describe the stages of [Auto DevOps](_index.md).
Read them carefully to understand how each one works.
## Auto Build
{{< alert type="note" >}}
Auto Build is not supported if Docker in Docker is not available for your GitLab Runners, like in OpenShift clusters. The OpenShift support in GitLab is tracked [in a dedicated epic](https://gitlab.com/groups/gitlab-org/-/epics/2068).
{{< /alert >}}
Auto Build creates a build of the application using an existing `Dockerfile` or
Heroku buildpacks. The resulting Docker image is pushed to the
[Container Registry](../../user/packages/container_registry/_index.md), and tagged
with the commit SHA or tag.
### Auto Build using a Dockerfile
If a project's repository contains a `Dockerfile` at its root, Auto Build uses
`docker build` to create a Docker image.
If you're also using Auto Review Apps and Auto Deploy, and you choose to provide
your own `Dockerfile`, you must either:
- Expose your application to port `5000`, as the
[default Helm chart](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/tree/master/assets/auto-deploy-app)
assumes this port is available.
- Override the default values by
[customizing the Auto Deploy Helm chart](customize.md#custom-helm-chart).
### Auto Build using Cloud Native Buildpacks
Auto Build builds an application using a project's `Dockerfile` if present. If no
`Dockerfile` is present, Auto Build builds your application using
[Cloud Native Buildpacks](https://buildpacks.io) to detect and build the
application into a Docker image. The feature uses the
[`pack` command](https://github.com/buildpacks/pack).
The default [builder](https://buildpacks.io/docs/for-app-developers/concepts/builder/)
is `heroku/buildpacks:22` but a different builder can be selected using
the CI/CD variable `AUTO_DEVOPS_BUILD_IMAGE_CNB_BUILDER`.
Each buildpack requires your project's repository to contain certain files for
Auto Build to build your application successfully. The structure is
specific to the builder and buildpacks you have selected.
For example, when using the Heroku builder (the default), your application's
root directory must contain the appropriate file for your application's
language:
- For Python projects, a `Pipfile` or `requirements.txt` file.
- For Ruby projects, a `Gemfile` or `Gemfile.lock` file.
For the requirements of other languages and frameworks, read the
[Heroku buildpacks documentation](https://devcenter.heroku.com/articles/buildpacks#officially-supported-buildpacks).
{{< alert type="note" >}}
Auto Test still uses Herokuish, as test suite detection is not
yet part of the Cloud Native Buildpack specification. For more information, see
[issue 212689](https://gitlab.com/gitlab-org/gitlab/-/issues/212689).
{{< /alert >}}
#### Mount volumes into the build container
The variable `BUILDPACK_VOLUMES` can be used to pass volume mount definitions to the
`pack` command. The mounts are passed to `pack build` using `--volume` arguments.
Each volume definition can include any of the capabilities provided by `build pack`
such as the host path, the target path, whether the volume is writable, and
one or more volume options.
Use a pipe `|` character to pass multiple volumes.
Each item from the list is passed to `build back` using a separate `--volume` argument.
In this example, three volumes are mounted in the container as `/etc/foo`, `/opt/foo`, and `/var/opt/foo`:
```yaml
buildjob:
variables:
BUILDPACK_VOLUMES: /mnt/1:/etc/foo:ro|/mnt/2:/opt/foo:ro|/mnt/3:/var/opt/foo:rw
```
Read more about defining volumes in the [`pack build` documentation](https://buildpacks.io/docs/for-platform-operators/how-to/integrate-ci/pack/cli/pack_build/).
### Moving from Herokuish to Cloud Native Buildpacks
Builds using Cloud Native Buildpacks support the same options as builds using
Herokuish, with the following caveats:
- The buildpack must be a Cloud Native Buildpack. A Heroku buildpack can be
converted to a Cloud Native Buildpack using Heroku's
[`cnb-shim`](https://github.com/heroku/cnb-shim).
- `BUILDPACK_URL` must be in a format
[supported by `pack`](https://buildpacks.io/docs/app-developer-guide/specify-buildpacks/).
- The `/bin/herokuish` command is not present in the built image, and prefixing
commands with `/bin/herokuish procfile exec` is no longer required (nor possible).
Instead, custom commands should be prefixed with `/cnb/lifecycle/launcher`
to receive the correct execution environment.
## Auto Test
Auto Test runs the appropriate tests for your application using
[Herokuish](https://github.com/gliderlabs/herokuish) and
[Heroku buildpacks](https://devcenter.heroku.com/articles/buildpacks) by analyzing
your project to detect the language and framework. Several languages and
frameworks are detected automatically, but if your language is not detected,
you may be able to create a [custom buildpack](customize.md#custom-buildpacks).
Check the [currently supported languages](#currently-supported-languages).
Auto Test uses tests you already have in your application. If there are no
tests, it's up to you to add them.
<!-- vale gitlab_base.Spelling = NO -->
{{< alert type="note" >}}
Not all buildpacks supported by [Auto Build](#auto-build) are supported by Auto Test.
Auto Test uses [Herokuish](https://gitlab.com/gitlab-org/gitlab/-/issues/212689), *not*
Cloud Native Buildpacks, and only buildpacks that implement the
[Testpack API](https://devcenter.heroku.com/articles/testpack-api) are supported.
{{< /alert >}}
<!-- vale gitlab_base.Spelling = YES -->
### Currently supported languages
Not all buildpacks support Auto Test yet, as it's a relatively new
enhancement. All of Heroku's
[officially supported languages](https://devcenter.heroku.com/articles/heroku-ci#supported-languages)
support Auto Test. The languages supported by Heroku's Herokuish buildpacks all
support Auto Test, but notably the multi-buildpack does not.
The supported buildpacks are:
```plaintext
- heroku-buildpack-multi
- heroku-buildpack-ruby
- heroku-buildpack-nodejs
- heroku-buildpack-clojure
- heroku-buildpack-python
- heroku-buildpack-java
- heroku-buildpack-gradle
- heroku-buildpack-scala
- heroku-buildpack-play
- heroku-buildpack-php
- heroku-buildpack-go
- buildpack-nginx
```
If your application needs a buildpack that is not in the previous list, you
might want to use a [custom buildpack](customize.md#custom-buildpacks).
## Auto Code Quality
{{< history >}}
- [Moved](https://gitlab.com/gitlab-org/gitlab/-/issues/212499) from GitLab Starter to GitLab Free in 13.2.
{{< /history >}}
Auto Code Quality uses the
[Code Quality image](https://gitlab.com/gitlab-org/ci-cd/codequality) to run
static analysis and other code checks on the current code. After creating the
report, it's uploaded as an artifact which you can later download and check
out. The merge request widget also displays any
[differences between the source and target branches](../../ci/testing/code_quality.md).
## Auto SAST
{{< history >}}
- Introduced in [GitLab Ultimate](https://about.gitlab.com/pricing/) 10.3.
- Select functionality made available in all tiers beginning in 13.1
{{< /history >}}
Static Application Security Testing (SAST) runs static
analysis on the current code, and checks for potential security issues. The
Auto SAST stage requires [GitLab Runner](https://docs.gitlab.com/runner/) 11.5 or later.
After creating the report, it's uploaded as an artifact which you can later
download and check out. The merge request widget also displays any security
warnings on [Ultimate](https://about.gitlab.com/pricing/) licenses.
For more information, see
[Static Application Security Testing (SAST)](../../user/application_security/sast/_index.md).
## Auto Secret Detection
Secret Detection uses the
[Secret Detection Docker image](https://gitlab.com/gitlab-org/security-products/analyzers/secrets) to run Secret Detection on the current code, and checks for leaked secrets.
After creating the report, it's uploaded as an artifact which you can later
download and evaluate. The merge request widget also displays any security
warnings on [Ultimate](https://about.gitlab.com/pricing/) licenses.
For more information, see [Secret Detection](../../user/application_security/secret_detection/_index.md).
## Auto Dependency Scanning
{{< details >}}
- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
Dependency Scanning runs analysis on the project's dependencies and checks for potential security issues.
The Auto Dependency Scanning stage is skipped on licenses other than
[Ultimate](https://about.gitlab.com/pricing/).
After creating the report, it's uploaded as an artifact which you can later download and
check out. The merge request widget displays any security warnings detected,
For more information, see
[Dependency Scanning](../../user/application_security/dependency_scanning/_index.md).
## Auto Container Scanning
Vulnerability static analysis for containers uses [Trivy](https://aquasecurity.github.io/trivy/latest/)
to check for potential security issues in Docker images. The Auto Container Scanning stage is
skipped on licenses other than [Ultimate](https://about.gitlab.com/pricing/).
After creating the report, it's uploaded as an artifact which you can later download and
check out. The merge request displays any detected security issues.
For more information, see
[Container Scanning](../../user/application_security/container_scanning/_index.md).
## Auto Review Apps
This is an optional step because many projects don't have a Kubernetes cluster
available. If the [requirements](requirements.md) are not met, the job is
silently skipped.
[Review apps](../../ci/review_apps/_index.md) are temporary application environments based on the
branch's code so developers, designers, QA, product managers, and other
reviewers can actually see and interact with code changes as part of the review
process. Auto Review Apps create a Review App for each branch.
Auto Review Apps deploy your application to your Kubernetes cluster only. If no cluster
is available, no deployment occurs.
The Review App has a unique URL based on a combination of the project ID, the branch
or tag name, a unique number, and the Auto DevOps base domain, such as
`13083-review-project-branch-123456.example.com`. The merge request widget displays
a link to the Review App for easy discovery. When the branch or tag is deleted,
such as after merging a merge request, the Review App is also deleted.
Review apps are deployed using the
[auto-deploy-app](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/tree/master/assets/auto-deploy-app) chart with
Helm, which you can [customize](customize.md#custom-helm-chart). The application deploys
into the [Kubernetes namespace](../../user/project/clusters/deploy_to_cluster.md#deployment-variables)
for the environment.
[Local Tiller](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/22036) is
used. Previous versions of GitLab had a Tiller installed in the project
namespace.
{{< alert type="warning" >}}
Your apps should not be manipulated outside of Helm (using Kubernetes directly).
This can cause confusion with Helm not detecting the change and subsequent
deploys with Auto DevOps can undo your changes. Also, if you change something
and want to undo it by deploying again, Helm may not detect that anything changed
in the first place, and thus not realize that it needs to re-apply the old configuration.
{{< /alert >}}
## Auto DAST
{{< details >}}
- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
Dynamic Application Security Testing (DAST) uses the popular open source tool
[OWASP ZAProxy](https://github.com/zaproxy/zaproxy) to analyze the current code
and check for potential security issues. The Auto DAST stage is skipped on
licenses other than [Ultimate](https://about.gitlab.com/pricing/).
- On your default branch, DAST scans an application deployed specifically for that purpose
unless you [override the target branch](#overriding-the-dast-target).
The app is deleted after DAST has run.
- On feature branches, DAST scans the [review app](#auto-review-apps).
After the DAST scan completes, any security warnings are displayed
on the [Security Dashboard](../../user/application_security/security_dashboard/_index.md)
and the merge request widget.
For more information, see
[Dynamic Application Security Testing (DAST)](../../user/application_security/dast/_index.md).
### Overriding the DAST target
To use a custom target instead of the auto-deployed review apps,
set a `DAST_WEBSITE` CI/CD variable to the URL for DAST to scan.
{{< alert type="warning" >}}
If [DAST Full Scan](../../user/application_security/dast/browser/_index.md) is
enabled, GitLab strongly advises **not**
to set `DAST_WEBSITE` to any staging or production environment. DAST Full Scan
actively attacks the target, which can take down your application and lead to
data loss or corruption.
{{< /alert >}}
### Skipping Auto DAST
You can skip DAST jobs:
- On all branches by setting the `DAST_DISABLED` CI/CD variable to `"true"`.
- Only on the default branch by setting the `DAST_DISABLED_FOR_DEFAULT_BRANCH`
variable to `"true"`.
- Only on feature branches by setting `REVIEW_DISABLED` variable to
`"true"`. This also skips the Review App.
## Auto Browser Performance Testing
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
Auto [Browser Performance Testing](../../ci/testing/browser_performance_testing.md)
measures the browser performance of a web page with the
[Sitespeed.io container](https://hub.docker.com/r/sitespeedio/sitespeed.io/),
creates a JSON report including the overall performance score for each page, and
uploads the report as an artifact. By default, it tests the root page of your Review and
Production environments. If you want to test additional URLs, add the paths to a
file named `.gitlab-urls.txt` in the root directory, one file per line. For example:
```plaintext
/
/features
/direction
```
Any browser performance differences between the source and target branches are also
[shown in the merge request widget](../../ci/testing/browser_performance_testing.md).
## Auto Load Performance Testing
{{< details >}}
- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
Auto [Load Performance Testing](../../ci/testing/load_performance_testing.md)
measures the server performance of an application with the
[k6 container](https://hub.docker.com/r/loadimpact/k6/),
creates a JSON report including several key result metrics, and
uploads the report as an artifact.
Some initial setup is required. A [k6](https://k6.io/) test needs to be
written that's tailored to your specific application. The test also needs to be
configured so it can pick up the environment's dynamic URL via a CI/CD variable.
Any load performance test result differences between the source and target branches are also
[shown in the merge request widget](../../user/project/merge_requests/widgets.md).
## Auto Deploy
You have the choice to deploy to [Amazon Elastic Compute Cloud (Amazon EC2)](https://aws.amazon.com/ec2/) in addition to a Kubernetes cluster.
Auto Deploy is an optional step for Auto DevOps. If the [requirements](requirements.md) are not met, the job is skipped.
After a branch or merge request is merged into the project's default branch, Auto Deploy deploys the application to a `production` environment in
the Kubernetes cluster, with a namespace based on the project name and unique
project ID, such as `project-4321`.
Auto Deploy does not include deployments to staging or canary environments by
default, but the
[Auto DevOps template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Auto-DevOps.gitlab-ci.yml)
contains job definitions for these tasks if you want to enable them.
You can use [CI/CD variables](cicd_variables.md) to automatically
scale your pod replicas, and to apply custom arguments to the Auto DevOps `helm upgrade`
commands. This is an easy way to
[customize the Auto Deploy Helm chart](customize.md#custom-helm-chart).
Helm uses the [auto-deploy-app](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/tree/master/assets/auto-deploy-app)
chart to deploy the application into the
[Kubernetes namespace](../../user/project/clusters/deploy_to_cluster.md#deployment-variables)
for the environment.
[Local Tiller](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/22036) is
used. Previous versions of GitLab had a Tiller installed in the project
namespace.
{{< alert type="warning" >}}
Your apps should not be manipulated outside of Helm (using Kubernetes directly).
This can cause confusion with Helm not detecting the change and subsequent
deploys with Auto DevOps can undo your changes. Also, if you change something
and want to undo it by deploying again, Helm may not detect that anything changed
in the first place, and thus not realize that it needs to re-apply the old configuration.
{{< /alert >}}
### GitLab deploy tokens
[GitLab Deploy Tokens](../../user/project/deploy_tokens/_index.md#gitlab-deploy-token)
are created for internal and private projects when Auto DevOps is enabled, and the
Auto DevOps settings are saved. You can use a Deploy Token for permanent access to
the registry. After you manually revoke the GitLab Deploy Token, it isn't
automatically created.
If the GitLab Deploy Token can't be found, `CI_REGISTRY_PASSWORD` is
used.
{{< alert type="note" >}}
`CI_REGISTRY_PASSWORD` is only valid during deployment. Kubernetes can
successfully pull the container image during deployment, but if the image must
be pulled again, such as after pod eviction, Kubernetes cannot do so
as it attempts to fetch the image using `CI_REGISTRY_PASSWORD`.
{{< /alert >}}
### Kubernetes 1.16+
{{< alert type="warning" >}}
The default value for the `deploymentApiVersion` setting was changed from
`extensions/v1beta` to `apps/v1`.
{{< /alert >}}
In Kubernetes 1.16 and later, a number of
[APIs were removed](https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/),
including support for `Deployment` in the `extensions/v1beta1` version.
To use Auto Deploy on a Kubernetes 1.16+ cluster:
1. If you are deploying your application for the first time in GitLab 13.0 or
later, no configuration should be required.
1. If you have an in-cluster PostgreSQL database installed with
`AUTO_DEVOPS_POSTGRES_CHANNEL` set to `1`, follow the
[guide to upgrade PostgreSQL](upgrading_postgresql.md).
{{< alert type="warning" >}}
Follow the [guide to upgrading PostgreSQL](upgrading_postgresql.md)
to back up and restore your database before opting into version `2`.
{{< /alert >}}
### Migrations
You can configure database initialization and migrations for PostgreSQL to run
within the application pod by setting the project CI/CD variables `DB_INITIALIZE` and
`DB_MIGRATE`.
If present, `DB_INITIALIZE` is run as a shell command within an application pod
as a Helm post-install hook. As some applications can't run without a successful
database initialization step, GitLab deploys the first release without the
application deployment, and only the database initialization step. After the database
initialization completes, GitLab deploys a second release with the application
deployment as standard.
A post-install hook means that if any deploy succeeds,
`DB_INITIALIZE` isn't processed thereafter.
If present, `DB_MIGRATE` is run as a shell command within an application pod as
a Helm pre-upgrade hook.
For example, in a Rails application in an image built with
[Cloud Native Buildpacks](#auto-build-using-cloud-native-buildpacks):
- `DB_INITIALIZE` can be set to `RAILS_ENV=production /cnb/lifecycle/launcher bin/rails db:setup`
- `DB_MIGRATE` can be set to `RAILS_ENV=production /cnb/lifecycle/launcher bin/rails db:migrate`
Unless your repository contains a `Dockerfile`, your image is built with
Cloud Native Buildpacks, and you must prefix commands run in these images with
`/cnb/lifecycle/launcher` to replicate the environment where your application runs.
### Upgrade auto-deploy-app Chart
You can upgrade the auto-deploy-app chart by following the [upgrade guide](upgrading_auto_deploy_dependencies.md).
### Workers
Some web applications must run extra deployments for "worker processes". For
example, Rails applications commonly use separate worker processes
to run background tasks like sending emails.
The [default Helm chart](https://gitlab.com/gitlab-org/cluster-integration/auto-deploy-image/-/tree/master/assets/auto-deploy-app)
used in Auto Deploy
[has support for running worker processes](https://gitlab.com/gitlab-org/charts/auto-deploy-app/-/merge_requests/9).
To run a worker, you must ensure the worker can respond to
the standard health checks, which expect a successful HTTP response on port
`5000`. For [Sidekiq](https://github.com/mperham/sidekiq), you can use
the [`sidekiq_alive` gem](https://rubygems.org/gems/sidekiq_alive).
To work with Sidekiq, you must also ensure your deployments have
access to a Redis instance. Auto DevOps doesn't deploy this instance for you, so
you must:
- Maintain your own Redis instance.
- Set a CI/CD variable `K8S_SECRET_REDIS_URL`, which is the URL of this instance,
to ensure it's passed into your deployments.
After configuring your worker to respond to health checks, run a Sidekiq
worker for your Rails application. You can enable workers by setting the
following in the [`.gitlab/auto-deploy-values.yaml` file](customize.md#customize-helm-chart-values):
```yaml
workers:
sidekiq:
replicaCount: 1
command:
- /cnb/lifecycle/launcher
- sidekiq
preStopCommand:
- /cnb/lifecycle/launcher
- sidekiqctl
- quiet
terminationGracePeriodSeconds: 60
```
### Running commands in the container
Unless your repository contains [a custom Dockerfile](#auto-build-using-a-dockerfile), applications built with [Auto Build](#auto-build)
might require commands to be wrapped as follows:
```shell
/cnb/lifecycle/launcher $COMMAND
```
Some of the reasons you may need to wrap commands:
- Attaching using `kubectl exec`.
- Using the GitLab [Web Terminal](../../ci/environments/_index.md#web-terminals-deprecated).
For example, to start a Rails console from the application root directory, run:
```shell
/cnb/lifecycle/launcher procfile exec bin/rails c
```
## Auto Code Intelligence
[GitLab code intelligence](../../user/project/code_intelligence.md) adds
code navigation features common to interactive development environments (IDE),
including type signatures, symbol documentation, and go-to definition. It's powered by
[LSIF](https://lsif.dev/) and available for Auto DevOps projects using Go language only.
GitLab plans to add support for more languages as more LSIF indexers become available.
You can follow the [code intelligence epic](https://gitlab.com/groups/gitlab-org/-/epics/4212)
for updates.
This stage is enabled by default. You can disable it by adding the
`CODE_INTELLIGENCE_DISABLED` CI/CD variable. Read more about
[disabling Auto DevOps jobs](cicd_variables.md#job-skipping-variables).
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Multiple Kubernetes clusters for Auto DevOps
breadcrumbs:
- doc
- topics
- autodevops
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
When using Auto DevOps, you can deploy different environments to different Kubernetes clusters.
The [Deploy Job template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml) used by Auto DevOps defines three environment names:
- `review/` (every environment starting with `review/`)
- `staging`
- `production`
These environments are tied to jobs using [Auto Deploy](stages.md#auto-deploy), so they must have different deployment domains. You must define separate [`KUBE_CONTEXT`](../../user/clusters/agent/ci_cd_workflow.md#environments-that-use-auto-devops) and [`KUBE_INGRESS_BASE_DOMAIN`](requirements.md#auto-devops-base-domain) variables for each of the three environments.
## Deploy to different clusters
To deploy your environments to different Kubernetes clusters:
1. [Create Kubernetes clusters](../../user/infrastructure/clusters/connect/new_gke_cluster.md).
1. Associate the clusters to your project:
1. [Install a GitLab agent for Kubernetes on each cluster](../../user/clusters/agent/_index.md).
1. [Configure each agent to access your project](../../user/clusters/agent/work_with_agent.md#configure-your-agent).
1. [Install NGINX Ingress Controller](cloud_deployments/auto_devops_with_gke.md#install-ingress) in each cluster. Save the IP address and Kubernetes namespace for the next step.
1. [Configure the Auto DevOps CI/CD Pipeline variables](cicd_variables.md#build-and-deployment-variables):
- Set up a `KUBE_CONTEXT` variable [for each environment](../../ci/environments/_index.md#limit-the-environment-scope-of-a-cicd-variable). The value must point to the agent of the relevant cluster.
- Set up a `KUBE_INGRESS_BASE_DOMAIN`. You must [configure the base domain](requirements.md#auto-devops-base-domain) for each environment to point to the Ingress of the relevant cluster.
- Add a `KUBE_NAMESPACE` variable with a value of the Kubernetes namespace you want your deployments to target. You can scope the variable to multiple environments.
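For the agent configuration step above, each agent grants CI/CD access to your project through its configuration file. A minimal sketch, assuming an agent named `review-agent` and a placeholder project path:

```yaml
# .gitlab/agents/review-agent/config.yaml in the agent's configuration project
ci_access:
  projects:
    - id: path/to/project  # the project that runs the Auto DevOps pipeline
```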
For deprecated, [certificate-based clusters](../../user/infrastructure/clusters/_index.md#certificate-based-kubernetes-integration-deprecated):
1. Go to the project and select **Operate > Kubernetes clusters** from the left sidebar.
1. [Set the environment scope of each cluster](../../user/project/clusters/multiple_kubernetes_clusters.md#setting-the-environment-scope).
1. For each cluster, [add a domain based on its Ingress IP address](../../user/project/clusters/gitlab_managed_clusters.md#base-domain).
{{< alert type="note" >}}
[Cluster environment scope is not respected when checking for active Kubernetes clusters](https://gitlab.com/gitlab-org/gitlab/-/issues/20351). For a multi-cluster setup to work with Auto DevOps, you must create a fallback cluster with **Cluster environment scope** set to `*`. You can set any of the clusters you've already added as a fallback cluster.
{{< /alert >}}
### Example configurations
| Cluster name | Cluster environment scope | `KUBE_INGRESS_BASE_DOMAIN` value | `KUBE_CONTEXT` value | Variable environment scope | Notes |
|:-------------|:--------------------------|:---------------------------------|:-----------------------------------|:---------------------------|:------|
| review | `review/*` | `review.example.com` | `path/to/project:review-agent` | `review/*` | A review cluster that runs all [review apps](../../ci/review_apps/_index.md). |
| staging | `staging` | `staging.example.com` | `path/to/project:staging-agent` | `staging` | Optional. A staging cluster that runs the deployments of the staging environments. You must [enable it first](cicd_variables.md#deploy-policy-for-staging-and-production-environments). |
| production | `production` | `example.com` | `path/to/project:production-agent` | `production` | A production cluster that runs the production environment deployments. You can use [incremental rollouts](cicd_variables.md#incremental-rollout-to-production). |
## Test your configuration
After completing the configuration, test your setup by creating a merge request.
Verify that your application is deployed as a Review App in the Kubernetes
cluster with the `review/*` environment scope. Then check the
other environments the same way.
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Requirements for Auto DevOps
breadcrumbs:
- doc
- topics
- autodevops
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
Before enabling [Auto DevOps](_index.md), we recommend you prepare it for
deployment. If you don't, you can still use Auto DevOps to build and test your
apps, and configure the deployment later.
To prepare the deployment:
1. Define the [deployment strategy](#auto-devops-deployment-strategy).
1. Prepare the [base domain](#auto-devops-base-domain).
1. Define where you want to deploy it:
1. [Kubernetes](#auto-devops-requirements-for-kubernetes).
1. [Amazon Elastic Container Service (ECS)](cloud_deployments/auto_devops_with_ecs.md).
1. [Amazon Elastic Kubernetes Service (EKS)](https://about.gitlab.com/blog/2020/05/05/deploying-application-eks/).
1. [Amazon EC2](cloud_deployments/auto_devops_with_ec2.md).
1. [Google Kubernetes Engine](cloud_deployments/auto_devops_with_gke.md).
1. [Bare metal](#auto-devops-requirements-for-bare-metal).
1. [Enable Auto DevOps](_index.md#enable-or-disable-auto-devops).
## Auto DevOps deployment strategy
When using Auto DevOps to deploy your applications, choose the
[continuous deployment strategy](../../ci/_index.md)
that works best for your needs:
| Deployment strategy | Setup | Methodology |
|-------------------------------------------------------------------------|-------|-------------|
| **Continuous deployment to production** | Enables [Auto Deploy](stages.md#auto-deploy) with the default branch continuously deployed to production. | Continuous deployment to production.|
| **Continuous deployment to production using timed incremental rollout** | Sets the [`INCREMENTAL_ROLLOUT_MODE`](cicd_variables.md#timed-incremental-rollout-to-production) variable to `timed`. | Continuously deploy to production with a 5-minute delay between rollouts. |
| **Automatic deployment to staging, manual deployment to production** | Sets [`STAGING_ENABLED`](cicd_variables.md#deploy-policy-for-staging-and-production-environments) to `1` and [`INCREMENTAL_ROLLOUT_MODE`](cicd_variables.md#incremental-rollout-to-production) to `manual`. | The default branch is continuously deployed to staging and continuously delivered to production. |
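Choosing a strategy in the UI sets the corresponding variables for you, but you can also set them yourself. A minimal sketch for the staging-plus-manual-production strategy:

```yaml
# Sketch: automatic deployment to staging, manual deployment to production
variables:
  STAGING_ENABLED: "1"
  INCREMENTAL_ROLLOUT_MODE: "manual"
```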
You can choose the deployment method when enabling Auto DevOps or later:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > CI/CD**.
1. Expand **Auto DevOps**.
1. Choose the deployment strategy.
1. Select **Save changes**.
{{< alert type="note" >}}
Use the [blue-green deployment](../../ci/environments/incremental_rollouts.md#blue-green-deployment) technique
to minimize downtime and risk.
{{< /alert >}}
## Auto DevOps base domain
The Auto DevOps base domain is required to use
[Auto Review Apps](stages.md#auto-review-apps) and [Auto Deploy](stages.md#auto-deploy).
To define the base domain, do one of the following:
- At the project, group, or instance level: go to your cluster settings and add it there.
- At the project or group level: add it as an environment variable: `KUBE_INGRESS_BASE_DOMAIN`.
- At the instance level: go to the **Admin** area, then **Settings > CI/CD > Continuous Integration and Delivery** and add it there.
The base domain variable `KUBE_INGRESS_BASE_DOMAIN` follows the same order of
[precedence as other environment variables](../../ci/variables/_index.md#cicd-variable-precedence).
If you don't specify the base domain in your projects and groups, Auto DevOps uses the instance-wide **Auto DevOps domain**.
Auto DevOps requires a wildcard DNS `A` record that matches the base domains. For
a base domain of `example.com`, you'd need a DNS entry like:
```plaintext
*.example.com 3600 A 10.0.2.2
```
In this case, the deployed applications are served from `example.com`, and `10.0.2.2`
is the IP address of your load balancer, generally NGINX ([see requirements](requirements.md)).
Setting up the DNS record is beyond the scope of this document; check with your
DNS provider for information.
Alternatively, you can use free public services like [nip.io](https://nip.io)
which provide automatic wildcard DNS without any configuration. For [nip.io](https://nip.io),
set the Auto DevOps base domain to `10.0.2.2.nip.io`.
After completing setup, all requests hit the load balancer, which routes requests
to the Kubernetes pods running your application.
## Auto DevOps requirements for Kubernetes
To make full use of Auto DevOps with Kubernetes, you need:
- **Kubernetes** (for [Auto Review Apps](stages.md#auto-review-apps) and
[Auto Deploy](stages.md#auto-deploy))
To enable deployments, you need:
1. A [Kubernetes 1.12+ cluster](../../user/infrastructure/clusters/_index.md) for your
project.
For Kubernetes 1.16+ clusters, you must perform additional configuration for
[Auto Deploy for Kubernetes 1.16+](stages.md#kubernetes-116).
1. For external HTTP traffic, an Ingress controller is required. For regular
deployments, any Ingress controller should work, but as of GitLab 14.0,
[canary deployments](../../user/project/canary_deployments.md) require
NGINX Ingress. You can deploy the NGINX Ingress controller to your
Kubernetes cluster either through the GitLab [Cluster management project template](../../user/clusters/management_project_template.md)
or manually by using the [`ingress-nginx`](https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx)
Helm chart.
When deploying [using custom charts](customize.md#custom-helm-chart), you must
[annotate](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/)
the Ingress manifest to be scraped by Prometheus using
`prometheus.io/scrape: "true"` and `prometheus.io/port: "10254"`. A sketch of
these annotations appears after this list.
{{< alert type="note" >}}
If your cluster is installed on bare metal, see
[Auto DevOps Requirements for bare metal](#auto-devops-requirements-for-bare-metal).
{{< /alert >}}
- **Base domain** (for [Auto Review Apps](stages.md#auto-review-apps) and
[Auto Deploy](stages.md#auto-deploy))
You must [specify the Auto DevOps base domain](#auto-devops-base-domain),
which all of your Auto DevOps applications use. This domain must be configured
with wildcard DNS.
- **GitLab Runner** (for all stages)
Your runner must be configured to run Docker, usually with either the
[Docker](https://docs.gitlab.com/runner/executors/docker.html)
or [Kubernetes](https://docs.gitlab.com/runner/executors/kubernetes/) executors, with
[privileged mode enabled](https://docs.gitlab.com/runner/executors/docker.html#use-docker-in-docker-with-privileged-mode).
The runners don't need to be installed in the Kubernetes cluster, but the
Kubernetes executor is easy to use and scales automatically.
You can configure Docker-based runners to autoscale as well, using
[Docker Machine](https://docs.gitlab.com/runner/executors/docker_machine.html).
Runners should be registered as [instance runners](../../ci/runners/runners_scope.md#instance-runners)
for the entire GitLab instance, or [project runners](../../ci/runners/runners_scope.md#project-runners)
that are assigned to specific projects.
- **cert-manager** (optional, for TLS/HTTPS)
To enable HTTPS endpoints for your application, you can [install cert-manager](https://cert-manager.io/docs/releases/),
a native Kubernetes certificate management controller that helps with issuing
certificates. Installing cert-manager on your cluster issues a
[Let's Encrypt](https://letsencrypt.org/) certificate and ensures the
certificates are valid and up-to-date.
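As referenced in the Ingress item above, a sketch of the Prometheus annotations on an Ingress manifest; the resource name is a placeholder and the rest of the spec is omitted:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app  # placeholder
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "10254"
# spec omitted: add your rules or defaultBackend here
```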
If you don't have Kubernetes or Prometheus configured, then
[Auto Review Apps](stages.md#auto-review-apps) and
[Auto Deploy](stages.md#auto-deploy)
are skipped.
After all requirements are met, you can [enable Auto DevOps](_index.md#enable-or-disable-auto-devops).
## Auto DevOps requirements for bare metal
According to the [Kubernetes Ingress-NGINX docs](https://kubernetes.github.io/ingress-nginx/deploy/baremetal/):
> In traditional cloud environments, where network load balancers are available on-demand,
> a single Kubernetes manifest suffices to provide a single point of contact to the NGINX Ingress
> controller to external clients and, indirectly, to any application running inside the cluster.
> Bare-metal environments lack this commodity, requiring a slightly different setup to offer the
> same kind of access to external consumers.
The documentation linked previously explains the issue and provides possible solutions, for example:
- Through [MetalLB](https://github.com/metallb/metallb).
- Through [PorterLB](https://github.com/kubesphere/porterlb).
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Auto DevOps
description: Automated DevOps, language detection, deployment, and customization.
breadcrumbs:
- doc
- topics
- autodevops
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
Auto DevOps turns your code into production-ready applications without the usual configuration overhead.
The entire DevOps lifecycle is pre-configured using industry best practices. Start with the defaults
to ship quickly, then customize when you need more control. No complex configuration files or deep
DevOps expertise is required.
With Auto DevOps you get:
- CI/CD pipelines that automatically detect your language and framework
- Built-in security scanning to find vulnerabilities before they reach production
- Code quality and performance testing on every commit
- Ready-to-use review apps for previewing changes in a live environment
- Quick deployments to Kubernetes clusters
- Progressive deployment strategies that reduce risk and downtime
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an introduction to Auto DevOps, watch [Auto DevOps](https://youtu.be/0Tc0YYBxqi4).
<!-- Video published on 2018-06-22 -->
## Auto DevOps features
Auto DevOps supports development during each of the [DevOps stages](stages.md).
| Stage | Auto DevOps feature |
|---------|-------------|
| Build | [Auto Build](stages.md#auto-build) |
| Build | [Auto Dependency Scanning](stages.md#auto-dependency-scanning) |
| Test | [Auto Test](stages.md#auto-test) |
| Test | [Auto Browser Performance Testing](stages.md#auto-browser-performance-testing) |
| Test | [Auto Code Intelligence](stages.md#auto-code-intelligence) |
| Test | [Auto Code Quality](stages.md#auto-code-quality) |
| Test | [Auto Container Scanning](stages.md#auto-container-scanning) |
| Deploy | [Auto Review Apps](stages.md#auto-review-apps) |
| Deploy | [Auto Deploy](stages.md#auto-deploy) |
| Secure | [Auto Dynamic Application Security Testing (DAST)](stages.md#auto-dast) |
| Secure | [Auto Static Application Security Testing (SAST)](stages.md#auto-sast) |
| Secure | [Auto Secret Detection](stages.md#auto-secret-detection) |
### Comparison to application platforms and PaaS
Auto DevOps provides features often included in an application
platform or in a Platform as a Service (PaaS).
Inspired by [Heroku](https://www.heroku.com/), Auto DevOps goes beyond it
in multiple ways:
- Auto DevOps works with any Kubernetes cluster.
- There is no additional cost.
- You can use a cluster hosted by yourself or on any public cloud.
- Auto DevOps offers an incremental graduation path. If you need to [customize](customize.md), start by changing the templates and evolve from there.
## Get started with Auto DevOps
To get started, you only need to [enable Auto DevOps](#enable-or-disable-auto-devops).
This is enough to run an Auto DevOps pipeline to build and
test your application.
If you want to build, test, and deploy your app:
1. View the [requirements for deployment](requirements.md).
1. [Enable Auto DevOps](#enable-or-disable-auto-devops).
1. [Deploy your app to a cloud provider](#deploy-your-app-to-a-cloud-provider).
### Enable or disable Auto DevOps
Auto DevOps runs pipelines automatically only if a [`Dockerfile` or matching buildpack](stages.md#auto-build) exists.
You can enable or disable Auto DevOps for a project or an entire group. Instance administrators
can also [set Auto DevOps as the default](../../administration/settings/continuous_integration.md#configure-auto-devops-for-all-projects)
for all projects in an instance.
Before enabling Auto DevOps, consider [preparing it for deployment](requirements.md).
If you don't, Auto DevOps can build and test your app, but cannot deploy it.
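If you prefer configuration as code to the settings toggle, you can instead include the Auto DevOps template from your own `.gitlab-ci.yml`:

```yaml
# Sketch: run the Auto DevOps pipeline from an explicit CI/CD configuration
include:
  - template: Auto-DevOps.gitlab-ci.yml
```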
#### Per project
To use Auto DevOps for individual projects, you can enable it on a
project-by-project basis. If you intend to use it for more projects,
you can enable it for a [group](#per-group) or an
[instance](../../administration/settings/continuous_integration.md#configure-auto-devops-for-all-projects).
This can save you the time of enabling it in each project.
Prerequisites:
- You must have at least the Maintainer role for the project.
- Ensure your project does not have a `.gitlab-ci.yml` present. If present, your CI/CD configuration takes
precedence over the Auto DevOps pipeline.
To enable Auto DevOps for a project:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > CI/CD**.
1. Expand **Auto DevOps**.
1. Select the **Default to Auto DevOps pipeline** checkbox.
1. Optional but recommended. Add the [base domain](requirements.md#auto-devops-base-domain).
1. Optional but recommended. Choose the [deployment strategy](requirements.md#auto-devops-deployment-strategy).
1. Select **Save changes**.
GitLab triggers the Auto DevOps pipeline on the default branch.
To disable it, follow the same process and clear the
**Default to Auto DevOps pipeline** checkbox.
#### Per group
When you enable Auto DevOps for a group, the subgroups and
projects in that group inherit the configuration. You can save time by
enabling Auto DevOps for a group instead of enabling it for each
subgroup or project.
When enabled for a group, you can still disable Auto DevOps
for the subgroups and projects where you don't want to use it.
Prerequisites:
- You must have the Owner role for the group.
To enable Auto DevOps for a group:
1. On the left sidebar, select **Search or go to** and find your group.
1. Select **Settings > CI/CD**.
1. Expand **Auto DevOps**.
1. Select the **Default to Auto DevOps pipeline** checkbox.
1. Select **Save changes**.
To disable Auto DevOps for a group, follow the same process and
clear the **Default to Auto DevOps pipeline** checkbox.
After enabling Auto DevOps for a group, you can trigger the
Auto DevOps pipeline for any project that belongs to that group:
1. On the left sidebar, select **Search or go to** and find your project.
1. Make sure the project doesn't contain a `.gitlab-ci.yml` file.
1. Select **Build > Pipelines**.
1. To trigger the Auto DevOps pipeline, select **New pipeline**.
### Deploy your app to a cloud provider
- [Use Auto DevOps to deploy to a Kubernetes cluster on Google Kubernetes Engine (GKE)](cloud_deployments/auto_devops_with_gke.md)
- [Use Auto DevOps to deploy to a Kubernetes cluster on Amazon Elastic Kubernetes Service (EKS)](cloud_deployments/auto_devops_with_eks.md)
- [Use Auto DevOps to deploy to EC2](cloud_deployments/auto_devops_with_ec2.md)
- [Use Auto DevOps to deploy to ECS](cloud_deployments/auto_devops_with_ecs.md)
## Upgrade Auto DevOps dependencies when updating GitLab
When updating GitLab, you might need to upgrade Auto DevOps dependencies to
match your new GitLab version:
- [Upgrading Auto DevOps resources](upgrading_auto_deploy_dependencies.md):
- Auto DevOps template.
- Auto Deploy template.
- Auto Deploy image.
- Helm.
- Kubernetes.
- Environment variables.
- [Upgrading PostgreSQL](upgrading_postgresql.md).
## Private registry support
There is no guarantee that you can use a private container registry with Auto DevOps.
Instead, use the [GitLab container registry](../../user/packages/container_registry/_index.md) with Auto DevOps to
simplify configuration and prevent any unforeseen issues.
## Install applications behind a proxy
The GitLab integration with Helm does not support installing applications when
behind a proxy.
If you want to do so, you must inject proxy settings into the
installation pods at runtime.
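One possible approach is to pass proxy environment variables through Helm chart values. This is only a sketch: the value names depend on the chart (the `ingress-nginx` chart, for example, exposes a `controller.extraEnvs` value), and `proxy.example.com:3128` is a hypothetical proxy address.

```shell
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace gitlab-managed-apps --create-namespace \
  --set 'controller.extraEnvs[0].name=HTTPS_PROXY' \
  --set 'controller.extraEnvs[0].value=http://proxy.example.com:3128'  # hypothetical proxy
```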
## Related topics
- [Continuous methodologies](../../ci/_index.md)
- [Docker](https://docs.docker.com)
- [GitLab Runner](https://docs.gitlab.com/runner/)
- [Helm](https://helm.sh/docs/)
- [Kubernetes](https://kubernetes.io/docs/home/)
- [Prometheus](https://prometheus.io/docs/introduction/overview/)
## Troubleshooting
See [troubleshooting Auto DevOps](troubleshooting.md).
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Prepare Auto DevOps for deployment
breadcrumbs:
- doc
- topics
- autodevops
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
If you enable Auto DevOps without setting the base domain and deployment
strategy, GitLab can't deploy your application directly. Therefore, we
recommend that you prepare them before enabling Auto DevOps.
## Deployment strategy
When using Auto DevOps to deploy your applications, choose the
[continuous deployment strategy](../../ci/_index.md)
that works best for your needs:
| Deployment strategy | Setup | Methodology |
|-------------------------------------------------------------------------|-------|-------------|
| **Continuous deployment to production** | Enables [Auto Deploy](stages.md#auto-deploy) with the default branch continuously deployed to production. | Continuous deployment to production.|
| **Continuous deployment to production using timed incremental rollout** | Sets the [`INCREMENTAL_ROLLOUT_MODE`](cicd_variables.md#timed-incremental-rollout-to-production) variable to `timed`. | Continuously deploy to production with a 5-minute delay between rollouts. |
| **Automatic deployment to staging, manual deployment to production** | Sets [`STAGING_ENABLED`](cicd_variables.md#deploy-policy-for-staging-and-production-environments) to `1` and [`INCREMENTAL_ROLLOUT_MODE`](cicd_variables.md#incremental-rollout-to-production) to `manual`. | The default branch is continuously deployed to staging and continuously delivered to production. |
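Each strategy maps to CI/CD variables that you could also set yourself. For example, a sketch of the third strategy expressed as variables in `.gitlab-ci.yml`:

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  STAGING_ENABLED: "1"                # deploy the default branch to staging automatically
  INCREMENTAL_ROLLOUT_MODE: "manual"  # require a manual action for the production rollout
```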
You can choose the deployment method when enabling Auto DevOps or later:
1. In GitLab, go to your project's **Settings > CI/CD > Auto DevOps**.
1. Choose the deployment strategy.
1. Select **Save changes**.
{{< alert type="note" >}}
Use the [blue-green deployment](../../ci/environments/incremental_rollouts.md#blue-green-deployment) technique
to minimize downtime and risk.
{{< /alert >}}
## Auto DevOps base domain
The Auto DevOps base domain is required to use
[Auto Review Apps](stages.md#auto-review-apps) and [Auto Deploy](stages.md#auto-deploy).
To define the base domain, do one of the following:
- In the project, group, or instance: go to your cluster settings and add it there.
- In the project or group: add it as an environment variable: `KUBE_INGRESS_BASE_DOMAIN` (see the example after this list).
- In the instance: go to the **Admin** area, then **Settings > CI/CD > Continuous Integration and Delivery** and add it there.
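For example, a minimal sketch of setting the variable at the project level in `.gitlab-ci.yml`, assuming a base domain of `example.com`:

```yaml
variables:
  KUBE_INGRESS_BASE_DOMAIN: example.com
```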
The base domain variable `KUBE_INGRESS_BASE_DOMAIN` follows the same order of precedence
as other environment [variables](../../ci/variables/_index.md#cicd-variable-precedence).
If you don't specify the base domain in your projects and groups, Auto DevOps uses the instance-wide **Auto DevOps domain**.
Auto DevOps requires a wildcard DNS `A` record matching the base domains. For
a base domain of `example.com`, you'd need a DNS entry like:
```plaintext
*.example.com 3600 A 10.0.2.2
```
In this case, the deployed applications are served from `example.com`, and `10.0.2.2`
is the IP address of your load balancer, generally NGINX ([see requirements](requirements.md)).
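One way to verify the wildcard record is to resolve an arbitrary subdomain, for example with `dig`:

```shell
# Any subdomain should resolve to the load balancer IP address
dig +short anything.example.com
# Expected output: 10.0.2.2
```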
Setting up the DNS record is beyond the scope of this document; check with your
DNS provider for information.
Alternatively, you can use free public services like [nip.io](https://nip.io)
which provide automatic wildcard DNS without any configuration. For [nip.io](https://nip.io),
set the Auto DevOps base domain to `10.0.2.2.nip.io`.
After completing setup, all requests hit the load balancer, which routes requests
to the Kubernetes pods running your application.
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Use Auto DevOps to deploy an application to Amazon Elastic Kubernetes Service
(EKS)
breadcrumbs:
- doc
- topics
- autodevops
- cloud_deployments
---
In this tutorial, we'll help you get started with [Auto DevOps](../_index.md)
through an example of how to deploy an application to Amazon Elastic Kubernetes Service (EKS).
The tutorial uses the GitLab native Kubernetes integration, so you don't need
to create a Kubernetes cluster manually using the AWS console.
You can also follow this tutorial on a GitLab Self-Managed instance.
Ensure your own [runners are configured](../../../ci/runners/_index.md).
To deploy a project to EKS:
1. [Configure your Amazon account](#configure-your-amazon-account)
1. [Create a Kubernetes cluster and deploy the agent](#create-a-kubernetes-cluster)
1. [Create a new project from a template](#create-an-application-project-from-a-template)
1. [Configure the agent](#configure-the-agent)
1. [Install Ingress](#install-ingress)
1. [Configure Auto DevOps](#configure-auto-devops)
1. [Enable Auto DevOps and run the pipeline](#enable-auto-devops-and-run-the-pipeline)
1. [Deploy the application](#deploy-the-application)
## Configure your Amazon account
Before you create and connect your Kubernetes cluster to your GitLab project,
you need an [Amazon Web Services account](https://aws.amazon.com/).
Sign in with an existing Amazon account or create a new one.
## Create a Kubernetes cluster
To create a new cluster on Amazon EKS:
- Follow the steps in [Create an Amazon EKS cluster](../../../user/infrastructure/clusters/connect/new_eks_cluster.md).
If you prefer, you can also create a cluster manually using `eksctl`.
## Create an application project from a template
Use a GitLab project template to get started. As the name suggests,
those projects provide a bare-bones application built on some well-known frameworks.
{{< alert type="warning" >}}
Create the application project in the group hierarchy at the same level or below the project for cluster management. Otherwise, it fails to [authorize the agent](../../../user/clusters/agent/ci_cd_workflow.md#authorize-agent-access).
{{< /alert >}}
1. On the left sidebar, at the top, select **Create new** ({{< icon name="plus" >}}) and **New project/repository**.
1. Select **Create from template**.
1. Select the **Ruby on Rails** template.
1. Give your project a name, optionally a description, and make it public so that
you can take advantage of the features available in the
[GitLab Ultimate plan](https://about.gitlab.com/pricing/).
1. Select **Create project**.
Now you have an application project you are going to deploy to the EKS cluster.
## Configure the agent
Next, we'll configure the GitLab agent for Kubernetes so we can use it to deploy the application project.
1. Go to the project [we created to manage the cluster](#create-a-kubernetes-cluster).
1. Go to the [agent configuration file](../../../user/clusters/agent/install/_index.md#create-an-agent-configuration-file) (`.gitlab/agents/eks-agent/config.yaml`) and edit it.
1. Configure the `ci_access:projects` attribute. Use the application project path as the `id`:
```yaml
ci_access:
projects:
- id: path/to/application-project
```
## Install Ingress
After your cluster is running, you must install NGINX Ingress Controller as a
load balancer to route traffic from the internet to your application.
Install the NGINX Ingress Controller
through the GitLab [Cluster management project template](../../../user/clusters/management_project_template.md),
or manually via the command line:
1. Ensure you have `kubectl` and Helm installed on your machine.
1. Create an IAM role to access the cluster.
1. Create an access token to access the cluster.
1. Connect to your cluster, then use Helm to install the NGINX Ingress Controller:
```shell
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace gitlab-managed-apps --create-namespace
# Check that the ingress controller is installed successfully
kubectl get service ingress-nginx-controller -n gitlab-managed-apps
```
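If the controller installed successfully, the `kubectl get service` command prints the controller's service. On EKS, the `EXTERNAL-IP` column shows the load balancer's hostname. The output looks similar to the following (values are illustrative):

```plaintext
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP                                                PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.100.24.84   a1b2c3d4e5f67890-1234567890.us-west-2.elb.amazonaws.com   80:30712/TCP,443:31624/TCP   2m
```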
## Configure Auto DevOps
Follow these steps to configure the base domain and other settings required for Auto DevOps.
1. A few minutes after you install NGINX, the load balancer obtains an IP address, and you can
get the external IP address with the following command:
```shell
kubectl get all -n gitlab-managed-apps --selector app.kubernetes.io/instance=ingress-nginx
```
Replace `gitlab-managed-apps` if you have overridden the namespace.
Next, find the actual external IP address for your cluster with the following command:
```shell
nslookup [External IP]
```
Here, `[External IP]` is the hostname shown by the previous command.
The IP address might be listed in the `Non-authoritative answer:` section of the response.
Copy this IP address, as you need it in the next step.
1. Go back to the application project.
1. On the left sidebar, select **Settings > CI/CD** and expand **Variables**.
- Add a key called `KUBE_INGRESS_BASE_DOMAIN` with the application deployment domain as the value. For this example, use the domain `<IP address>.nip.io`.
- Add a key called `KUBE_NAMESPACE` with the Kubernetes namespace your deployments should target. To use different namespaces per environment, create a separate variable for each environment and set its environment scope.
- Add a key called `KUBE_CONTEXT` with a value like `path/to/agent/project:eks-agent`. Select the environment scope of your choice.
- Select **Save changes**.
## Enable Auto DevOps and run the pipeline
While Auto DevOps is enabled by default, it can be disabled for
the entire instance (on GitLab Self-Managed) and for individual groups. If it's
disabled, complete these steps to enable it:
1. On the left sidebar, select **Search or go to** and find the application project.
1. Select **Settings > CI/CD**.
1. Expand **Auto DevOps**.
1. Select **Default to Auto DevOps pipeline** to display more options.
1. In **Deployment strategy**, select your desired [continuous deployment strategy](../requirements.md#auto-devops-deployment-strategy)
to deploy the application to production after the pipeline successfully runs on the default branch.
1. Select **Save changes**.
1. Edit the `.gitlab-ci.yml` file to include the Auto DevOps template, and commit the change to the default branch:
```yaml
include:
- template: Auto-DevOps.gitlab-ci.yml
```
The commit should trigger a pipeline. In the next section, we explain what each job does in the pipeline.
## Deploy the application
When your pipeline runs, what is it doing?
To view the jobs in the pipeline, select the pipeline's status badge. The
{{< icon name="status_running" >}} icon displays when pipeline jobs are running, and updates
without refreshing the page to {{< icon name="status_success" >}} (for success) or
{{< icon name="status_failed" >}} (for failure) when the jobs complete.
The jobs are separated into stages:

- **Build** - The application builds a Docker image and uploads it to your project's
[Container Registry](../../../user/packages/container_registry/_index.md) ([Auto Build](../stages.md#auto-build)).
- **Test** - GitLab runs various checks on the application. In the test stage, all
jobs except `test` are allowed to fail:
- The `test` job runs unit and integration tests by detecting the language and
framework ([Auto Test](../stages.md#auto-test))
- The `code_quality` job checks the code quality and is allowed to fail
([Auto Code Quality](../stages.md#auto-code-quality))
- The `container_scanning` job checks the Docker container for
vulnerabilities and is allowed to fail ([Auto Container Scanning](../stages.md#auto-container-scanning))
- The `dependency_scanning` job checks the application's dependencies for known
vulnerabilities and is allowed to fail
([Auto Dependency Scanning](../stages.md#auto-dependency-scanning))
- Jobs suffixed with `-sast` run static analysis on the current code to check for potential
security issues, and are allowed to fail ([Auto SAST](../stages.md#auto-sast))
- The `secret-detection` job checks for leaked secrets and is allowed to fail ([Auto Secret Detection](../stages.md#auto-secret-detection))
- **Review** - Pipelines on the default branch include this stage with a `dast_environment_deploy` job.
To learn more, see [Dynamic Application Security Testing (DAST)](../../../user/application_security/dast/_index.md).
- **Production** - After the tests and checks finish, the application deploys in
Kubernetes ([Auto Deploy](../stages.md#auto-deploy)).
- **Performance** - Performance tests are run on the deployed application
([Auto Browser Performance Testing](../stages.md#auto-browser-performance-testing)).
- **Cleanup** - Pipelines on the default branch include this stage with a `stop_dast_environment` job.
After running a pipeline, you should view your deployed website and learn how
to monitor it.
### Monitor your project
After successfully deploying your application, you can view its website and check
on its health on the **Environments** page by navigating to
**Operate > Environments**. This page displays details about
the deployed applications, and the right-hand column displays icons that link
you to common environment tasks:

- **Open live environment** ({{< icon name="external-link" >}}) - Opens the URL of the application deployed in production
- **Monitoring** ({{< icon name="chart" >}}) - Opens the metrics page where Prometheus collects data
about the Kubernetes cluster and how the application
affects it in terms of memory usage, CPU usage, and latency
- **Deploy to** ({{< icon name="play" >}} {{< icon name="chevron-lg-down" >}}) - Displays a list of environments you can deploy to
- **Terminal** ({{< icon name="terminal" >}}) - Opens a [web terminal](../../../ci/environments/_index.md#web-terminals-deprecated)
session inside the container where the application is running
- **Re-deploy to environment** ({{< icon name="repeat" >}}) - For more information, see
[Retrying and rolling back](../../../ci/environments/deployments.md#retry-or-roll-back-a-deployment)
- **Stop environment** ({{< icon name="stop" >}}) - For more information, see
[Stopping an environment](../../../ci/environments/_index.md#stopping-an-environment)
GitLab displays the [deploy board](../../../user/project/deploy_boards.md) below the
environment's information, with squares representing pods in your
Kubernetes cluster, color-coded to show their status. Hovering over a square on
the deploy board displays the state of the deployment, and selecting the square
takes you to the pod's logs page.
Although the example shows only one pod hosting the application at the moment, you can add
more pods by defining the [`REPLICAS` CI/CD variable](../cicd_variables.md)
in **Settings > CI/CD > Variables**.
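For example, a sketch of requesting three pods by setting the variable in `.gitlab-ci.yml` (the value is illustrative):

```yaml
variables:
  REPLICAS: 3
```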
### Work with branches
Next, create a feature branch to add content to your application:
1. In your project's repository, go to the following file: `app/views/welcome/index.html.erb`.
This file should only contain a paragraph: `<p>You're on Rails!</p>`.
1. Open the GitLab [Web IDE](../../../user/project/web_ide/_index.md) to make the change.
1. Edit the file so it contains:
```html
<p>You're on Rails! Powered by GitLab Auto DevOps.</p>
```
1. Stage the file. Add a commit message, then create a new branch and a merge request
by selecting **Commit**.

After submitting the merge request, GitLab runs your pipeline, and all the jobs
in it, as [described previously](#deploy-the-application), in addition to
a few more that run only on branches other than the default branch.
After a few minutes, a test fails, which means your change
'broke' a test. Select the failed `test` job to see more information
about it:
```plaintext
Failure:
WelcomeControllerTest#test_should_get_index [/app/test/controllers/welcome_controller_test.rb:7]:
<You're on Rails!> expected but was
<You're on Rails! Powered by GitLab Auto DevOps.>..
Expected 0 to be >= 1.
bin/rails test test/controllers/welcome_controller_test.rb:4
```
To fix the broken test:
1. Return to your merge request.
1. In the upper right corner, select **Code**, then select **Open in Web IDE**.
1. In the left-hand directory of files, find the `test/controllers/welcome_controller_test.rb`
file, and select it to open it.
1. Change line 7 to say `You're on Rails! Powered by GitLab Auto DevOps.`
1. On the left sidebar, select **Source Control** ({{< icon name="merge" >}}).
1. Write a commit message, and select **Commit**.
Return to the **Overview** page of your merge request, and you should not only
see the test passing, but also the application deployed as a
[review application](../stages.md#auto-review-apps). You can visit it by selecting
the **View app** {{< icon name="external-link" >}} button to see your changes deployed.
After merging the merge request, GitLab runs the pipeline on the default branch,
and then deploys the application to production.
## Conclusion
After completing this project, you should have a solid understanding of the basics of Auto DevOps.
You went from building and testing to deploying and monitoring an application,
all in GitLab. Despite its automatic nature, Auto DevOps can also be configured
and customized to fit your workflow. Here are some helpful resources for further reading:
1. [Auto DevOps](../_index.md)
1. [Multiple Kubernetes clusters](../multiple_clusters_auto_devops.md)
1. [Incremental rollout to production](../cicd_variables.md#incremental-rollout-to-production)
1. [Disable jobs you don't need with CI/CD variables](../cicd_variables.md)
1. [Use your own buildpacks to build your application](../customize.md#custom-buildpacks)
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Use Auto DevOps to deploy to Amazon ECS
breadcrumbs:
- doc
- topics
- autodevops
- cloud_deployments
---
You can choose to target AWS ECS as a deployment platform instead of using Kubernetes.
To get started with Auto DevOps on AWS ECS, you must add a specific CI/CD variable:
1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > CI/CD**.
1. Expand **Auto DevOps**.
1. Specify which AWS platform to target during the Auto DevOps deployment
by adding the `AUTO_DEVOPS_PLATFORM_TARGET` variable with one of the following values:
- `FARGATE` if the service you're targeting must be of launch type FARGATE.
- `ECS` if you're not enforcing any launch type check when deploying to ECS.
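As an alternative to the UI, a sketch of setting the same variable in `.gitlab-ci.yml`, following the pattern used for EC2 deployments:

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  AUTO_DEVOPS_PLATFORM_TARGET: FARGATE  # or ECS
```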
When you trigger a pipeline, if Auto DevOps is enabled and you have correctly
[entered AWS credentials as variables](../../../ci/cloud_deployment/_index.md#authenticate-gitlab-with-aws),
your application is deployed to AWS ECS.
If you have both a valid `AUTO_DEVOPS_PLATFORM_TARGET` variable and a Kubernetes cluster tied to your project,
only the deployment to Kubernetes runs.
{{< alert type="warning" >}}
Setting the `AUTO_DEVOPS_PLATFORM_TARGET` variable to `ECS` triggers jobs
defined in the [`Jobs/Deploy/ECS.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Deploy/ECS.gitlab-ci.yml).
However, it's not recommended to [include](../../../ci/yaml/_index.md#includetemplate)
this template on its own. It is designed to be used with Auto DevOps only, and may
change unexpectedly, causing your pipeline to fail. The job names within this
template may also change. Do not override these job names in your own pipeline,
because the override stops working when the names change.
{{< /alert >}}
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Use Auto DevOps to deploy to EC2
breadcrumbs:
- doc
- topics
- autodevops
- cloud_deployments
---
To use [Auto DevOps](../_index.md) to deploy to EC2:
1. Define [your AWS credentials as CI/CD variables](../../../ci/cloud_deployment/_index.md#authenticate-gitlab-with-aws).
1. In your `.gitlab-ci.yml` file, reference the `Auto-DevOps.gitlab-ci.yml` template.
1. Define a job for the `build` stage named `build_artifact`. For example:
```yaml
# .gitlab-ci.yml
include:
- template: Auto-DevOps.gitlab-ci.yml
variables:
AUTO_DEVOPS_PLATFORM_TARGET: EC2
build_artifact:
stage: build
script:
- <your build script goes here>
artifacts:
paths:
- <built artifact>
```
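For instance, a hypothetical `build_artifact` job for a Node.js application (the image, script, and artifact path are illustrative, not part of the template):

```yaml
build_artifact:
  stage: build
  image: node:20
  script:
    - npm ci
    - npm run build   # produces the dist/ directory
  artifacts:
    paths:
      - dist/
```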
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For a video walkthrough of this process, view [Auto Deploy to EC2](https://www.youtube.com/watch?v=4B-qSwKnacA).
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Use Auto DevOps to deploy an application to Google Kubernetes Engine
breadcrumbs:
- doc
- topics
- autodevops
- cloud_deployments
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
In this tutorial, we'll help you get started with [Auto DevOps](../_index.md)
through an example of how to deploy an application to Google Kubernetes Engine (GKE).
The tutorial uses the GitLab native Kubernetes integration, so you don't need
to create a Kubernetes cluster manually using the Google Cloud Platform console.
You create an application from a GitLab template and deploy it.
These instructions also work for GitLab Self-Managed.
Ensure your own [runners are configured](../../../ci/runners/_index.md) and
[Google OAuth is enabled](../../../integration/google.md).
To deploy a project to Google Kubernetes Engine, follow the steps below:
1. [Configure your Google account](#configure-your-google-account)
1. [Create a Kubernetes cluster and deploy the agent](#create-a-kubernetes-cluster)
1. [Create a new project from a template](#create-an-application-project-from-a-template)
1. [Configure the agent](#configure-the-agent)
1. [Install Ingress](#install-ingress)
1. [Configure Auto DevOps](#configure-auto-devops)
1. [Enable Auto DevOps and run the pipeline](#enable-auto-devops-and-run-the-pipeline)
1. [Deploy the application](#deploy-the-application)
## Configure your Google account
Before creating and connecting your Kubernetes cluster to your GitLab project,
you need a [Google Cloud Platform account](https://console.cloud.google.com).
Sign in with an existing Google account, such as the one you use to access Gmail
or Google Drive, or create a new one.
1. Follow the steps described in the ["Before you begin" section](https://cloud.google.com/kubernetes-engine/docs/deploy-app-cluster#before-you-begin)
of the Kubernetes Engine documentation to enable the required APIs and related services.
1. Ensure you've created a [billing account](https://cloud.google.com/billing/docs/how-to/manage-billing-account)
with Google Cloud Platform.
{{< alert type="note" >}}
Every new Google Cloud Platform (GCP) account receives [$300 in credit](https://console.cloud.google.com/freetrial),
and in partnership with Google, GitLab is able to offer an additional $200 for new
GCP accounts to get started with the GitLab integration with Google Kubernetes Engine.
[Follow this link](https://cloud.google.com/partners?pcn_code=0014M00001h35gDQAQ#contact-form)
and apply for credit.
{{< /alert >}}
## Create a Kubernetes cluster
To create a new cluster on Google Kubernetes Engine (GKE), use an Infrastructure as Code (IaC) approach
by following the steps in the [Create a Google GKE cluster](../../../user/infrastructure/clusters/connect/new_gke_cluster.md) guide.
The guide requires you to create a new project that uses [Terraform](https://www.terraform.io/) to create a GKE cluster and install the GitLab agent for Kubernetes.
This project is where the configuration for the GitLab agent for Kubernetes resides.
## Create an application project from a template
Use a GitLab project template to get started. As the name suggests,
those projects provide a bare-bones application built on some well-known frameworks.
{{< alert type="warning" >}}
Create the application project in the group hierarchy at the same level or below the project for cluster management. Otherwise, it fails to [authorize the agent](../../../user/clusters/agent/ci_cd_workflow.md#authorize-agent-access).
{{< /alert >}}
1. On the left sidebar, at the top, select **Create new** ({{< icon name="plus" >}}) and **New project/repository**.
1. Select **Create from template**.
1. Select the **Ruby on Rails** template.
1. Give your project a name, optionally a description, and make it public so that
you can take advantage of the features available in the
[GitLab Ultimate plan](https://about.gitlab.com/pricing/).
1. Select **Create project**.
Now you have an application project you are going to deploy to the GKE cluster.
## Configure the agent
Next, configure the GitLab agent for Kubernetes so you can use it to deploy the application project.
1. Go to the project [we created to manage the cluster](#create-a-kubernetes-cluster).
1. Go to the [agent configuration file](../../../user/clusters/agent/install/_index.md#create-an-agent-configuration-file) (`.gitlab/agents/<agent-name>/config.yaml`) and edit it.
1. Configure the `ci_access:projects` attribute. Use the application's project path as the `id`:
```yaml
ci_access:
projects:
- id: path/to/application-project
```
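After the agent configuration is merged, CI/CD jobs in the application project can reach the cluster through the agent's Kubernetes context. The following is a minimal sketch of how a job script could verify this, assuming the hypothetical agent project path `path/to/cluster-management-project` and agent name `my-agent`:

```shell
# List the contexts GitLab injects into the job's kubeconfig
kubectl config get-contexts

# Select the agent's context; names follow the <agent-project-path>:<agent-name> pattern
kubectl config use-context path/to/cluster-management-project:my-agent

# Confirm the cluster is reachable (requires appropriate permissions)
kubectl get nodes
```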
## Install Ingress
After your cluster is running, you must install the NGINX Ingress Controller as a
load balancer to route traffic from the internet to your application.
Install the NGINX Ingress Controller
through the GitLab [Cluster management project template](../../../user/clusters/management_project_template.md),
or manually with Google Cloud Shell:
1. Go to your cluster's details page, and select the **Advanced Settings** tab.
1. Select the link to Google Kubernetes Engine to visit the cluster on Google Cloud Console.
1. On the GKE cluster page, select **Connect**, then select **Run in Cloud Shell**.
1. After the Cloud Shell starts, run these commands to install NGINX Ingress Controller:
```shell
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace gitlab-managed-apps --create-namespace
# Check that the ingress controller is installed successfully
kubectl get service ingress-nginx-controller -n gitlab-managed-apps
```
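While GCP provisions the load balancer, the service's `EXTERNAL-IP` column may show `<pending>` for a few minutes. You can watch until an address appears:

```shell
# Re-check the service until EXTERNAL-IP is no longer <pending>
kubectl get service ingress-nginx-controller -n gitlab-managed-apps --watch
```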
## Configure Auto DevOps
Follow these steps to configure the base domain and other settings required for Auto DevOps.
1. A few minutes after you install NGINX, the load balancer obtains an IP address, and you can
get the external IP address with the following command:
```shell
kubectl get service ingress-nginx-controller -n gitlab-managed-apps -ojson | jq -r '.status.loadBalancer.ingress[].ip'
```
Replace `gitlab-managed-apps` if you have overwritten your namespace.
Copy this IP address, as you need it in the next step.
1. Go back to the application project.
1. On the left sidebar, select **Settings > CI/CD** and expand **Variables**.
- Add a key called `KUBE_INGRESS_BASE_DOMAIN` with the application deployment domain as the value. For this example, use the domain `<IP address>.nip.io`.
- Add a key called `KUBE_NAMESPACE` with the Kubernetes namespace your deployments should target as the value. You can use different namespaces per environment; to configure this, use the environment scope.
- Add a key called `KUBE_CONTEXT` with the value `<path/to/agent/project>:<agent-name>`. Select the environment scope of your choice.
- Select **Save changes**.
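For example, if the load balancer's external IP address were `34.68.110.250` (an illustrative value, as are the agent path and name), the three variables might look like:

```plaintext
KUBE_INGRESS_BASE_DOMAIN: 34.68.110.250.nip.io
KUBE_NAMESPACE: production
KUBE_CONTEXT: path/to/cluster-management-project:my-agent
```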
## Enable Auto DevOps and run the pipeline
While Auto DevOps is enabled by default, it can be disabled at both
the instance level (on GitLab Self-Managed) and the group level. If it's
disabled, complete these steps to enable it:
1. On the left sidebar, select **Search or go to** and find the application project.
1. Select **Settings > CI/CD**.
1. Expand **Auto DevOps**.
1. Select **Default to Auto DevOps pipeline** to display more options.
1. In **Deployment strategy**, select your desired [continuous deployment strategy](../requirements.md#auto-devops-deployment-strategy)
to deploy the application to production after the pipeline successfully runs on the default branch.
1. Select **Save changes**.
1. Edit the `.gitlab-ci.yml` file to include the Auto DevOps template, and commit the change to the `master` branch:
```yaml
include:
- template: Auto-DevOps.gitlab-ci.yml
```
The commit should trigger a pipeline. In the next section, we explain what each job does in the pipeline.
## Deploy the application
When your pipeline runs, what is it doing?
To view the jobs in the pipeline, select the pipeline's status badge. The
{{< icon name="status_running" >}} icon displays when pipeline jobs are running, and updates
without refreshing the page to {{< icon name="status_success" >}} (for success) or
{{< icon name="status_failed" >}} (for failure) when the jobs complete.
The jobs are separated into stages:

- **Build** - The application builds a Docker image and uploads it to your project's
[Container Registry](../../../user/packages/container_registry/_index.md) ([Auto Build](../stages.md#auto-build)).
- **Test** - GitLab runs various checks on the application, but all jobs except `test`
are allowed to fail in the test stage:
- The `test` job runs unit and integration tests by detecting the language and
framework ([Auto Test](../stages.md#auto-test))
- The `code_quality` job checks the code quality and is allowed to fail
([Auto Code Quality](../stages.md#auto-code-quality))
- The `container_scanning` job checks the Docker container for any
vulnerabilities and is allowed to fail ([Auto Container Scanning](../stages.md#auto-container-scanning))
- The `dependency_scanning` job checks if the application has any dependencies
susceptible to vulnerabilities and is allowed to fail
([Auto Dependency Scanning](../stages.md#auto-dependency-scanning))
- Jobs suffixed with `-sast` run static analysis on the current code to check for potential
security issues, and are allowed to fail ([Auto SAST](../stages.md#auto-sast))
- The `secret-detection` job checks for leaked secrets and is allowed to fail ([Auto Secret Detection](../stages.md#auto-secret-detection))
- **Review** - Pipelines on the default branch include this stage with a `dast_environment_deploy` job.
For more information, see [Dynamic Application Security Testing (DAST)](../../../user/application_security/dast/_index.md).
- **Production** - After the tests and checks finish, the application deploys in
Kubernetes ([Auto Deploy](../stages.md#auto-deploy)).
- **Performance** - Performance tests are run on the deployed application
([Auto Browser Performance Testing](../stages.md#auto-browser-performance-testing)).
- **Cleanup** - Pipelines on the default branch include this stage with a `stop_dast_environment` job.
After running a pipeline, you should view your deployed website and learn how
to monitor it.
### Monitor your project
After successfully deploying your application, you can view its website and check
on its health on the **Environments** page by navigating to
**Operate > Environments**. This page displays details about
the deployed applications, and the right-hand column displays icons that link
you to common environment tasks:

- **Open live environment** ({{< icon name="external-link" >}}) - Opens the URL of the application deployed in production
- **Monitoring** ({{< icon name="chart" >}}) - Opens the metrics page where Prometheus collects data
about the Kubernetes cluster and how the application
affects it in terms of memory usage, CPU usage, and latency
- **Deploy to** ({{< icon name="play" >}} {{< icon name="chevron-lg-down" >}}) - Displays a list of environments you can deploy to
- **Terminal** ({{< icon name="terminal" >}}) - Opens a [web terminal](../../../ci/environments/_index.md#web-terminals-deprecated)
session inside the container where the application is running
- **Re-deploy to environment** ({{< icon name="repeat" >}}) - For more information, see
[Retrying and rolling back](../../../ci/environments/deployments.md#retry-or-roll-back-a-deployment)
- **Stop environment** ({{< icon name="stop" >}}) - For more information, see
[Stopping an environment](../../../ci/environments/_index.md#stopping-an-environment)
GitLab displays the [deploy board](../../../user/project/deploy_boards.md) below the
environment's information, with squares representing pods in your
Kubernetes cluster, color-coded to show their status. Hovering over a square on
the deploy board displays the state of the deployment, and selecting the square
takes you to the pod's logs page.
{{< alert type="note" >}}
The example shows only one pod hosting the application at the moment, but you can add
more pods by defining the [`REPLICAS` CI/CD variable](../cicd_variables.md)
in **Settings > CI/CD > Variables**.
{{< /alert >}}
### Work with branches
Next, create a feature branch to add content to your application:
1. In your project's repository, go to the following file: `app/views/welcome/index.html.erb`.
This file should only contain a paragraph: `<p>You're on Rails!</p>`.
1. Open the GitLab [Web IDE](../../../user/project/web_ide/_index.md) to make the change.
1. Edit the file so it contains:
```html
<p>You're on Rails! Powered by GitLab Auto DevOps.</p>
```
1. Stage the file. Add a commit message, then create a new branch and a merge request
by selecting **Commit**.

After submitting the merge request, GitLab runs your pipeline, and all the jobs
in it, as [described previously](#deploy-the-application), in addition to
a few more that run only on branches other than the default branch.
After a few minutes, a test fails, which means your change 'broke' a test.
Select the failed `test` job to see more information about it:
```plaintext
Failure:
WelcomeControllerTest#test_should_get_index [/app/test/controllers/welcome_controller_test.rb:7]:
<You're on Rails!> expected but was
<You're on Rails! Powered by GitLab Auto DevOps.>..
Expected 0 to be >= 1.
bin/rails test test/controllers/welcome_controller_test.rb:4
```
To fix the broken test:
1. Return to your merge request.
1. In the upper right corner, select **Code**, then select **Open in Web IDE**.
1. In the left-hand directory of files, find the `test/controllers/welcome_controller_test.rb`
file, and select it to open it.
1. Change line 7 to say `You're on Rails! Powered by GitLab Auto DevOps.`
1. On the left sidebar, select **Source Control** ({{< icon name="merge" >}}).
1. Write a commit message, and select **Commit**.
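If you prefer to verify the fix locally before committing (assuming you have the repository cloned and a working Ruby environment for the template), you can run the affected test file directly:

```shell
# Run only the welcome controller test from the project root
bin/rails test test/controllers/welcome_controller_test.rb
```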
Return to the **Overview** page of your merge request, and you should not only
see the test passing, but also the application deployed as a
[review application](../stages.md#auto-review-apps). You can visit it by selecting
the **View app** {{< icon name="external-link" >}} button to see your changes deployed.
After merging the merge request, GitLab runs the pipeline on the default branch,
and then deploys the application to production.
## Conclusion
After implementing this project, you should have a solid understanding of the basics of Auto DevOps.
You went from building and testing to deploying and monitoring an application,
all in GitLab. Despite its automatic nature, Auto DevOps can also be configured
and customized to fit your workflow. Here are some helpful resources for further reading:
1. [Auto DevOps](../_index.md)
1. [Multiple Kubernetes clusters](../multiple_clusters_auto_devops.md)
1. [Incremental rollout to production](../cicd_variables.md#incremental-rollout-to-production)
1. [Disable jobs you don't need with CI/CD variables](../cicd_variables.md)
1. [Use your own buildpacks to build your application](../customize.md#custom-buildpacks)
---
stage: none
group: unassigned
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Cron
description: Schedule when jobs should run.
breadcrumbs:
- doc
- topics
- cron
---
Cron syntax is used to schedule when jobs should run.
You may need to use a cron syntax string to
create a [pipeline schedule](../../ci/pipelines/schedules.md),
or to prevent unintentional releases by setting a
[deploy freeze](../../user/project/releases/_index.md#prevent-unintentional-releases-by-setting-a-deploy-freeze).
## Cron syntax
Cron scheduling uses a series of five fields, separated by spaces:
```plaintext
# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)
# │ │ │ │ │
# │ │ │ │ │
# │ │ │ │ │
# * * * * * <command to execute>
```
(Source: [Wikipedia](https://en.wikipedia.org/wiki/Cron))
In cron syntax, the asterisk (`*`) means 'every,' so the following cron strings
are valid:
- Run once an hour at the beginning of the hour: `0 * * * *`
- Run once a day at midnight: `0 0 * * *`
- Run once a week at midnight on Sunday morning: `0 0 * * 0`
- Run once a month at midnight of the first day of the month: `0 0 1 * *`
- Run once a month on the 22nd: `0 0 22 * *`
- Run once a year at midnight of 1 January: `0 0 1 1 *`
- Run twice a month at 3 AM, on the 1st and 15th of the month: `0 3 1,15 * *`
For complete cron documentation, refer to the
[crontab(5) Linux manual page](https://man7.org/linux/man-pages/man5/crontab.5.html).
This documentation is accessible offline by entering `man 5 crontab` in a Linux or macOS
terminal.
Additionally, GitLab uses [`fugit`](#how-gitlab-parses-cron-syntax-strings), which
accepts `#` and `%` syntax. This syntax might not work in all cron testers:
- Run once a month on the 2nd Monday: `0 0 * * 1#2`. This syntax is from the [`fugit` hash extension](https://github.com/floraison/fugit#the-hash-extension).
- Run every other Sunday at 0900 hours: `0 9 * * sun%2`. This syntax is from the [`fugit` modulo extension](https://github.com/floraison/fugit#the-modulo-extension).
## Cron examples
```plaintext
# Run at 7:00pm every day:
0 19 * * *
# Run every minute on the 3rd of June:
* * 3 6 *
# Run at 06:30 every Friday:
30 6 * * 5
```
More examples of how to write a cron schedule can be found at
[crontab.guru](https://crontab.guru/examples.html).
## How GitLab parses cron syntax strings
GitLab uses [`fugit`](https://github.com/floraison/fugit) to parse cron syntax
strings on the server and [cron-validator](https://github.com/TheCloudConnectors/cron-validator)
to validate cron syntax in the browser. GitLab uses
[`cRonstrue`](https://github.com/bradymholt/cRonstrue) to convert cron to human-readable strings
in the browser.
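If you want to check how GitLab will interpret a cron string before you use it, one option is to parse it locally with the same `fugit` gem. A minimal sketch, assuming Ruby is available and the gem is installed (`gem install fugit`):

```shell
# Print the next time the standard five-field schedule would fire
ruby -r fugit -e 'puts Fugit.parse_cron("0 3 1,15 * *").next_time'

# The fugit-specific hash extension: second Monday of the month
ruby -r fugit -e 'puts Fugit.parse_cron("0 0 * * 1#2").next_time'
```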
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: To remove unwanted large files from a Git repository and reduce its storage
  size, use the filter-repo command.
title: Reduce repository size
breadcrumbs:
- doc
- topics
- git
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
The size of a Git repository can significantly impact performance and storage costs.
It can differ slightly from one instance to another due to compression, housekeeping, and other factors.
For more information about repository size, see:
- [Repository size](../../user/project/repository/repository_size.md)
- [How repository size is calculated](../../user/project/repository/repository_size.md#size-calculation)
- [Size and storage limits](../../user/project/repository/repository_size.md#size-and-storage-limits)
- [GitLab UI methods to reduce repository size](../../user/project/repository/repository_size.md#methods-to-reduce-repository-size)
## Purge files from repository history
Use this method to remove large files from the entire Git history.
It is not suitable for removing sensitive data like passwords or keys from your repository.
Information about commits, including file content, is cached in the database and remains visible
even after the files have been removed from the repository. To remove sensitive data, use the method
described in [Remove blobs](../../user/project/repository/repository_size.md#remove-blobs).
Prerequisites:
- You must install [`git filter-repo`](https://github.com/newren/git-filter-repo/blob/main/INSTALL.md).
- Optional. Install [`git-sizer`](https://github.com/github/git-sizer#getting-started).
{{< alert type="warning" >}}
Purging files is a destructive operation. Before proceeding, ensure you have a backup of the repository.
{{< /alert >}}
To purge files from a GitLab repository:
1. [Export the project](../../user/project/settings/import_export.md#export-a-project-and-its-data) that contains
a copy of your repository, and download it.
- For large projects, you can use the [Project relations export API](../../api/project_relations_export.md).
1. Decompress and extract the backup:
```shell
tar xzf project-backup.tar.gz
```
1. Clone the repository using `--bare` and `--mirror` options:
```shell
git clone --bare --mirror /path/to/project.bundle
```
1. Go to the `project.git` directory:
```shell
cd project.git
```
1. Update the remote URL:
```shell
git remote set-url origin https://gitlab.example.com/<namespace>/<project_name>.git
```
1. Analyze the repository using `git filter-repo` or `git-sizer`:
- `git filter-repo`:
```shell
git filter-repo --analyze
head filter-repo/analysis/*-{all,deleted}-sizes.txt
```
- `git-sizer`:
```shell
git-sizer
```
1. Purge the history of your repository using one of the following `git filter-repo` options:
- `--path` and `--invert-paths` to purge specific files:
```shell
git filter-repo --path path/to/file.ext --invert-paths
```
- `--strip-blobs-bigger-than` to purge all files larger than a given size, for example 10 MB:
```shell
git filter-repo --strip-blobs-bigger-than 10M
```
For more examples, see the
[`git filter-repo` documentation](https://htmlpreview.github.io/?https://github.com/newren/git-filter-repo/blob/docs/html/git-filter-repo.html#EXAMPLES).
1. Back up the `commit-map`:
```shell
cp filter-repo/commit-map ./_filter_repo_commit_map_$(date +%s)
```
1. Unset the mirror flag:
```shell
git config --unset remote.origin.mirror
```
1. Force push the changes:
```shell
git push origin --force 'refs/heads/*'
git push origin --force 'refs/tags/*'
git push origin --force 'refs/replace/*'
```
For more information about references, see
Git references used by Gitaly.
{{< alert type="note" >}}
This step fails for [protected branches](../../user/project/repository/branches/protected.md) and
[protected tags](../../user/project/protected_tags.md). To proceed, temporarily remove protections.
{{< /alert >}}
1. Wait at least 30 minutes before the next step.
1. Run the [clean up repository](../../user/project/repository/repository_size.md#clean-up-repository) process.
This process only cleans up objects that are more than 30 minutes old.
For more information, see [space not being freed after cleanup](../../user/project/repository/repository_size.md#space-not-being-freed-after-cleanup).
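To get a rough sense of how much the purge helped, you can compare the repository's local object store before and after the rewrite. Sizes vary with compression and housekeeping, so treat this as an approximation:

```shell
# Summarize loose and packed object sizes in human-readable units
git count-objects -v -H

# Optionally re-run the analysis and compare it with the earlier report
git filter-repo --analyze
```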
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
  this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Introduction to Git rebase and force push, methods to resolve merge conflicts
  through the command line.
title: Rebase and resolve merge conflicts
breadcrumbs:
- doc
- topics
- git
---
Git rebase combines changes from one branch into another by moving your commits to the
tip of the target branch. This action:
- Updates branches with the latest code from the target branch.
- Maintains a clean, linear commit history for easier debugging and code reviews.
- Resolves [merge conflicts](../../user/project/merge_requests/conflicts.md) at the level of
individual commits.
- Preserves the chronological order of code changes.
When you rebase:
1. Git imports all the commits submitted to your target branch after you initially created
your branch from it.
1. Git applies the commits from your branch on top of the imported commits. In this example, after
a branch named `feature` is created (in orange), four commits from `main` (in purple) are
imported into the `feature` branch:

While most rebases are performed against `main`, you can rebase against any other
branch. You can also specify a different remote repository.
For example, `upstream` instead of `origin`.
{{< alert type="warning" >}}
`git rebase` rewrites the commit history, which can cause problems on
shared branches and lead to complex merge conflicts.
Instead of rebasing your branch against the default branch,
consider using `git pull origin main`. Pulling has similar
effects with less risk of compromising others' work.
{{< /alert >}}
## Rebase
When you use Git to rebase, each of your commits is reapplied, in order, on top of the target branch.
When merge conflicts occur, you are prompted to address them.
For more advanced options for your commits, use [an interactive rebase](#interactive-rebase).
Prerequisites:
- You must have [permissions](../../user/permissions.md) to force push to branches.
To use Git to rebase your branch against the target branch:
1. Open a terminal and change to your project directory.
1. Ensure you have the latest contents of the target branch.
In this example, the target branch is `main`:
```shell
git fetch origin main
```
1. Check out your branch:
```shell
git checkout my-branch
```
1. Optional. Create a backup of your branch:
```shell
git branch my-branch-backup
```
Changes added to `my-branch` after this point are lost
if you restore from the backup branch.
1. Rebase against the `main` branch:
```shell
git rebase origin/main
```
1. If merge conflicts exist:
1. Resolve the conflicts in your editor.
1. Stage the changes:
```shell
git add .
```
1. Continue the rebase:
```shell
git rebase --continue
```
1. Force push your changes to the target branch, while protecting others' commits:
```shell
git push origin my-branch --force-with-lease
```
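If the rebase goes wrong partway through, you can abandon it and restore the branch to its pre-rebase state:

```shell
# Stop the in-progress rebase and restore the branch to its previous state
git rebase --abort
```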
## Interactive rebase
Use an interactive rebase to specify how to handle each commit.
The following instructions use the [Vim](https://www.vim.org/) text editor to edit commits.
To rebase interactively:
1. Open a terminal and change to your project directory.
1. Ensure you have the latest contents of the target branch. In this example, the target branch is `main`:
```shell
git fetch origin main
```
1. Check out your branch:
```shell
git checkout my-branch
```
1. Optional. Create a backup of your branch:
```shell
git branch my-branch-backup
```
Changes added to `my-branch` after this point are lost
if you restore from the backup branch.
1. In the GitLab UI, in your merge request, confirm the number of commits
to rebase in the **Commits** tab.
1. Open these commits. For example, to edit the last five commits:
```shell
git rebase -i HEAD~5
```
Git opens the commits in your terminal text editor, oldest first.
Each commit shows the action to take, the SHA, and the commit title. For example:
```shell
pick 111111111111 Second round of structural revisions
pick 222222222222 Update inbound link to this changed page
pick 333333333333 Shifts from H4 to H3
pick 444444444444 Adds revisions from editorial
pick 555555555555 Revisions continue to build the concept part out
# Rebase zzzzzzzzzzzz..555555555555 onto zzzzzzzzzzzz (5 commands)
#
# Commands:
# p, pick <commit> = use commit
# r, reword <commit> = use commit, but edit the commit message
# e, edit <commit> = use commit, but stop for amending
# s, squash <commit> = use commit, but meld into previous commit
# f, fixup [-C | -c] <commit> = like "squash" but keep only the previous
```
1. Switch to Vim's edit mode by pressing <kbd>i</kbd>.
1. Use the arrow keys to move the cursor to the commit you want to edit.
1. For each commit, except the first one, change `pick` to `squash` or `fixup` (or `s` or `f`).
1. Repeat for the remaining commits.
1. End edit mode, save, and quit:
- Press <kbd>ESC</kbd>.
- Type `:wq`.
1. When squashing, Git prompts you to edit the commit message:
- Lines starting with `#` are ignored and not included in the commit
message.
- To keep the current message, type `:wq`.
- To edit the commit message, switch to
edit mode, make changes, and save.
1. Push your changes to the target branch.
- If you didn't push your commits to the target branch before rebasing:
```shell
git push origin my-branch
```
- If you already pushed the commits:
```shell
git push origin my-branch --force-with-lease
```
Some actions require a force push to make changes to the branch. For more information, see [Force push to a remote branch](#force-push-to-a-remote-branch).
## Resolve conflicts from the command line
To give you the most control over each change, you should fix complex conflicts locally from the command line, instead of in GitLab.
Prerequisites:
- You must have [permissions](../../user/permissions.md) to force push to branches.
1. Open the terminal and check out your feature branch:
```shell
git switch my-feature-branch
```
1. Rebase your branch against the target branch. In this example, the target branch is `main`:
```shell
git fetch
git rebase origin/main
```
1. Open the conflicting file in your preferred code editor.
1. Locate and resolve the conflict block (see the example after these steps):
1. Choose which version (before or after the `=======` marker) you want to keep.
1. Delete the version you don't want to keep.
1. Delete the conflict markers.
1. Save the file.
1. Repeat the process for each file with conflicts.
1. Stage your changes:
```shell
git add .
```
1. Commit your changes:
```shell
git commit -m "Resolve merge conflicts"
```
{{< alert type="warning" >}}
You can run `git rebase --abort` to stop the process before this point.
Git aborts the rebase and rolls back the branch to the state
before running `git rebase`. After you run `git rebase --continue`, you cannot abort the rebase.
{{< /alert >}}
1. Continue the rebase:
```shell
git rebase --continue
```
1. Force push the changes to your
remote branch:
```shell
git push origin my-feature-branch --force-with-lease
```
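For reference, an unresolved conflict block looks something like the following. The lines between `<<<<<<<` and `=======` are one version, and the lines between `=======` and `>>>>>>>` are the other; the surrounding labels and text here are illustrative:

```plaintext
<<<<<<< HEAD
First version of the line.
=======
Second version of the line.
>>>>>>> branch-or-commit-label
```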
## Force push to a remote branch
Complex Git operations like squashing commits, resetting a branch, or rebasing rewrite branch history.
Git requires a forced update for these changes.
Force pushing is not recommended on shared branches, because you risk destroying
others' changes.
If the branch is [protected](../../user/project/repository/branches/protected.md),
you can't force push unless you:
- Unprotect it.
- Allow force pushes.
For more information, see [Allow force push on a protected branch](../../user/project/repository/branches/protected.md#allow-force-push).
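When a forced update is unavoidable, `--force-with-lease` is generally safer than `--force`: it refuses the push if the remote branch has moved since you last fetched, which protects commits you haven't seen. A minimal comparison:

```shell
# Rejected if the remote branch gained commits you haven't fetched
git push origin my-branch --force-with-lease

# Overwrites the remote branch unconditionally (riskier)
git push origin my-branch --force
```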
## Restore your backed up branch
If a rebase or force push fails, restore your branch from its backup:
1. Ensure you're on the correct branch:
```shell
git checkout my-branch
```
1. Reset your branch to the backup:
```shell
git reset --hard my-branch-backup
```
## Approving after rebase
If you rebase a branch, you've added commits. If your project is configured to
[prevent approvals by users who add commits](../../user/project/merge_requests/approvals/settings.md#prevent-approvals-by-users-who-add-commits), you can't approve a merge request you've rebased.
## Related topics
- [Revert and undo changes](undo.md)
- [Git documentation for branches and rebases](https://git-scm.com/book/en/v2/Git-Branching-Rebasing)
- [Project squash and merge settings](../../user/project/merge_requests/squash_and_merge.md#configure-squash-options-for-a-project)
- [Merge conflicts](../../user/project/merge_requests/conflicts.md)
## Troubleshooting
For CI/CD pipeline troubleshooting information, see [Debugging CI/CD pipelines](../../ci/debugging.md).
### `Unmergeable state` after `/rebase` quick action
The `/rebase` command schedules a background task. The task attempts to rebase
the changes in the source branch on the latest commit of the target branch.
If, after using the `/rebase`
[quick action](../../user/project/quick_actions.md#issues-merge-requests-and-epics),
you see this error, a rebase cannot be scheduled:
```plaintext
This merge request is currently in an unmergeable state, and cannot be rebased.
```
This error occurs if any of these conditions are true:
- Conflicts exist between the source and target branches.
- The source branch contains no commits.
- Either the source or target branch does not exist.
- An error has occurred, resulting in no diff being generated.
To resolve the `unmergeable state` error:
1. Resolve any merge conflicts.
1. Confirm the source branch exists, and has commits.
1. Confirm the target branch exists.
1. Confirm the diff has been generated.
### `/merge` quick action ignored after `/rebase`
If `/rebase` is used, `/merge` is ignored to avoid a race condition where the source branch is merged or deleted before it is rebased.
|
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Introduction to Git rebase and force push, methods to resolve merge conflicts
through the command line.
title: Rebase and resolve merge conflicts
breadcrumbs:
- doc
- topics
- git
---
Git rebase combines changes from one branch into another by moving your commits to the
tip of the target branch. This action:
- Updates branches with the latest code from the target branch.
- Maintains a clean, linear commit history for easier debugging and code reviews.
- Resolves [merge conflicts](../../user/project/merge_requests/conflicts.md) at the commit level
for conflict resolution.
- Preserves the chronological order of code changes.
When you rebase:
1. Git imports all the commits submitted to your target branch after you initially created
your branch from it.
1. Git applies the commits from your branch on top of the imported commits. In this example, after
a branch named `feature` is created (in orange), four commits from `main` (in purple) are
imported into the `feature` branch:

While most rebases are performed against `main`, you can rebase against any other
branch. You can also specify a different remote repository.
For example, `upstream` instead of `origin`.
{{< alert type="warning" >}}
`git rebase` rewrites the commit history. It can cause conflicts in
shared branches and complex merge conflicts.
Instead of rebasing your branch against the default branch,
consider using `git pull origin master`. Pulling has similar
effects with less risk of compromising others' work.
{{< /alert >}}
## Rebase
When you use Git to rebase, each commit is applied to your branch.
When merge conflicts occur, you are prompted to address them.
For more advanced options for your commits, use [an interactive rebase](#interactive-rebase).
Prerequisites:
- You must have [permissions](../../user/permissions.md) to force push to branches.
To use Git to rebase your branch against the target branch:
1. Open a terminal and change to your project directory.
1. Ensure you have the latest contents of the target branch.
In this example, the target branch is `main`:
```shell
git fetch origin main
```
1. Check out your branch:
```shell
git checkout my-branch
```
1. Optional. Create a backup of your branch:
```shell
git branch my-branch-backup
```
Changes added to `my-branch` after this point are lost
if you restore from the backup branch.
1. Rebase against the `main` branch:
```shell
git rebase origin/main
```
1. If merge conflicts exist:
1. Resolve the conflicts in your editor.
1. Stage the changes:
```shell
git add .
```
1. Continue the rebase:
```shell
git rebase --continue
```
1. Force push your changes to the target branch, while protecting others' commits:
```shell
git push origin my-branch --force-with-lease
```
## Interactive rebase
Use an interactive rebase to specify how to handle each commit.
The following instructions use the [Vim](https://www.vim.org/) text editor to edit commits.
To rebase interactively:
1. Open a terminal and change to your project directory.
1. Ensure you have the latest contents of the target branch. In this example, the target branch is `main`:
```shell
git fetch origin main
```
1. Check out your branch:
```shell
git checkout my-branch
```
1. Optional. Create a backup of your branch:
```shell
git branch my-branch-backup
```
Changes added to `my-branch` after this point are lost
if you restore from the backup branch.
1. In the GitLab UI, in your merge request, confirm the number of commits
to rebase in the **Commits** tab.
1. Open these commits. For example, to edit the last five commits:
```shell
git rebase -i HEAD~5
```
Git opens the commits in your terminal text editor, oldest first.
Each commit shows the action to take, the SHA, and the commit title. For example:
```shell
pick 111111111111 Second round of structural revisions
pick 222222222222 Update inbound link to this changed page
pick 333333333333 Shifts from H4 to H3
pick 444444444444 Adds revisions from editorial
pick 555555555555 Revisions continue to build the concept part out
# Rebase 111111111111..222222222222 onto zzzzzzzzzzzz (5 commands)
#
# Commands:
# p, pick <commit> = use commit
# r, reword <commit> = use commit, but edit the commit message
# e, edit <commit> = use commit, but stop for amending
# s, squash <commit> = use commit, but meld into previous commit
# f, fixup [-C | -c] <commit> = like "squash" but keep only the previous
```
1. Switch to Vim's edit mode by pressing <kbd>i</kbd>.
1. Use the arrow keys to move the cursor to the commit you want to edit.
1. For each commit, except the first one, change `pick` to `squash` or `fixup` (or `s` or `f`).
1. Repeat for the remaining commits.
1. End edit mode, save, and quit:
- Press <kbd>ESC</kbd>.
- Type `:wq`.
1. When squashing, Git prompts you to edit the commit message:
- Lines starting with `#` are ignored and not included in the commit
message.
- To keep the current message, type `:wq`.
- To edit the commit message, switch to
edit mode, make changes, and save.
1. Push your changes to the target branch.
- If you didn't push your commits to the target branch before rebasing:
```shell
git push origin my-branch
```
- If you already pushed the commits:
```shell
git push origin my-branch --force-with-lease
```
Some actions require a force push to make changes to the branch. For more information, see [Force push to a remote branch](#force-push-to-a-remote-branch).
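To review the rewritten history at any point, compare your branch against the target branch. A minimal sketch, assuming the target branch is `origin/main`:

```shell
# List only the commits unique to your branch
git log --oneline origin/main..HEAD
```

If the squash worked as intended, only the combined commits appear in the output.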
## Resolve conflicts from the command line
To give you the most control over each change, you should fix complex conflicts locally from the command line, instead of in GitLab.
Prerequisites:
- You must have [permissions](../../user/permissions.md) to force push to branches.
1. Open the terminal and check out your feature branch:
```shell
git switch my-feature-branch
```
1. Rebase your branch against the target branch. In this example, the target branch is `main`:
```shell
git fetch
git rebase origin/main
```
1. Open the conflicting file in your preferred code editor.
1. Locate and resolve the conflict block (a sample conflict block is shown after these steps):
1. Choose which version (before or after `=======`) you want to keep.
1. Delete the version you don't want to keep.
1. Delete the conflict markers.
1. Save the file.
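A conflict block looks similar to this sketch. During a rebase, the `HEAD` side holds the target branch's version, and the other side holds your change (the SHA and commit title here are illustrative):

```plaintext
<<<<<<< HEAD
Content from the target branch.
=======
Content from your branch.
>>>>>>> 111111111111 (Your commit title)
```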
1. Repeat the process for each file with conflicts.
1. Stage your changes:
```shell
git add .
```
1. Commit your changes:
```shell
git commit -m "Resolve merge conflicts"
```
{{< alert type="warning" >}}
You can run `git rebase --abort` to stop the process before this point.
Git aborts the rebase and rolls back the branch to the state
before running `git rebase`. After you run `git rebase --continue`, you cannot abort the rebase.
{{< /alert >}}
1. Continue the rebase:
```shell
git rebase --continue
```
1. Force push the changes to your
remote branch:
```shell
git push origin my-feature-branch --force-with-lease
```
## Force push to a remote branch
Complex Git operations like squashing commits, resetting a branch, or rebasing rewrite branch history.
Git requires a forced update for these changes.
Force pushing is not recommended on shared branches, because you risk destroying
others' changes.
If the branch is [protected](../../user/project/repository/branches/protected.md),
you can't force push unless you:
- Unprotect it.
- Allow force pushes.
For more information, see [Allow force push on a protected branch](../../user/project/repository/branches/protected.md#allow-force-push).
## Restore your backed up branch
If a rebase or force push fails, restore your branch from its backup:
1. Ensure you're on the correct branch:
```shell
git checkout my-branch
```
1. Reset your branch to the backup:
```shell
git reset --hard my-branch-backup
```
## Approving after rebase
If you rebase a branch, you've added commits. If your project is configured to
[prevent approvals by users who add commits](../../user/project/merge_requests/approvals/settings.md#prevent-approvals-by-users-who-add-commits), you can't approve a merge request you've rebased.
## Related topics
- [Revert and undo changes](undo.md)
- [Git documentation for branches and rebases](https://git-scm.com/book/en/v2/Git-Branching-Rebasing)
- [Project squash and merge settings](../../user/project/merge_requests/squash_and_merge.md#configure-squash-options-for-a-project)
- [Merge conflicts](../../user/project/merge_requests/conflicts.md)
## Troubleshooting
For CI/CD pipeline troubleshooting information, see [Debugging CI/CD pipelines](../../ci/debugging.md).
### `Unmergeable state` after `/rebase` quick action
The `/rebase` command schedules a background task. The task attempts to rebase
the changes in the source branch on the latest commit of the target branch.
If, after using the `/rebase`
[quick action](../../user/project/quick_actions.md#issues-merge-requests-and-epics),
you see this error, a rebase cannot be scheduled:
```plaintext
This merge request is currently in an unmergeable state, and cannot be rebased.
```
This error occurs if any of these conditions are true:
- Conflicts exist between the source and target branches.
- The source branch contains no commits.
- Either the source or target branch does not exist.
- An error has occurred, resulting in no diff being generated.
To resolve the `unmergeable state` error:
1. Resolve any merge conflicts.
1. Confirm the source branch exists, and has commits.
1. Confirm the target branch exists.
1. Confirm the diff has been generated.
### `/merge` quick action ignored after `/rebase`
If `/rebase` is used, `/merge` is ignored to avoid a race condition where the source branch is merged or deleted before it is rebased.
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Fork a Git repository when you want to contribute changes back to an
upstream repository you don't have permission to contribute to directly.
title: Update a fork
breadcrumbs:
- doc
- topics
- git
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
A fork is a personal copy of the repository and all its branches, which you create
in a namespace of your choice. You can use forks to propose changes to another project
that you don't have access to. For more information,
see [Forking workflows](../../user/project/repository/forking_workflow.md).
You can also update a fork with the [GitLab UI](../../user/project/repository/forking_workflow.md#from-the-ui).
Prerequisites:
- You must [download and install the Git client](how_to_install_git/_index.md) on your local machine.
- You must [create a fork](../../user/project/repository/forking_workflow.md#create-a-fork) of the
repository you want to update.
To update your fork from the command line:
1. Check if an `upstream` remote repository is configured for your fork:
1. Clone your fork locally, if you haven't already. For more information, see [Clone a repository](clone.md).
1. View the configured remotes for your fork:
```shell
git remote -v
```
1. If your fork doesn't have a remote pointing to the original repository, use one of these examples to configure a remote called `upstream`:
```shell
# Set any repository as your upstream after editing <upstream_url>
git remote add upstream <upstream_url>
# Set the main GitLab repository as your upstream
git remote add upstream https://gitlab.com/gitlab-org/gitlab.git
```
1. Update your fork:
1. In your local copy, check out the [default branch](../../user/project/repository/branches/default.md).
Replace `main` with the name of your default branch:
```shell
git checkout main
```
{{< alert type="note" >}}
If Git identifies unstaged changes, [commit or stash](commit.md) them before continuing.
{{< /alert >}}
1. Fetch the changes from the upstream repository:
```shell
git fetch upstream
```
1. Pull the changes into your fork. Replace `main` with the name of the branch you're updating:
```shell
git pull upstream main
```
1. Push the changes to your fork repository on the server:
```shell
git push origin main
```
## Collaborate across forks
GitLab enables collaboration between the upstream project maintainers and the fork owners.
For more information, see:
- [Collaborate on merge requests across forks](../../user/project/merge_requests/allow_collaboration.md)
- [Allow commits from upstream members](../../user/project/merge_requests/allow_collaboration.md#allow-commits-from-upstream-members)
- [Prevent commits from upstream members](../../user/project/merge_requests/allow_collaboration.md#prevent-commits-from-upstream-members)
### Push to a fork as an upstream member
You can push directly to the branch of the forked repository if:
- The author of the merge request enabled contributions from upstream members.
- You have at least the Developer role for the upstream project.
In the following example:
- The forked repository URL is `git@gitlab.com:contributor/forked-project.git`.
- The branch of the merge request is `fork-branch`.
To change or add a commit to the contributor's merge request:
1. On the left sidebar, select **Search or go to** and find your project.
1. Go to **Code** > **Merge requests** and find the merge request.
1. In the upper-right corner, select **Code**, then select **Check out branch**.
1. On the dialog, select **Copy** ({{< icon name="copy-to-clipboard" >}}).
1. In your terminal, go to the cloned version of the repository, and paste the commands. For example:
```shell
git fetch "git@gitlab.com:contributor/forked-project.git" 'fork-branch'
git checkout -b 'contributor/fork-branch' FETCH_HEAD
```
These commands fetch the branch from the forked project and create a local branch for you to work on.
1. Make your changes to the local copy of the branch, and then commit them.
1. Push your local changes to the forked project. The following command pushes the
local branch `contributor/fork-branch` to the `fork-branch` branch of
the `git@gitlab.com:contributor/forked-project.git` repository:
```shell
git push git@gitlab.com:contributor/forked-project.git contributor/fork-branch:fork-branch
```
If you've amended or squashed any commits, you must use `git push --force`. Proceed with caution as this command rewrites the commit history.
```shell
git push --force git@gitlab.com:contributor/forked-project.git contributor/fork-branch:fork-branch
```
The colon (`:`) separates the source branch from the destination branch. The scheme is:
```shell
git push <forked_repository_git_url> <local_branch>:<fork_branch>
```
## Related topics
- [Forking workflows](../../user/project/repository/forking_workflow.md)
- [Create a fork](../../user/project/repository/forking_workflow.md#create-a-fork)
- [Unlink a fork](../../user/project/repository/forking_workflow.md#unlink-a-fork)
- [Collaborate on merge requests across forks](../../user/project/merge_requests/allow_collaboration.md)
- [Merge requests](../../user/project/merge_requests/_index.md)
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Merge your branch into the main branch
breadcrumbs:
- doc
- topics
- git
---
After you have [created a branch](branch.md), made the required changes, and [committed them locally](commit.md),
you [push your branch](commit.md#send-changes-to-gitlab) and its commits to GitLab.
In the response to the `git push`, GitLab provides a direct link to create the merge request. For example:
```plaintext
...
remote: To create a merge request for my-new-branch, visit:
remote: https://gitlab.example.com/my-group/my-project/merge_requests/new?merge_request%5Bsource_branch%5D=my-new-branch
```
To get your branch merged into the main branch:
1. Go to the page in the link provided by Git and
   [create your merge request](../../user/project/merge_requests/creating_merge_requests.md). The merge request's
   **Source branch** is your branch, and the **Target branch** should be the main branch.
1. If necessary, have your [merge request reviewed](../../user/project/merge_requests/reviews/_index.md#request-a-review).
1. Have someone [merge your merge request](../../user/project/merge_requests/_index.md#merge-a-merge-request), or merge
the merge request yourself, depending on your process.
## Related topics
- [Merge requests](../../user/project/merge_requests/_index.md)
- [Merge methods](../../user/project/merge_requests/methods/_index.md)
- [Merge conflicts](../../user/project/merge_requests/conflicts.md)
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Stash changes for later
breadcrumbs:
- doc
- topics
- git
---
Use `git stash` when you want to change to a different branch, and you want to store changes that are not ready to be
committed.
- To stash uncommitted changes without a message:
```shell
git stash
```
- To stash uncommitted changes with a message:
```shell
git stash save "this is a message to display on the list"
```
- To retrieve changes from the stash and apply them to your branch:
```shell
git stash apply
```
- To apply a specific change from the stash to your branch:
```shell
git stash apply stash@{3}
```
- To see all of the changes in the stash:
```shell
git stash list
```
- To see a list of changes in the stash with more information:
```shell
git stash list --stat
```
- To delete the most recently stashed change from the stash:
```shell
git stash drop
```
- To delete a specific change from the stash:
```shell
git stash drop <name>
```
- To delete all changes from the stash:
```shell
git stash clear
```
- To apply the most recently stashed change and delete it from the stash:
```shell
git stash pop
```
If you make many changes after stashing, conflicts might occur when you apply
the stashed changes back to your branch. You must resolve these conflicts before
the apply can complete, as shown in the sketch below.
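For example, if `git stash pop` hits a conflict, Git applies what it can, leaves conflict markers in the affected files, and keeps the entry in the stash. A minimal recovery sketch, where `<file_with_conflict>` is a placeholder:

```shell
git stash pop                 # Reports the conflict and keeps the stash entry
# Resolve the conflict markers in the affected files, then:
git add <file_with_conflict>
git stash drop                # Remove the entry yourself, because pop did not
```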
## Git stash sample workflow
To try using Git stashing yourself:
1. Modify a file in a Git repository.
1. Stash the modification:
```shell
git stash save "Saving changes from editing this file"
```
1. View the stash list:
```shell
git stash list
```
1. Confirm there are no pending changes:
```shell
git status
```
1. Apply the stashed changes and drop the change from the stash:
```shell
git stash pop
```
1. View the stash list to confirm that the change was removed:
```shell
git stash list
```
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Create a Git branch for your changes
breadcrumbs:
- doc
- topics
- git
---
A branch is a copy of the files in the repository at the time you create the branch.
You can work in your branch without affecting other branches. When
you're ready to add your changes to the main codebase, you can merge your branch into
the default branch, for example, `main`.
Use branches when you:
- Want to add code to a project but you're not sure if it works properly.
- Are collaborating on the project with others, and don't want your work to get mixed up.
## Create a branch
To create a branch:
```shell
git checkout -b <name-of-branch>
```
GitLab enforces [branch naming rules](../../user/project/repository/branches/_index.md#name-your-branch)
to prevent problems, and provides
[branch naming patterns](../../user/project/repository/branches/_index.md#prefix-branch-names-with-a-number)
to streamline merge request creation.
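For example, to create a branch whose name follows the numbered-prefix pattern (the issue number and description here are illustrative):

```shell
git checkout -b 123-update-docs
```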
## Switch to a branch
All work in Git is done in a branch.
You can switch between branches to see the state of the files and work in that branch.
To switch to an existing branch:
```shell
git checkout <name-of-branch>
```
For example, to change to the `main` branch:
```shell
git checkout main
```
## Keep a branch up-to-date
Your branch does not automatically include changes merged to the default branch from other branches.
To include changes merged after you created your branch, you must update your branch manually.
To update your branch with the latest changes in the default branch, either:
- Run `git rebase` to [rebase](git_rebase.md) your branch against the default branch. Use this command when you want
your changes to be listed in Git logs after the changes from the default branch.
- Run `git pull <remote-name> <default-branch-name>`. Use this command when you want your changes to appear in Git logs
in chronological order with the changes from the default branch, or if you're sharing your branch with others. If
you're unsure of the correct value for `<remote-name>`, run: `git remote`.
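For example, a minimal rebase-based update, assuming the remote is `origin` and the default branch is `main`:

```shell
git fetch origin
git rebase origin/main
```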
## Related topics
- [Branches](../../user/project/repository/branches/_index.md)
- [Tags](../../user/project/repository/tags/_index.md)
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Tips to resolve Git issues.
title: Troubleshooting Git
breadcrumbs:
- doc
- topics
- git
---
Sometimes things don't work the way they should or as you might expect when
you're using Git. Here are some tips on troubleshooting and resolving issues
with Git.
## Debugging
When troubleshooting problems with Git, try these debugging techniques.
### Use a custom SSH key for a Git command
```shell
GIT_SSH_COMMAND="ssh -i ~/.ssh/gitlabadmin" git <command>
```
### Debug problems with cloning
For Git over SSH:
```shell
GIT_SSH_COMMAND="ssh -vvv" git clone <git@url>
```
For Git over HTTPS:
```shell
GIT_TRACE_PACKET=1 GIT_TRACE=2 GIT_CURL_VERBOSE=1 git clone <url>
```
### Debug Git with traces
Git includes a complete set of [traces for debugging Git commands](https://git-scm.com/book/en/v2/Git-Internals-Environment-Variables#_debugging), for example:
- `GIT_TRACE_PERFORMANCE=1`: enables tracing of performance data, showing how long each particular `git` invocation takes.
- `GIT_TRACE_SETUP=1`: enables tracing of what `git` is discovering about the repository and environment it's interacting with.
- `GIT_TRACE_PACKET=1`: enables packet-level tracing for network operations.
- `GIT_CURL_VERBOSE=1`: enables `curl`'s verbose output, which [may include credentials](https://curl.se/docs/manpage.html#-v).
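For example, to enable a trace for a single Git invocation, set the variable only for that command. A minimal sketch:

```shell
# Print performance timings for one fetch
GIT_TRACE_PERFORMANCE=1 git fetch origin

# Values that are absolute paths write the trace to a file instead
GIT_TRACE_PERFORMANCE=/tmp/git-perf.log git fetch origin
```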
## Broken pipe errors on `git push`
'Broken pipe' errors can occur when attempting to push to a remote repository.
When pushing you usually see:
```plaintext
Write failed: Broken pipe
fatal: The remote end hung up unexpectedly
```
To fix this issue, here are some possible solutions.
### Increase the POST buffer size in Git
When you attempt to push large repositories with Git over HTTPS, you might get an error message like:
```shell
fatal: pack has bad object at offset XXXXXXXXX: inflate returned -5
```
To resolve this issue:
- Increase the
[http.postBuffer](https://git-scm.com/docs/git-config#Documentation/git-config.txt-httppostBuffer)
value in your local Git configuration. The default value is 1 MB. For example, if `git clone`
fails when cloning a 500 MB repository, execute the following:
1. Open a terminal or command prompt.
1. Increase the `http.postBuffer` value:
```shell
# Set the http.postBuffer size in bytes
git config http.postBuffer 524288000
```
If the local configuration doesn't resolve the issue, you may need to modify the server configuration.
This should be done cautiously and only if you have server access.
- Increase the `http.postBuffer` on the server side:
1. Open a terminal or command prompt.
1. Modify the GitLab instance's
[`gitlab.rb`](https://gitlab.com/gitlab-org/omnibus-gitlab/-/blob/13.5.1+ee.0/files/gitlab-config-template/gitlab.rb.template#L1435-1455) file:
```ruby
gitaly['configuration'] = {
# ...
git: {
# ...
config: [
# Set the http.postBuffer size, in bytes
{key: "http.postBuffer", value: "524288000"},
],
},
}
```
1. Apply the configuration change:
```shell
sudo gitlab-ctl reconfigure
```
### Stream 0 was not closed cleanly
If you see this error, it may be caused by a slow internet connection:
```plaintext
RPC failed; curl 92 HTTP/2 stream 0 was not closed cleanly: INTERNAL_ERROR (err 2)
```
If you use Git over HTTP instead of SSH, try one of these fixes:
- Increase the POST buffer size in the Git configuration with `git config http.postBuffer 52428800`.
- Switch to the `HTTP/1.1` protocol with `git config http.version HTTP/1.1`.
If neither approach fixes the error, you may need a different internet service provider.
### Check your SSH configuration
If pushing over SSH, first check your SSH configuration as 'Broken pipe'
errors can sometimes be caused by underlying issues with SSH (such as
authentication). Make sure that SSH is correctly configured by following the
instructions in the [SSH troubleshooting](../../user/ssh_troubleshooting.md#password-prompt-with-git-clone) documentation.
If you're a GitLab administrator with server access, you can also prevent
session timeouts by configuring SSH `keep-alive` on the client or the server.
{{< alert type="note" >}}
Configuring both the client and the server is unnecessary.
{{< /alert >}}
To configure SSH on the client side:
- On UNIX, edit `~/.ssh/config` (create the file if it doesn't exist) and
add or edit:
```plaintext
Host your-gitlab-instance-url.com
ServerAliveInterval 60
ServerAliveCountMax 5
```
- On Windows, if you are using PuTTY, go to your session properties, then
go to "Connection" and under "Sending of null packets to keep
session active", set `Seconds between keepalives (0 to turn off)` to `60`.
To configure SSH on the server side, edit `/etc/ssh/sshd_config` and add:
```plaintext
ClientAliveInterval 60
ClientAliveCountMax 5
```
### Running a `git repack`
If 'pack-objects' type errors are also being displayed, you can try to
run a `git repack` before attempting to push to the remote repository again:
```shell
git repack
git push
```
### Upgrade your Git client
If you're running an older version of Git (earlier than 2.9), consider upgrading
to 2.9 or later (see [Broken pipe when pushing to Git repository](https://stackoverflow.com/questions/19120120/broken-pipe-when-pushing-to-git-repository/36971469#36971469)).
## `ssh_exchange_identification` error
Users may experience the following error when attempting to push or pull
using Git over SSH:
```plaintext
Please make sure you have the correct access rights
and the repository exists.
...
ssh_exchange_identification: read: Connection reset by peer
fatal: Could not read from remote repository.
```
or
```plaintext
ssh_exchange_identification: Connection closed by remote host
fatal: The remote end hung up unexpectedly
```
or
```plaintext
kex_exchange_identification: Connection closed by remote host
Connection closed by x.x.x.x port 22
```
This error usually indicates that the SSH daemon's `MaxStartups` value is throttling
SSH connections. This setting specifies the maximum number of concurrent, unauthenticated
connections to the SSH daemon. This affects users with proper authentication
credentials (SSH keys) because every connection is 'unauthenticated' in the
beginning. The [default value](https://man.openbsd.org/sshd_config#MaxStartups) is `10`.
This can be verified by examining the host's [`sshd`](https://en.wikibooks.org/wiki/OpenSSH/Logging_and_Troubleshooting#Server_Logs)
logs. For systems in the Debian family, refer to `/var/log/auth.log`, and for RHEL derivatives,
check `/var/log/secure` for the following errors:
```plaintext
sshd[17242]: error: beginning MaxStartups throttling
sshd[17242]: drop connection #1 from [CLIENT_IP]:52114 on [CLIENT_IP]:22 past MaxStartups
```
The absence of this error suggests that the SSH daemon is not limiting connections,
indicating that the underlying issue may be network-related.
### Increase the number of unauthenticated concurrent SSH connections
Increase `MaxStartups` on the GitLab server
by adding or modifying the value in `/etc/ssh/sshd_config`:
```plaintext
MaxStartups 100:30:200
```
`100:30:200` means up to 100 SSH sessions are allowed without restriction,
after which 30% of connections are dropped until reaching an absolute maximum of 200.
After you modify the value of `MaxStartups`, check for any errors in the configuration.
```shell
sudo sshd -t -f /etc/ssh/sshd_config
```
If the configuration check runs without errors, it should be safe to restart the
SSH daemon for the change to take effect.
```shell
# Debian/Ubuntu
sudo systemctl restart ssh
# CentOS/RHEL
sudo service sshd restart
```
## Timeout during `git push` / `git pull`
If pulling from or pushing to your repository takes more than 50 seconds,
a timeout is issued. The timeout output contains a log of the number of operations performed
and their respective timings, like the example below:
```plaintext
remote: Running checks for branch: master
remote: Scanning for LFS objects... (153ms)
remote: Calculating new repository size... (cancelled after 729ms)
```
This could be used to further investigate what operation is performing poorly
and provide GitLab with more information on how to improve the service.
### Error: Operation timed out
If you encounter an error like this when using Git, it usually indicates a network issue:
```shell
ssh: connect to host gitlab.com port 22: Operation timed out
fatal: Could not read from remote repository
```
To help identify the underlying issue:
- Connect through a different network (for example, switch from Wi-Fi to cellular data) to rule out
local network or firewall issues.
- Run this bash command to gather `traceroute` and `ping` information: `mtr -T -P 22 <gitlab_server>.com`.
To learn about MTR and how to read its output, see the Cloudflare article
[What is My Traceroute (MTR)?](https://www.cloudflare.com/en-gb/learning/network-layer/what-is-mtr/).
## Error: transfer closed with outstanding read data remaining
Sometimes, when cloning old or large repositories, the following error is shown when running `git clone` over HTTP:
```plaintext
error: RPC failed; curl 18 transfer closed with outstanding read data remaining
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
```
This problem is common in Git itself, due to its inability to handle large files or large quantities of files.
[Git LFS](https://about.gitlab.com/blog/2017/01/30/getting-started-with-git-lfs-tutorial/) was created to work around this problem; however, even it has limitations. The error is usually due to one of these reasons:
- The number of files in the repository.
- The number of revisions in the history.
- The existence of large files in the repository.
If this error occurs when cloning a large repository, you can
[decrease the cloning depth](../../user/project/repository/monorepos/_index.md#use-shallow-clones-in-cicd-processes) to a value of `1`.
This approach doesn't resolve the underlying cause, but it enables you to clone the repository successfully.
For example, to decrease the cloning depth to `1` in CI/CD jobs, set this variable in your `.gitlab-ci.yml` file:

```yaml
variables:
  GIT_DEPTH: 1
```
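For ad-hoc clones outside of CI/CD, the `--depth` option provides the same shallow clone. A sketch, with `<repository_url>` as a placeholder:

```shell
git clone --depth 1 <repository_url>
```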
## Password expired error on Git fetch with SSH for LDAP user
If `git fetch` returns this `HTTP 403 Forbidden` error on GitLab Self-Managed,
the password expiration date (`users.password_expires_at`) for this user in the
GitLab database is a date in the past:
```plaintext
Your password expired. Please access GitLab from a web browser to update your password.
```
Requests made with an SSO account where `password_expires_at` is not `null`
return this error:
```plaintext
"403 Forbidden - Your password expired. Please access GitLab from a web browser to update your password."
```
To resolve this issue, you can update the password expiration by either:
- Using the [GitLab Rails console](../../administration/operations/rails_console.md)
to check and update the user data:
```ruby
user = User.find_by_username('<USERNAME>')
user.password_expired?
user.password_expires_at
user.update!(password_expires_at: nil)
```
- Using `gitlab-psql`:
```sql
# gitlab-psql
UPDATE users SET password_expires_at = null WHERE username='<USERNAME>';
```
The bug was reported [in this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/332455).
## Error on Git fetch: "HTTP Basic: Access Denied"
If you receive an `HTTP Basic: Access denied` error when using Git over HTTP(S),
refer to the [two-factor authentication troubleshooting guide](../../user/profile/account/two_factor_authentication_troubleshooting.md).
This error may also occur with [Git for Windows](https://gitforwindows.org/)
2.46.0 and later when specifying an empty username.
When authenticating with a token, the username can be any value, but an empty value
could trigger an authentication error. To resolve this, specify a username string.
## `401` errors logged during successful `git clone`
When cloning a repository with HTTP, the
[`production_json.log`](../../administration/logs/_index.md#production_jsonlog) file
may show an initial status of `401` (unauthorized), quickly followed by a `200`.
```json
{
"method":"GET",
"path":"/group/project.git/info/refs",
"format":"*/*",
"controller":"Repositories::GitHttpController",
"action":"info_refs",
"status":401,
"time":"2023-04-18T22:55:15.371Z",
"remote_ip":"x.x.x.x",
"ua":"git/2.39.2",
"correlation_id":"01GYB98MBM28T981DJDGAD98WZ",
"duration_s":0.03585
}
{
"method":"GET",
"path":"/group/project.git/info/refs",
"format":"*/*",
"controller":"Repositories::GitHttpController",
"action":"info_refs",
"status":200,
"time":"2023-04-18T22:55:15.714Z",
"remote_ip":"x.x.x.x",
"user_id":1,
"username":"root",
"ua":"git/2.39.2",
"correlation_id":"01GYB98MJ0CA3G9K8WDH7HWMQX",
"duration_s":0.17111
}
```
You should expect this initial `401` log entry for each Git operation performed over HTTP,
due to how [HTTP Basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) works.
When the Git client initiates a clone, the initial request sent to GitLab does not provide
any authentication details. GitLab returns a `401 Unauthorized` result for that request.
A few milliseconds later, the Git client sends a follow-up request containing authentication
details. This second request should succeed, and result in a `200 OK` log entry.
If a `401` log entry lacks a corresponding `200` log entry, the Git client is likely using either:
- An incorrect password.
- An expired or revoked token.
If not rectified, you could encounter
[`403` (Forbidden) errors](#403-error-when-performing-git-operations-over-http)
instead.
## `403` error when performing Git operations over HTTP
When performing Git operations over HTTP, a `403` (Forbidden) error indicates that
your IP address has been blocked by the failed-authentication ban:
```plaintext
fatal: unable to access 'https://gitlab.com/group/project.git/': The requested URL returned error: 403
```
The `403` can be seen in the [`production_json.log`](../../administration/logs/_index.md#production_jsonlog):
```json
{
"method":"GET",
"path":"/group/project.git/info/refs",
"format":"*/*",
"controller":"Repositories::GitHttpController",
"action":"info_refs",
"status":403,
"time":"2023-04-19T22:14:25.894Z",
"remote_ip":"x.x.x.x",
"user_id":1,
"username":"root",
"ua":"git/2.39.2",
"correlation_id":"01GYDSAKAN2SPZPAMJNRWW5H8S",
"duration_s":0.00875
}
```
If your IP address has been blocked, a corresponding log entry exists in the
[`auth_json.log`](../../administration/logs/_index.md#auth_jsonlog):
```json
{
"severity":"ERROR",
"time":"2023-04-19T22:14:25.893Z",
"correlation_id":"01GYDSAKAN2SPZPAMJNRWW5H8S",
"message":"Rack_Attack",
"env":"blocklist",
"remote_ip":"x.x.x.x",
"request_method":"GET",
"path":"/group/project.git/info/refs?service=git-upload-pack"}
```
The failed-authentication ban limits differ depending on whether you are using
[GitLab Self-Managed](../../security/rate_limits.md#failed-authentication-ban-for-git-and-container-registry)
or [GitLab SaaS](../../user/gitlab_com/_index.md#ip-blocks).
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Common Git commands
---
Git commands save you time throughout your development workflow. This reference page contains
frequently used commands for common tasks such as code changes, branch management,
and history review. Each command section provides the exact syntax, practical examples,
and links to additional documentation.
## `git add`
Use `git add` to add files to the staging area.
```shell
git add <file_path>
```
You can recursively stage changes from the current working directory with `git add .`, or stage all changes in the Git
repository with `git add --all`.
For more information, see [Add files to your branch](add_files.md).
## `git blame`
Use `git blame` to report which users changed which parts of a file.
```shell
git blame <file_name>
```
You can use `git blame -L <line_start>,<line_end>` to check a specific range of lines.
For more information, see [Git file blame](../../user/project/repository/files/git_blame.md).
### Example
To check which user most recently modified line five of `example.txt`:
```shell
$ git blame -L 5,5 example.txt
123abc (Zhang Wei 2021-07-04 12:23:04 +0000 5)
```
## `git bisect`
Use `git bisect` to run a binary search to find the commit that introduced a bug.
Start by identifying a commit that is "bad" (contains the bug) and a commit that is "good" (doesn't contain the bug).
```shell
git bisect start
git bisect bad # Current version is bad
git bisect good v2.6.13-rc2 # v2.6.13-rc2 is known to be good
```
`git bisect` then picks a commit between the two points and asks you to identify whether the commit is "good" or "bad" with
`git bisect good` or `git bisect bad`. Repeat the process until the offending commit is found.
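When you have identified the commit, end the session and return to your original `HEAD`:

```shell
git bisect reset
```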
## `git checkout`
Use `git checkout` to switch to a specific branch.
```shell
git checkout <branch_name>
```
To create a new branch and switch to it, use `git checkout -b <branch_name>`.
For more information, see [Create a Git branch for your changes](branch.md).
## `git clone`
Use `git clone` to copy an existing Git repository.
```shell
git clone <repository>
```
For more information, see [Clone a Git repository to your local computer](clone.md).
## `git commit`
Use `git commit` to commit staged changes to the repository.
```shell
git commit -m "<commit_message>"
```
If the commit message contains a blank line, the first line becomes the commit subject while the remainder becomes the
commit body. Use the subject to briefly summarize a change, and the commit body to provide additional details.
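For example, passing `-m` twice creates the subject and the body as separate paragraphs. The message text here is only illustrative:

```shell
git commit -m "Fix login redirect loop" -m "The session cookie was cleared before the redirect completed, so users were returned to the sign-in page."
```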
For more information, see [Stage, commit, and push changes](commit.md).
## `git commit --amend`
Use `git commit --amend` to modify the most recent commit.
```shell
git commit --amend
```
## `git diff`
Use `git diff` to view the differences between your local unstaged changes and the latest version that you cloned or
pulled.
```shell
git diff
```
You can display the difference (or diff) between your local changes and the most recent version of a branch. View a
diff to understand your local changes before you commit them to the branch.
To compare your changes against a specific branch, run:
```shell
git diff <branch>
```
In the output:
- Lines with additions begin with a plus (`+`) and are displayed in green.
- Lines with removals or changes begin with a minus (`-`) and are displayed in red.
## `git init`
Use `git init` to initialize a directory so Git tracks it as a repository.
```shell
git init
```
A `.git` directory with configuration and log files is added to the directory. You shouldn't edit the `.git` directory directly.
The default branch is set to `main`. You can change the name of the default branch with `git branch -m <branch_name>`,
or initialize with `git init -b <branch_name>`.
## `git pull`
Use `git pull` to get all the changes made by users after the last time you cloned or pulled the project.
```shell
git pull <optional_remote> <branch_name>
```
## `git push`
Use `git push` to update remote refs.
```shell
git push
```
For more information, see [Stage, commit, and push changes](commit.md).
## `git reflog`
Use `git reflog` to display a list of changes to the Git reference logs.
```shell
git reflog
```
By default, `git reflog` shows a list of changes to `HEAD`.
For more information, see [Undo changes](undo.md).
## `git remote add`
Use `git remote add` to tell Git which remote repository in GitLab is linked to a local directory.
```shell
git remote add <remote_name> <repository_url>
```
When you clone a repository, by default the source repository is associated with the remote name `origin`.
For more information on configuring additional remotes, see [Forks](../../user/project/repository/forking_workflow.md).
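For example, a common fork workflow adds the original repository as a second remote. The remote name and URL here are placeholders:

```shell
git remote add upstream https://gitlab.example.com/original-owner/project.git
git remote -v  # Verify the configured remotes.
```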
## `git log`
Use `git log` to display a list of commits in reverse chronological order, most recent first.
```shell
git log
```
## `git show`
Use `git show` to show information about an object in Git.
### Example
To see what commit `HEAD` points to:
```shell
$ git show HEAD
commit ab123c (HEAD -> main, origin/main, origin/HEAD)
```
## `git merge`
Use `git merge` to combine the changes from one branch with another.
For more information on an alternative to `git merge`, see [Rebase to address merge conflicts](git_rebase.md).
### Example
To apply the changes in `feature_branch` to the `target_branch`:
```shell
git checkout target_branch
git merge feature_branch
```
## `git rebase`
Use `git rebase` to rewrite the commit history of a branch.
```shell
git rebase <branch_name>
```
You can use `git rebase` to [resolve merge conflicts](git_rebase.md).
In most cases, you want to rebase against the default branch.
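For example, assuming the default branch is `main` on the `origin` remote:

```shell
git fetch origin
git rebase origin/main
```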
## `git reset`
Use `git reset` to undo a commit and rewind the commit history and continue on from an earlier commit.
```shell
git reset
```
For more information, see [Undo changes](undo.md).
## `git status`
Use `git status` to show the status of the working directory and staged files.
```shell
git status
```
When you add, change, or delete files, Git can show you the changes.
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Add, commit, and push a file to your Git repository using the command
line.
title: Add files to your branch
---
Use Git to add files to a branch in your local repository.
This action creates a snapshot of the file for your
next commit and starts version control monitoring.
When you add files with Git, you:
- Prepare content for version control tracking.
- Create a record of file additions and modifications.
- Preserve file history for future reference.
- Make project files available for team collaboration.
## Add files to a Git repository
To add a new file from the command line:
1. Open a terminal.
1. Change directories until you are in your project's folder.
```shell
cd my-project
```
1. Choose a Git branch to work in.
- To create a branch: `git checkout -b <branchname>`
- To switch to an existing branch: `git checkout <branchname>`
1. Copy the file you want to add into the directory where you want to add it.
1. Confirm that your file is in the directory:
- Windows: `dir`
- All other operating systems: `ls`
The filename should be displayed.
1. Check the status of the file:
```shell
git status
```
The filename should be in red. The file is in your file system, but Git isn't tracking it yet.
1. Tell Git to track the file:
```shell
git add <filename>
```
1. Check the status of the file again:
```shell
git status
```
The filename should be green. The file is staged (tracked locally) by Git, but
has not been [committed and pushed](commit.md).
## Add a file to the last commit
To add changes to a file to the last commit, instead of to a new commit, amend the existing commit:
```shell
git add <filename>
git commit --amend
```
If you do not want to edit the commit message, append `--no-edit` to the `commit` command.
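For example, to fold a staged change into the last commit and keep its message unchanged:

```shell
git add <filename>
git commit --amend --no-edit
```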
## Related topics
- [Add file from the UI](../../user/project/repository/_index.md#add-a-file-from-the-ui)
- [Add file from the Web IDE](../../user/project/repository/web_editor.md#upload-a-file)
- [Sign commits](../../user/project/repository/signed_commits/gpg.md)
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Rebase, cherry-pick, revert changes, repository, and file management.
title: Advanced Git operations
---
Advanced Git operations help you perform tasks to maintain and manage your code.
They are more complex actions that go beyond [basic Git operations](basics.md).
These operations enable you to:
- Rewrite commit history.
- Revert and undo changes.
- Manage remote repository connections.
They provide you with the following benefits:
- Code quality: Maintain a clean, linear project history.
- Problem solving: Provide tools to fix mistakes or adjust your repository's state.
- Workflow optimization: Streamline complex development processes.
- Collaboration: Facilitate smoother teamwork in large or complex projects.
To use Git operations effectively, it's important to understand key concepts such as
repositories, branches, commits, and merge requests.
For more information, see [Get started learning Git](get_started.md).
## Best practices
When you use advanced Git operations, you should:
- Create a backup or work on a [separate branch](branch.md).
- Communicate with your team before you use operations that affect shared branch history.
- Use descriptive [commit messages](../../tutorials/update_commit_messages/_index.md)
when you rewrite history.
- Update your knowledge of Git to stay current with best practices and new features.
For more information, see the [Git documentation](https://git-scm.com/docs).
- Practice advanced operations in a test repository.
## Rebase and resolve conflicts
The `git rebase` command updates your branch with the contents of another branch.
It confirms that changes in your branch don't conflict with changes in the target branch.
If you have a [merge conflict](../../user/project/merge_requests/conflicts.md),
you can rebase to fix it.
For more information, see [Rebase to address merge conflicts](git_rebase.md).
## Cherry-pick changes
The `git cherry-pick` command applies specific commits from one branch to another.
Use it to:
- Backport bug fixes from the default branch to previous release branches.
- Copy changes from a fork to the upstream repository.
- Apply specific changes without merging entire branches.
For more information, see [Cherry-pick changes with Git](cherry_pick.md).
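For example, to apply a single commit from another branch to your current branch, where `<commit-sha>` is a placeholder for the commit to copy:

```shell
git cherry-pick <commit-sha>
```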
## Revert and undo changes
The following Git commands help you to revert and undo changes:
- `git revert`: Creates a new commit that undoes the changes made in a previous commit.
This helps you to undo a mistake or a change that you no longer need.
- `git reset`: Resets and undoes changes that are not yet committed.
- `git restore`: Restores changes that are lost or deleted.
For more information, see [Revert changes](undo.md).
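A minimal sketch of each command; the file name and SHA are placeholders:

```shell
git revert <commit-sha>   # Create a new commit that undoes <commit-sha>.
git reset HEAD~1          # Undo the last commit, but keep its changes in the working tree.
git restore <file>        # Discard uncommitted changes to <file>.
```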
## Reduce repository size
The size of a Git repository can impact performance and storage costs.
It can differ slightly from one instance to another due to compression, housekeeping, and other factors.
For more information about repository size, see [Repository size](../../user/project/repository/repository_size.md).
You can use Git to purge files from your repository's history and reduce its size. For more information, see [Reduce repository size](repository.md).
## File management
You can use Git to manage files in your repository. It helps you track changes, collaborate with others, and manage large files. The following options are available:
- `git log`: View changes to files in your repository.
- `git blame`: Identify who last modified a line of code in a file.
- `git lfs`: Manage, track, and lock large files in your repository.
<!-- Include when the relevant MR is merged.
For more information, see [File management](file_management.md).
-->
## Update Git remote URLs
The `git remote set-url` command updates the URL of the remote repository.
Use this if:
- You imported an existing project from another Git repository host.
- Your organization moved your projects to a new GitLab instance with a new domain name.
- The project was renamed to a new path in the same GitLab instance.
For more information, see [Update Git remote URLs](../../tutorials/update_git_remote_url/_index.md).
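For example, to point the existing `origin` remote at a new location and verify the change, where the URL is a placeholder:

```shell
git remote set-url origin https://gitlab.example.com/new-group/project.git
git remote -v
```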
## Related topics
- [Getting started](get_started.md)
- [Basic Git operations](basics.md)
- [Common Git commands](commands.md)
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Create a project, clone a repository, stash changes, branches, and forks.
title: Basic Git operations
---
Basic Git operations help you to manage your Git repositories and to make changes to your code.
They provide you with the following benefits:
- Version control: Maintain a history of your project to track changes and revert to previous versions if needed.
- Collaboration: Enable collaboration and make it easier to share code and work simultaneously.
- Organization: Use branches and merge requests to organize and manage your work.
- Code quality: Facilitate code reviews through merge requests, and help to maintain code quality and consistency.
- Backup and recovery: Push changes to remote repositories to ensure your work is backed up and recoverable.
To use Git operations effectively, it's important to understand key concepts such as repositories, branches,
commits, and merge requests. For more information, see [Get started learning Git](get_started.md).
To learn more about commonly used Git commands, see [Git commands](commands.md).
## Create a project
The `git push` command sends your local repository changes to a remote repository.
You can create a project from a local repository or import an existing repository.
After you add a repository, GitLab creates a project in your chosen namespace.
For more information, see [Create a project](project.md).
## Clone a repository
The `git clone` command creates a copy of a remote repository on your computer.
You can work on the code locally and push changes back to the remote repository.
For more information, see [Clone a Git repository](clone.md).
## Create a branch
The `git checkout -b <name-of-branch>` command creates a new branch in your repository.
A branch is a copy of the files in your repository that you can modify without affecting the default branch.
For more information, see [Create a branch](branch.md).
## Stage, commit, and push changes
The `git add`, `git commit`, and `git push` commands update your remote repository with your changes.
Git tracks the changes against the most recent version of the checked out branch.
For more information, see [Stage, commit, and push changes](commit.md).
## Stash changes
The `git stash` command temporarily saves changes that you don't want to commit immediately.
You can switch branches or perform other operations without committing incomplete changes.
For more information, see [Stash changes](stash.md).
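For example, to set aside uncommitted work and pick it up again later:

```shell
git stash      # Save uncommitted changes and clean the working tree.
git stash pop  # Reapply the most recent stash and remove it from the stash list.
```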
## Add files to a branch
The `git add <filename>` command adds files to a Git repository or a branch.
You can add new files, modify existing files, or delete files.
For more information, see [Add files to a branch](add_files.md).
## Merge requests
A merge request is a request to merge changes from one branch into another branch.
Merge requests provide a way to collaborate and review code changes.
For more information, see [Merge requests](../../user/project/merge_requests/_index.md)
and [Merge your branch](merge.md).
## Update your fork
A fork is a personal copy of the repository and all its branches, which you create in a
namespace of your choice. You can make changes in your own fork and submit them using `git push`.
For more information, see [Update a fork](forks.md).
## Related topics
- [Get started learning Git](get_started.md)
- [Install Git](how_to_install_git/_index.md)
- [Common Git commands](commands.md)
- [Advanced operations](advanced.md)
- [Troubleshooting Git](troubleshooting_git.md)
- [Git cheat sheet](https://about.gitlab.com/images/press/git-cheat-sheet.pdf)
---
stage: Tenant Scale
group: Organizations
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Create a project with `git push`
---
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
You can use `git push` to add a local project repository to GitLab. After you add a repository,
GitLab creates your project in your chosen namespace.
{{< alert type="note" >}}
You cannot use `git push` to create projects with paths that were previously used or
[renamed](../../user/project/working_with_projects.md#rename-a-repository).
Previously used project paths have a redirect. Instead of creating a new project,
the redirect causes push attempts to redirect requests to the renamed project location.
To create a new project for a previously used or renamed project, use the UI
or the [Projects API](../../api/projects.md#create-a-project).
{{< /alert >}}
Prerequisites:
<!--- To push with SSH, you must have [an SSH key](../ssh.md) that is
[added to your GitLab account](../ssh.md#add-an-ssh-key-to-your-gitlab-account).
-->
- You must have permission to add new projects to a [namespace](../../user/namespace/_index.md).
To verify your permissions:
1. On the left sidebar, select **Search or go to** and find your group.
1. In the upper-right corner, confirm that **New project** is visible.
If you do not have the necessary permission, contact your GitLab administrator.
To create a project with `git push`:
1. Push your local repository to GitLab with one of the following:
- With SSH:
- If your project uses the standard port 22, run:
```shell
git push --set-upstream git@gitlab.example.com:namespace/myproject.git main
```
- If your project requires a non-standard port number, run:
```shell
git push --set-upstream ssh://git@gitlab.example.com:00/namespace/myproject.git main
```
- With HTTP, run:
```shell
git push --set-upstream https://gitlab.example.com/namespace/myproject.git main
```
Replace the following values:
- `gitlab.example.com` with the domain name of the machine that hosts your Git repository.
- `namespace` with your [namespace](../../user/namespace/_index.md) name.
- `myproject` with your project name.
- If specifying a port, change `00` to your project's required port number.
- Optional. To push existing repository tags, append the `--tags` flag to
your `git push` command.
1. Optional. Configure the remote:
```shell
git remote add origin https://gitlab.example.com/namespace/myproject.git
```
When the `git push` operation completes, GitLab displays the following message:
```shell
remote: The private project namespace/myproject was created.
```
To view your new project, go to `https://gitlab.example.com/namespace/myproject`.
By default, your project's visibility is set to **Private**,
but you can [change the project's visibility](../../user/public_access.md#change-project-visibility).
## Related topics
- [Create a blank project](../../user/project/_index.md)
- [Create a project from a template](../../user/project/_index.md#create-a-project-from-a-built-in-template)
- [Clone a repository to your local machine](clone.md)
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Common Git commands and workflows.
title: Use Git
---
Git is a [free and open source](https://git-scm.com/about/free-and-open-source)
distributed version control system. It handles projects of all sizes quickly and
efficiently, and provides support for rolling back changes when needed.
GitLab is built on top of (and with) Git, and provides you a Git-based, fully-integrated
platform for software development. GitLab adds many powerful
[features](https://about.gitlab.com/features/) on top of Git to enhance your workflow.
{{< cards >}}
- [Getting started](get_started.md)
- [Basic operations](basics.md)
- [Advanced operations](advanced.md)
- [Troubleshooting](troubleshooting_git.md)
{{< /cards >}}
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Revert and undo changes
---
Working with Git involves experimentation and iteration. Mistakes happen during development,
and sometimes you need to reverse changes. Git gives you control over your code history
with features to undo changes at any point in your
[Git workflow](get_started.md#understand-the-git-workflow).
Recover from accidental commits, remove sensitive data, fix incorrect merges, and maintain a clean
repository history. When collaborating with others, preserve transparency with new revert
commits, or reset your work locally before sharing. The method to use depends on whether the
changes are:
- Only on your local computer.
- Stored remotely on a Git server such as GitLab.com.
## Undo local changes
Until you push your changes to a remote repository, changes
you make in Git are only in your local development environment.
When you _stage_ a file in Git, you instruct Git to track changes to the file in
preparation for a commit. To disregard changes to a file, and not
include it in your next commit, _unstage_ the file.
### Revert unstaged local changes
To undo local changes that are not yet staged:
1. Confirm that the file is unstaged (that you did not use `git add <file>`) by running `git status`:
```shell
git status
```
Example output:
```shell
On branch main
Your branch is up-to-date with 'origin/main'.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: <file>
no changes added to commit (use "git add" and/or "git commit -a")
```
1. Choose an option and undo your changes:
- To overwrite local changes:
```shell
git checkout -- <file>
```
- To discard local changes to all files, permanently:
```shell
git reset --hard
```
### Revert staged local changes
You can undo local changes that are already staged. In the following example,
a file was added to the staging area, but not committed:
1. Confirm that the file is staged with `git status`:
```shell
git status
```
Example output:
```shell
On branch main
Your branch is up-to-date with 'origin/main'.
Changes to be committed:
(use "git restore --staged <file>..." to unstage)
new file: <file>
```
1. Choose an option and undo your changes:
- To unstage the file but keep your changes:
```shell
git restore --staged <file>
```
- To unstage everything but keep your changes:
```shell
git reset
```
- To unstage the file to the current commit (`HEAD`):
```shell
git reset HEAD <file>
```
- To discard everything permanently:
```shell
git reset --hard
```
## Undo local commits
When you commit to your local repository with `git commit`, Git records
your changes. Because you did not push to a remote repository yet, your changes are
not public or shared with others. At this point, you can undo your changes.
### Revert commits without altering history
You can revert a commit while retaining the commit history.
This example uses five commits `A`, `B`, `C`, `D`, `E`, which were committed in order: `A-B-C-D-E`.
The commit you want to undo is `B`.
1. Find the commit SHA of the commit you want to undo. To look
through a log of commits, use the command `git log`.
1. Choose an option and undo your changes:
- To revert changes introduced by commit `B`:
```shell
git revert <commit-B-SHA>
```
- To undo changes on a single file or directory from commit `B`, but retain them in the staged state:
```shell
git checkout <commit-B-SHA> <file>
```
- To undo changes on a single file or directory from commit `B`, but retain them in the unstaged state:
```shell
git reset <commit-B-SHA> <file>
```
### Revert commits and modify history
The following sections document tasks that rewrite Git history. For more information, see
[Rebase and resolve conflicts](git_rebase.md).
#### Delete a specific commit
You can delete a specific commit. For example, if you have
commits `A-B-C-D` and you want to delete commit `B`.
1. Rebase the range from current commit `D` to `B`:
```shell
git rebase -i A
```
A list of commits is displayed in your editor.
1. In front of commit `B`, replace `pick` with `drop`.
1. Leave the default, `pick`, for all other commits.
1. Save and exit the editor.
#### Edit a specific commit
You can modify a specific commit. For example, if you have
commits `A-B-C-D` and you want to modify something introduced in commit `B`.
1. Rebase the range from current commit `D` to `B`:
```shell
git rebase -i A
```
A list of commits is displayed in your editor.
1. In front of commit `B`, replace `pick` with `edit`.
1. Leave the default, `pick`, for all other commits.
1. Save and exit the editor.
1. Open the file in your editor, make your edits, and commit the changes:
```shell
git commit -a
```
### Undo multiple commits
If you create multiple commits (`A-B-C-D`) on your branch, then realize commits `C` and `D`
are wrong, undo both incorrect commits:
1. Check out the last correct commit. In this example, `B`.
```shell
git checkout <commit-B-SHA>
```
1. Create a new branch.
```shell
git checkout -b new-path-of-feature
```
1. Add, commit, and push your changes.
```shell
git add .
git commit -m "Undo commits C and D"
git push --set-upstream origin new-path-of-feature
```
On the new branch, the commit history is now `A-B-E`; the original branch still contains `A-B-C-D`.
Alternatively, [cherry-pick](../../user/project/merge_requests/cherry_pick_changes.md#cherry-pick-a-single-commit)
that commit into a new merge request.
{{< alert type="note" >}}
Another solution is to reset to `B` and commit `E`. However, this solution results in `A-B-E`,
which clashes with what others have locally. Don't use this solution if your branch is shared.
{{< /alert >}}
### Recover undone commits
You can recall previous local commits. However, not all previous commits are available, because
Git regularly [cleans the commits that are unreachable by branches or tags](https://git-scm.com/book/en/v2/Git-Internals-Maintenance-and-Data-Recovery).
To view repository history and track prior commits, run `git reflog show`. For example:
```shell
$ git reflog show
# Example output:
b673187 HEAD@{4}: merge 6e43d5987921bde189640cc1e37661f7f75c9c0b: Merge made by the 'recursive' strategy.
eb37e74 HEAD@{5}: rebase -i (finish): returning to refs/heads/master
eb37e74 HEAD@{6}: rebase -i (pick): Commit C
97436c6 HEAD@{7}: rebase -i (start): checkout 97436c6eec6396c63856c19b6a96372705b08b1b
...
88f1867 HEAD@{12}: commit: Commit D
97436c6 HEAD@{13}: checkout: moving from 97436c6eec6396c63856c19b6a96372705b08b1b to test
97436c6 HEAD@{14}: checkout: moving from master to 97436c6
05cc326 HEAD@{15}: commit: Commit C
6e43d59 HEAD@{16}: commit: Commit B
```
This output shows the repository history, including:
- The commit SHA.
- How many `HEAD`-changing actions ago the commit was made (`HEAD@{12}` was 12 `HEAD`-changing actions ago).
- The action that was taken, for example: commit, rebase, merge.
- A description of the action that changed `HEAD`.
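For example, to make a lost commit reachable again, create a branch that points at it. This sketch uses the `88f1867` SHA (`Commit D`) from the example output above; substitute your own:

```shell
git branch recovered-commit 88f1867
git checkout recovered-commit
```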
## Undo remote changes
You can undo remote changes on your branch. However, you cannot undo changes on a branch that
was merged into your branch. In that case, you must revert the changes on the remote branch.
### Revert remote changes without altering history
To undo changes in the remote repository, you can create a new commit with the changes you
want to undo. This process preserves the history and provides a clear timeline and development structure:
```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
flowchart LR
accTitle: Git revert operation workflow diagram
accDescr: Shows commits A, B, C in sequence, then commit -B that reverses B's changes, followed by D. Commit B remains in history.
REMOTE["REMOTE"] --> A(A)
A --> B(B)
B --> C(C)
C --> negB("-B")
negB --> D(D)
B:::crossed
classDef crossed stroke:#000,stroke-width:3px,color:#000,stroke-dasharray: 5 5
negB -.->|reverts| B
```
To revert changes introduced in a specific commit `B`:
```shell
git revert B
```
### Revert remote changes and modify history
You can undo remote changes and change history.
Even with an updated history, old commits can still be
accessed by commit SHA, at least until all the automated cleanup
of detached commits is performed, or a cleanup is run manually. Even the cleanup might not remove old commits if there are still refs pointing to them.

You should not change the history when you're working in a public branch
or a branch that might be used by others.
{{< alert type="note" >}}
Never modify the commit history of your [default branch](../../user/project/repository/branches/default.md) or shared branch.
{{< /alert >}}
### Modify history with `git rebase`
A branch of a merge request is a public branch and might be used by
other developers. However, the project rules might require
you to use `git rebase` to reduce the number of
displayed commits on the target branch after reviews are done.
You can modify history by using `git rebase -i`. Use this command to modify, squash,
and delete commits.
```shell
#
# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit's log message
# x, exec = run command (the rest of the line) using shell
# d, drop = remove commit
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
# Empty commits are commented out
```
{{< alert type="note" >}}
If you decide to stop a rebase, do not close your editor.
Instead, remove all uncommented lines and save.
{{< /alert >}}
Use `git rebase` carefully on shared and remote branches.
Experiment locally before you push to the remote repository.
```shell
# Modify history from commit-id to HEAD (current commit)
git rebase -i commit-id
```
### Modify history with `git merge --squash`
When contributing to large open source repositories, consider squashing your commits
into a single commit. This practice:
- Helps maintain a clean and linear project history.
- Simplifies the process of reverting changes, as all changes are condensed into one commit.
To squash commits on your branch to a single commit on a target branch
at merge, use `git merge --squash`. For example:
1. Check out the base branch. In this example, the base branch is `main`:
```shell
git checkout main
```
1. Merge your target branch with `--squash`:
```shell
git merge --squash <target-branch>
```
1. Commit the changes:
```shell
git commit -m "Squash commit from feature-branch"
```
For information on how to squash commits from the GitLab UI, see [Squash and merge](../../user/project/merge_requests/squash_and_merge.md).
### Revert a merge commit to a different parent
When you revert a merge commit, the branch you merged to is always the
first parent. For example, the [default branch](../../user/project/repository/branches/default.md) or `main`.
To revert a merge commit to a different parent, you must revert the commit from the command line:
1. Identify the SHA of the parent commit you want to revert to.
1. Identify the parent number of the commit you want to revert to. (Defaults to `1`, for the first parent.)
1. Run this command, replacing `2` with the parent number, and `7a39eb0` with the commit SHA:
```shell
git revert -m 2 7a39eb0
```
For information on reverting changes from the GitLab UI, see [Revert changes](../../user/project/merge_requests/revert_changes.md).
## Handle sensitive information
Sensitive information, such as passwords and API keys, can be
accidentally committed to a Git repository. This section covers
ways to handle this situation.
### Redact information
Permanently delete sensitive or confidential information that was accidentally committed, and ensure
it's no longer accessible in your repository's history. This process replaces a list of strings with `***REMOVED***`.
Alternatively, to completely delete specific files from a repository, see
[Remove blobs](../../user/project/repository/repository_size.md#remove-blobs).
To redact text from your repository, see [Redact text from repository](../../user/project/merge_requests/revert_changes.md#redact-text-from-repository).
### Remove information from commits
You can use Git to delete sensitive information from your past commits. However,
history is modified in the process.
To rewrite history with
[certain filters](https://git-scm.com/docs/git-filter-branch#_options),
run `git filter-branch`.
To remove a file from the history altogether use:
```shell
git filter-branch --tree-filter 'rm filename' HEAD
```
The `git filter-branch` command might be slow on large repositories.
Faster, special-purpose tools are available, such as `git filter-repo`.
These tools are faster because they do not replicate the full
feature set of `git filter-branch`, and instead focus on specific use cases.
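For example, with the third-party `git filter-repo` tool installed, removing a file from all history might look like this (a sketch; `filename` is a placeholder):
```shell
# Rewrites every commit; coordinate with collaborators before running
git filter-repo --invert-paths --path filename
```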
For more information about purging files from the repository history and GitLab storage,
see [Reduce repository size](../../user/project/repository/repository_size.md#methods-to-reduce-repository-size).
## Undo and remove commits
- Undo your last commit and put everything back in the staging area:
```shell
git reset --soft HEAD^
```
- Add files and change the commit message:
```shell
git commit --amend -m "New Message"
```
- Discard the last commit along with all uncommitted changes,
if you did not push yet:
```shell
git reset --hard HEAD^
```
- Discard the last two commits along with all uncommitted changes,
if you did not push yet:
```shell
git reset --hard HEAD^^
```
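Note that `HEAD^^` and `HEAD~2` are equivalent: both refer to the commit two steps before `HEAD`, so the previous command can also be written as:
```shell
git reset --hard HEAD~2
```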
### Example `git reset` workflow
The following is a common Git reset workflow:
1. Edit a file.
1. Check the status of the branch:
```shell
git status
```
1. Commit the changes to the branch with a wrong commit message:
```shell
git commit -am "kjkfjkg"
```
1. Check the Git log:
```shell
git log
```
1. Amend the commit with the correct commit message:
```shell
git commit --amend -m "New comment added"
```
1. Check the Git log again:
```shell
git log
```
1. Soft reset the branch:
```shell
git reset --soft HEAD^
```
1. Check the Git log again:
```shell
git log
```
1. Pull updates for the branch from the remote:
```shell
git pull origin <branch>
```
1. Push changes for the branch to the remote:
```shell
git push origin <branch>
```
## Undo commits with a new commit
If a file was changed in a commit, and you want to change it back to how it was in the previous commit,
but keep the commit history, you can use `git revert`. The command creates a new commit that reverses
all actions taken in the original commit.
For example, to remove a file's changes in commit `B`, and restore its contents from commit `A`, run:
```shell
git revert <commit-B-SHA>
```
## Remove a file from a repository
- To remove a file from disk and repository, use `git rm`. To remove a directory, use the `-r` flag:
```shell
git rm '*.txt'
git rm -r <dirname>
```
- To keep a file on disk but remove it from the repository (such as a file you want
to add to `.gitignore`), use `git rm` with the `--cached` flag:
```shell
git rm --cached <filename>
```
These commands remove the file from the current branch, but do not expunge it from your repository's history.
To completely remove all traces of the file, past and present, from your repository, see
[Remove blobs](../../user/project/repository/repository_size.md#remove-blobs).
## Compare `git revert` and `git reset`
- The `git reset` command removes the commit from your branch history entirely.
- The `git revert` command undoes the changes in a new commit, and leaves the original commit intact.
It's safer, because you can revert a revert.
```shell
# Commit a change that introduces a bug
git commit -am "bug introduced"
git revert HEAD
# A new commit is created that reverts the changes
# To re-apply the reverted changes, revert the revert
git log # take the SHA of the revert commit
git revert <revert-commit-SHA>
# The reverted changes are back (another new commit is created)
```
## Related topics
- [`git blame`](../../user/project/repository/files/git_blame.md)
- [Cherry-pick](../../user/project/merge_requests/cherry_pick_changes.md)
- [Git history](../../user/project/repository/files/git_history.md)
- [Revert an existing commit](../../user/project/merge_requests/revert_changes.md#revert-a-commit)
- [Squash and merge](../../user/project/merge_requests/squash_and_merge.md)
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Common commands and workflows.
title: File management
---
Git provides file management capabilities that help you to track changes,
collaborate with others, and manage large files efficiently.
## File history
Use `git log` to view a file's complete history and understand how it has changed over time.
The file history shows you:
- The author of each change.
- The date and time of each modification.
- The specific changes made in each commit.
For example, to view `history` information about the `CONTRIBUTING.md` file in the root
of the `gitlab` repository, run:
```shell
git log CONTRIBUTING.md
```
Example output:
```shell
commit b350bf041666964c27834885e4590d90ad0bfe90
Author: Nick Malcolm <nmalcolm@gitlab.com>
Date: Fri Dec 8 13:43:07 2023 +1300
Update security contact and vulnerability disclosure info
commit 8e4c7f26317ff4689610bf9d031b4931aef54086
Author: Brett Walker <bwalker@gitlab.com>
Date: Fri Oct 20 17:53:25 2023 +0000
Fix link to Code of Conduct
and condense some of the verbiage
```
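By default, `git log` shows commit metadata and messages only. To also display the diff each commit introduced, and to follow the file across renames, you can add the `-p` and `--follow` options:
```shell
# Show each commit's patch for the file, following renames
git log -p --follow CONTRIBUTING.md
```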
## Check previous changes to a file
Use `git blame` to see who made the last change to a file and when.
This helps to understand the context of a file's content,
resolve conflicts, and identify the person responsible for a specific change.
If you want to find `blame` information about a `README.md` file in the local directory:
1. Open a terminal or command prompt.
1. Go to your Git repository.
1. Run the following command:
```shell
git blame README.md
```
1. To navigate the results page, press <kbd>Space</kbd>.
1. To exit out of the results, press <kbd>Q</kbd>.
This output displays the file content with annotations showing the commit SHA, author,
and date for each line. For example:
```shell
58233c4f1054c (Dan Rhodes 2022-05-13 07:02:20 +0000 1) ## Contributor License Agreement
b87768f435185 (Jamie Hurewitz 2017-10-31 18:09:23 +0000 2)
8e4c7f26317ff (Brett Walker 2023-10-20 17:53:25 +0000 3) Contributions to this repository are subject to the
58233c4f1054c (Dan Rhodes 2022-05-13 07:02:20 +0000 4)
```
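To limit the annotation to a specific range of lines, which is useful for large files, use the `-L` option:
```shell
# Annotate only lines 1 through 10
git blame -L 1,10 README.md
```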
## Git LFS
Git Large File Storage (LFS) is an extension that helps you manage large files in Git repositories.
It replaces large files with text pointers in Git, and stores the file contents on a remote server.
Prerequisites:
- Download and install the appropriate version of the [CLI extension for Git LFS](https://git-lfs.com) for your operating system.
- [Configure your project to use Git LFS](lfs/_index.md).
- Install the Git LFS pre-push hook. To do this, run `git lfs install` in the root directory of your repository.
### Add and track files
To add a large file into your Git repository and track it with Git LFS:
1. Configure tracking for all files of a certain type. Replace `iso` with your desired file type:
```shell
git lfs track "*.iso"
```
This command creates a `.gitattributes` file with instructions to handle all
ISO files with Git LFS. The following line is added to your `.gitattributes` file:
```plaintext
*.iso filter=lfs diff=lfs merge=lfs -text
```
1. Add a file of that type, `.iso`, to your repository.
1. Track the changes to both the `.gitattributes` file and the `.iso` file:
```shell
git add .
```
1. Ensure you've added both files:
```shell
git status
```
The `.gitattributes` file must be included in your commit.
If it isn't included, Git does not track the ISO file with Git LFS.
{{< alert type="note" >}}
Ensure the files you're changing are not listed in a `.gitignore` file.
If they are, Git commits the change locally but doesn't push it to your upstream repository.
{{< /alert >}}
1. Commit both files to your local copy of the repository:
```shell
git commit -m "Add an ISO file and .gitattributes"
```
1. Push your changes upstream. Replace `main` with the name of your branch:
```shell
git push origin main
```
1. Create a merge request.
{{< alert type="note" >}}
When you add a new file type to Git LFS tracking, existing files of this type
are not converted to Git LFS. Only files of this type, added after you begin tracking, are added to Git LFS. Use `git lfs migrate` to convert existing files to use Git LFS.
{{< /alert >}}
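A minimal sketch of such a migration, assuming existing `.iso` files are already committed on the current branch (this rewrites history, so coordinate with collaborators first):
```shell
# Rewrite history so that existing .iso files are stored in Git LFS
git lfs migrate import --include="*.iso"
```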
### Stop tracking a file
When you stop tracking a file with Git LFS, the file remains on disk because it's still
part of your repository's history.
To stop tracking a file with Git LFS:
1. Run the `git lfs untrack` command and provide the path to the file:
```shell
git lfs untrack doc/example.iso
```
1. Use the `touch` command to convert it back to a standard file:
```shell
touch doc/example.iso
```
1. Track the changes to the file:
```shell
git add .
```
1. Commit and push your changes.
1. Create a merge request and request a review.
1. Merge the request into the target branch.
{{< alert type="note" >}}
If you delete an object tracked by Git LFS without first untracking it with `git lfs untrack`,
the object shows as `modified` in `git status`.
{{< /alert >}}
### Stop tracking all files of a single type
To stop tracking all files of a particular type in Git LFS:
1. Run the `git lfs untrack` command and provide the file type to stop tracking:
```shell
git lfs untrack "*.iso"
```
1. Use the `touch` command to convert the files back to standard files:
```shell
touch *.iso
```
1. Track the changes to the files:
```shell
git add .
```
1. Commit and push your changes.
1. Create a merge request and request a review.
1. Merge the request into the target branch.
## File locks
File locks help prevent conflicts and ensure that only one person can edit a file at a time.
File locking is a good option for:
- Binary files that can't be merged. For example, design files and videos.
- Files that require exclusive access during editing.
Prerequisites:
- You must have [Git LFS installed](lfs/_index.md).
- You must have the Maintainer role for the project.
### Configure file locks
To configure file locks for a specific file type:
1. Use the `git lfs track` command with the `--lockable` option. For example, to configure PNG files:
```shell
git lfs track "*.png" --lockable
```
This command creates or updates your `.gitattributes` file with the following content:
```plaintext
*.png filter=lfs diff=lfs merge=lfs -text lockable
```
1. Push the `.gitattributes` file to the remote repository for the changes to take effect.
{{< alert type="note" >}}
After a file type is registered as lockable, it is automatically marked as read-only.
{{< /alert >}}
#### Configure file locks without LFS
To register a file type as lockable without using Git LFS:
1. Edit the `.gitattributes` file manually:
```plaintext
*.pdf lockable
```
1. Push the `.gitattributes` file to the remote repository.
### Lock and unlock files
To lock or unlock a file with exclusive file locking:
1. Open a terminal window in your repository directory.
1. Run one of the following commands:
{{< tabs >}}
{{< tab title="Lock a file" >}}
```shell
git lfs lock path/to/file.png
```
{{< /tab >}}
{{< tab title="Unlock a file" >}}
```shell
git lfs unlock path/to/file.png
```
{{< /tab >}}
{{< tab title="Unlock a file by ID" >}}
```shell
git lfs unlock --id=123
```
{{< /tab >}}
{{< tab title="Force unlock a file" >}}
```shell
git lfs unlock --id=123 --force
```
{{< /tab >}}
{{< /tabs >}}
### View locked files
To view locked files:
1. Open a terminal window in your repository.
1. Run the following command:
```shell
git lfs locks
```
The output lists the locked files, the users who locked them, and the file IDs.
In the GitLab UI:
- The repository file tree displays an LFS badge for files tracked by Git LFS.
- Exclusively-locked files show a padlock icon.
(Image: LFS-locked files in the repository file tree.)
You can also [view and remove existing locks](../../user/project/file_lock.md) from the GitLab UI.
{{< alert type="note" >}}
When you rename an exclusively-locked file, the lock is lost. You must lock it again to keep it locked.
{{< /alert >}}
### Lock and edit a file
To lock a file, edit it, and optionally unlock it:
1. Lock the file:
```shell
git lfs lock <file_path>
```
1. Edit the file.
1. Optional. Unlock the file when you're done:
```shell
git lfs unlock <file_path>
```
## Related topics
- [File management with the GitLab UI](../../user/project/repository/files/_index.md)
- [Git Large File Storage (LFS) documentation](lfs/_index.md)
- [File locking](../../user/project/file_lock.md)
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Learn how to clone Git repositories from a GitLab server using different protocols (SSH or HTTPS) and various IDEs.
title: Clone a Git repository to your local computer
---
You can clone a Git repository to your local computer. This action creates a copy of the repository and
establishes a connection that synchronizes changes between your computer and the GitLab server.
This connection requires you to add credentials.
You can either [clone with SSH](#clone-with-ssh) or [clone with HTTPS](#clone-with-https).
SSH is the recommended authentication method.
Cloning a repository:
- Downloads all project files, history, and metadata to your local machine.
- Creates a working directory with the latest version of the files.
- Sets up remote tracking to synchronize future changes.
- Provides offline access to the complete codebase.
- Establishes the foundation for contributing code back to the project.
## Clone with SSH
Clone with SSH when you want to authenticate only one time.
1. Authenticate with GitLab by following the instructions in the [SSH documentation](../../user/ssh.md).
1. On the left sidebar, select **Search or go to** and find the project you want to clone.
1. On the project's overview page, in the upper-right corner, select **Code**, then copy the URL for **Clone with SSH**.
1. Open a terminal and go to the directory where you want to clone the files.
Git automatically creates a folder with the repository name and downloads the files there.
1. Run this command:
```shell
git clone <copied URL>
```
1. To view the files, go to the new directory:
```shell
cd <new directory>
```
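For reference, an SSH clone URL typically has this shape (hypothetical host and path):
```shell
git clone git@gitlab.example.com:my-group/my-project.git
```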
## Clone with HTTPS
Clone with HTTPS when you want to authenticate each time you perform an operation between your computer and GitLab.
[OAuth credential helpers](../../user/profile/account/two_factor_authentication.md#oauth-credential-helpers) can decrease
the number of times you must manually authenticate, making HTTPS a seamless experience.
1. On the left sidebar, select **Search or go to** and find the project you want to clone.
1. On the project's overview page, in the upper-right corner, select **Code**, then copy the URL for **Clone with HTTPS**.
1. Open a terminal and go to the directory where you want to clone the files.
1. Run the following command. Git automatically creates a folder with the repository name and downloads the files there.
```shell
git clone <copied URL>
```
1. GitLab requests your username and password.
If you have enabled two-factor authentication (2FA) on your account, you cannot use your account password. Instead, you can do one of the following:
- [Clone using a token](#clone-using-a-token) with `read_repository` or `write_repository` permissions.
- Install an [OAuth credential helper](../../user/profile/account/two_factor_authentication.md#oauth-credential-helpers).
If you have not enabled 2FA, use your account password.
1. To view the files, go to the new directory:
```shell
cd <new directory>
```
{{< alert type="note" >}}
On Windows, if you enter your password incorrectly multiple times and an `Access denied` message appears,
add your namespace (username or group) to the path:
`git clone https://namespace@gitlab.com/gitlab-org/gitlab.git`.
{{< /alert >}}
### Clone using a token
Clone with HTTPS using a token if:
- You want to use 2FA.
- You want to have a revocable set of credentials scoped to one or more repositories.
You can use any of these tokens to authenticate when cloning over HTTPS:
- [Personal access tokens](../../user/profile/personal_access_tokens.md).
- [Deploy tokens](../../user/project/deploy_tokens/_index.md).
- [Project access tokens](../../user/project/settings/project_access_tokens.md).
- [Group access tokens](../../user/group/settings/group_access_tokens.md).
For example:
```shell
git clone https://<username>:<token>@gitlab.example.com/tanuki/awesome_project.git
```
## Clone and open in Apple Xcode
Projects that contain a `.xcodeproj` or `.xcworkspace` directory can be cloned
into Xcode on macOS.
1. From the GitLab UI, go to the project's overview page.
1. In the upper-right corner, select **Code**.
1. Select **Xcode**.
The project is cloned onto your computer and you are
prompted to open Xcode.
## Clone and open in Visual Studio Code
All projects can be cloned into Visual Studio Code from the GitLab user interface, but you
can also install the [GitLab Workflow extension for VS Code](../../editor_extensions/visual_studio_code/_index.md) to clone from
Visual Studio Code.
Prerequisites:
- [Visual Studio Code](https://code.visualstudio.com/) must be installed on your local machine.
Other versions of VS Code, like VS Code Insiders and VSCodium, are not supported.
- [Configure your browser for IDE protocols](#configure-browsers-for-ide-protocols).
- From the GitLab interface:
1. Go to the project's overview page.
1. In the upper-right corner, select **Code**.
1. Under **Open in your IDE**, select **Visual Studio Code (SSH)** or **Visual Studio Code (HTTPS)**.
1. Select a folder to clone the project into.
After Visual Studio Code clones your project, it opens the folder.
- From Visual Studio Code, with the [extension](../../editor_extensions/visual_studio_code/_index.md) installed, use the
extension's [`Git: Clone` command](https://marketplace.visualstudio.com/items?itemName=GitLab.gitlab-workflow#clone-gitlab-projects).
## Clone and open in IntelliJ IDEA
All projects can be cloned into [IntelliJ IDEA](https://www.jetbrains.com/idea/)
from the GitLab user interface.
Prerequisites:
- [IntelliJ IDEA](https://www.jetbrains.com/idea/) must be installed on your local machine.
- [Configure your browser for IDE protocols](#configure-browsers-for-ide-protocols).
To clone and open the project in IntelliJ IDEA:
1. Go to the project's overview page.
1. In the upper-right corner, select **Code**.
1. Under **Open in your IDE**, select **IntelliJ IDEA (SSH)** or **IntelliJ IDEA (HTTPS)**.
## Configure browsers for IDE protocols
To ensure that the **Open in IDE** feature is working, you must configure your browsers to handle
custom application protocols, such as `vscode://` or `jetbrains://`.
### Firefox
Firefox handles custom protocols automatically if the required application is installed on your system.
When you first select a custom protocol link, a dialog opens and asks if you want
to open the application. Select **Open link** to allow Firefox to open the application.
If you don't want to be prompted again, select the checkbox to remember your choice.
If the dialog does not open, configure Firefox manually:
1. Open Firefox.
1. On the top right, select the **Open application menu** ({{< icon name="hamburger" >}}).
1. Search for or go to the **Applications** section.
1. Find and select the protocol in the list. For example, `vscode` or `jetbrains`.
1. Select **Visual Studio Code** or **IntelliJ IDEA** from the dropdown list, or select **Use other...** to locate the executable.
If your preferred IDE is not listed, you are prompted to choose an application the first time you select the corresponding link.
### Chrome
Chrome handles custom protocols automatically if the required application is installed on your system.
When you first select a custom protocol link in Chrome, a dialog opens and asks if you want
to open the application. Select **Open** to allow Chrome to open the application.
If you don't want to be prompted again, select the checkbox to remember your choice.
## Reduce clone size
As Git repositories grow in size, they can become cumbersome to work with
because of:
- The large amount of history that must be downloaded.
- The large amount of disk space they require.
[Partial clone](https://git-scm.com/docs/partial-clone)
is a performance optimization that allows Git to function without having a
complete copy of the repository. The goal of this work is to allow Git to
handle extremely large repositories better.
Git 2.22.0 or later is required.
### Filter by file size
Storing large binary files in Git is usually discouraged, because every large
file added is downloaded by everyone who clones or fetches changes
thereafter. These downloads are slow and problematic, especially when working from a slow
or unreliable internet connection.
Using partial clone with a file size filter solves this problem, by excluding
troublesome large files from clones and fetches. When Git encounters a missing
file, it's downloaded on demand.
When cloning a repository, use the `--filter=blob:limit=<size>` argument. For example,
to clone the repository excluding files larger than 1 megabyte:
```shell
git clone --filter=blob:limit=1m git@gitlab.com:gitlab-com/www-gitlab-com.git
```
This produces output similar to the following:
```shell
Cloning into 'www-gitlab-com'...
remote: Enumerating objects: 832467, done.
remote: Counting objects: 100% (832467/832467), done.
remote: Compressing objects: 100% (207226/207226), done.
remote: Total 832467 (delta 585563), reused 826624 (delta 580099), pack-reused 0
Receiving objects: 100% (832467/832467), 2.34 GiB | 5.05 MiB/s, done.
Resolving deltas: 100% (585563/585563), done.
remote: Enumerating objects: 146, done.
remote: Counting objects: 100% (146/146), done.
remote: Compressing objects: 100% (138/138), done.
remote: Total 146 (delta 8), reused 144 (delta 8), pack-reused 0
Receiving objects: 100% (146/146), 471.45 MiB | 4.60 MiB/s, done.
Resolving deltas: 100% (8/8), done.
Updating files: 100% (13008/13008), done.
Filtering content: 100% (3/3), 131.24 MiB | 4.65 MiB/s, done.
```
The output is longer because Git:
1. Clones the repository excluding files larger than 1 megabyte.
1. Downloads any missing large files needed to check out the default branch.
When changing branches, Git may download more missing files.
### Filter by object type
For repositories with millions of files and a long history, you can exclude all files and use
[`git sparse-checkout`](https://git-scm.com/docs/git-sparse-checkout) to reduce the size of
your working copy.
```shell
# Clone the repo excluding all files
$ git clone --filter=blob:none --sparse git@gitlab.com:gitlab-com/www-gitlab-com.git
Cloning into 'www-gitlab-com'...
remote: Enumerating objects: 678296, done.
remote: Counting objects: 100% (678296/678296), done.
remote: Compressing objects: 100% (165915/165915), done.
remote: Total 678296 (delta 472342), reused 673292 (delta 467476), pack-reused 0
Receiving objects: 100% (678296/678296), 81.06 MiB | 5.74 MiB/s, done.
Resolving deltas: 100% (472342/472342), done.
remote: Enumerating objects: 28, done.
remote: Counting objects: 100% (28/28), done.
remote: Compressing objects: 100% (25/25), done.
remote: Total 28 (delta 0), reused 12 (delta 0), pack-reused 0
Receiving objects: 100% (28/28), 140.29 KiB | 341.00 KiB/s, done.
Updating files: 100% (28/28), done.
$ cd www-gitlab-com
$ git sparse-checkout set data --cone
remote: Enumerating objects: 301, done.
remote: Counting objects: 100% (301/301), done.
remote: Compressing objects: 100% (292/292), done.
remote: Total 301 (delta 16), reused 102 (delta 9), pack-reused 0
Receiving objects: 100% (301/301), 1.15 MiB | 608.00 KiB/s, done.
Resolving deltas: 100% (16/16), done.
Updating files: 100% (302/302), done.
```
For more details, see the Git documentation for
[`sparse-checkout`](https://git-scm.com/docs/git-sparse-checkout).
### Filter by file path
Deeper integration between partial clone and sparse checkout is possible through the
`--filter=sparse:oid=<blob-ish>` filter spec. This mode of filtering uses a format similar to a
`.gitignore` file to specify which files to include when cloning and fetching.
{{< alert type="warning" >}}
Partial clone using `sparse` filters is still experimental. It might be slow and significantly increase
[Gitaly](../../administration/gitaly/_index.md) resource utilization when cloning and fetching.
[Filter all blobs and use sparse-checkout](#filter-by-object-type) instead, because
[`git-sparse-checkout`](https://git-scm.com/docs/git-sparse-checkout) simplifies
this type of partial clone use and overcomes its limitations.
{{< /alert >}}
For more details, see the Git documentation for
[`rev-list-options`](https://git-scm.com/docs/git-rev-list#Documentation/git-rev-list.txt---filterltfilter-specgt).
1. Create a filter spec. For example, consider a monolithic repository with many applications,
each in a different subdirectory in the root. Create a file `shiny-app/.gitfilterspec`:
```plaintext
# Only the paths listed in the file will be downloaded when performing a
# partial clone using `--filter=sparse:oid=shiny-app/.gitfilterspec`
# Explicitly include filterspec needed to configure sparse checkout with
# git config --local core.sparsecheckout true
# git show master:shiny-app/.gitfilterspec >> .git/info/sparse-checkout
shiny-app/.gitfilterspec
# Shiny App
shiny-app/
# Dependencies
shimmery-app/
shared-component-a/
shared-component-b/
```
1. Clone and filter by path. Support for `--filter=sparse:oid` using the
clone command is not fully integrated with sparse checkout.
```shell
# Clone the filtered set of objects using the filterspec stored on the
# server. WARNING: this step may be very slow!
git clone --sparse --filter=sparse:oid=master:shiny-app/.gitfilterspec <url>
# Optional: observe there are missing objects that we have not fetched
git rev-list --all --quiet --objects --missing=print | wc -l
```
{{< alert type="warning" >}}
Shell integrations (for example, with Bash or Zsh) and editors that automatically
show Git status information often run `git fetch`, which fetches the
entire repository. You might need to disable or reconfigure these integrations.
{{< /alert >}}
### Remove partial clone filtering
Git repositories with partial clone filtering can have the filtering removed. To
remove filtering:
1. Fetch everything that has been excluded by the filters, to make sure that the
repository is complete. If `git sparse-checkout` was used, disable it with
`git sparse-checkout disable`. See the
[`disable` documentation](https://git-scm.com/docs/git-sparse-checkout#Documentation/git-sparse-checkout.txt-emdisableem)
for more information.
Then run a regular `fetch`. To check for missing objects and then fetch them,
especially when not using `git sparse-checkout`, use the following commands:
```shell
# Show missing objects
git rev-list --objects --all --missing=print | grep -e '^\?'
# Show missing objects without a '?' character before them (needs GNU grep)
git rev-list --objects --all --missing=print | grep -oP '^\?\K\w+'
# Fetch missing objects
git fetch origin $(git rev-list --objects --all --missing=print | grep -oP '^\?\K\w+')
# Show number of missing objects
git rev-list --objects --all --missing=print | grep -e '^\?' | wc -l
```
1. Repack everything. This can be done using `git repack -a -d`, for example. This
should leave only three files in `.git/objects/pack/`:
- A `pack-<SHA1>.pack` file.
- Its corresponding `pack-<SHA1>.idx` file.
- A `pack-<SHA1>.promisor` file.
1. Delete the `.promisor` file. The previous step should have left only one
`pack-<SHA1>.promisor` file, which should be empty and should be deleted.
1. Remove partial clone configuration. The partial clone-related configuration
variables should be removed from Git configuration files. Usually only the following
configuration must be removed:
- `remote.origin.promisor`.
- `remote.origin.partialclonefilter`.
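Taken together, a sketch of the whole procedure might look like the following. It assumes the remote is named `origin`, that GNU `grep` is available, and that `git sparse-checkout` was in use:

```shell
# Disable sparse checkout if it was used
git sparse-checkout disable

# Fetch all objects excluded by the filters
git fetch origin $(git rev-list --objects --all --missing=print | grep -oP '^\?\K\w+')

# Repack everything into a single pack file
git repack -a -d

# Delete the now-empty promisor file
rm .git/objects/pack/pack-*.promisor

# Remove the partial clone configuration
git config --unset remote.origin.promisor
git config --unset remote.origin.partialclonefilter
```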
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Work with the Git version control system.
title: Get started with Git
---
Git is a version control system you use to track changes to your code and collaborate with others.
GitLab is a web-based Git repository manager that provides CI/CD and other features to help you
manage your software development lifecycle.
You can use GitLab without knowing Git.
However, it is advantageous to understand Git when you use GitLab for source control.
Learning Git is part of a larger workflow:

## Repositories
A Git repository is a directory that contains all the files, folders, and version
history of your project.
It serves as a central hub where Git manages and tracks changes to your code.
When you initialize a Git repository or clone an existing one, Git creates a hidden directory,
`.git`, inside the project directory.
The directory contains all the essential metadata and objects Git uses to manage your repository,
including the complete history of all changes made to the files.
Git tracks changes at the file level, so you can view the modifications made to individual
files over time.
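For example, initializing a repository creates the `.git` directory immediately. The project name is a placeholder, and the listed entries are typical of a fresh repository:

```shell
# Create a new repository and inspect its hidden metadata directory
git init my-project
ls my-project/.git
# Typical entries include HEAD, config, objects/, and refs/
```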
For more information, see [Repositories](../../user/project/repository/_index.md).
## Working directories
Your working directory is where you make changes to your code.
When you clone a Git repository, you create a local copy of the repository in your working directory.
You can edit files, add new ones, and test your code.
To collaborate, you can:
- Commit: After you make changes in your working directory, commit those changes to your local repository.
- Push: Push your changes to a remote Git repository hosted on GitLab. This makes your changes available to other team members.
- Pull: Pull changes made by others from the remote repository, and ensure that your local repository is updated with the latest changes.
For more information, see [Common Git commands](commands.md).
## Branches
In Git, you can use branches to work on different features, bug fixes, or experiments
simultaneously without interfering with each other's work.
Branching enables you to create an isolated environment where you can make and test
changes without affecting the default branch.
In GitLab, the default branch is usually called `main`.
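For example, to create a branch for a feature, switch between branches, and clean up afterwards (the branch name `my-feature` is a placeholder):

```shell
# Create a branch and switch to it
git checkout -b my-feature

# Switch back to the default branch
git checkout main

# List local branches; the current branch is marked with an asterisk
git branch

# Delete the feature branch after it has been merged
git branch -d my-feature
```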
### Merge a branch
After a feature is complete or a bug is fixed, you can merge your branch into the default branch.
You can do this in a [merge request](../../user/project/merge_requests/_index.md).
Merging is a safe way to bring changes from one branch into another while preserving the
history of the changes.
If there are conflicts between the branches, for example, if you modify the same lines of code
in both branches, GitLab flags these as [merge conflicts](../../user/project/merge_requests/conflicts.md).
These must be resolved manually by reviewing and editing the code.
### Delete a branch
After a successful merge, you can delete the branch if it is no longer needed.
Deleting unnecessary branches helps keep your repository organized and manageable.
{{< alert type="note" >}}
To ensure no work is lost, verify all changes are incorporated into the default branch
before you delete the branch after the final merge.
{{< /alert >}}
For more information, see [Branches](../../user/project/repository/branches/_index.md).
## Understand the Git workflow
You can manage your code, collaborate with others, and keep your project organized
with a Git workflow.
A standard Git workflow includes the following steps, shown as a command-line sketch after this list:
1. Clone a repository: Create a local copy of the repository by cloning it to your machine.
You can work on the project without affecting the original repository.
1. Create a new branch: Before you make any changes, it's recommended to create a new branch.
This ensures that your changes are isolated and don't interfere with the work of others on the
default branch.
1. Make changes: Make changes to files in your working directory.
You can add new features, fix bugs, or make other modifications.
1. Stage changes: After you make changes to your files, stage the changes you want to commit.
Staging tells Git which changes should be included in the next commit.
1. Commit changes: Commit your staged changes to your local repository.
A commit saves a snapshot of your work and creates a history of the changes to your files.
1. Push changes: To share your changes with others, push them to the remote repository.
This makes your changes available to other collaborators.
1. Merge your branch: After your changes are reviewed and approved, merge your branch into the
default branch. For example, `main`. This step incorporates your changes into the project.
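A minimal command-line version of this workflow might look like the following sketch. The repository URL, branch name, and commit message are placeholders:

```shell
# 1. Clone the repository and enter it
git clone git@gitlab.example.com:my-group/my-project.git
cd my-project

# 2. Create a new branch for your changes
git checkout -b my-feature

# 3. Make changes, then stage them (step 4)
git add .

# 5. Commit the staged changes with a descriptive message
git commit -m "Add my feature"

# 6. Push the branch to the remote repository
git push origin my-feature

# 7. Merge your branch into the default branch, typically
# through a merge request in GitLab
```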
## Forks
Some organizations, particularly those working with open-source projects, may use
different workflows. For example, [Forks](../../user/project/repository/forking_workflow.md).
A fork is a personal copy of the repository that exists in your own namespace.
Use this workflow when contributing to open-source projects or when your team uses a
centralized repository.
## Install Git
To use Git commands and contribute to GitLab projects, you should download and install
the Git client on your computer.
The installation process varies depending on your operating system,
for example, Windows, macOS, or Linux.
For information on how to install Git, see [Install Git](how_to_install_git/_index.md).
## Git commands
To interact with Git from the command line, you can use Git commands:
- `git clone`: Clone a repository to your local machine.
- `git branch`: List, create, or delete branches in your local repository.
- `git checkout`: Switch between different branches in your local repository.
- `git add`: Stage changes for commit.
- `git commit`: Commit staged changes to your local repository.
- `git push`: Push local commits to the remote repository.
- `git pull`: Fetch changes from the remote repository and merge them into your local branch.
For more comprehensive information and detailed explanations,
see the [Common Git commands](commands.md) guide.
<!--- Use this section when the [Generate an SSH key pair](../user/ssh.md) page is added to the navigation
### Use SSH with Git
When you work with remote repositories, you should use SSH for secure communication.
GitLab uses the SSH protocol to securely communicate with Git.
When you use SSH keys to authenticate to the GitLab remote server,
you don't need to supply your username and password each time.
To learn how to generate and add SSH keys to your GitLab account,
see [Generate an SSH key pair](../user/ssh.md).
-->
## Practice with Git
The best way to learn Git is to practice.
You can create a test project, experiment with different Git commands,
and explore different workflows.
GitLab provides a web-based interface for many Git operations, but you can also use
Git from the command line to interact with GitLab. This provides you with additional
flexibility and control.
For a hands-on approach to learning Git commands, see [Tutorial: Make your first Git commit](../../tutorials/make_first_git_commit/_index.md). For other helpful resources, see [Tutorials: Learn Git](../../tutorials/learn_git.md).
---
stage: Create
group: Source Code
info: To determine the technical writer assigned to the Stage/Group associated with
this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Common commands and workflows.
title: Stage, commit, and push changes
---
When you make changes to files in a repository, Git tracks the changes
against the most recent version of the checked out branch. You can use
Git commands to review and commit your changes to the branch, and push
your work to GitLab.
## Add and commit local changes
When you're ready to write your changes to the branch, you can commit
them. A commit includes a comment that records information about the
changes, and usually becomes the new tip of the branch.
Git doesn't automatically include any files you move, change, or
delete in a commit. This prevents you from accidentally including a
change or file, like a temporary directory. To include changes in a
commit, stage them with `git add`.
To stage and commit your changes:
1. From your repository, for each file or directory you want to add, run `git add <file name or path>`.
To stage all files in the current working directory, run `git add .`.
1. Confirm that the files have been added to staging:
```shell
git status
```
The files are displayed in green.
1. To commit the staged files:
```shell
git commit -m "<comment that describes the changes>"
```
The changes are committed to the branch.
### Write a good commit message
The guidelines published by Chris Beams in [How to Write a Git Commit Message](https://cbea.ms/git-commit/)
help you write a good commit message (an example follows this list):
- The commit subject and body must be separated by a blank line.
- The commit subject must start with a capital letter.
- The commit subject must not be longer than 72 characters.
- The commit subject must not end with a period.
- The commit body must not contain more than 72 characters per line.
- The commit subject or body must not contain emoji.
- Commits that change 30 or more lines across at least 3 files should
describe these changes in the commit body.
- Use the full URLs for issues, milestones, and merge requests instead of short references,
as they are displayed as plain text outside of GitLab.
- The merge request should not contain more than 10 commit messages.
- The commit subject should contain at least 3 words.
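For example, a commit message that follows these guidelines might look like this. The subject, body, and issue URL are illustrative:

```plaintext
Fix pagination on the issues list

The page size parameter was ignored when sorting by priority,
so every request returned the full result set. Pass the
parameter through to the finder and add a regression test.

Issue: https://gitlab.example.com/my-group/my-project/-/issues/123
```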
## Commit all changes
You can stage all your changes and commit them with one command:
```shell
git commit -a -m "<comment that describes the changes>"
```
Be careful your commit doesn't include files you don't want to record
to the remote repository. As a rule, always check the status of your
local repository before you commit changes.
## Send changes to GitLab
To push all local changes to the remote repository:
```shell
git push <remote> <name-of-branch>
```
For example, to push your local commits to the `main` branch of the `origin` remote:
```shell
git push origin main
```
Sometimes Git does not allow you to push to a repository. Instead,
you must [force an update](git_rebase.md#force-push-to-a-remote-branch).
## Push options
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
When you push changes to a branch, you can use client-side
[Git push options](https://git-scm.com/docs/git-push#Documentation/git-push.txt--oltoptiongt).
In Git 2.10 and later, use Git push options to:
- [Skip CI jobs](#push-options-for-gitlab-cicd)
- [Push to merge requests](#push-options-for-merge-requests)
In Git 2.18 and later, you can use either the long format (`--push-option`) or the shorter `-o`:
```shell
git push -o <push_option>
```
In Git 2.10 to 2.17, you must use the long format:
```shell
git push --push-option=<push_option>
```
For server-side controls and enforcement of best practices, see
[push rules](../../user/project/repository/push_rules.md) and [server hooks](../../administration/server_hooks.md).
### Push options for GitLab CI/CD
You can use push options to skip a CI/CD pipeline, or pass CI/CD variables.
{{< alert type="note" >}}
Push options are not available for merge request pipelines. For more information,
see [issue 373212](https://gitlab.com/gitlab-org/gitlab/-/issues/373212).
{{< /alert >}}
| Push option | Description | Example |
|--------------------------------|-------------|---------|
| `ci.input=<name>=<value>` | Creates a pipeline with the specified inputs. | For example: `git push -o ci.input='stage=test' -o ci.input='security_scan=false'`. Example with an array of strings: `ci.input='["string", "double", "quotes"]'` |
| `ci.skip` | Do not create a CI/CD pipeline for the latest push. Skips only branch pipelines and not [merge request pipelines](../../ci/pipelines/merge_request_pipelines.md). This does not skip pipelines for CI/CD integrations, such as Jenkins. | `git push -o ci.skip` |
| `ci.variable="<name>=<value>"` | Provide [CI/CD variables](../../ci/variables/_index.md) to the CI/CD pipeline, if one is created due to the push. Passes variables only to branch pipelines and not [merge request pipelines](../../ci/pipelines/merge_request_pipelines.md). | `git push -o ci.variable="MAX_RETRIES=10" -o ci.variable="MAX_TIME=600"` |
### Push options for integrations
You can use push options to skip integration CI/CD pipelines.
| Push option | Description | Example |
|--------------------------------|-------------|---------|
| `integrations.skip_ci` | Skip push events for CI/CD integrations, such as Atlassian Bamboo, Buildkite, Drone, Jenkins, and JetBrains TeamCity. Introduced in [GitLab 16.2](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/123837). | `git push -o integrations.skip_ci` |
### Push options for merge requests
Git push options can perform actions for merge requests while pushing changes:
| Push option | Description |
|----------------------------------------------|-------------|
| `merge_request.create` | Create a new merge request for the pushed branch. When pushing from the default branch, you must specify a target branch using the `merge_request.target` option to create a merge request. |
| `merge_request.target=<branch_name>` | Set the target of the merge request to a particular branch, such as: `git push -o merge_request.target=branch_name`. Required when creating a merge request from the default branch. |
| `merge_request.target_project=<project>` | Set the target of the merge request to a particular upstream project, such as: `git push -o merge_request.target_project=path/to/project`. Introduced in [GitLab 16.6](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/132475). |
| `merge_request.merge_when_pipeline_succeeds` | [Deprecated](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/185368) in GitLab 17.11 in favor of the `auto_merge` option. |
| `merge_request.auto_merge` | Set the merge request to [auto merge](../../user/project/merge_requests/auto_merge.md). |
| `merge_request.remove_source_branch` | Set the merge request to remove the source branch when it's merged. |
| `merge_request.squash` | Set the merge request to squash all commits into a single commit on merge. Introduced in [GitLab 17.2](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/158778). |
| `merge_request.title="<title>"` | Set the title of the merge request. For example: `git push -o merge_request.title="The title I want"`. |
| `merge_request.description="<description>"` | Set the description of the merge request. For example: `git push -o merge_request.description="The description I want"`. |
| `merge_request.draft` | Mark the merge request as a draft. For example: `git push -o merge_request.draft`. |
| `merge_request.milestone="<milestone>"` | Set the milestone of the merge request. For example: `git push -o merge_request.milestone="3.0"`. |
| `merge_request.label="<label>"` | Add labels to the merge request. If the label does not exist, it is created. For example, for two labels: `git push -o merge_request.label="label1" -o merge_request.label="label2"`. |
| `merge_request.unlabel="<label>"` | Remove labels from the merge request. For example, for two labels: `git push -o merge_request.unlabel="label1" -o merge_request.unlabel="label2"`. |
| `merge_request.assign="<user>"` | Assign users to the merge request. Accepts username or user ID. For example, for two users: `git push -o merge_request.assign="user1" -o merge_request.assign="user2"`.|
| `merge_request.unassign="<user>"` | Remove assigned users from the merge request. Accepts username or user ID. For example, for two users: `git push -o merge_request.unassign="user1" -o merge_request.unassign="user2"`. |
### Push options for secret push protection
You can use push options to skip [secret push protection](../../user/application_security/secret_detection/secret_push_protection/_index.md).
| Push option | Description | Example |
|--------------------------------|-------------|---------|
| `secret_push_protection.skip_all` | Do not perform secret push protection for any commit in this push. | `git push -o secret_push_protection.skip_all` |
### Push options for GitGuardian integration
You can use the same [push option for Secret push protection](#push-options-for-secret-push-protection) to skip GitGuardian secret detection.
| Push option | Description | Example |
|--------------------------------|-------------|---------|
| `secret_detection.skip_all` | Deprecated in GitLab 17.2. Use `secret_push_protection.skip_all` instead. | `git push -o secret_detection.skip_all` |
| `secret_push_protection.skip_all` | Do not perform GitGuardian secret detection. | `git push -o secret_push_protection.skip_all` |
### Formats for push options
If your push option requires text containing spaces, enclose the text in
double quotes (`"`). You can omit the quotes if there are no spaces. Some examples:
```shell
git push -o merge_request.label="Label with spaces"
git push -o merge_request.label=Label-with-no-spaces
```
To combine push options to accomplish multiple tasks at once, use
multiple `-o` (or `--push-option`) flags. This command creates a
new merge request, targets a branch (`my-target-branch`), and sets auto-merge:
```shell
git push -o merge_request.create -o merge_request.target=my-target-branch -o merge_request.auto_merge
```
To create a new merge request from the default branch targeting a different branch:
```shell
git push -o merge_request.create -o merge_request.target=feature-branch
```
### Create Git aliases for pushing
Adding push options to Git commands can create very long commands. If
you use the same push options frequently, create Git aliases for them.
Git aliases are command-line shortcuts for longer Git commands.
To create and use a Git alias for the
[auto merge Git push option](#push-options-for-merge-requests):
1. In your terminal window, run this command:
```shell
git config --global alias.mwps "push -o merge_request.create -o merge_request.target=main -o merge_request.auto_merge"
```
1. To use the alias to push a local branch that targets the default branch (`main`)
and auto-merges, run this command:
```shell
git mwps origin <local-branch-name>
```
## Related topics
- [Common Git commands](commands.md)
https://docs.gitlab.com/topics/cherry_pick
|
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/topics/cherry_pick.md
|
2025-08-13
|
doc/topics/git
|
[
"doc",
"topics",
"git"
] |
cherry_pick.md
|
Create
|
Source Code
|
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
|
Cherry-pick changes with Git
|
Cherry-pick a Git commit when you want to add a single commit from one branch to another.
|
{{< details >}}
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
Use `git cherry-pick` to apply the changes from a specific commit to your current
working branch. Use this command to:
- Backport bug fixes from the default branch to previous release branches.
- Copy changes from a fork to the upstream repository.
- Apply specific changes without merging entire branches.
You can also use the GitLab UI to cherry-pick. For more information,
see [Cherry-pick changes](../../user/project/merge_requests/cherry_pick_changes.md).
{{< alert type="warning" >}}
Use `git cherry-pick` carefully because it can create duplicate commits and potentially
complicate your project history.
{{< /alert >}}
## Cherry-pick a single commit
To cherry-pick a single commit from another branch into your current working branch:
1. Check out the branch you want to cherry-pick into:
```shell
git checkout your_branch
```
1. Identify the Secure Hash Algorithm (SHA) of the commit you want to cherry-pick.
To find this, check the commit history or use the `git log` command. For example:
```shell
$ git log
commit 0000011111222223333344444555556666677777
Merge: 88888999999 aaaaabbbbbb
Author: user@example.com
Date: Tue Aug 31 21:19:41 2021 +0000
```
1. Use the `git cherry-pick` command. Replace `<commit_sha>` with the SHA of
the commit you identified:
```shell
git cherry-pick <commit_sha>
```
Git applies the changes from the specified commit to your current working branch.
If there are conflicts, a notification is displayed. You can then resolve the
conflicts and continue the cherry-pick process.
## Cherry-pick multiple commits
To cherry-pick multiple commits from another branch into your current working branch:
1. Check out the branch you want to cherry-pick into:
```shell
git checkout your_branch
```
1. Identify the Secure Hash Algorithm (SHA) of the commit you want to cherry-pick.
To find this, check the commit history or use the `git log` command. For example:
```shell
$ git log
commit 0000011111222223333344444555556666677777
Merge: 88888999999 aaaaabbbbbb
Author: user@example.com
Date: Tue Aug 31 21:19:41 2021 +0000
```
1. Use the `git cherry-pick` command for each commit,
replacing `<commit_sha>` with the SHA of the commit:
```shell
git cherry-pick <commit_sha_1>
git cherry-pick <commit_sha_2>
...
```
Alternatively, you can cherry-pick a range of commits using the `..` notation:
```shell
git cherry-pick <start_commit_sha>..<end_commit_sha>
```
This applies the commits after `<start_commit_sha>`, up to and including `<end_commit_sha>`,
to your current working branch. The range excludes the start commit itself; to include it,
use `<start_commit_sha>^..<end_commit_sha>`.
## Cherry-pick a merge commit
Cherry-picking a merge commit applies the changes from the merge commit to your current working branch.
To cherry-pick a merge commit from another branch into your current working branch:
1. Check out the branch you want to cherry-pick into:
```shell
git checkout your_branch
```
1. Identify the Secure Hash Algorithm (SHA) of the commit you want to cherry-pick.
To find this, check the commit history or use the `git log` command. For example:
```shell
$ git log
commit 0000011111222223333344444555556666677777
Merge: 88888999999 aaaaabbbbbb
Author: user@example.com
Date: Tue Aug 31 21:19:41 2021 +0000
```
1. Use the `git cherry-pick` command with the `-m` option and the index of the parent commit
you want to use as the mainline. Replace `<merge-commit-hash>` with the SHA of the merge
commit. The parent index starts from `1`. For example:
```shell
git cherry-pick -m 1 <merge-commit-hash>
```
This configures Git to use the first parent as the mainline. To use the second parent as the mainline, use `-m 2`.
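If you are unsure of the parent order, inspect the merge commit first. In the `git log` output shown earlier, the `Merge:` line lists the parents in order: the first SHA is parent `1` and the second is parent `2`:

```shell
# Show the merge commit; the first SHA on the "Merge:" line is
# parent 1, the second is parent 2
git log -1 <merge-commit-hash>
```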
## Related topics
- [Cherry-pick changes with the GitLab UI](../../user/project/merge_requests/cherry_pick_changes.md).
- [Commits API](../../api/commits.md#cherry-pick-a-commit)
## Troubleshooting
If you encounter conflicts during cherry-picking:
1. Resolve the conflicts manually in the affected files.
1. Stage the resolved files:
```shell
git add <resolved_file>
```
1. Continue the cherry-pick process:
```shell
git cherry-pick --continue
```
To abort the cherry-pick process and return to the previous state,
use the following command:
```shell
git cherry-pick --abort
```
This undoes any changes made during the cherry-pick process.